Automatically split database yearly

Onlylonely

Registered User.
Local time
Today, 15:59
Joined
Jan 18, 2017
Messages
43
Hi Guys,

My database is getting FULL and I think I need to split it automatically by year.
I have no idea where to start. Any suggestions?

1) It will help to protect my data as well. If I lost my data, it would only affect a maximum of one year.
 

jleach

Registered User.
Local time
Today, 03:59
Joined
Jan 4, 2012
Messages
308
Databases don't really get "full" - instead they get to a point where you need to upgrade the database engine (e.g., if you're reaching the 2GB limit of Access, maybe consider moving to SQL Server Express, which gives you 10GB for free). 2GB is a lot of data: what information are you storing? Is there any binary data at all that could be moved to a separate storage medium (such as the filesystem, with a link to the file in the db instead of the file itself)?
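To illustrate the "link instead of embedded file" idea, here is a minimal sketch using Python and SQLite as a stand-in back end; the table, column, and function names are invented for illustration, not taken from the original poster's database:

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical storage location standing in for a shared folder.
ATTACH_DIR = Path(tempfile.mkdtemp())

conn = sqlite3.connect(":memory:")  # stand-in for the real back end
conn.execute(
    "CREATE TABLE documents (id INTEGER PRIMARY KEY, name TEXT, file_path TEXT)"
)

def store_document(name: str, data: bytes) -> int:
    """Write the binary to the filesystem; record only its path in the db."""
    path = ATTACH_DIR / name
    path.write_bytes(data)
    cur = conn.execute(
        "INSERT INTO documents (name, file_path) VALUES (?, ?)",
        (name, str(path)),
    )
    conn.commit()
    return cur.lastrowid

doc_id = store_document("report.pdf", b"%PDF-1.4 example bytes")
file_path = conn.execute(
    "SELECT file_path FROM documents WHERE id = ?", (doc_id,)
).fetchone()[0]
```

The database row stays a few bytes regardless of how large the file is, which is the whole point: the 2GB ceiling is consumed by paths, not payloads.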

The idea is that you should not have to segregate your data like this: there should be no reason not to keep it all in the same database. There really is no easy way to do it anyway; in fact it's usually kind of a mess to try (what if you have related records that span different years - an order placed in December with a return item in January?). Or what if you need to calculate the lifetime value of a customer quickly and easily? Very difficult to do with yearly archives.

So, the goal is not to split/archive your data, but to make sure that you can keep it all together.

Regarding data loss, you do of course have a tested backup plan in place, right? I couldn't imagine losing a year of data: most of my clients are set up with DR plans to lose no more than one hour's worth.
 

Galaxiom

Super Moderator
Staff member
Local time
Today, 18:59
Joined
Jan 20, 2009
Messages
12,849
If you're reaching the 2GB limit of Access, maybe consider moving to SQL Server Express, which gives you 10GB for free.

And if you have enough data to exceed SQL Express then you are big enough to afford the full version of SQL Server. There your limit is somewhere in the order of petabytes, if I remember correctly.

It will be in here somewhere, I expect.

And then of course you can always add another server - which you would already have anyway, because at that size you would be running a whole cluster of them.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 07:59
Joined
Feb 19, 2013
Messages
16,552
As far as the obvious is concerned:

a) your database is split front end/back end?
b) the back end has been compacted?
c) the back end does not contain redundant tables or fields?
d) the back end does not contain attachment fields?

Once compacted, what is the file size of the back end?
 

jdraw

Super Moderator
Staff member
Local time
Today, 03:59
Joined
Jan 23, 2006
Messages
15,361
OnlyLonely,

You have been given good advice. Perhaps you could clarify your requirement/question by describing the application and the database. What sort of data is involved? Do you have some form of daily/weekly backups for this data?
Many operational database applications would suffer tremendously if a whole year's worth of data were lost. Total business failures have occurred over the loss of much less data.
 

Mark_

Longboard on the internet
Local time
Today, 00:59
Joined
Sep 12, 2017
Messages
2,111
Hi Guys,

My database is getting FULL and I think I need to split it automatically by year.

Can you please define "Full" for us?

While the others have been talking about size of the database, only you can tell us what you mean by "Full". My concern is that what you are asking about may have nothing to do with file size but instead relate to a single procedure that you've been reworking that has hit its maximum number of controls or some other issue.
 

Mark_

Longboard on the internet
Local time
Today, 00:59
Joined
Sep 12, 2017
Messages
2,111
@Galaxiom

Database size 524,272 terabytes

And I still remember when they started putting preheaters on 9-tracks.
 

Cronk

Registered User.
Local time
Today, 18:59
Joined
Jul 4, 2013
Messages
2,770
Re #6
....has hit its maximum number of controls or some other issue

I was contacted about splitting a database used by a winery, where the products table had a separate field for each varietal and vintage; after some years, the database was "full" because no more columns could be added.

It was just a matter of fixing the improper design, and the manager was very happy that all the data could be kept in the one file.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 03:59
Joined
Feb 19, 2002
Messages
42,970
If you are worried about losing data, make a robust backup plan that will protect against data loss. Daily backups kept for two weeks, weekly backups kept for two months, monthly backups kept for two years, and annual backups kept forever will restrict your data loss to one day. The short-term backups should be kept both on site - for ease of recovery - and off site, for better physical protection. You don't want your short-term backups held only in the cloud: depending on the size of your database, you could end up being out of commission for a full day because of the slowness of downloading large files.
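That retention schedule can be sketched as a simple pruning rule. This is a hedged illustration, not Pat's actual procedure; the choices of Sunday for the weekly backup and the 1st of the month/year for the monthly and annual ones are assumptions:

```python
from datetime import date

def keep_backup(backup_date: date, today: date) -> bool:
    """Decide whether a backup file should still be retained.

    Assumed schedule (as described above): dailies for two weeks,
    weeklies (Sundays) for two months, monthlies (1st of the month)
    for two years, annuals (Jan 1) forever.
    """
    age_days = (today - backup_date).days
    if backup_date.month == 1 and backup_date.day == 1:
        return True              # annual backup: keep forever
    if backup_date.day == 1:
        return age_days <= 730   # monthly backup: ~two years
    if backup_date.weekday() == 6:
        return age_days <= 60    # weekly (Sunday) backup: ~two months
    return age_days <= 14        # ordinary daily backup: two weeks
```

A nightly job would run this over the backup folder and delete anything the function rejects; a five-day-old daily survives, a five-week-old daily does not, and the January 1st backups never expire.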

It is rarely appropriate to split the data into separate databases (or tables) by year. Most applications require some year over year reporting or even rolling 12 months. If you split each year into a separate table or database, this type of reporting becomes very difficult and requires modification EVERY SINGLE YEAR!!! This is not a path you want to go down. In some cases, a company may decide that their reporting requirements go back only 3 or 4 full years. If you then want to archive old data to keep the size of the current database as small as possible, you can run an archive process each year to copy records older than a cutoff date to the archive database and then delete them from the current database. But as long as you are not running into a hard file size limitation, keeping all the data in the same database as long as possible is the best practice.
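The archive step described above - copy records older than a cutoff into an archive database, then delete them from the current one - can be sketched as follows. This uses SQLite as a stand-in, and the `orders` table, its columns, and the cutoff date are all invented for illustration:

```python
import sqlite3

# Stand-in for the current (live) back end.
live = sqlite3.connect(":memory:")
live.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT, total REAL)"
)
live.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "2019-06-01", 10.0), (2, "2024-02-15", 20.0)],
)

# Stand-in for the archive back end, attached so both are queryable at once.
live.execute("ATTACH DATABASE ':memory:' AS archive")
live.execute("CREATE TABLE archive.orders AS SELECT * FROM orders WHERE 0")

cutoff = "2021-01-01"  # assumed: keep ~3-4 full years in the current db

# Copy-then-delete inside one transaction, so a failure leaves both
# databases consistent rather than losing rows mid-archive.
with live:
    live.execute(
        "INSERT INTO archive.orders SELECT * FROM orders WHERE order_date < ?",
        (cutoff,),
    )
    live.execute("DELETE FROM orders WHERE order_date < ?", (cutoff,))
```

Run once a year with an advancing cutoff, this keeps the current file small while the archive remains queryable whenever old-year reporting is needed.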
 
Last edited:

BeeJayEff

Registered User.
Local time
Today, 00:59
Joined
Sep 10, 2013
Messages
198
The OP seems to have disappeared, so there may be little point in responding now. But I will just add that user log files and audit trails can become massive, yet are rarely required to be readily available after one/six/whatever weeks/months. My practice here (using a one-month online retention requirement as an example) is simply to copy the table to another database, then delete the current month's records from that copy and delete the old months' records from the live db.
 

jleach

Registered User.
Local time
Today, 03:59
Joined
Jan 4, 2012
Messages
308
Or: have a separate back end specifically for audit data, and link to that. It keeps the bulk of the data elsewhere, but requires next to no maintenance (this is how we handle it in SQL Server: a separate database (or even server, depending) for logging audit trail data).

Then if you ever do need to query it, you don't need to reconstruct time frames or pick which db/table you need: it's all there for you. Usually these are write-optimized (very few indexes, accessed via append-only/forward-only cursors, etc.), so the growing size of the data set doesn't slow down the production writes - it only makes reading the audit data a bit slower. Given the read frequency of most audit systems, though, it's a non-issue whether it takes 3 seconds or 30 minutes to generate the results you need.
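A minimal sketch of that separate-audit-back-end pattern, again with SQLite standing in for the real engines; the table names, columns, and helper function are assumptions for illustration:

```python
import sqlite3

# Production back end.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Separate audit back end, attached under its own schema name.
# Deliberately no extra indexes: appends stay cheap, reads may be slow.
prod.execute("ATTACH DATABASE ':memory:' AS audit")
prod.execute("CREATE TABLE audit.trail (at TEXT, who TEXT, action TEXT)")

def audited_insert(name: str, who: str) -> None:
    """Write the production row and its audit record in one transaction."""
    with prod:
        prod.execute("INSERT INTO customers (name) VALUES (?)", (name,))
        prod.execute(
            "INSERT INTO audit.trail VALUES (datetime('now'), ?, ?)",
            (who, f"insert customer {name}"),
        )

audited_insert("Acme Wines", "onlylonely")
```

Because the audit rows live in their own database file, they can grow indefinitely (or be backed up on a different cadence) without bloating the production back end, and a single query across the attached schema still sees the full history.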

Cheers
 
