Unanswered: Reduce file size after deleting records
We have an application that I *THINK* uses Sybase as the database. The vendor told us that we should not go over a 100MB file size; we are now approaching 200MB and are starting to have some problems. I used the utility provided to do a massive record deletion by date, and after removing nearly 200,000 records, my file size remained the same.
The last time I played with a database was dBase IV. There, the command to actually delete records after marking them for deletion was "pack". But when we looked at the database through an ODBC link after the deletion, we did not find any records. How do I compact that database file after removing all those records?
The vendor has not been forthcoming with a method to do this: something about "custom-built compacting tools" and "we don't maintain the database, only the application".
Since my server runs "Adaptive Server Anywhere" from Sybase, I assumed that the database itself was Sybase.
Come on, I know when I ran Oracle back in v 4 or 5, even then you could compact a database to remove space left by deleted records. If I delete 3/4 of the records, I should be able to reduce the file size by roughly an equivalent amount.
100MB is the recommendation from the vendor of the application. Maybe that's just a selling point to charge us 15K to upgrade to SQL?
Well, I have an idea of how we would handle this case in Oracle.
The DB stores its data in data blocks laid out contiguously in the datafiles.
The block size can be configured when creating the database, or later by dropping and recreating the datafiles.
Deletes do not release the blocks that are left free in between used ones.
The DB also does its I/O in units of the block size, which is configured to be a multiple of the OS block size.
If the block size is configured to 16k, then even if only one byte of data is present in a block, it won't be counted as a free block.
X = used, O = free: after the deletes a file can look like XXOXOOXX, where the O blocks are free but still occupy their place in the file.
Oracle in fact provides a coalesce operation, which manipulates the pointers and frees up some of the blocks. But it costs performance in the long run, as it takes more time to gain access to these in-between free blocks.
Separate the various data objects into different tablespaces according to the frequency and type of selects, updates, and deletes against them.
You also need to configure the PCTFREE/PCTUSED factors to suit the type of data in each file. Separating things logically into tablespaces helps: transaction data has a high probability of deletion (purging), while static master data has a low probability of deletion. With that separation you can perform the rebuild task only for the transaction tablespaces, not the masters.
So you need to follow the procedure below as part of a DBA routine, say once a year.
Export the data to a file (using COMPRESS=Y or N, depending on extent configuration).
Drop the tablespaces along with their datafiles.
Recreate the tablespaces with smaller datafiles and an appropriate extent configuration, and import the data again.
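The steps above might be sketched as a shell script. This is a dry-run sketch that only PRINTS the commands; all names (scott/tiger, users_ts, /u01/users01.dbf, the 50M size) are placeholders invented for illustration, not taken from this thread.

```shell
#!/bin/sh
# Dry-run sketch of the once-a-year rebuild described above.
# It only prints the commands; nothing touches a real database.

CONN="scott/tiger"              # placeholder credentials
DUMP=/tmp/users_ts.dmp          # export dump file

# 1. Export the data; COMPRESS=Y asks exp to size each table's
#    initial extent to hold all of its data on import.
STEP1="exp $CONN FILE=$DUMP COMPRESS=Y OWNER=scott"

# 2. Drop the tablespace together with its datafiles (frees the disk).
STEP2="DROP TABLESPACE users_ts INCLUDING CONTENTS AND DATAFILES;"

# 3. Recreate it with a smaller datafile and deliberate extent sizing,
#    then import the data back in.
STEP3="CREATE TABLESPACE users_ts DATAFILE '/u01/users01.dbf' SIZE 50M;"
STEP4="imp $CONN FILE=$DUMP FULL=Y"

printf '%s\n' "$STEP1" "$STEP2" "$STEP3" "$STEP4"
```

The SQL statements in steps 2 and 3 would be run through sqlplus; the exp/imp pair is the classic export/import tooling, which newer Oracle releases replace with Data Pump.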
This is the procedure we perform in Oracle; I'm sure there's a similar way for Sybase.
If it is an actual Sybase database, you will have a much easier time of it...
You can do a "database rebuild" using the tools provided in Sybase Central (SQL Central), or manually do a dbunload, dbinit, and a dbreload into the new file.
Deleting records in a database, as the previous writer described, does not "reclaim disk space". Once the "dbspace" grows, it does not shrink back down unless you unload all the records to a file and read that file back into a new/blank database.
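For the manual route mentioned above, the command line might look roughly like this. Again a dry-run sketch that only PRINTS the commands; the file paths and the dba/sql credentials are placeholders, and the exact switches can vary between Adaptive Server Anywhere versions, so check the docs for yours.

```shell
#!/bin/sh
# Dry-run sketch of a SQL Anywhere (ASA) rebuild; it only prints commands.

CONN="UID=dba;PWD=sql;DBF=/data/app.db"   # placeholder connection string

# One-step rebuild: dbunload with -an unloads the old database and
# reloads it straight into a freshly initialized new file.
ONESTEP="dbunload -c \"$CONN\" -an /data/app_new.db"

# Manual alternative: unload to reload.sql + data files, initialize an
# empty database, then replay the generated reload script into it.
STEP1="dbunload -c \"$CONN\" /data/unload"
STEP2="dbinit /data/app_new.db"
STEP3="dbisql -c \"UID=dba;PWD=sql;DBF=/data/app_new.db\" /data/unload/reload.sql"

printf '%s\n' "$ONESTEP" "$STEP1" "$STEP2" "$STEP3"
```

Either way the reloaded file contains only live rows, so this is what actually shrinks the database after a mass delete.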