Unanswered: data corruption (status 2) on v9 non-segmented Btrieve files
I have decided to post here the service ticket I filed with Pervasive a few days ago.
Read the full history below. I cannot believe that Pervasive has a problem like this and has not informed its customers immediately, but this is the true story. We have been using Btrieve since 1983.
An independent consultant told me that Pervasive has a problem with non-segmented Btrieve files larger than 6 GB.
I converted three of the largest Btrieve files we have, changing the page size to 8192 bytes and limiting the segment size to 2 GB:
frsop.key    42 GB
frsvsp.key   16 GB
fglodati.key 16 GB
I ran several tests using our application, which inserts records into those files. We are talking about 200,000 records per test, and nothing happened: no status 2!
We are happy, because we were running into the 64 GB limit of version 8 files, but we are worried about the behaviour when the "limit segment size to 2 GB" flag is not checked in the PSQL configuration.
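To sanity-check the segment counts (my own back-of-the-envelope arithmetic, not anything from the Pervasive docs): with the 2 GB segment limit checked, a file of S GB should occupy ceil(S/2) physical segments, i.e. one base file plus ceil(S/2) - 1 extension files:

```python
import math

SEGMENT_GB = 2  # the "limit segment size to 2 GB" engine setting


def expected_segments(size_gb: float) -> int:
    """Total physical files (base + extensions) a segmented file should use."""
    return math.ceil(size_gb / SEGMENT_GB)


# The three converted files listed above
for name, size_gb in [("frsop.key", 42), ("frsvsp.key", 16), ("fglodati.key", 16)]:
    total = expected_segments(size_gb)
    print(f"{name}: {total} segments (base file + {total - 1} extensions)")
```

For the 16 GB files this predicts one base file plus 7 extensions, which matches exactly what we later found in the directory.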
I also want to draw your attention to another strange behaviour of PSQL 9.0.
The other day I rebuilt a 16 GB file on a Windows server where the PSQL engine was configured not to segment files.
Then I copied the file to another Windows server, where PSQL was configured to limit the segment size.
What happened? As soon as our application began to insert new records, the MKDE created extensions for the file, i.e. we found in the directory the 16 GB base file and its 7 extensions.
A butil -stat operation on that file shows that the file is not extended, but when I delete the extensions, any butil operation on that file fails with status 13!
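For anyone who wants to reproduce the check, here is a small sketch of how the extensions on disk can be compared against the count the 2 GB limit implies. The `.^01`, `.^02`, ... extension naming is an assumption based on what we saw in our directory; adjust it if your engine names segments differently:

```python
import math
import os

SEGMENT_BYTES = 2 * 1024**3  # 2 GB segment limit


def list_extension_segments(base_path: str) -> list[str]:
    """Find extension segments sitting next to a Btrieve file.

    Assumes the .^01, .^02, ... naming we observed on disk (an
    assumption, not documented behaviour I have verified).
    """
    stem, _ = os.path.splitext(base_path)
    directory = os.path.dirname(base_path) or "."
    prefix = os.path.basename(stem) + ".^"
    return sorted(
        os.path.join(directory, name)
        for name in os.listdir(directory)
        if name.startswith(prefix)
    )


def check_consistency(base_path: str) -> bool:
    """Compare segments on disk with the count the 2 GB limit implies."""
    segments = list_extension_segments(base_path)
    total_bytes = os.path.getsize(base_path) + sum(
        os.path.getsize(s) for s in segments
    )
    expected_extensions = math.ceil(total_bytes / SEGMENT_BYTES) - 1
    return expected_extensions == len(segments)
```

Running this against the copied file would have flagged the mismatch that butil -stat missed: the directory holds 7 extensions, while -stat reports the file as not extended.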
"Tiffany Kennedy" <email@example.com> wrote on 03/11/2005 15.26.15:
> 1. Do you see the status 2's with version 9 files (either with or
> without 8192 page size) if you segment the files at 2 gig? Does the
> status 2 only occur if you use unsegmented and greater than 2 gig files?
> 2. Do you have copies of the corrupt files that I can see, possibly
> before and after?
> 3. Do you have any copies of files that are not corrupt - that will
> become corrupt if we insert records?
> If you do upload the files, please put them in a .zip file and let
> me know what the name of the file is on the FTP site.
> Username : uploads
> Password: pervas1ve
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com]
> Sent: Wednesday, November 02, 2005 9:24 AM
> To: Tiffany Kennedy
> Subject: Re: Service ticket # 163636
> Let me describe what happened.
> On Wednesday the 19th we got a status 54 on a huge variable-page
> Btrieve file called fglodati.key. Pervasive SQL 8.0 was installed on
> the server and the file version was 8.0.
> I assume that the error was like the one we got two years ago with
> PSQL 8.0; that error was fixed with a PSQL hotfix, and our incident
> number was A40247351.
> I restored the file (which had been backed up on Tuesday the 18th)
> on our test server, where Pervasive SQL 9.1 runs, and we rebuilt the
> file using the rebuildcli utility. The rebuild operation completed
> successfully, and we got a single 17 GB Btrieve file with the page
> size changed to 8192 bytes.
> We assumed that the file was OK, and we scheduled the rebuild
> operation for Sunday the 23rd.
> That rebuild operation failed with a status 2.
> We decided to put the file rebuilt with the PSQL 9.1 engine online.
> Since the upgrade of the PSQL engine in the production environment
> was scheduled for the first week of November, we moved the upgrade
> up by one week.
> We did the upgrade on Tuesday the 25th.
> On Wednesday the 26th we got another status 2.
> We started working in our test environment, and what we observed was:
> The application starts to insert records into the file and stops
> after a few records.
> A butil -recover operation on the damaged file with the /j option
> fails immediately with status 2.
> If we replace the file with the original one and run the application
> again, the application gets a status 2 after exactly the same number
> of records inserted as in the first run.
> We rebuilt the file several times; after two days of tests we put an
> empty v8.0 Btrieve file online.
> This morning I wanted to continue testing on that file. In the
> meantime we converted another huge file (37 GB); it took 10 hours
> to convert.
> As soon as we started the application, it stopped with an I/O error
> on this file.
> Please let me know what kind of information you need. We have
> several customers with Btrieve files close to the 64 GB limit of
> version 8.0 files, and the workaround we used for fglodati.key
> cannot be used for other files.
> Our company number is 9932475
> Please let me know ASAP.
> "Tiffany Kennedy" <firstname.lastname@example.org> wrote on 02/11/2005 15.34.08:
> > Mauro,
> > I received your service ticket regarding the status 2 with Pervasive v9.1.
> > Our application deals with huge Btrieve files; our biggest files
> > are very close to the 64 GB limit of version 8 files.
> > We upgraded our system to version 9.1, and we are testing the new
> > features of v9 files, i.e. 8 KB pages and not limiting the segment
> > size to 2 GB.
> > We converted two of the biggest files we have without errors; then
> > we tested our application, and it failed with a status code 2 on
> > both files.
> > We ran several tests, and we are currently rerunning them with
> > different rebuild parameters, but every test takes several hours.
> > 1. Just to verify, you are rebuilding the two data files to the 9x
> > file format.
> > 2. Do you have the C:\PVSW\Bin\rebuildlog.txt files? If so, go
> > ahead and send them.
> > 3. Also include the pvsw.log from the server.
> > Zip them and put them on our ftp site. Call the file 163636.zip.
> > ftp://ftpsupport.pervasive.com
> > Username : uploads
> > Password: pervas1ve
> > Let me do some research and I will let you know what I find.
> > Tiffany Kennedy
> > Lead Software Support Engineer
> > Database Division
> > Main: 800.287.4383
> > Fax: 512.459.1309
> > email@example.com
> > 12365-B Riata Trace Parkway
> > Austin, TX 78727
> > http://www.pervasive.com/