  1. #1
    Join Date
    Dec 2012
    Location
    Huntsville, AL
    Posts
    379
    Provided Answers: 2

    Unanswered: The Best Way to Delete a Data-Containing Field When 100K+ Records Are Affected

    I have a question about the most PROPER way to delete information that is no longer needed on a form. Assume I have a field on a form that is currently populated with a number (the patient's age). I currently have 105,083 records within the folder that contains the 'patient's age' field. That field is no longer needed, and I wish to remove all of the information within it, as well as the field itself, in the most proper fashion.

    My understanding is that I currently have two options. Option 1 would be to make a quick query that runs a script for each record to remove the information in that field; once that query is complete, I could then delete the unwanted field in the form editor. This will be a lengthy, time-consuming process because of the sheer volume of records. Option 2 is to just delete the field directly in the form editor and call it done; that would be vastly quicker and take only a second.
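    Brilliant Database isn't SQL-based and has its own scripting, so this is only an analogy, but in SQL terms the two options would look roughly like the sketch below (Python/sqlite3; the table name and field names are made up purely for illustration):

        # Analogy only -- Brilliant Database is not SQL-based and has its own scripting.
        # The table "patients" and column "age" are made-up placeholders.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE patients (name TEXT, age INTEGER)")
        con.execute("INSERT INTO patients VALUES ('example', 42)")

        # Option 1: clear the data record by record first, then remove the field.
        con.execute("UPDATE patients SET age = NULL")           # the slow per-record pass
        con.execute("ALTER TABLE patients DROP COLUMN age")     # needs SQLite 3.35+

        # Option 2 would simply be the DROP COLUMN statement on its own,
        # leaving the engine to discard the column data however it sees fit.

        con.commit()
        con.close()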

    Given the two methods suggested above, is there a MOST proper way to do this in Brilliant Database (or is there a third option that is better than both)? Or do both methods end up doing exactly the same thing?
    Daryl G

  2. #2
    Join Date
    Dec 2012
    Posts
    86
    Because there is no third-party tool that can look inside the database, and based on my own testing, I would use method 1, even if it takes longer.

    Nobody can guarantee that the information in a deleted field will actually be removed by a defragmentation.

    My own testing on version 9.42 and a demo of 10.5 has shown that when you add and then remove approximately 1,000 records by script, the database ends up larger after defragmentation than the original empty database.
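    A rough sketch of the kind of check I have in mind, in plain Python against the raw file on disk (this is not Brilliant Database script; the file names and the marker value are only placeholders for data you know was stored and then deleted):

        # Placeholder check: did the "deleted" data really leave the file, and did
        # the file shrink back after defragmentation? File names and the marker
        # value are examples only.
        import os

        MARKER = b"JOHN-DOE-AGE-42"   # a value that was stored and then deleted

        for path in ("before_defrag.bbd", "after_defrag.bbd"):
            size_kb = os.path.getsize(path) / 1024
            with open(path, "rb") as f:
                still_there = MARKER in f.read()
            print(f"{path}: {size_kb:.1f} KB, deleted value still present: {still_there}")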

    Other systems, where a third party can look inside the database, do not show this amount of "empty growth" the way Brilliant Database does.

    I am not the only one who has found this; some of my customers have too.

    It now makes more sense to me that some of my customers have security concerns and do not want me to use Brilliant Database for their applications, and it is also why I have been asking for another database engine inside Brilliant Database, such as the anticipated ADB engine.

  3. #3
    Join Date
    Jul 2012
    Location
    Ireland
    Posts
    815
    Provided Answers: 17
    My 2 cents.

    Just delete the field in the form editor. I've never seen any reason to worry about doing it any other way. I've been doing that for years on various databases that change heavily over years of use due to changes in manufacturing at the factories that use my database.

    Regards
    James

  4. #4
    Join Date
    Dec 2012
    Location
    Huntsville, AL
    Posts
    379
    Provided Answers: 2
    lostdb,
    Thanks for your answer and especially your insight into this seemingly simple question. I was very curious about this because I thought that there may be some 'extra baggage fees' incurred by just deleting the field - just as you postulated. I did create a query that executes a line of script for each record to remove the data prior to removing the field itself.

    James,
    I am a bit leery about doing that, as I did something similar before and it almost seemed to corrupt the database. I opened the untouched version of my database and tried just deleting the no-longer-needed field with the field values left in place, and today that action seemed to be okay. I did not see any difference when I analyzed the database (comparing the original version, the version that used queries and scripting to clear the field values before deleting the field, and the version where I just deleted the field outright). Something odd did happen earlier, which is why I had my trepidation about simply deleting the field with the field values in place (that might have occurred under v9.42?).
    Daryl G

  5. #5
    Join Date
    Jul 2012
    Location
    Ireland
    Posts
    815
    Provided Answers: 17
    Quote Originally Posted by lostdb
    My own testing on version 9.42 and a demo of 10.5 has shown that when you add and then remove approximately 1,000 records by script, the database ends up larger after defragmentation than the original empty database.
    Yep, true. Deleting 1,000 records can leave between 50 KB and 100 KB of unknown junk behind that cannot be defragged away.

    My answer to that is, for databases I know are going to grow to massive sizes and see a lot of record deletion, I keep a master copy of the database in its original empty state.

    If I then ever need to remove that "undeletable" junk from the database, I do it by importing the records from the live database into a copy of the master empty database.
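    In Brilliant Database itself I do the transfer through the normal export/import feature, but the general pattern is just "copy the empty master, then pour the records back in". Sketched below with Python and SQLite standing in for the real thing (the file names, table and field names are only examples):

        # "Rebuild to shed junk" pattern, with SQLite standing in for Brilliant Database.
        # File, table and column names are examples only; in Brilliant Database the
        # transfer is done with its built-in export/import feature, not code.
        import shutil, sqlite3

        shutil.copyfile("master_empty.db", "rebuilt.db")   # start from the pristine empty copy

        src = sqlite3.connect("live.db")                   # the bloated live database
        dst = sqlite3.connect("rebuilt.db")

        rows = src.execute("SELECT name, dob FROM patients").fetchall()
        dst.executemany("INSERT INTO patients (name, dob) VALUES (?, ?)", rows)
        dst.commit()

        src.close()
        dst.close()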

    I very rarely do this. For it to be worth the effort, you would have to be dealing with a database that sees a lot of records being deleted.

    Regards
    James

  6. #6
    Join Date
    Dec 2012
    Location
    Huntsville, AL
    Posts
    379
    Provided Answers: 2
    I just took a look at the size differences between the 'query/clear-field script' approach and the 'delete field only' approach, and then defragged/optimized each database. The slower method went from 161,090.3 KB before to 160,272 KB after (about 818 KB reclaimed). Simply deleting the field went from 161,081.8 KB before to 160,639 KB after (about 443 KB reclaimed). Not a lot of difference. The most troubling aspect is that after optimizing the database it no longer works - it freezes up Brilliant Database.
    Daryl G

  7. #7
    Join Date
    Jul 2012
    Location
    Ireland
    Posts
    815
    Provided Answers: 17
    I hear ya, and to be honest there is something funky with the v10 editions, but I don't believe it's related to deleting fields.

    I could be wrong, but I've come up with a simple system that keeps me out of trouble.

    In my experience, deleting the field poses no risk. It's defragmenting the database under certain conditions that I fear is the root cause of corruption and things going missing.

    These are new issues introduced in v10 that I'd never seen in Brilliant Database before. In the early v10 editions it was very bad and happened often, which I've posted a lot about on the forums, but it has gotten much better in the later v10.5 editions.

    That being said, defragmenting is still, at this moment, unsafe in my opinion under certain conditions. I would never take the chance of running it without having a backup copy.

    For example, if your database is in a folder that's unique to the user currently logged onto the PC, like your Documents folder, and your file name is longer than the old FAT16 8.3 naming limit, then it's my feeling that there is a high risk of corruption when defragmenting: the program tries to rename your database before optimisation, seems to lose track of what the path was, and screws things up.

    Long story short, and I've said this on the forum before: all my databases have a simple name that's no longer than 8 letters, and I always work from folders like "C:\My Software\My App", not folders that are unique to the profile I'm logged onto my PC under.
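    If you want to sanity-check a path before letting the program defragment, something along these lines does the job (plain Python; the example paths are made up, and the 8-letter limit is only my own rule of thumb, nothing documented):

        # Rule-of-thumb check before defragmenting: short 8.3-style file name and a
        # location outside the logged-in user's profile. The limits reflect my own
        # habit, nothing official, and the example paths are made up.
        import os

        def looks_safe(db_path: str) -> bool:
            name, ext = os.path.splitext(os.path.basename(db_path))
            short_name = len(name) <= 8 and len(ext.lstrip(".")) <= 3
            profile = os.path.abspath(os.path.expanduser("~")).lower()
            outside_profile = not os.path.abspath(db_path).lower().startswith(profile)
            return short_name and outside_profile

        print(looks_safe(r"C:\My Software\My App\mydb.bbd"))                          # True on Windows
        print(looks_safe(r"C:\Users\james\Documents\Patient Tracking Database.bbd"))  # False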

    For me, this works perfectly. That being said, I do wish there was a system in place where the effort it takes to submit bugs like this would be acknowledged so that they can be completely fixed and avoided by others.

    Regards
    James

  8. #8
    Join Date
    Jul 2012
    Location
    Ireland
    Posts
    815
    Provided Answers: 17
    The most troubling aspect is that after optimizing the database it no longer works - it freezes up Brilliant Database
    Ouch.
    Sorry to say but this is exactly what I was talking about with naming a database.

    I honestly believe there is something very odd that can happen to databases with long file names, which also seems to relate to them being kept in a folder tied to the user logged into Windows.

    For example, keeping a database with a long name in My Documents is unsafe, whereas keeping a file named with no more than 8 letters in a simple folder like C:\My Software\My App\*.bbd is safer.

    The problem doesn't happen all the time, but it can be completely avoided by using simple folders and simple names in my opinion.

    Regards
    James
    Last edited by Tubbritt; 07-18-14 at 23:19.

  9. #9
    Join Date
    Dec 2012
    Location
    Huntsville, AL
    Posts
    379
    Provided Answers: 2
    James,
    You mentioned that before (short file names) and I have been following that guidance! Before I EVEN attempt to optimize/defrag I ALWAYS save a copy and rename it to a very short name (in my case, I use four letters). Even with those precautions, the optimization causes problems. I did not use a dedicated directory (which may be part of the issue). Running the optimization feature, at least for me, will become a very low priority issue!
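    For what it's worth, the copy-and-rename step before an optimize pass is easy to automate; something like this is roughly what I do by hand (plain Python; the paths and names below are only examples, not my real ones):

        # Make a dated safety copy plus a short-named working copy before manually
        # running the optimize/defragment step. All paths are examples only.
        import shutil
        from datetime import date

        SOURCE = r"C:\My Software\My App\patient_tracking.bbd"
        BACKUP = rf"C:\My Software\Backups\ptdb_{date.today():%Y%m%d}.bbd"
        WORK   = r"C:\My Software\My App\ptdb.bbd"   # short four-letter name used for the optimize pass

        shutil.copyfile(SOURCE, BACKUP)   # untouched safety copy
        shutil.copyfile(SOURCE, WORK)     # the copy that actually gets optimized
        print("Backup written to", BACKUP)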
    Daryl G

  10. #10
    Join Date
    Jul 2012
    Location
    Ireland
    Posts
    815
    Provided Answers: 17
    Even with those precautions, the optimization causes problems.
    I think it greatly reduces the risk of something going wrong, but it's still like playing Russian roulette for little benefit.

    Most of what we all create will be databases where records are added and existing ones get changed. So optimization is probably not going to make a blind bit of difference to most people, as the database really only gets fragmented when you're creating and deleting records all the time.

    For me, I'm sticking a pin in all this until v11 is released in the future. I'm happy to work away with v10 as it stands until then.

    Regards
    James
