I need to make a backup of a table, then do some operations, and if those operations fail I need to restore the table from the backup.
This is just regarding one table. I read somewhere that there are no backup options available at table-level granularity, so I need to use the unload/load utilities.
Can anyone confirm whether that is the right approach, or suggest a better strategy? Also, the table is quite large and the operations I need to do are updates. Is it possible to back up only the records that will be affected by the update query, and then restore only those records into the table if something goes wrong?
I think that an unload/load of just the records you are going to update is actually the best way to go.
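A rough sketch of that idea, with placeholder table, column, and file names (adjust to your actual UPDATE's WHERE clause):

```sql
-- Save only the rows the update will touch (names are placeholders).
UNLOAD TO "affected_rows.unl"
SELECT * FROM mytable WHERE status = 'PENDING';

-- ... run the UPDATE with the same WHERE clause ...

-- To roll back manually, delete the changed rows and reload the saved ones:
DELETE FROM mytable WHERE status = 'PENDING';
LOAD FROM "affected_rows.unl" INSERT INTO mytable;
```

One caveat: if the UPDATE changes a column used in the WHERE clause, the restore-side DELETE will no longer match the same rows, so in that case delete by primary key values taken from the unload file instead.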
Please specify what version you are on and what you currently use as a backup procedure.
I am on Informix Dynamic Server 9.3 and I don't currently have a backup procedure that I know of. The servers may well be backed up regularly as part of some process, but that is not my concern here. I am concerned about the operations I will be doing: if something goes wrong, I want to be able to get the table back to its original state without asking our DBAs for help (though I keep that as a last option).
I think I would go ahead with an unload/load of the whole table, which has about 10 million records, and restore the whole set if something goes wrong. But I just wanted to confirm whether this is the right way, or whether there are more optimal solutions considering the number of records in the table.
Thanks for your response, artemka. As of now I plan to go ahead with unload/load.
One more peculiar thing that I faced was - When I do something like:
UNLOAD TO <filename>
SELECT * FROM <tablename>
And then I do:
LOAD FROM <filename>
INSERT INTO <tablename>
Once in a while it fails, saying something like the number of columns in the unload file and the table do not match.
Can I be sure that if I do an unload, do not alter the table schema, delete all the records, and then load back from the unload file, I will not face this issue? If it does become an issue, what are my options? Can I use the unload file to work around this problem? In short, what are the possible causes of this error, and how can it be worked around?
Thanks a lot for your response once again. Best regards.
There are a couple of issues you can face when loading a fairly large table.
It is possible that you have "funky" characters in your data, and if the load process comes across one it will bomb out. One way to deal with that is to use the dbload command to load the data back in. Take a look at its syntax: you can commit every n rows, and if a row is bad it is written to a reject file and the load keeps going.
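As a rough illustration (file names, delimiter, column count, and option values here are assumptions; check the dbload documentation for your IDS version), the command file and invocation might look like:

```sql
-- mytable.cmd: dbload command file
-- "12" is the number of fields per row in the unload file
FILE "mytable.unl" DELIMITER "|" 12;
INSERT INTO mytable;
```

```shell
# -l writes rejected rows to a log, -e allows up to 100 bad rows,
# -n commits every 5000 rows (values are illustrative)
dbload -d mydb -c mytable.cmd -l bad_rows.log -e 100 -n 5000
```

The commit interval matters on a 10-million-row load: committing in batches keeps the logical log from filling up, which a single giant transaction can easily do.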
If you do decide to do a normal load, here is how you should do it:

lock table "tabname" in exclusive mode; -- this will take only one lock
load from "filename" insert into "tabname";
Unload/load seems to be fine. I did some rigorous testing and found it working. If something else creeps in later I won't know until it actually happens, but one scenario where it did fail was because of the environment I work in.
My OS is Windows XP, and when I opened the file in Windows it added ^M (carriage return) characters at the end of each line. Once I got rid of them, the load worked. This is one painful thing about working across Windows and Unix. Next time I get a load error I will definitely check whether the line endings have been messed up.
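If the unload file does pick up Windows line endings again, a small script can normalize them before the load; on Unix, `tr -d '\r' < in.unl > out.unl` does the same thing. A minimal sketch (paths are illustrative):

```python
def strip_carriage_returns(src_path, dst_path):
    """Rewrite src_path to dst_path with CRLF line endings converted to LF,
    so LOAD does not see stray ^M characters at the end of each row."""
    # Binary mode so Python does not perform its own newline translation.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        for line in src:
            dst.write(line.replace(b"\r\n", b"\n"))
```

Working on a copy (rather than rewriting the file in place) keeps the original unload file intact in case anything goes wrong mid-conversion.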
Thanks for the suggestion, artemka. I will look into dbload; I am not quite familiar with its syntax. Best regards.