Handling 150+ million rows on a corrupt table
Can anyone give me a head start on how we could handle this large-data problem? We need to pull rows out of a corrupt table holding 150+ million rows. At a minimum, we want to pull out all the clean data and leave the corrupt data behind. The corruption is at the table (data page) level, not on the index pages, where we could simply drop and recreate the index. Technically, the corrupt table has one unique composite index (3 columns) and one nonclustered index.
One option we have already considered is not relevant to solving the issue:
-bulk copy (bcp) out
We are looking to design a script that connects from Red Hat Linux to Sybase 12.5.4 to do this, and our candidate language is Perl.
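One common approach for this situation is to walk the table in key ranges over the unique composite index, keeping every range that reads cleanly and recording the ranges that error out (those sitting on corrupt pages) for later narrowing. Below is a minimal sketch of that range-walking logic in Python rather than Perl, just to show the shape of the loop; the `fake_fetch` stub stands in for a real `SELECT ... WHERE key BETWEEN ? AND ?` issued through whatever driver is used (e.g. Perl DBI with DBD::Sybase). All names here are illustrative, not a tested recovery tool.

```python
def extract_ranges(fetch, lo, hi, step):
    """Walk keys [lo, hi] in chunks of `step`; return (clean, corrupt).

    clean   -> list of ((start, end), rows) for ranges that read OK
    corrupt -> list of (start, end) ranges whose read raised an error
    """
    clean, corrupt = [], []
    start = lo
    while start <= hi:
        end = min(start + step - 1, hi)
        try:
            rows = fetch(start, end)          # real code: SELECT over [start, end]
            clean.append(((start, end), rows))
        except IOError:                       # stand-in for the driver's DB error class
            corrupt.append((start, end))      # skip and record; narrow later
        start = end + 1
    return clean, corrupt

# Stub simulating a table of keys 1..500 with corrupt pages covering keys 200-299.
def fake_fetch(start, end):
    if start <= 299 and end >= 200:
        raise IOError("read failure on data page")  # simulated corruption
    return list(range(start, end + 1))              # pretend these are the rows

clean, corrupt = extract_ranges(fake_fetch, 1, 500, 100)
```

In practice, once a chunk fails you would re-run it with a smaller `step` (eventually single keys) to isolate exactly which rows are unreadable, and write the clean rows out to flat files as you go so the extract can resume after a crash.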
Hoping for your expert assistance. Thanks in advance.
If by accident I have already posted this question on the forum, please let me know so I can delete the thread. Thanks.