Unanswered: Adding fields, creating keys; time frame?
I am a programmer, not a dba, so please excuse my ignorance if I use the wrong terms in this question.
We have a table that contains Notes, with a Many to One relationship with a Customer (the CustomerID is in the Notes table as a foreign key). The Customer has a Many to One with a Membership table (the MemberID is in the Customer table as a foreign key).
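To make the relationships concrete, here is a minimal sketch of the schema as I understand it. Only CustomerID, MemberID, and the Notes/Customer/Membership table names come from our actual system; the key columns and types are illustrative guesses:

```sql
-- Hypothetical minimal schema sketch; real tables have many more columns.
CREATE TABLE Membership (
    MemberID   int NOT NULL PRIMARY KEY
)

CREATE TABLE Customer (
    CustomerID int NOT NULL PRIMARY KEY,
    MemberID   int NOT NULL REFERENCES Membership (MemberID)
)

CREATE TABLE Notes (
    NoteID     int NOT NULL PRIMARY KEY,   -- guessed key column
    CustomerID int NOT NULL REFERENCES Customer (CustomerID),
    downloaded bit NOT NULL               -- 0 = not yet sent, 1 = already sent
)
```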
We send a Notes report to the Member. To do this we use:
select n.*
  from Notes n, Customer c
 where n.CustomerId = c.CustomerID
   and n.downloaded = 0
   and c.MemberId = <passed in MemberID>
The n.downloaded field indicates whether the Note has already been sent to the member; when the report is created, it sets n.downloaded to 1. n.downloaded is a 'bit' data type, which IIRC means it cannot be indexed/keyed.
There are over 180,000,000 records in the Notes table. Maybe you can imagine how long this report is taking to be generated. I think it is because Sybase has to do a complete table scan of Notes... would I be correct?
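For what it's worth, I believe the plan can be checked without actually running the query, using ASE's showplan/noexec settings (the MemberID below is just a made-up example value):

```sql
-- Display the optimizer's query plan without executing the query
set showplan on
set noexec on
go
select n.*
  from Notes n, Customer c
 where n.CustomerId = c.CustomerID
   and n.downloaded = 0
   and c.MemberId = 12345   -- example MemberID, not a real one
go
set noexec off
set showplan off
go
```

If the output shows "Table Scan" on Notes, that would confirm the suspicion.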
I would like to suggest to management that we make two changes to the Notes table.
1a. Change the n.downloaded to a byte. Then it can be indexed; or
1b. Add a new byte field (n.downloadedbyte) to be used in place of the current n.downloaded bit field (deprecate the bit field). We think adding a field would be less time consuming.
2. Add a MemberId field to Notes.
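In other words, something along these lines (column names are just my proposal, and I'm assuming tinyint as ASE's one-byte integer type; a NOT NULL column added to an existing table needs a default, and the MemberID column would have to start out nullable and be backfilled):

```sql
-- 1b: add a one-byte flag to eventually replace the bit column
alter table Notes add downloadedbyte tinyint default 0 not null

-- 2: denormalize the member onto the note
alter table Notes add MemberID int null

-- Backfill MemberID from Customer -- this alone would churn
-- through all 180,000,000 rows, so it is part of the time question.
update Notes
   set MemberID = Customer.MemberID
  from Notes, Customer
 where Notes.CustomerID = Customer.CustomerID
```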
The resulting sql command could be:
select n.*
  from Notes n
 where n.downloadedbyte = 0
   and n.MemberID = <passed in value>
Since both MemberID and n.downloadedbyte fields can be indexed/keyed, the resulting report should be blazingly fast.
However (there are always howevers), I realize that putting MemberID in Notes violates some 'laws' of db design (denormalization?), but I am personally fine with that if it reduces our report generation from the current 4 to 22 hours (for large Members) to under 30 minutes. Second, and probably most important, is how long it will take to:
a. Modify or add the downloaded(byte) field to the table.
b. Create the indexes.
c. Then build the indexes.
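Concretely, for (b) I am picturing a single composite index rather than two separate ones, since the query filters on both columns (index name is just my suggestion):

```sql
-- One composite nonclustered index covering both predicates
create nonclustered index ix_Notes_MemberID_downloadedbyte
    on Notes (MemberID, downloadedbyte)
```

As far as I know, in ASE the create and the build are one step, i.e. create index reads the whole table and builds the index in a single pass, so (b) and (c) may really be the same operation. Please correct me if that's wrong.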
Since this will be on a production server that serves over 700 employees (and if it is down, the whole company is down), management is concerned that doing a/b/c on a table with 180,000,000 records will take a long time, causing downtime and lost revenue.
So I guess my question is: for those who have dealt with these kinds of changes, could you give me a rough estimate of how long the server would be down while making them? Hours, days, weeks...?
Oh, one added twist. Once the central production server is done with the changes, all of those changes must be replicated out to 6 other servers.
Besides "go home and get wasted", any advice, opinions, thoughts... and estimates?