I'm using an executable (written in C++ I believe) that has been provided to me to upgrade a database structure to work with a new front-end - all software provided by a third party (who are now out of business).
The conversion process in this executable works on all the tables in my database bar one. It basically creates copies of each table and inserts new key information into fields appended to the tables in order that an SQL script can be run to create the new database.
The table it fails on has some 1,111,491 records, and I can get the conversion process to work if I delete records from this table to bring it below 1,048,576 records. It's not lost on me that this figure is equal to the number of bytes in 1 MB.
What I would like to know is: is this a problem with the executable itself (e.g. some limitation of the recordset size used in C++), or is it related to settings within SQL Server?
The executable connects to a system DSN using ODBC.
I'm running Windows 2000 Professional and SQL Server 2000.
It would be great if this problem were related to SQL Server or ODBC settings, because then at least one of you might be able to give me some hints on how to solve it.
I think you misunderstand. I didn't mean the recordset size was equal to 1 MB. I meant that 1 MB = 1,048,576 bytes, i.e. 2 raised to the power 20, and wondered whether this number has some significance, e.g. a datatype that would overflow at this value, as you mentioned?
However, from my limited knowledge of C++ datatypes I can only think of integer types with maximum values much lower than 1,048,576 (e.g. a 16-bit int at 32,767) or much higher (a 32-bit int at 2,147,483,647). I therefore wonder whether the problem has some connection with the CRecordset class or something similar?
If the 2^20 pointers/indices were to 4 KB buffers, this would conveniently make 4 GB, the full 32-bit address space. Many applications tend to "break" at that point, and many C++ compilers did funky things with indices that exceeded that size until just a few years ago.
Is it possible to split your table/database/application into two pieces, migrate each of them, then combine the migrated parts back together?
CREATE TABLE TEMP_CONTACTLOG(
ACCEPTABLE VARCHAR(1) NULL,
ACTION VARCHAR(50) NULL,
CONTACTDATE DATETIME NULL,
CONTACTID DATETIME NOT NULL,
CONTACTTIME VARCHAR(5) NULL,
DBNUMBER INT NULL,
EVENTID INT NOT NULL,
EXTNOTES TEXT NULL,
EXTNOTESINDICATOR VARCHAR(2) NULL,
NEXTAPPTDATE DATETIME NULL,
NEXTAPPTTIME VARCHAR(8) NULL,
POCODE VARCHAR(4) NULL,
PODBNUMBER INT NULL,
POPERSONALNO VARCHAR(7) NULL,
POPOSTREF INT NULL,
REF INT NULL,
TEAMCODE VARCHAR(3) NULL,
TIMESPENT INT NULL,
URN INT NOT NULL,
eventtype VARCHAR(10) NULL,
contacttype VARCHAR(2) NULL,
apptkept VARCHAR(2) NULL,
ref_client VARCHAR(32) NULL,
ref_event VARCHAR(32) NULL,
ref_contactlog VARCHAR(32) NULL,
ref_popost VARCHAR(32) NULL
)
I'm convinced the problem isn't with the table structure, since I can migrate the data in the table above if I do each half separately.
I'm going to follow Pat's advice and do the table in two parts, then put it back together. It will just mean working out the SQL to repair the foreign key links, with the downside that the conversion will take a lot longer than I had hoped when I do it for real on our live database. Thanks for your suggestions/help.
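For anyone attempting the same thing, the two-part approach might look something like this in Transact-SQL. The split point, the staging table names, and the destination table CONTACTLOG_MIGRATED are all assumptions for illustration; only the URN column is taken from the table definition above:

```sql
-- Copy each half into its own staging table, using an assumed URN midpoint.
-- Run the third-party conversion executable against each half separately.
SELECT * INTO CONTACTLOG_PART1 FROM CONTACTLOG WHERE URN <  550000;
SELECT * INTO CONTACTLOG_PART2 FROM CONTACTLOG WHERE URN >= 550000;

-- After both halves have been converted, recombine them in the new database.
INSERT INTO CONTACTLOG_MIGRATED SELECT * FROM CONTACTLOG_PART1;
INSERT INTO CONTACTLOG_MIGRATED SELECT * FROM CONTACTLOG_PART2;
```

The foreign-key repair afterwards would depend on how the conversion generates the new ref_* key fields, so it can't be sketched generically here.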