  1. #1
    Join Date
    Sep 2003
    Location
    Norwich, England
    Posts
    7

    Failing at 1048576 records

    I'm using an executable (written in C++ I believe) that has been provided to me to upgrade a database structure to work with a new front-end - all software provided by a third party (who are now out of business).

    The conversion process in this executable works on all the tables in my database bar one. It basically creates a copy of each table and inserts new key information into fields appended to the copies, so that an SQL script can then be run to create the new database.

    The table it fails on has some 1,111,491 records and I can get the conversion process to work if I delete records from this table to bring it below 1,048,576 records. It's not lost on me that the figure mentioned is equal in size to 1MB.
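
    For what it's worth, a quick count like this (TEMP_CONTACTLOG being the table in question) shows how far over that boundary I am:

    -- Rows beyond the 2^20 = 1,048,576 mark
    SELECT COUNT(*) - POWER(CAST(2 AS BIGINT), 20) AS rows_over_limit
    FROM TEMP_CONTACTLOG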

    What I would like to know is: is this a problem with the executable itself, i.e. some foible of the recordset size used in C++, or is it related to settings within SQL Server?

    The executable connects to a system DSN using ODBC.
    I'm running Windows 2000 Professional and SQL Server 2000.

    It would be great if this problem were related to SQL Server and ODBC settings, because then at least one of you might be able to give me some hints on how to solve it.

    Thanks in advance!

    Euge

  2. #2
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    A million records equal to 1MB?

    I don't think so.

    Without any other details, I would say you have a datatype problem.
    Brett
    8-)

    It's a Great Day for America everybody!

    dbforums Yak Corral
    Radio 'Rita
    dbForums Member List
    I'm Good Once as I ever was

    The physical order of data in a database has no meaning.

  3. #3
    Join Date
    Sep 2003
    Location
    Norwich, England
    Posts
    7
    I think you misunderstand. I didn't mean the recordset size was equal to 1MB. I meant that 1MB = 1,048,576 bytes, i.e. 2 raised to the power 20, and wondered whether this number had some significance, i.e. whether, as you mentioned, there's a datatype that would overflow at this value?

    However, from my limited knowledge of C++ datatypes, I can only think of integer types with maximum values much lower than 1,048,576 or much higher! I therefore wonder whether the problem has some connection with the CRecordset class or something similar?
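
    Just to put numbers on it, here is where 1,048,576 sits relative to the usual signed integer limits (checked in SQL Server, since I can't easily test C++ here):

    -- 2^15-1 (16-bit signed max), 2^20 (the failure point), 2^31-1 (32-bit signed max)
    SELECT POWER(CAST(2 AS BIGINT), 15) - 1 AS int16_max,     -- 32,767
           POWER(CAST(2 AS BIGINT), 20)     AS failure_point, -- 1,048,576
           POWER(CAST(2 AS BIGINT), 31) - 1 AS int32_max      -- 2,147,483,647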

  4. #4
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    Can you post the DDL of the table?

    How much data are we talking about in total?
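
    Something like this run against the offending table would tell us:

    -- Reports row count plus reserved, data, and index sizes for a table
    -- (substitute the real table name)
    EXEC sp_spaceused 'YourTableName'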
    Brett
    8-)


  5. #5
    Join Date
    Sep 2003
    Location
    Norwich, England
    Posts
    7
    I'll see what I can rustle up. It will have to wait until Monday though, because I'm now home from work and unfortunately can't get at the database from here as it's on a standalone PC.

  6. #6
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    If the 2^20 pointers/indices were to 4 KB buffers, this would conveniently make 4 GB, a full 32-bit address space. Many applications tend to "break" at that point, and many C++ compilers did funky things with indices that exceeded that size until just a few years ago.
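
    You can check the arithmetic easily enough:

    -- 2^20 buffers of 4 KB each is exactly 2^32 bytes (4 GB)
    SELECT POWER(CAST(2 AS BIGINT), 20) * 4096 AS total_bytes  -- 4,294,967,296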

    Is it possible to split your table/database/application into two pieces, migrate each of them, then combine the migrated parts back together?
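
    A rough sketch of the split, assuming you have some column that gives a workable dividing line (the column and table names here are just placeholders):

    -- Find a midpoint value, then copy each half into its own table
    -- so the conversion can run on two smaller pieces.
    DECLARE @midpoint INT
    SELECT @midpoint = MAX(SplitColumn)
    FROM (SELECT TOP 50 PERCENT SplitColumn
          FROM YourTable ORDER BY SplitColumn) AS lower_half

    SELECT * INTO YourTable_A FROM YourTable WHERE SplitColumn <= @midpoint
    SELECT * INTO YourTable_B FROM YourTable WHERE SplitColumn >  @midpoint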

    -PatP

  7. #7
    Join Date
    Sep 2003
    Location
    Norwich, England
    Posts
    7
    DDL:

    CREATE TABLE TEMP_CONTACTLOG(
    ACCEPTABLE VARCHAR(1) NULL,
    ACTION VARCHAR(50) NULL,
    CONTACTDATE DATETIME NULL,
    CONTACTID DATETIME NOT NULL,
    CONTACTTIME VARCHAR(5) NULL,
    DBNUMBER INT NULL,
    EVENTID INT NOT NULL,
    EXTNOTES TEXT NULL,
    EXTNOTESINDICATOR VARCHAR(2) NULL,
    NEXTAPPTDATE DATETIME NULL,
    NEXTAPPTTIME VARCHAR(8) NULL,
    POCODE VARCHAR(4) NULL,
    PODBNUMBER INT NULL,
    POPERSONALNO VARCHAR(7) NULL,
    POPOSTREF INT NULL,
    REF INT NULL,
    TEAMCODE VARCHAR(3) NULL,
    TIMESPENT INT NULL,
    URN INT NOT NULL,
    eventtype VARCHAR(10) NULL,
    contacttype VARCHAR(2) NULL,
    apptkept VARCHAR(2) NULL,
    ref_client VARCHAR(32) NULL,
    ref_event VARCHAR(32) NULL,
    ref_contactlog VARCHAR(32) NULL,
    ref_popost VARCHAR(32) NULL
    )

    1,111,491 records.

    I'm convinced the problem isn't with the table structure since I can migrate the data in the table above if I do each half separately.

    I'm going to follow Pat's advice and do the table in two parts and put it back together. It'll just mean working out the SQL to repair the foreign key links, with the downside that the conversion will take a lot longer than I had hoped when I do it for real on our live database. Thanks for your suggestions and help.
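
    Recombining the two converted halves should then just be a UNION (the _A/_B names are the hypothetical split tables from Pat's sketch), after which I can fix up the ref_* link columns with UPDATEs:

    -- Stitch the two converted halves back into a single table
    SELECT * INTO TEMP_CONTACTLOG_NEW FROM TEMP_CONTACTLOG_A
    UNION ALL
    SELECT * FROM TEMP_CONTACTLOG_B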
