  1. #1
    Join Date
    Nov 2007
    Posts
    27

    Unanswered: GSE2204N? db2se export_shape error

    I am trying to export some spatial data into a shape file using db2se export_shape,
    and I am getting the following error:

    GSE2204N The column name "col1" exceeds the limit of 11
    characters for dBASE files.

    db2 ? GSE2204N


    GSE2204N The column name "<column-name>" exceeds the limit of 11
    characters for dBASE files.

    Explanation:

    The column name "<column-name>" exceeds the restriction of 11
    bytes for dBASE files. No data is exported.

    User Response:

    Specify a column name with 11 or less bytes for the attribute
    data to be exported to the dBASE file (.dbf).

    msgcode : -2204

    sqlstate : 38S94


    But col1 is an INTEGER of length 4, and the same thing happens for some other
    VARCHAR columns of lengths 5, 30, etc., which I think are all shorter than 254 characters.

    Could anyone please tell me the possible reason?
    Any help is appreciated.

  2. #2
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Are you by any chance mixing up "column names" with "values stored in a column"? Since you have shown a lower-case name "col1", I assume you renamed it to anonymize things for the post here. If possible, please show us the exact statement, because you may be hiding the source of the error from us. Could you please post the exact command that you are trying to execute?

    The issue here is that the dBase format mandates that column names (not the values in those columns) must not be longer than 11 bytes. If a column name is longer than 11 bytes, we don't know what to do with it. The values in the columns can be up to 254 bytes for strings, for example.

    The column names in shape files, specifically in the dBase file containing the non-spatial attributes, are derived from the query that you provide to the export command/procedure. You can specify any arbitrary, valid subselect, but the column names used in the outer-most select of your query must adhere to the mentioned restriction. You can accommodate that by renaming the columns with traditional, standard SQL mechanisms:
    Code:
    SELECT my_very_long_column_name AS my_column
    FROM   my_table
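    For illustration, a minimal sketch of how such an aliased query could be passed to the export procedure; the database name, file paths, and the spatial column "shape" are placeholders, not actual objects from this thread:
    Code:
    db2se export_shape mydb
        -fileName /tmp/my_export
        -selectStatement "SELECT my_very_long_column_name AS my_column, shape FROM my_table"
        -messagesFile /tmp/my_export.msg
    The aliases are what end up as the column names in the generated .dbf file, so each alias must stay within the 11-byte limit.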
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development

  3. #3
    Join Date
    Nov 2007
    Posts
    27
    Thank you, Knut Stolze, for your reply.
    I was able to export the data (I shortened the column names using aliases).

    However, the source table has columns defined as NOT NULL, and the data in those columns was inserted as blank spaces.
    So when I export using

    db2se export_shape <db_name>
    -fileName <file_name with complete path>
    -selectStatement <statement selecting columns with alias names>
    -messagesFile <msg_file with complete path>

    the blank spaces are being exported as NULL values. Instead of a blank space I get a dash (-), which I guess represents NULL; I think DB2 is treating blank spaces as NULL values.
    So when I import the exported data into the target table (which already exists), I get an error along the lines of: EXPORTING NULL VALUES INTO NON NULL VALUES IS NOT ALLOWED.
    And when I import the exported data into a new table (one that does not exist yet), the data is imported into a table whose columns accept NULLs. The data types of many columns are also changed; the most unusual thing for me was that the TIMESTAMP column in the source was changed to VARCHAR after importing into the new table.

    I am using the following command:
    db2se import_shape db_name -fileName <file_name which has .shp extension> -srsName srs_name -tableSchema table_schema -tableName table_name -spatialColumn spatial_column

    The DDL for the source table is given below:
    Code:
    CREATE TABLE udbts.spat_data (
        PK_ADDRESS_CONTACT_ID       INTEGER NOT NULL,
        STAKEHOLDER_NB              BIGINT NOT NULL,
        ADDRESS_TYPE_CD             CHAR(5) NOT NULL,
        STREET_ADDRESS1_TX          CHAR(30) NOT NULL,
        STREET_ADDRESS2_TX          CHAR(30) NOT NULL,
        STREET_ADDRESS3_TX          CHAR(30) NOT NULL,
        STATE_CD                    CHAR(2) NOT NULL,
        ZIP_6                       CHAR(6) NOT NULL,
        ZIP_4                       CHAR(4) NOT NULL,
        CITY_NM                     CHAR(18) NOT NULL,
        COUNTY_CD                   CHAR(5) NOT NULL,
        COUNTRY_CD                  CHAR(5) NOT NULL,
        CONTACT_STATUS_FG           CHAR(1) NOT NULL,
        MONTHS_STAYED_AT_ADDRESS_QY SMALLINT NOT NULL,
        RECORD_UPDATE_TS            TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP,
        RECORD_INSERT_TS            TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP,
        RECORD_DELETE_FG            CHAR(1) NOT NULL WITH DEFAULT 'N',
        RECORD_INSERT_PROGRAM_ID    CHAR(30) NOT NULL WITH DEFAULT ' ',
        GEOCODER_LATITUDE           DECIMAL(31,8),
        GEOCODER_LONGITUDE          DECIMAL(31,8),
        GEOCODER_LOCATION_TYPE      VARCHAR(50),
        GEOCODER_MESSAGE            VARCHAR(50),
        GEOCODER_SCORE              SMALLINT,
        GEOCODER_ZIP_PLUS_4         SMALLINT,
        GEOCODER_ADDRESS1           VARCHAR(50),
        GEOCODER_ADDRESS2           VARCHAR(40),
        GEOCODER_CITY               VARCHAR(50),
        GEOCODER_STATE              VARCHAR(2),
        GEOCODER_ZIP                VARCHAR(7),
        OBJECTID                    INTEGER NOT NULL WITH DEFAULT,
        SHAPE                       DB2GSE.ST_POINT
    ) IN REGTBS INDEX IN IDXTBS LONG IN LOBTS;
    1) How can I retain the data types after importing, without them being changed?
    2) How can I get blank spaces instead of NULL values?

    I have posted a very long question; I hope you won't be put off by the long sentences.
    Thank you again, Knut Stolze, in advance.
    Last edited by rajaryan4545; 02-06-08 at 02:47.

  4. #4
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Quote Originally Posted by rajaryan4545
    but the source table has columns defined as NOT NULL, and the data in those columns was inserted as blank spaces.
    So when I export using
    db2se export_shape <db_name>
    -fileName <file_name with complete path>
    -selectStatement <statement selecting columns with alias names>
    -messagesFile <msg_file with complete path>

    the blank spaces are being exported as NULL values. Instead of a blank space I get a dash (-), which I guess represents NULL; I think DB2 is treating blank spaces as NULL values.
    So when I import the exported data into the target table (which already exists), I get an error along the lines of: EXPORTING NULL VALUES INTO NON NULL VALUES IS NOT ALLOWED.

    And when I import the exported data into a new table (one that does not exist yet), the data is imported into a table whose columns accept NULLs.
    I can't quite follow where exactly the NULLs occur. The message you quote does not originate from the DB2 Spatial Extender, so you should explain the details for us.

    dBase files (.DBF) do not have the notion of NULL indicators for values. One can only set a '*' in the first field of a record to mark that record as deleted.

    NULLs are a serious problem because we somehow have to take care of the loss of information when moving data from DB2 to dBase files. During shape export, we store an all-blanks value in the DBF file instead. To be consistent, we do the reverse upon shape import, i.e. if a value consists only of blanks (or only of '\0's), we interpret it as a SQL NULL.

    It would be possible to:
    (a) check if the column is a string column, and if it is
    (b) test whether the column is nullable, and if it is not
    (c) insert the blanks-only value into the DB2 table.

    If you need something like that, you should open a PMR with IBM support.

    and the data types of many columns are also changed; the most unusual thing for me was that the TIMESTAMP column in the source was changed to VARCHAR after importing into the new table.
    That has an easy answer: dBase files don't know about timestamps, so we can only export datetime values as strings. When you import such a file again, we see the string data type and, therefore, create a column of type VARCHAR in DB2. How else could we know that the values in the dBase file are timestamps?

    Likewise, dBase files store numbers as text. So if we export an INTEGER, we have to account for 10 digits + sign = 11 characters. Again, when we import such a file with ST_import_shape (or db2se import_shape), all we see is: numeric, 11 characters. We have no idea whether the 11th character is used for the sign and whether the 10th is only ever blank, '0', '1', or '2'. Therefore, we cannot guarantee that only values smaller than 2.14... billion occur, and we create a BIGINT column instead. We could scan the whole dBase file first, but that would be a performance problem. To address such issues, you have to create the table first with the data types you want and then import into the existing table.

    1) How can I retain the data types after importing, without them being changed?
    Due to the insufficient schema information in shape files, you will have to take care of this manually, i.e. use db2look or some other mechanism to extract the table definition, create the target table yourself, and then import into the existing table.
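    For illustration only, a rough sketch of that workflow; the database name, file paths, SRS name, and new table name are placeholders, and the import command simply mirrors the one already quoted in this thread:
    Code:
    db2look -d <db_name> -z UDBTS -t SPAT_DATA -e -o target.ddl
       (edit target.ddl to use the name of the new target table, then run it:)
    db2 -tvf target.ddl
    db2se import_shape <db_name> -fileName <file_name with .shp extension> -srsName <srs_name>
         -tableSchema UDBTS -tableName <new_table_name> -spatialColumn SHAPE
    Because the table already exists when import_shape runs, the column data types (TIMESTAMP, INTEGER, etc.) are taken from your DDL instead of being guessed from the dBase file.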

    2) How can I get blank spaces instead of NULL values?
    As stated above, you can either open a PMR or work around this issue. To work around it, you could add a BEFORE INSERT trigger on your table that changes a NULL into a blanks-only string, as sketched below.
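    As an illustration only, such a trigger might look roughly like this for one of the CHAR columns from the DDL above; the trigger name is made up, and you would need a corresponding WHEN/SET for each affected column:
    Code:
    CREATE TRIGGER blank_city
      NO CASCADE BEFORE INSERT ON udbts.spat_data
      REFERENCING NEW AS n
      FOR EACH ROW MODE DB2SQL
      WHEN (n.CITY_NM IS NULL)
        SET n.CITY_NM = ' '
    The BEFORE trigger rewrites the transition variable before the NOT NULL constraint is checked, so the insert performed by import_shape no longer fails.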

    Also, blanks-only values are really strange. It may be a good idea to revisit your data model and figure out whether NULLs aren't a much better choice.

    P.S.: If you open a PMR, you should have a strong case to support your requirement of inserting blanks-only strings. If I recall correctly, I read somewhere, back when I implemented this, that blanks-only indicates the absence of a value in dBase.
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development

  5. #5
    Join Date
    Oct 2009
    Posts
    5
    Hi folks, if you are still interested in how to repair MS SQL tables, I may recommend a program that automatically eliminates DBF errors. It is a good solution if you experience any difficulties with DBF files. Hope this helps.
