  1. #1
    Join Date
    Aug 2005
    Posts
    15

    Unanswered: BLOB, iBatis & PostgreSql 8.1

    Greetings!

    I got lost trying to save and read an image file from a db.

    I have file upload taken care of: a servlet populates a bean with the image file and other info,
    but I can't get it written to the db.

    What's the best way to proceed, starting with the bean?

    Would this be appropriate:
    Code:
    private MultipartFile imageBody;
    Given the following table:
    Code:
    CREATE TABLE story_image 
    (
    	image_name character varying ( 30 ),
    	image_description character varying ( 50 ),
    	image_body oid
    );
    ... what's the correct way to make the insert?
    I'm using iBatis for handling the jdbc part of the app.

    ... partial iBatis sqlMap:
    Code:
    <insert id="saveImage" parameterClass="storyImage" >
        
        INSERT INTO story_image 
            (
                image_name,
                image_description,
                image_body
            )
            VALUES
            (   #imageName#,
                #imageDescription#,
                lo_import( #imageBody# )
            );
        
    </insert>
    ... from what I gathered, pg uses a path to the image file for importing it, so should I store the file in a temp dir on
    the server and feed pg the path to it?
    Would the bean then need a String for the imageBody variable instead of the MultipartFile?
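    If the lo_import route is taken, the bean would indeed carry a String path rather than the MultipartFile. A minimal sketch of the staging step (hypothetical class and method names; it assumes the raw bytes have already been pulled out of the MultipartFile, e.g. via its getBytes() method):
    Code:
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ImageStaging {

        // Writes the uploaded bytes to a temp file on the server and returns
        // the absolute path, which can then be handed to pg's lo_import().
        public static String stageForImport(byte[] imageBytes) throws IOException {
            Path tmp = Files.createTempFile("story_image_", ".img");
            Files.write(tmp, imageBytes);
            return tmp.toAbsolutePath().toString();
        }

        public static void main(String[] args) throws IOException {
            byte[] fake = {(byte) 0xFF, 0x00, 0x42};
            String path = stageForImport(fake);
            System.out.println(path);
        }
    }
    Note the temp file lives on the same machine as the servlet; this only works as-is if the app server and pg server share a filesystem (or the path is otherwise reachable by the postgres backend).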

    Am I using the right datatype for this operation (oid)?

    Before this I was able to store the image file in the db as 'bytea' but ran into problems trying to retrieve it:

    INFO: Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
    org.springframework.dao.DataIntegrityViolationException: SqlMapClient operation; SQL [];
    --- The error occurred in com/persistance/sqlmaps_xml/imagesSqlMap.xml.
    --- The error occurred while applying a result map.
    --- Check the getImageResult.
    --- Check the result mapping for the 'imageBody' property.
    --- Cause: org.postgresql.util.PSQLException: Bad value for type int : \377 ... rest omitted...
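    For reference, that "Bad value for type int" usually means the result map is reading the bytea column as an int. If the bytea route is retried, the mapping would need to tell iBatis the column is binary; roughly along these lines (a hypothetical reconstruction of the sqlMap, untested):
    Code:
    <resultMap id="getImageResult" class="storyImage">
        <result property="imageName"        column="image_name"/>
        <result property="imageDescription" column="image_description"/>
        <!-- map the bytea column to a Java byte[] rather than letting it default -->
        <result property="imageBody" column="image_body" jdbcType="BINARY"/>
    </resultMap>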


    ... so I'll appreciate any feedback you might have.

  2. #2
    Join Date
    Jun 2004
    Location
    Arizona, USA
    Posts
    1,848
    Quote Originally Posted by dbtomek
    ... from what I gathered, pg uses a path to the image file for importing it
    This is incorrect. pg stores large objects in a separate system table, pg_largeobject; hence the oid pointer in your application table. The lo_import function moves the file into the large object table and returns the oid of the record where it was inserted; the lo_export function copies the data at the supplied oid out to the file location specified. The paths passed to lo_import and lo_export are resolved on the server. If you specified the root folder, it would mean the root folder on the server, not at the client.

    Note that the pg_largeobject table does NOT store the file name; if you wish to keep the file name, you will need to store it separately in the parent table. (pg_largeobject contains the oid, a page number (int32), and the data (bytea).)
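    To illustrate the round trip (file names and paths here are made up, and both paths are resolved on the server):
    Code:
    -- import: the backend reads the server-side file and returns a new large object oid
    INSERT INTO story_image (image_name, image_description, image_body)
    VALUES ('sunset.jpg', 'test image', lo_import('/tmp/sunset.jpg'));

    -- export: copies the large object back out to a server-side file
    SELECT lo_export(image_body, '/tmp/sunset_out.jpg')
    FROM story_image
    WHERE image_name = 'sunset.jpg';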
    Last edited by loquin; 07-18-07 at 15:24.
    Lou
    使大吃一惊 ("to astonish greatly")
    "Lisa, in this house, we obey the laws of thermodynamics!" - Homer Simpson
    "I have my standards. They may be low, but I have them!" - Bette Midler
    "It's a book about a Spanish guy named Manual. You should read it." - Dilbert


  3. #3
    Join Date
    Aug 2005
    Posts
    15
    ... that's what I was suspecting. Does the file have to be stored in a 'real' directory or can it be accessed from the temp dir where it resides after upload?
    Then during retrieval, can it sit in the temp directory or once again does it have to be saved into some permanent directory?

    Could the byte[] data be saved in a bytea column and reconstructed back into a file?
    That would make things a whole lot easier, I want to hope.
    The column type would be bytea and the Java type would be byte[]; this way dealing with BLOBs could be avoided. ... or am I completely wrong in avoiding the BLOB?

    ... thanks in advance, I just might be getting this
    t

  4. #4
    Join Date
    Jun 2004
    Location
    Arizona, USA
    Posts
    1,848
    Quote Originally Posted by dbtomek
    ... that's what I was suspecting. Does the file have to be stored in a 'real' directory or can it be accessed from the temp dir where it resides after upload?
    So long as the postgres user has read access to the temp folder (or you write a stored function and assign IT rights to the folder), you should be OK.

    Quote Originally Posted by dbtomek
    Then during retrieval, can it sit in the temp directory or once again does it have to be saved into some permanent directory?
    Again, if the postgres user has write access to that folder, you should be OK (or, as above, use a stored function with appropriate rights to do the extract).
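    A sketch of that stored-function approach (untested; the function name is made up, and it would be created by a superuser so it carries the needed rights):
    Code:
    -- SECURITY DEFINER runs the function with the rights of its owner,
    -- so ordinary users can trigger the import without calling lo_import directly.
    CREATE FUNCTION import_image(path text) RETURNS oid AS $$
        SELECT lo_import($1);
    $$ LANGUAGE sql SECURITY DEFINER;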

    Quote Originally Posted by dbtomek
    Could the byte[] data be saved in a bytea column and reconstructed back into a file?
    So long as the file is within the limits of the bytea column (less than 1 GB), you should be able to use it directly. I'm not a Java coder, though, so I couldn't tell you the particulars.

    EDIT - I ran some tests using bytea fields. They are truly a pain to work with. Although bytea (short for "byte array") ultimately stores binary data, its text representation is an escaped ASCII string, NOT raw binary. To store raw binary data in a bytea field through a plain SQL literal, you must first pre-process every byte: "printable" characters pass through as-is, with a couple of exceptions - the apostrophe (') and the backslash (\) must themselves be escaped - while all non-printable characters, including the null character (ascii code 0), must be converted to octal and preceded by a backslash. In the worst case, therefore, the 1 GB bytea field might hold only 250 megabytes of pre-converted data. Of course, what goes around must come around: when retrieving the bytea data, you must reverse the escaping to restore the original data before you can actually use it. In addition, there seem to be some issues with the pg OLE DB and ODBC drivers in conjunction with bytea fields.
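    That escaping can be sketched in Java like this (a rough illustration of pg's old bytea escape format, not the driver's actual code; note the \377 it produces for byte 0xFF is exactly what showed up in the exception earlier in the thread):
    Code:
    public class ByteaEscape {

        // Escape raw bytes into pg's (pre-9.0) bytea escape-format text:
        // printable ASCII passes through, except ' (doubled for the SQL literal)
        // and \ ; everything else becomes a backslash plus three octal digits.
        public static String escape(byte[] data) {
            StringBuilder sb = new StringBuilder();
            for (byte b : data) {
                int v = b & 0xFF;
                if (v == '\'') {
                    sb.append("''");
                } else if (v == '\\') {
                    sb.append("\\\\");
                } else if (v >= 32 && v <= 126) {
                    sb.append((char) v);
                } else {
                    sb.append(String.format("\\%03o", v));
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            byte[] raw = {0x41, 0x00, (byte) 0xFF};   // 'A', NUL, 0xFF
            System.out.println(escape(raw));          // A\000\377
        }
    }
    So a worst-case byte costs four characters, which is where the 4-to-1 blowup comes from.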

    When using the large object, all this conversion-and-restore bit twiddling is handled by the built-in functions, so the large object approach is actually MORE compatible with other DBMSs. Not to mention that network file transfer protocols are much more efficient than database data transfer; in my testing, large object processing took about 1/10 the time of bytea processing...
    Last edited by loquin; 07-25-07 at 02:15.
    Lou

