I have a DB2 CLI program that reads/writes UTF-8 data from/to a DB2
database on NT and Solaris. Since my application deals with UTF-8, I
have set DB2CODEPAGE to 1208.

I have a problem in a particular scenario:

Suppose I have a 10-character column in a Japanese database containing
four Japanese characters (they appear garbled as ",,,," in this post).
The data is stored properly in the database via the Command Center. When
retrieving this data via my application, the memory allocation is as
follows:

m_psResult[i] = new char[(col_len + 1) * 6];
memset(m_psResult[i], '\0', (col_len + 1) * 6);
ret = SQLBindCol(m_stmtHandle, (SQLUSMALLINT)(i + 1), SQL_C_CHAR,
                 (SQLPOINTER)m_psResult[i], (col_len + 1) * 6,
                 &m_rlenp[i]);

(I believe I am allocating enough memory: number of characters * 6
bytes. In my case col_len is 10, and each Japanese character requires
3 bytes in UTF-8, so the four characters need 4 * 3 = 12 bytes in all,
while I am allocating (10 + 1) * 6 = 66 bytes.)

Yet when I execute SQLFetch, I get a truncation error, and only the
first three of the four characters arrive in m_psResult[i] as UTF-8.
Also, m_rlenp[i] contains 10 after the call. So it appears that even
though I allowed 6 bytes per character, the fetch is being truncated at
10 bytes.

Any suggestions?