I will just rephrase my query:

I have a remote DB2 database that stores data in some character set that my application does not know about (it may be Japanese, English, etc.). I want to access this database, i.e. transfer data to/from it, as UTF-8.

My application, running on NT, writes UTF-8 data to the DB2 client through the CLI APIs, and the client then transfers it to the DB2 server. However, the DB2 client interprets the data as ordinary single-byte characters in the Windows CP 1252 code page, so garbage values end up flowing to/from the database.
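
To make this concrete, here is roughly what the write path looks like. The table (MYTAB), data source name (remotedb) and the Japanese sample value are made up purely for illustration, and error checking is omitted:

    /* Sketch of the current write path: the buffer already holds UTF-8
       bytes, but it is bound as plain character data, so the client
       treats it as CP 1252 text. */
    #include <sqlcli1.h>          /* DB2 CLI header */

    int insert_utf8(void)
    {
        SQLHENV  henv;
        SQLHDBC  hdbc;
        SQLHSTMT hstmt;

        /* UTF-8 bytes produced by the application ("nihongo" in Japanese) */
        SQLCHAR utf8val[] = "\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E";
        SQLLEN  len = SQL_NTS;    /* value is null-terminated */

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
        SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
        SQLConnect(hdbc, (SQLCHAR *)"remotedb", SQL_NTS,
                   (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"passwd", SQL_NTS);

        SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
        SQLPrepare(hstmt, (SQLCHAR *)"INSERT INTO MYTAB(NAME) VALUES (?)", SQL_NTS);

        /* Bound as SQL_C_CHAR, so the client converts from its own code
           page (CP 1252 on my NT box) to the server code page; this is
           where the corruption seems to happen. */
        SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                         sizeof(utf8val), 0, utf8val, sizeof(utf8val), &len);
        SQLExecute(hstmt);

        SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
        SQLDisconnect(hdbc);
        SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
        SQLFreeHandle(SQL_HANDLE_ENV, henv);
        return 0;
    }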

How can I configure the DB2 client to interpret this data as UTF-8? And what is the corresponding solution on Solaris?
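
For example, is a client-side code page override through the DB2 profile registry the right direction, something along the lines of the command below (I am only guessing here; 1208 is, as far as I can tell, DB2's code page number for UTF-8), or is there a different mechanism altogether?

    db2set DB2CODEPAGE=1208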

A related question: is it true that the DB2 client and server can use different character sets, as is the case with Oracle (where this is arranged by setting the NLS_LANG variable appropriately)?
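
To be clear about the Oracle comparison: there the client simply sets something like the following in its environment (the exact value is only an example), independently of the database character set, and the conversions happen transparently:

    NLS_LANG=AMERICAN_AMERICA.UTF8

I am hoping DB2 has an equivalent knob on the client side.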

Thanks and regards,