Hi,
We store text in seven languages in a single table in an Oracle 9i database running on Sun Solaris.
We insert the texts for all seven languages with a single .sql script, which we run over a telnet session to the Oracle server on Solaris.

The seven languages are:
1) English
2) French
3) German
4) Portuguese
5) Spanish
6) Turkish
7) Greek

The first five languages give us no problems: on retrieval, their texts come back from the database correctly and display properly in our J2EE application.

When we try to retrieve Turkish or Greek text, this is what we get on the console:

java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv

I don't know why this happens.

Our usual workflow is this: all the language texts live in Excel sheets, and from there we copy the texts and paste them into a Unicode editor such as UniPad.

Finally, we save the script with the encoding set to UTF-8.
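
For illustration, the script contains plain INSERT statements along these lines (the table and column names here are just placeholders, not our real schema):

    -- Illustrative only: table/column names are placeholders.
    -- The literals are pasted from UniPad and the file is saved as UTF-8.
    INSERT INTO message_text (lang_code, message) VALUES ('TR', 'Türkçe örnek metin');
    INSERT INTO message_text (lang_code, message) VALUES ('EL', 'Ελληνικό κείμενο');
    COMMIT;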

Could anyone let me know the best way of creating scripts for languages that don't belong to the Western European character set?

The problem arises only when the Oracle server resides on Sun Solaris.

When the same script is run against Oracle on Windows, the Turkish and Greek characters come back perfectly fine on retrieval.
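
In case the two servers are configured differently, a standard way (as far as I know) to compare their character sets is to query the data dictionary on each one:

    -- Compare the database character set and national character set on each server
    SELECT parameter, value
    FROM nls_database_parameters
    WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');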

I feel I have gone wrong somewhere in one of the following areas, but I am not able to trace the issue:

A) the way the scripts are created
B) the way the scripts are run through telnet
C) the Java code (though I am fairly sure it is not this, since the exception is thrown by executeQuery() itself)
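
To narrow down whether the fault lies in (A) or (B), I believe one can inspect what bytes were actually stored by using Oracle's DUMP function, roughly like this (again, table and column names are placeholders):

    -- DUMP with format 1016 shows the character set name and the stored bytes in hex
    SELECT message, DUMP(message, 1016)
    FROM message_text
    WHERE lang_code IN ('TR', 'EL');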


Why does Oracle's behaviour differ like this, when the same script works fine on Windows but not on Solaris?
Could anyone provide me with a solid solution?

Earnestly awaiting your valuable responses.