I'd like some advice from experts on the NLS/globalisation features of Oracle.
We have a 3-tier J2EE application that is now to be implemented for a customer whose users will enter data in different languages: Korean, Japanese, and English. The obvious choice that comes to my mind is making the database character set Unicode, preferably UTF-16, since it encodes most characters (including Korean and Japanese) in a fixed 2 bytes, and multiplying the column widths of all tables by 2. Since UTF-8 is variable-width, up to 3 bytes per character for these languages, I don't want to multiply column widths by 3, because some of the fields are currently VARCHAR(2000).
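To make the width trade-off concrete, here is a small stand-alone Java sketch (not part of the application, just an illustration) comparing how many bytes sample ASCII, Korean, and Japanese characters take in UTF-8 versus UTF-16. Korean and Japanese characters cost 3 bytes in UTF-8 but 2 in UTF-16, while plain ASCII costs 1 byte in UTF-8 and 2 in UTF-16:

```java
import java.nio.charset.StandardCharsets;

public class CharWidths {
    public static void main(String[] args) {
        // Sample characters: ASCII "A", Korean Hangul, Japanese Hiragana
        String[] samples = {"A", "한", "あ"};
        for (String s : samples) {
            int utf8Bytes  = s.getBytes(StandardCharsets.UTF_8).length;
            int utf16Bytes = s.getBytes(StandardCharsets.UTF_16BE).length;
            System.out.println(s + "  UTF-8=" + utf8Bytes
                                 + "  UTF-16=" + utf16Bytes);
        }
        // Prints:
        // A  UTF-8=1  UTF-16=2
        // 한  UTF-8=3  UTF-16=2
        // あ  UTF-8=3  UTF-16=2
    }
}
```

So for mostly-Asian data UTF-16 is the more compact and predictable sizing, while for mostly-English data UTF-8 is smaller; that is the crux of the "multiply by 2 or by 3" question.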
So, is this the right approach? Is there a better way of doing it?
Second, at the user-entry level, i.e. in the browser, we will set the encoding to UTF-8 (because that's the only choice available there). Will this combination of UTF-8 in the browser and UTF-16 in the database work?
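My understanding (a sketch, not a verified answer) is that this combination should work, because both encodings cover the full Unicode repertoire: the servlet container decodes the browser's UTF-8 bytes into a Java String, which is UTF-16 internally, and the JDBC driver re-encodes on the way to the database. The round trip is lossless, as this minimal stand-alone example shows; in a real servlet you would also call `request.setCharacterEncoding("UTF-8")` before reading parameters:

```java
import java.nio.charset.StandardCharsets;

public class EncodingRoundTrip {
    public static void main(String[] args) {
        // Mixed English/Korean/Japanese text, as a user might type it
        String original = "Hello 안녕 こんにちは";

        // What arrives over the wire from a UTF-8 browser form post
        byte[] fromBrowser = original.getBytes(StandardCharsets.UTF_8);

        // The container decodes UTF-8 bytes into a Java String
        // (Java Strings are UTF-16 internally)
        String decoded = new String(fromBrowser, StandardCharsets.UTF_8);

        System.out.println(decoded.equals(original)); // prints "true"
    }
}
```

The conversion between the two Unicode encodings is a pure transcoding step, so no characters are lost between the browser and the database tier.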
I'd appreciate it if anyone could share their experiences and knowledge in this matter.