I'm programming a Japanese Web application in ColdFusion for a client. This app will collect textbox and textarea data from people in Japan.
What's going to happen when somebody types in some Japanese characters and the survey tries to save them to the SQL 2000 database? Will it accept them? Do I need to do something special to the server so it will accept them? This is all brand new to me. I've never even seen a Japanese keyboard.
As long as you use Unicode data types (e.g. ntext, nvarchar, etc.) for the columns that store the characters, all should be fine. Note that you can have a combination of Unicode and non-Unicode columns in the same table.
Also, make sure the database collation (the combination of sort order and character set) will handle the characters. Some collations are kana-sensitive.
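As a sketch of the above (the table and column names here are made up for illustration), a table that can hold Japanese survey responses might look like this. Note the N prefix on the string literals, which marks them as Unicode:

```sql
-- Hypothetical survey table; names and collation are illustrative only
CREATE TABLE SurveyResponse (
    ResponseID     INT IDENTITY(1, 1) PRIMARY KEY,
    RespondentName NVARCHAR(100) COLLATE Japanese_CI_AS,  -- Unicode column with a Japanese collation
    Comments       NTEXT                                   -- long Unicode text
);

-- Without the N prefix, the literal would be squeezed through the
-- server's non-Unicode code page and the Japanese characters lost
INSERT INTO SurveyResponse (RespondentName, Comments)
VALUES (N'山田太郎', N'とても良いサービスでした。');
```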
The char(n) datatype stores fixed-length strings, and the varchar(n) datatype stores variable-length strings, in single-byte character sets such as Latin (English). Their national character counterparts, nchar(n) and nvarchar(n), store fixed- and variable-length strings in multibyte character sets such as Japanese. You can specify the maximum number of characters with n, or use the default column length of one character. For strings longer than 255 bytes, use the text datatype.
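A quick way to see the storage difference described above (a sketch, runnable in Query Analyzer) is to compare byte lengths: nvarchar stores two bytes per character, varchar one byte per character in a single-byte character set:

```sql
DECLARE @a VARCHAR(10), @b NVARCHAR(10);
SET @a = 'hello';
SET @b = N'こんにちは';
SELECT DATALENGTH(@a) AS VarcharBytes,    -- 5  (5 chars x 1 byte)
       DATALENGTH(@b) AS NvarcharBytes;   -- 10 (5 chars x 2 bytes)
```

So an nvarchar(10) column holds up to 10 Japanese characters, even though it occupies up to 20 bytes on disk.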
No, the TEXT datatype will only handle characters in the server's default (usually the OEM) character set. The NTEXT datatype will handle Unicode characters. Both have some silly limit on length, but it is [b]way[/b] out there (we are talking book-sized columns).
With an NTEXT column, you could store Kanji, Hiragana, Greek, Russian (Cyrillic), Arabic, and English translations of the same response in a single column.
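To make that concrete (again, table and column names are invented for the example, and NVARCHAR stands in for NTEXT to keep it short), a single Unicode column happily mixes scripts, while a non-Unicode column alongside it is fine for plain ASCII codes:

```sql
-- Illustrative only: one Unicode column holding several scripts
CREATE TABLE Translations (
    LangCode CHAR(2),         -- non-Unicode is fine for ASCII language codes
    Answer   NVARCHAR(4000)   -- Unicode: holds any script
);

INSERT INTO Translations VALUES ('ja', N'こんにちは');
INSERT INTO Translations VALUES ('ru', N'Здравствуйте');
INSERT INTO Translations VALUES ('en', N'Hello');
```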