
What's the difference between UTF-8 and Unicode?

If someone asked you, "What is the difference between UTF-8 and Unicode?", would you be able to give a short and precise answer with confidence? In these days of internationalization, every developer should be able to. I suspect many of us do not understand these concepts as well as we should. If you feel you belong to this group, read this ultra-short introduction to character sets and encodings.

Actually, comparing UTF-8 and Unicode is like comparing apples and oranges:

UTF-8 is an encoding - Unicode is a character set

A character set is a list of characters with unique numbers (these numbers are sometimes referred to as "code points"). For example, in the Unicode character set, the number for A is 65 (41 in hexadecimal, usually written U+0041).
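In Java, you can look up a character's Unicode code point directly. A minimal sketch (the class name is mine; the methods are standard Java):

```java
public class CodePointDemo {
    // Return the Unicode code point of the first character in s.
    static int codePointOf(String s) {
        return s.codePointAt(0);
    }

    public static void main(String[] args) {
        // 'A' is code point 65 in decimal, U+0041 in hex.
        System.out.println(codePointOf("A"));            // 65
        System.out.printf("U+%04X%n", codePointOf("A")); // U+0041
    }
}
```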

An encoding, on the other hand, is an algorithm that translates a list of numbers to binary so it can be stored on disk. For example, UTF-8 would translate the number sequence 1, 2, 3, 4 like this (each number below 128 fits in a single byte):

00000001 00000010 00000011 00000100 

Our data is now binary and can be saved to disk.
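The same encoding step in Java: `String.getBytes` with an explicit charset turns a string into the bytes UTF-8 would write to disk. A small sketch (class and method names are mine; the API calls are standard):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8EncodeDemo {
    // Encode a string to its UTF-8 byte sequence.
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = encode("hello");
        // Every ASCII character encodes to exactly one byte under UTF-8.
        System.out.println(Arrays.toString(bytes)); // [104, 101, 108, 108, 111]
    }
}
```

Always name the charset explicitly; the no-argument `getBytes()` uses the platform default, which varies between systems.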

All together now

Say an application reads the following from the disk:

01101000 01100101 01101100 01101100 01101111 

The app knows this data represents a Unicode string encoded with UTF-8 and must show it as text to the user. The first step is to convert the binary data back to numbers, using the UTF-8 algorithm to decode it. In this case, the decoder returns this:

104 101 108 108 111 

Since the app knows this is a Unicode string, it can assume each number represents a character. We use the Unicode character set to translate each number to a corresponding character. The resulting string is "hello".
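The decoding step described above maps, in Java, to the `String(byte[], Charset)` constructor. A minimal sketch (class and method names are mine):

```java
import java.nio.charset.StandardCharsets;

public class Utf8DecodeDemo {
    // Decode a UTF-8 byte sequence back into a string.
    static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The bytes read from disk in the example above.
        byte[] data = {104, 101, 108, 108, 111};
        System.out.println(decode(data)); // hello
    }
}
```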

Conclusion

So when somebody asks you "What is the difference between UTF-8 and Unicode?", you can confidently give a short and precise answer:

UTF-8 and Unicode cannot be compared. UTF-8 is an encoding used to translate numbers into binary data. Unicode is a character set used to translate characters into numbers.

Reposted from: http://stackoverflow.com/questions/3951722/whats-the-difference-between-unicode-and-utf8

Java uses the Unicode standard character set

The Java language uses the Unicode standard character set. A single `char` value covers 65,536 code units, and the first 128 characters of the Unicode table are exactly the ASCII table. The letters of every national "alphabet" are characters in the Unicode table; for example, the Chinese character "你" is character number 20320 (U+4F60) in the Unicode table.
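The code-point claim for "你" is easy to verify: casting a Java `char` to `int` yields its Unicode code point. A minimal sketch (the class name is mine):

```java
public class HanCharDemo {
    public static void main(String[] args) {
        char c = '你';
        // Casting a char to int gives its Unicode code point.
        System.out.println((int) c);            // 20320
        System.out.printf("U+%04X%n", (int) c); // U+4F60
    }
}
```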

Java's notion of a "letter" covers the alphabets of every language in the world. The letters Java accepts therefore include not only the usual Latin letters a, b, c, and so on, but also Chinese characters, Japanese katakana and hiragana, Korean Hangul, and the scripts of many other languages.

From Wikipedia:

An early, widely deployed form of Unicode was UCS-2, which uses a 16-bit coding space: each character (`char`) occupies 2 bytes, so in theory at most 2^16 (65,536) characters can be represented, which was basically sufficient for the languages in use. In practice, that version of Unicode did not fill all 16 bits; large ranges were reserved for special use or future expansion.

The Java bytecode environment uses UTF-16 as its internal representation. UTF-16 is the successor to UCS-2 and likewise uses 16-bit code units, which is why Java's primitive type `char` is 16 bits wide, covering the range 0 to 2^16 − 1 (U+0000 to U+FFFF); code points above U+FFFF are stored as a pair of `char` values (a surrogate pair).
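The 16-bit nature of `char`, and the surrogate-pair mechanism for code points beyond U+FFFF, can be demonstrated with standard Java APIs. A minimal sketch (the class name is mine):

```java
public class CharRangeDemo {
    public static void main(String[] args) {
        // char covers exactly the 16-bit range 0 .. 65535.
        System.out.println((int) Character.MIN_VALUE); // 0
        System.out.println((int) Character.MAX_VALUE); // 65535

        // Code points beyond U+FFFF need two chars (a surrogate pair) in UTF-16.
        String clef = "𝄞"; // U+1D11E, MUSICAL SYMBOL G CLEF
        System.out.println(clef.length());                     // 2 (char count)
        System.out.println(clef.codePointCount(0, clef.length())); // 1 (code point count)
    }
}
```

This is why `String.length()` counts UTF-16 code units, not characters; use `codePointCount` when you need the number of actual Unicode characters.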
