This chapter specifies the lexical structure of the Java programming language.
Programs are written in Unicode (§3.1), but lexical translations are provided (§3.2) so that Unicode escapes (§3.3) can be used to include any Unicode character using only ASCII characters. Line terminators are defined (§3.4) to support the different conventions of existing host systems while maintaining consistent line numbers.
The Unicode characters resulting from the lexical translations are reduced to a sequence of input elements (§3.5), which are white space (§3.6), comments (§3.7), and tokens. The tokens are the identifiers (§3.8), keywords (§3.9), literals (§3.10), separators (§3.11), and operators (§3.12) of the syntactic grammar.
Programs are written using the Unicode character set. Information about this character set and its associated character encodings may be found at http://www.unicode.org/.
The Java SE platform tracks the Unicode Standard as it evolves. The precise version of Unicode used by a given release is specified in the documentation of the class Character.
Versions of the Java programming language prior to JDK 1.1 used Unicode 1.1.5. Upgrades to newer versions of the Unicode Standard occurred in JDK 1.1 (to Unicode 2.0), JDK 1.1.7 (to Unicode 2.1), Java SE 1.4 (to Unicode 3.0), Java SE 5.0 (to Unicode 4.0), Java SE 7 (to Unicode 6.0), and Java SE 8 (to Unicode 6.2).
The Unicode standard was originally designed as a fixed-width 16-bit character encoding. It has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, using the hexadecimal U+n notation. Characters whose code points are greater than U+FFFF are called supplementary characters. To represent the complete range of characters using only 16-bit units, the Unicode standard defines an encoding called UTF-16. In this encoding, supplementary characters are represented as pairs of 16-bit code units, the first from the high-surrogates range (U+D800 to U+DBFF), the second from the low-surrogates range (U+DC00 to U+DFFF). For characters in the range U+0000 to U+FFFF, the values of code points and UTF-16 code units are the same.
The Java programming language represents text in sequences of 16-bit code units, using the UTF-16 encoding.
Some APIs of the Java SE platform, primarily in the Character class, use 32-bit integers to represent code points as individual entities. The Java SE platform provides methods to convert between 16-bit and 32-bit representations.
This specification uses the terms code point and UTF-16 code unit where the representation is relevant, and the generic term character where the representation is irrelevant to the discussion.
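To make the distinction concrete, the following illustrative snippet (using standard methods of the Character and String classes) shows a supplementary character occupying two UTF-16 code units while counting as a single code point:

String s = new String(Character.toChars(0x1D482)); // a supplementary character, U+1D482
System.out.println(s.length());                      // 2: UTF-16 code units
System.out.println(s.codePointCount(0, s.length())); // 1: code point
System.out.println(Character.isHighSurrogate(s.charAt(0))); // true: first unit is a high surrogate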
Except for comments (§3.7), identifiers, and the contents of character and string literals (§3.10.4, §3.10.5), all input elements (§3.5) in a program are formed only from ASCII characters (or Unicode escapes (§3.3) which result in ASCII characters).
ASCII (ANSI X3.4) is the American Standard Code for Information Interchange. The first 128 characters of the Unicode UTF-16 encoding are the ASCII characters.
A raw Unicode character stream is translated into a sequence of tokens, using the following three lexical translation steps, which are applied in turn:
1. A translation of Unicode escapes (§3.3) in the raw stream of Unicode characters to the corresponding Unicode character. A Unicode escape of the form \uxxxx, where xxxx is a hexadecimal value, represents the UTF-16 code unit whose encoding is xxxx. This translation step allows any program to be expressed using only ASCII characters.

2. A translation of the Unicode stream resulting from step 1 into a stream of input characters and line terminators (§3.4).

3. A translation of the stream of input characters and line terminators resulting from step 2 into a sequence of input elements (§3.5) which, after white space (§3.6) and comments (§3.7) are discarded, comprise the tokens (§3.5) that are the terminal symbols of the syntactic grammar (§2.3).
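Because step 1 precedes tokenization, a Unicode escape may appear anywhere in a program, not only inside literals. The class below is a hypothetical illustration: after step 1 it is identical to a class Test with a field café.

class \u0054est {          // \u0054 is 'T': this declares a class named Test
    int caf\u00e9 = 1;     // \u00e9 is 'é': this declares a field named café
}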
The longest possible translation is used at each step, even if the result does not ultimately make a correct program while another lexical translation would. There is one exception: if lexical translation occurs in a type context (§4.11) and the input stream has two or more consecutive > characters that are followed by a non-> character, then each > character must be translated to the token for the numerical comparison operator >.
The input characters a--b are tokenized (§3.5) as a, --, b, which is not part of any grammatically correct program, even though the tokenization a, -, -, b could be part of a grammatically correct program.
Without the rule for > characters, two consecutive > brackets in a type such as List<List<String>> would be tokenized as the signed right shift operator >>, while three consecutive > brackets in a type such as List<List<List<String>>> would be tokenized as the unsigned right shift operator >>>. Worse, the tokenization of four or more consecutive > brackets in a type such as List<List<List<List<String>>>> would be ambiguous, as various combinations of >, >>, and >>> tokens could represent the >>>> characters.
A compiler for the Java programming language ("Java compiler") first recognizes Unicode escapes in its input, translating the ASCII characters \u followed by four hexadecimal digits to the UTF-16 code unit (§3.1) for the indicated hexadecimal value, and passing all other characters unchanged. Representing supplementary characters requires two consecutive Unicode escapes. This translation step results in a sequence of Unicode input characters.

The \, u, and hexadecimal digits here are all ASCII characters.
In addition to the processing implied by the grammar, for each raw input character that is a backslash \, input processing must consider how many other \ characters contiguously precede it, separating it from a non-\ character or the start of the input stream. If this number is even, then the \ is eligible to begin a Unicode escape; if the number is odd, then the \ is not eligible to begin a Unicode escape.
For example, the raw input "\\u2122=\u2122" results in the eleven characters " \ \ u 2 1 2 2 = ™ " (\u2122 is the Unicode encoding of the character ™).
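The same rule governs ordinary source code. In this illustrative snippet, the first backslash makes the second ineligible to begin a Unicode escape, while the lone backslash later in the literal is eligible:

String s = "\\u2122=\u2122";
// After step 1, only the second \u2122 is a Unicode escape and becomes ™;
// the later string-escape processing (§3.10.6) turns \\ into a single backslash,
// so s holds the eight characters \u2122=™
System.out.println(s);          // prints \u2122=™
System.out.println(s.length()); // prints 8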
If an eligible \ is not followed by u, then it is treated as a RawInputCharacter and remains part of the escaped Unicode stream.
If an eligible \ is followed by u, or more than one u, and the last u is not followed by four hexadecimal digits, then a compile-time error occurs.
The character produced by a Unicode escape does not participate in further Unicode escapes.
For example, the raw input \u005cu005a results in the six characters \ u 0 0 5 a, because 005c is the Unicode value for \. It does not result in the character Z, which is Unicode character 005a, because the \ that resulted from the \u005c is not interpreted as the start of a further Unicode escape.
The Java programming language specifies a standard way of transforming a program written in Unicode into ASCII that changes a program into a form that can be processed by ASCII-based tools. The transformation involves converting any Unicode escapes in the source text of the program to ASCII by adding an extra u - for example, \uxxxx becomes \uuxxxx - while simultaneously converting non-ASCII characters in the source text to Unicode escapes containing a single u each.
This transformed version is equally acceptable to a Java compiler and represents the exact same program. The exact Unicode source can later be restored from this ASCII form by converting each escape sequence where multiple u's are present to a sequence of Unicode characters with one fewer u, while simultaneously converting each escape sequence with a single u to the corresponding single Unicode character.
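As a concrete illustration, here is a minimal sketch of the escaping direction of this transformation. The method name toAscii is hypothetical; the sketch tracks the even/odd backslash rule of §3.3 but assumes its input is a well-formed program and glosses over other corner cases:

static String toAscii(String source) {
    StringBuilder out = new StringBuilder();
    int backslashes = 0;                        // contiguous preceding backslashes
    for (int i = 0; i < source.length(); i++) {
        char c = source.charAt(i);
        if (c == '\\') {
            backslashes++;
            out.append(c);
        } else if (c == 'u' && backslashes % 2 == 1) {
            out.append("uu");                   // an eligible escape: add one extra u
            backslashes = 0;
        } else if (c > 0x7f) {
            out.append(String.format("\\u%04x", (int) c)); // non-ASCII: single-u escape
            backslashes = 0;
        } else {
            out.append(c);
            backslashes = 0;
        }
    }
    return out.toString();
}

For example, toAscii would turn the source text \u03b1 = 'é' into \uu03b1 = '\u00e9', which the inverse transformation restores exactly.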
A Java compiler should use the \uxxxx notation as an output format to display Unicode characters when a suitable font is not available.
A Java compiler next divides the sequence of Unicode input characters into lines by recognizing line terminators.
Lines are terminated by the ASCII characters CR, or LF, or CR LF. The two characters CR immediately followed by LF are counted as one line terminator, not two.
A line terminator specifies the termination of the // form of a comment (§3.7).
The lines defined by line terminators may determine the line numbers produced by a Java compiler.
The result is a sequence of line terminators and input characters, which are the terminal symbols for the third step in the tokenization process.
The input characters and line terminators that result from escape processing (§3.3) and then input line recognition (§3.4) are reduced to a sequence of input elements.
Those input elements that are not white space or comments are tokens. The tokens are the terminal symbols of the syntactic grammar (§2.3).
White space (§3.6) and comments (§3.7) can serve to separate tokens that, if adjacent, might be tokenized in another manner. For example, the ASCII characters - and = in the input can form the operator token -= (§3.12) only if there is no intervening white space or comment.
As a special concession for compatibility with certain operating systems, the ASCII SUB character (\u001a, or control-Z) is ignored if it is the last character in the escaped input stream.
Consider two tokens x and y in the resulting input stream. If x precedes y, then we say that x is to the left of y and that y is to the right of x.
For example, in this simple piece of code:
class Empty { }
we say that the } token is to the right of the { token, even though it appears, in this two-dimensional representation, downward and to the left of the { token. This convention about the use of the words left and right allows us to speak, for example, of the right-hand operand of a binary operator or of the left-hand side of an assignment.
White space is defined as the ASCII space character, horizontal tab character, form feed character, and line terminator characters (§3.4).
There are two kinds of comments:

/* text */ - a traditional comment: all the text from /* to */ is ignored.

// text - an end-of-line comment: all the text from // to the end of the line is ignored.

These productions imply all of the following properties:

Comments do not nest.

/* and */ have no special meaning in comments that begin with //.

// has no special meaning in comments that begin with /* or /**.
As a result, the following text is a single complete comment:
/* this comment /* // /** ends here: */
The lexical grammar implies that comments do not occur within character literals (§3.10.4) or string literals (§3.10.5).
An identifier is an unlimited-length sequence of Java letters and Java digits, the first of which must be a Java letter.
A "Java
letter" is a character for which the
method Character.isJavaIdentifierStart(int)
returns
true.
A "Java
letter-or-digit" is a character for which the
method Character.isJavaIdentifierPart(int)
returns
true.
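These two methods make it straightforward to check the identifier rule of this section directly. The helper below is an illustrative sketch (the name isJavaIdentifier is not a platform API); it iterates by code point so that supplementary characters are handled correctly, and it deliberately does not reject keywords or the boolean and null literals (§3.9, §3.10):

static boolean isJavaIdentifier(String s) {
    if (s.isEmpty()) return false;
    int first = s.codePointAt(0);
    if (!Character.isJavaIdentifierStart(first)) return false;  // first must be a Java letter
    for (int i = Character.charCount(first); i < s.length(); ) {
        int cp = s.codePointAt(i);
        if (!Character.isJavaIdentifierPart(cp)) return false;  // rest must be letters-or-digits
        i += Character.charCount(cp);
    }
    return true;
}

With this definition, isJavaIdentifier("MAX_VALUE") yields true while isJavaIdentifier("3cols") yields false.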
The "Java letters" include uppercase and lowercase
ASCII Latin letters A-Z
(\u0041-\u005a
), and a-z
(\u0061-\u007a
), and, for historical reasons, the
ASCII underscore (_
, or \u005f
) and
dollar sign ($
, or \u0024
). The $
sign should be used only in mechanically generated source code or,
rarely, to access pre-existing names on legacy systems.
The "Java digits" include the ASCII
digits 0-9
(\u0030-\u0039
).
Letters and digits may be drawn from the entire Unicode character set, which supports most writing scripts in use in the world today, including the large sets for Chinese, Japanese, and Korean. This allows programmers to use identifiers in their programs that are written in their native languages.
An identifier cannot have the same spelling (Unicode character sequence) as a keyword (§3.9), boolean literal (§3.10.3), or the null literal (§3.10.7), or a compile-time error occurs.
Two identifiers are the same only if they are identical, that is, have the same Unicode character for each letter or digit. Identifiers that have the same external appearance may yet be different.
For example, the identifiers consisting of the single letters LATIN CAPITAL LETTER A (A, \u0041), LATIN SMALL LETTER A (a, \u0061), GREEK CAPITAL LETTER ALPHA (A, \u0391), CYRILLIC SMALL LETTER A (a, \u0430), and MATHEMATICAL BOLD ITALIC SMALL A (a, \ud835\udc82) are all different.
Unicode composite characters are different from their canonical equivalent decomposed characters. For example, a LATIN CAPITAL LETTER A ACUTE (Á, \u00c1) is different from a LATIN CAPITAL LETTER A (A, \u0041) immediately followed by a NON-SPACING ACUTE (´, \u0301) in identifiers. See The Unicode Standard, Section 3.11 "Normalization Forms".
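A short illustration (not part of the specification) of why this matters: the composed and decomposed forms compare unequal as plain character sequences, and become equal only after explicit normalization with java.text.Normalizer:

import java.text.Normalizer;

class NormalizationDemo {
    public static void main(String[] args) {
        String composed = "\u00c1";       // Á as a single character
        String decomposed = "A\u0301";    // A followed by a combining acute
        System.out.println(composed.equals(decomposed));                 // false
        System.out.println(composed.equals(
                Normalizer.normalize(decomposed, Normalizer.Form.NFC))); // true after NFC
    }
}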
Examples of identifiers are:
String
i3
αρετη
MAX_VALUE
isLetterOrDigit
50 character sequences, formed from ASCII letters, are reserved for use as keywords and cannot be used as identifiers (§3.8).
abstract continue for new switch
assert default if package synchronized
boolean do goto private this
break double implements protected throw
byte else import public throws
case enum instanceof return transient
catch extends int short try
char final interface static void
class finally long strictfp volatile
const float native super while
The keywords const and goto are reserved, even though they are not currently used. This may allow a Java compiler to produce better error messages if these C++ keywords incorrectly appear in programs.
While true and false might appear to be keywords, they are technically boolean literals (§3.10.3). Similarly, while null might appear to be a keyword, it is technically the null literal (§3.10.7).
A literal is the source code representation of a value of a primitive type (§4.2), the String type (§4.3.3), or the null type (§4.1).
An integer literal may be expressed in decimal (base 10), hexadecimal (base 16), octal (base 8), or binary (base 2).
An integer literal is of type long if it is suffixed with an ASCII letter L or l (ell); otherwise it is of type int (§4.2.1).

The suffix L is preferred, because the letter l (ell) is often hard to distinguish from the digit 1 (one).
Underscores are allowed as separators between digits that denote the integer.
In a hexadecimal or binary literal, the integer is only denoted by the digits after the 0x or 0b characters and before any type suffix. Therefore, underscores may not appear immediately after 0x or 0b, or after the last digit in the numeral.
In a decimal or octal literal, the integer is denoted by all the digits in the literal before any type suffix. Therefore, underscores may not appear before the first digit or after the last digit in the numeral. Underscores may appear after the initial 0 in an octal numeral (since 0 is a digit that denotes part of the integer) and after the initial non-zero digit in a non-zero decimal literal.
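The rules above can be summarized with a few illustrative declarations; the commented-out lines would be rejected at compile time:

int million = 1_000_000;        // OK: underscores between digits
int octal   = 0_777;            // OK: underscore after the initial 0 of an octal numeral
long hex    = 0xFF_EC_DE_5EL;   // OK: underscores between hexadecimal digits
// int bad1 = _1000;            // not a literal: an identifier starting with _
// int bad2 = 1000_;            // error: underscore after the last digit
// int bad3 = 0x_FF;            // error: underscore immediately after 0x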
A decimal numeral is either the single ASCII digit 0, representing the integer zero, or consists of an ASCII digit from 1 to 9 optionally followed by one or more ASCII digits from 0 to 9 interspersed with underscores, representing a positive integer.
A hexadecimal numeral consists of the leading ASCII characters 0x or 0X followed by one or more ASCII hexadecimal digits interspersed with underscores, and can represent a positive, zero, or negative integer.
Hexadecimal digits with values 10 through 15 are represented by the ASCII letters a through f or A through F, respectively; each letter used as a hexadecimal digit may be uppercase or lowercase.
The HexDigit production above comes from §3.3.
An octal numeral consists of an ASCII digit 0 followed by one or more of the ASCII digits 0 through 7 interspersed with underscores, and can represent a positive, zero, or negative integer.
Note that octal numerals always consist of two or more digits, as 0 alone is always considered to be a decimal numeral - not that it matters much in practice, for the numerals 0, 00, and 0x0 all represent exactly the same integer value.
A binary numeral consists of the leading ASCII characters 0b or 0B followed by one or more of the ASCII digits 0 or 1 interspersed with underscores, and can represent a positive, zero, or negative integer.
The largest decimal literal of type int is 2147483648 (2³¹).
All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear. The decimal literal 2147483648 may appear only as the operand of the unary minus operator - (§15.15.4).
It is a compile-time error if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator; or if a decimal literal of type int is larger than 2147483648 (2³¹).
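Concretely, under this rule:

int min = -2147483648;   // OK: the literal is the operand of unary minus
// int bad = 2147483648; // compile-time error: integer number too large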
The largest positive hexadecimal, octal, and binary literals of type int - each of which represents the decimal value 2147483647 (2³¹-1) - are respectively:

0x7fff_ffff, 0177_7777_7777, and 0b0111_1111_1111_1111_1111_1111_1111_1111
The most negative hexadecimal, octal, and binary literals of type int - each of which represents the decimal value -2147483648 (-2³¹) - are respectively:

0x8000_0000, 0200_0000_0000, and 0b1000_0000_0000_0000_0000_0000_0000_0000
The following hexadecimal, octal, and binary literals represent the decimal value -1:

0xffff_ffff, 0377_7777_7777, and 0b1111_1111_1111_1111_1111_1111_1111_1111
It is a compile-time error if a hexadecimal, octal, or binary int literal does not fit in 32 bits.
The largest decimal literal of type long is 9223372036854775808L (2⁶³).
All decimal literals from 0L to 9223372036854775807L may appear anywhere a long literal may appear. The decimal literal 9223372036854775808L may appear only as the operand of the unary minus operator - (§15.15.4).
It is a compile-time error if the decimal literal 9223372036854775808L appears anywhere other than as the operand of the unary minus operator; or if a decimal literal of type long is larger than 9223372036854775808L (2⁶³).
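The long case behaves analogously:

long min = -9223372036854775808L;   // OK: operand of unary minus
// long bad = 9223372036854775808L; // compile-time error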
The largest positive hexadecimal, octal, and binary literals of type long - each of which represents the decimal value 9223372036854775807L (2⁶³-1) - are respectively:

0x7fff_ffff_ffff_ffffL, 07_7777_7777_7777_7777_7777L, and 0b0111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111L
The most negative hexadecimal, octal, and binary literals of type long - each of which represents the decimal value -9223372036854775808L (-2⁶³) - are respectively:

0x8000_0000_0000_0000L, 010_0000_0000_0000_0000_0000L, and 0b1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000L
The following hexadecimal, octal, and binary literals represent the decimal value -1L:

0xffff_ffff_ffff_ffffL, 017_7777_7777_7777_7777_7777L, and 0b1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111_1111L
It is a compile-time error if a hexadecimal, octal, or binary long literal does not fit in 64 bits.
Examples of int literals:
0 2 0372 0xDada_Cafe 1996 0x00_FF__00_FF
Examples of long literals:
0l 0777L 0x100000000L 2_147_483_648L 0xC0B0L
A floating-point literal has the following parts: a whole-number part, a decimal or hexadecimal point (represented by an ASCII period character), a fraction part, an exponent, and a type suffix.
A floating-point literal may be expressed in decimal (base 10) or hexadecimal (base 16).
For decimal floating-point literals, at least one digit (in either the whole number or the fraction part) and either a decimal point, an exponent, or a float type suffix are required. All other parts are optional. The exponent, if present, is indicated by the ASCII letter e or E followed by an optionally signed integer.
For hexadecimal floating-point literals, at least one digit is required (in either the whole number or the fraction part), the exponent is mandatory, and the float type suffix is optional. The exponent is indicated by the ASCII letter p or P followed by an optionally signed integer.
Underscores are allowed as separators between digits that denote the whole-number part, and between digits that denote the fraction part, and between digits that denote the exponent.
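A few illustrative declarations covering both bases and the underscore rule:

double d1 = 0x1.8p1;          // hexadecimal: 1.5 * 2^1 = 3.0 (the p exponent is mandatory)
float  f1 = 0x1p-3f;          // hexadecimal: 1 * 2^-3 = 0.125f
double d2 = 9_876.543_21e1_0; // underscores in whole-number, fraction, and exponent parts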
A floating-point literal is of type float if it is suffixed with an ASCII letter F or f; otherwise its type is double and it can optionally be suffixed with an ASCII letter D or d (§4.2.3).
The elements of the types float and double are those values that can be represented using the IEEE 754 32-bit single-precision and 64-bit double-precision binary floating-point formats, respectively.
The details of proper input conversion from a Unicode string representation of a floating-point number to the internal IEEE 754 binary floating-point representation are described for the methods valueOf of class Float and class Double of the package java.lang.
The largest positive finite literal of type float is 3.4028235e38f.

The smallest positive finite non-zero literal of type float is 1.40e-45f.

The largest positive finite literal of type double is 1.7976931348623157e308.

The smallest positive finite non-zero literal of type double is 4.9e-324.
It is a compile-time error if a non-zero floating-point literal is too large, so that on rounded conversion to its internal representation, it becomes an IEEE 754 infinity.
A program can represent infinities without producing a compile-time error by using constant expressions such as 1f/0f or -1d/0d or by using the predefined constants POSITIVE_INFINITY and NEGATIVE_INFINITY of the classes Float and Double.
It is a compile-time error if a non-zero floating-point literal is too small, so that, on rounded conversion to its internal representation, it becomes a zero.
A compile-time error does not occur if a non-zero floating-point literal has a small value that, on rounded conversion to its internal representation, becomes a non-zero denormalized number.
Predefined constants representing Not-a-Number values are defined in the classes Float and Double as Float.NaN and Double.NaN.
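An illustrative snippet showing these edge values (assuming it runs inside a main method):

System.out.println(1f/0f == Float.POSITIVE_INFINITY);    // true: constant expression overflows to infinity
System.out.println(-1d/0d == Double.NEGATIVE_INFINITY);  // true
System.out.println(Double.isNaN(0d/0d));                 // true; note that NaN != NaN
// float tooBig = 3.5e38f;  // compile-time error: the literal rounds to infinity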
Examples of float literals:
1e1f 2.f .3f 0f 3.14f 6.022137e+23f
Examples of double literals:
1e1 2. .3 0.0 3.14 1e-9d 1e137
The boolean type has two values, represented by the boolean literals true and false, formed from ASCII letters.
A boolean literal is always of type boolean (§4.2.5).
A character literal is expressed as a character or an escape sequence (§3.10.6), enclosed in ASCII single quotes. (The single-quote, or apostrophe, character is \u0027.)
See §3.10.6 for the definition of EscapeSequence.
Character literals can only represent UTF-16 code units (§3.1), i.e., they are limited to values from \u0000 to \uffff. Supplementary characters must be represented either as a surrogate pair within a char sequence, or as an integer, depending on the API they are used with.
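For example, the supplementary character U+1D482 can be written with a surrogate pair in a string, or handled as an int code point; this snippet is an illustration, not part of the specification:

// char c = '\uD835\uDC82';          // error: a char literal holds exactly one code unit
String pair = "\uD835\uDC82";        // surrogate pair within a char sequence
int codePoint = pair.codePointAt(0); // 0x1D482 as a single int
System.out.println(Integer.toHexString(codePoint)); // prints 1d482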
A character literal is always of type char (§4.2.1).
It is a compile-time error for the character following the SingleCharacter or EscapeSequence to be other than a '.
It is a compile-time error for a line terminator (§3.4) to appear after the opening ' and before the closing '.
As specified in §3.4, the characters CR and LF are never an InputCharacter; each is recognized as constituting a LineTerminator.
The following are examples of char literals:
'a'
'%'
'\t'
'\\'
'\''
'\u03a9'
'\uFFFF'
'\177'
'™'
Because Unicode escapes are processed very early, it is not correct to write '\u000a' for a character literal whose value is linefeed (LF); the Unicode escape \u000a is transformed into an actual linefeed in translation step 1 (§3.3) and the linefeed becomes a LineTerminator in step 2 (§3.4), and so the character literal is not valid in step 3. Instead, one should use the escape sequence '\n' (§3.10.6). Similarly, it is not correct to write '\u000d' for a character literal whose value is carriage return (CR). Instead, use '\r'.
In C and C++, a character literal may contain representations of more than one character, but the value of such a character literal is implementation-defined. In the Java programming language, a character literal always represents exactly one character.
A string literal consists of zero or more characters enclosed in double quotes. Characters may be represented by escape sequences (§3.10.6) - one escape sequence for characters in the range U+0000 to U+FFFF, two escape sequences for the UTF-16 surrogate code units of characters in the range U+010000 to U+10FFFF.
See §3.10.6 for the definition of EscapeSequence.
A string literal is always of type String (§4.3.3).
It is a compile-time error for a line terminator to appear after the opening " and before the closing matching ".
As specified in §3.4, the characters CR and LF are never an InputCharacter; each is recognized as constituting a LineTerminator.
A long string literal can always be broken up into shorter pieces and written as a (possibly parenthesized) expression using the string concatenation operator + (§15.18.1).
The following are examples of string literals:
"" // the empty string "\"" // a string containing " alone "This is a string" // a string containing 16 characters "This is a " + // actually a string-valued constant expression, "two-line string" // formed from two string literals
Because Unicode escapes are processed very early, it is not correct to write "\u000a" for a string literal containing a single linefeed (LF); the Unicode escape \u000a is transformed into an actual linefeed in translation step 1 (§3.3) and the linefeed becomes a LineTerminator in step 2 (§3.4), and so the string literal is not valid in step 3. Instead, one should write "\n" (§3.10.6). Similarly, it is not correct to write "\u000d" for a string literal containing a single carriage return (CR). Instead, use "\r". Finally, it is not possible to write "\u0022" for a string literal containing a double quotation mark (").
A string literal is a reference to an instance of class String (§4.3.1, §4.3.3).

Moreover, a string literal always refers to the same instance of class String. This is because string literals - or, more generally, strings that are the values of constant expressions (§15.28) - are "interned" so as to share unique instances, using the method String.intern.
Example 3.10.5-1. String Literals
The program consisting of the compilation unit (§7.3):
package testPackage;
class Test {
    public static void main(String[] args) {
        String hello = "Hello", lo = "lo";
        System.out.print((hello == "Hello") + " ");
        System.out.print((Other.hello == hello) + " ");
        System.out.print((other.Other.hello == hello) + " ");
        System.out.print((hello == ("Hel"+"lo")) + " ");
        System.out.print((hello == ("Hel"+lo)) + " ");
        System.out.println(hello == ("Hel"+lo).intern());
    }
}
class Other { static String hello = "Hello"; }
and the compilation unit:
package other;
public class Other { public static String hello = "Hello"; }
produces the output:
true true true true false true
This example illustrates six points:
Literal strings within the same class (§8 (Classes)) in the same package (§7 (Packages)) represent references to the same String object (§4.3.1).

Literal strings within different classes in the same package represent references to the same String object.

Literal strings within different classes in different packages likewise represent references to the same String object.
Strings computed by constant expressions (§15.28) are computed at compile time and then treated as if they were literals.
Strings computed by concatenation at run time are newly created and therefore distinct.
The result of explicitly interning a computed string is the same string as any pre-existing literal string with the same contents.
The character and string escape sequences allow for the representation of some nongraphic characters without using Unicode escapes, as well as the single quote, double quote, and backslash characters, in character literals (§3.10.4) and string literals (§3.10.5).
\b          (backspace BS, Unicode \u0008)
\t          (horizontal tab HT, Unicode \u0009)
\n          (linefeed LF, Unicode \u000a)
\f          (form feed FF, Unicode \u000c)
\r          (carriage return CR, Unicode \u000d)
\"          (double quote ", Unicode \u0022)
\'          (single quote ', Unicode \u0027)
\\          (backslash \, Unicode \u005c)
OctalEscape (octal value, Unicode \u0000 to \u00ff)
OctalDigit: (one of) 0 1 2 3 4 5 6 7
The OctalDigit production above comes from §3.10.1.
It is a compile-time error if the character following a backslash in an escape sequence is not an ASCII b, t, n, f, r, ", ', \, 0, 1, 2, 3, 4, 5, 6, or 7. The Unicode escape \u is processed earlier (§3.3).
Octal escapes are provided for compatibility with C, but can express only Unicode values \u0000 through \u00FF, so Unicode escapes are usually preferred.
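A brief illustration:

char del = '\177';                  // octal escape for DELETE, \u007f
System.out.println('\101' == 'A');  // true: octal 101 is \u0041
// char bad = '\477';               // error: octal escapes stop at \377 (\u00ff)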
The null type has one value, the null reference, represented by the null literal null, which is formed from ASCII characters.
A null literal is always of the null type (§4.1).