Java Core Libraries Developer Guide
Core Libraries
Release 17
F40864-02
July 2022
Java Platform, Standard Edition Core Libraries, Release 17
F40864-02
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software" or "commercial computer software documentation" pursuant to the
applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use,
reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or
adaptation of i) Oracle programs (including any operating system, integrated software, any programs
embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle
computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the
license contained in the applicable contract. The terms governing the U.S. Government’s use of Oracle cloud
services are defined by the applicable contract for such services. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience
Documentation Accessibility
Diversity and Inclusion
Related Documents
Conventions
2 Serialization Filtering
Addressing Deserialization Vulnerabilities
Java Serialization Filters
Filter Factories
Allow-Lists and Reject-Lists
Creating Pattern-Based Filters
Creating Custom Filters
Reading a Stream of Serialized Objects
Setting a Custom Filter for an Individual Stream
Setting a JVM-Wide Custom Filter
Setting a Custom Filter Using a Pattern
Setting a Custom Filter as a Class
Setting a Custom Filter as a Method
Creating a Filter with ObjectInputFilter Methods
Setting a Filter Factory
Setting a Filter Factory with setSerialFilterFactory
Specifying a Filter Factory in a System or Security Property
Built-in Filters
Logging Filter Actions
3 Enhanced Deprecation
Deprecation in the JDK
How to Deprecate APIs
Notifications and Warnings
Running jdeprscan
4 XML Catalog API
5 Creating Unmodifiable Lists, Sets, and Maps
6 Process API
Process API Classes and Interfaces
ProcessBuilder Class
Process Class
ProcessHandle Interface
ProcessHandle.Info Interface
Creating a Process
Getting Information About a Process
Redirecting Output from a Process
Filtering Processes with Streams
Handling Processes When They Terminate with the onExit Method
Controlling Access to Sensitive Process Information
7 Preferences API
Comparing the Preferences API to Other Mechanisms
Usage Notes
Obtain Preferences Objects for an Enclosing Class
Obtain Preferences Objects for a Static Method
Atomic Updates
Determine Backing Store Status
Design FAQ
9 Java NIO
Grep NIO Example
Checksum NIO Example
Time Query NIO Example
Time Server NIO Example
Non-Blocking Time Server NIO Example
Internet Protocol and UNIX Domain Sockets NIO Example
Chmod File NIO Example
Copy File NIO Example
Disk Usage File NIO Example
User-Defined File Attributes File NIO Example
10 Java Networking
Networking System Properties
11 Pseudorandom Number Generators
Characteristics of PRNGs
Generating Pseudorandom Numbers with RandomGenerator Interface
Generating Pseudorandom Numbers in Multithreaded Applications
Dynamically Creating New Generators
Creating Stream of Generators
Choosing a PRNG Algorithm
Preface
Audience
This document is for Java developers who develop applications that require functionality such
as threading, process control, I/O, monitoring and management of the Java Virtual Machine
(JVM), serialization, concurrency, and other functionality close to the JVM.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
See JDK 17 Documentation.
Conventions
The following text conventions are used in this document:
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1
Java Core Libraries
The core libraries consist of classes which are used by many portions of the JDK. They
include functionality which is close to the VM and is not explicitly included in other areas,
such as security. Here you will find current information that will help you use some of the core
libraries.
2
Serialization Filtering
You can use the Java serialization filtering mechanism to help prevent deserialization
vulnerabilities. You can define pattern-based filters or you can create custom filters.
Topics:
• Addressing Deserialization Vulnerabilities
• Java Serialization Filters
• Filter Factories
• Allow-Lists and Reject-Lists
• Creating Pattern-Based Filters
• Creating Custom Filters
• Setting a Filter Factory
• Built-in Filters
• Logging Filter Actions
Java Serialization Filters
A serialization filter enables you to specify which classes are acceptable to an
application and which should be rejected. Filters also enable you to control the object
graph size and complexity during deserialization so that the object graph doesn’t
exceed reasonable limits. You can configure filters as properties or implement them
programmatically.
Note:
A serialization filter is not enabled or configured by default. Serialization
filtering doesn't occur unless you have specified the filter in a system
property or a Security Property or set it with the ObjectInputFilter
class.
Besides creating filters, you can take the following actions to help prevent
deserialization vulnerabilities:
• Do not deserialize untrusted data.
• Use SSL to encrypt and authenticate the connections between applications.
• Validate field values before assignment, for example, checking object invariants by
using the readObject method.
Note:
Built-in filters are provided for RMI. However, you should use these built-in
filters as starting points only. Configure reject-lists and/or extend the allow-list
to add additional protection for your application that uses RMI. See Built-in
Filters.
For more information about these and other strategies, see "Serialization and
Deserialization" in Secure Coding Guidelines for Java SE.
Serialization filters can:
• Provide a way to narrow the classes that can be deserialized down to a context-
appropriate set of classes.
• Provide metrics to the filter for graph size and complexity during deserialization to
validate normal graph behaviors.
• Allow RMI-exported objects to validate the classes expected in invocations.
There are two kinds of filters:
• JVM-wide filter: Is applied to every deserialization in the JVM. However, whether and
how a JVM-wide filter validates classes in a particular deserialization depends on how it's
combined with other filters.
• Stream-specific filter: Validates classes from one specific ObjectInputStream.
You can implement a serialization filter in the following ways:
• Specify a JVM-wide, pattern-based filter with the jdk.serialFilter property: A
pattern-based filter consists of a sequence of patterns that can accept or reject the name
of specific classes, packages, or modules. It can place limits on array sizes, graph depth,
total references, and stream size. A typical use case is to add classes that have been
identified as potentially compromising the Java runtime to a reject-list. If you specify a
pattern-based filter with the jdk.serialFilter property, then you don't have to modify
your application.
• Implement a custom or pattern-based stream-specific filter with the
ObjectInputFilter API: You can implement a filter with the ObjectInputFilter API,
which you then set on an ObjectInputStream. You can create a pattern-based filter
with the ObjectInputFilter API by calling the Config.createFilter(String) method.
Note:
A serialization filter is not enabled or configured by default. Serialization filtering
doesn't occur unless you have specified the filter in a system property or a Security
Property or set it with the ObjectInputFilter class.
For every new object in the stream, the filter mechanism applies only one filter to it. However,
this filter might be a combination of filters.
In most cases, a stream-specific filter should check if a JVM-wide filter is set, especially if you
haven't specified a filter factory. If a JVM-wide filter does exist, then the stream-specific filter
should invoke it and use the JVM-wide filter’s result unless the status is UNDECIDED.
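One way to do that (a sketch, not from the guide; ois is assumed to be an existing ObjectInputStream, and the pattern used for the stream-specific part is illustrative):

ObjectInputFilter streamFilter =
    ObjectInputFilter.Config.createFilter("com.example.app.*;!*");
ObjectInputFilter jvmWideFilter = ObjectInputFilter.Config.getSerialFilter();
// Combine the two filters when a JVM-wide filter exists; otherwise use the
// stream-specific filter on its own.
ObjectInputFilter combined = (jvmWideFilter != null)
    ? ObjectInputFilter.merge(streamFilter, jvmWideFilter)
    : streamFilter;
ois.setObjectInputFilter(combined);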
Filter Factories
A filter factory selects, chooses, or combines filters into a single filter to be used for a stream.
When you specify one, a deserialization operation uses it when it encounters a class for the
first time to determine whether to allow it. (Subsequent instances of the same class aren't
filtered.) It's implemented as a BinaryOperator<ObjectInputFilter> and specified
with the ObjectInputFilter.Config.setSerialFilterFactory method or in a
system or Security property; see Setting a Filter Factory. Whenever an
ObjectInputStream is created, the filter factory selects an ObjectInputFilter.
However, you can have a different filter created based on the characteristics of the stream
and the filter that the filter factory previously created.
Allow-Lists and Reject-Lists
pattern1.*;pattern2.*
!pattern1.*;pattern2.*
• Use the wildcard symbol (*) to represent unspecified class names in a pattern as
shown in the following examples:
– To match every class name, use *
– To match every class name in mypackage, use mypackage.*
– To match every class name in mypackage and its subpackages, use mypackage.**
– To match every class name that starts with text, use text*
If a class name doesn’t match any filter, then it is allowed. If you want to allow only certain
class names, then your filter must reject everything that doesn’t match. To reject all class
names other than those specified, include !* as the last pattern in a class filter.
For a complete description of the syntax for the patterns, see JEP 290.
Note:
A pattern-based filter doesn't check interfaces that are implemented by classes
being deserialized. The filter is invoked for interfaces explicitly referenced in the
stream; it isn't invoked for interfaces implemented by classes for objects being
deserialized.
The following example shows how to limit resource usage for an individual application:

java -Djdk.serialFilter=maxarray=100000;maxdepth=20;maxrefs=500 com.example.test.Application

In the following example, the filter rejects one class name from a package (!example.somepackage.SomeClass), and allows all other class names in the package:

jdk.serialFilter=!example.somepackage.SomeClass;example.somepackage.*;

The previous example filter allows all other class names, not just those in example.somepackage.*. To reject all other class names, add !*:

jdk.serialFilter=!example.somepackage.SomeClass;example.somepackage.*;!*

Creating Custom Filters
Topics
• Reading a Stream of Serialized Objects
• Setting a Custom Filter for an Individual Stream
• Setting a JVM-Wide Custom Filter
• Setting a Custom Filter Using a Pattern
• Setting a Custom Filter as a Class
• Setting a Custom Filter as a Method
• Creating a Filter with ObjectInputFilter Methods
• For each class in the stream, the filter is called with the resolved class. It is called
separately for each supertype and interface in the stream.
• The filter can examine each class referenced in the stream, including the class of objects
to be created, supertypes of those classes, and their interfaces.
• For each array in the stream, whether it is an array of primitives, array of strings, or array
of objects, the filter is called with the array class and the array length.
• For each reference to an object already read from the stream, the filter is called so it can
check the depth, number of references, and stream length. The depth starts at 1 and
increases for each nested object and decreases when each nested call returns.
• The filter is not called for primitives or for java.lang.String instances that are
encoded concretely in the stream.
• The filter returns a status of accept, reject, or undecided.
• Filter actions are logged if logging is enabled.
Unless a filter rejects the object, the object is accepted.
In the following example, the JVM-wide filter is set by using a lambda expression.
ObjectInputFilter.Config.setSerialFilter(
info -> info.depth() > 10 ? Status.REJECTED : Status.UNDECIDED);
In the following example, the JVM-wide filter is set by using a method reference:
ObjectInputFilter.Config.setSerialFilter(FilterClass::dateTimeFilter);
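The FilterClass.dateTimeFilter method itself isn't shown in this excerpt; a plausible sketch, assuming the intent is to allow only classes from the java.time package:

import java.io.ObjectInputFilter;

public class FilterClass {
    static ObjectInputFilter.Status dateTimeFilter(ObjectInputFilter.FilterInfo info) {
        Class<?> serialClass = info.serialClass();
        if (serialClass != null) {
            // Allow java.time classes; reject everything else.
            return serialClass.getPackageName().equals("java.time")
                ? ObjectInputFilter.Status.ALLOWED
                : ObjectInputFilter.Status.REJECTED;
        }
        return ObjectInputFilter.Status.UNDECIDED;
    }
}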
The following filter allows classes named example.File and rejects classes named example.Directory:

ObjectInputFilter filesOnlyFilter =
    ObjectInputFilter.Config.createFilter("example.File;!example.Directory");

This example allows only example.File. All other class names are rejected:
ObjectInputFilter filesOnlyFilter =
ObjectInputFilter.Config.createFilter("example.File;!*");
A filter is typically stateless and performs checks solely on the input parameters.
However, you may implement a filter that, for example, maintains state between calls
to the checkInput method to count artifacts in the stream.
In the following example, the FilterNumber class allows any object that is an instance of the
Number class and rejects all others.
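The FilterNumber code itself was not captured in this excerpt; a sketch of what such a class might look like:

import java.io.ObjectInputFilter;

class FilterNumber implements ObjectInputFilter {
    @Override
    public Status checkInput(FilterInfo filterInfo) {
        Class<?> serialClass = filterInfo.serialClass();
        if (serialClass != null) {
            // Accept Number and its subclasses; reject everything else.
            return Number.class.isAssignableFrom(serialClass)
                ? Status.ALLOWED
                : Status.REJECTED;
        }
        // No class to check in this call (for example, a depth or size check).
        return Status.UNDECIDED;
    }
}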
In the example:
• The checkInput method accepts an ObjectInputFilter.FilterInfo object. The object’s
methods provide access to the class to be checked, array size, current depth, number of
references to existing objects, and stream size read so far.
• If serialClass is not null, then the value is checked to see if the class of the object is
Number. If so, it is accepted and returns ObjectInputFilter.Status.ALLOWED. Otherwise,
it is rejected and returns ObjectInputFilter.Status.REJECTED.
• Any other combination of arguments returns ObjectInputFilter.Status.UNDECIDED.
Deserialization continues, and any remaining filters are run until the object is accepted or
rejected. If there are no other filters, the object is accepted.
This custom filter allows only the classes found in the base module of the JDK:
static ObjectInputFilter.Status baseFilter(ObjectInputFilter.FilterInfo info) {
    Class<?> serialClass = info.serialClass();
    if (serialClass != null) {
        // Allow classes from the java.base module only.
        return "java.base".equals(serialClass.getModule().getName())
            ? ObjectInputFilter.Status.ALLOWED
            : ObjectInputFilter.Status.REJECTED;
    }
    return ObjectInputFilter.Status.UNDECIDED;
}

The rejectUndecidedClass method creates a filter that invokes an existing filter and rejects classes that the existing filter leaves undecided. The following example wraps a previously created filter named intFilter:

ObjectInputFilter rejectUndecidedFilter =
    ObjectInputFilter.rejectUndecidedClass(intFilter);
The merge method creates a new filter by merging two filters. The following merges the filters
intFilter and f. It accepts the Integer class but rejects any class loaded from the
application class loader:
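The code for that example isn't included in this excerpt; a sketch, assuming intFilter allows java.lang.Integer and f rejects classes loaded by the application (system) class loader:

// intFilter: allow Integer, leave everything else undecided.
ObjectInputFilter intFilter =
    ObjectInputFilter.Config.createFilter("java.lang.Integer");
// f: reject any class loaded by the application class loader.
ObjectInputFilter f = ObjectInputFilter.rejectFilter(
    cl -> cl.getClassLoader() == ClassLoader.getSystemClassLoader(),
    ObjectInputFilter.Status.UNDECIDED);
ObjectInputFilter mergedFilter = ObjectInputFilter.merge(intFilter, f);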
Note:
It's a good idea to merge the JVM-wide filter with the requested, stream-specific
filter in your filter factory. If you just return the requested filter, then you effectively
disable the JVM-wide filter, which will lead to security gaps.
Note:
You can set a filter factory exactly once, either with the method
setSerialFilterFactory, in the system property jdk.serialFilterFactory,
or in the Security Property jdk.serialFilterFactory.
Topics:
• Setting a Filter Factory with setSerialFilterFactory
• Specifying a Filter Factory in a System or Security Property
If a JVM-wide filter has been set, then when apply is first invoked, u will be the JVM-wide filter; otherwise, u will be null.
The apply method (which you must implement yourself) returns the filter to be used
for the stream. If apply is invoked again, then the parameter t will be this returned
filter. When you set a filter with the method
ObjectInputStream.setObjectInputFilter(ObjectInputFilter), then
parameter u will be this filter.
The following example implements a simple filter factory that prints its
ObjectInputFilter parameters every time its apply method is invoked, merges
these parameters into one combined filter, then returns this merged filter.
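The factory class itself was not captured in this excerpt; a minimal sketch of such a factory, assuming the contextFilterFactory variable used below is an instance of this class:

import java.io.ObjectInputFilter;
import java.util.function.BinaryOperator;

class ContextFilterFactory implements BinaryOperator<ObjectInputFilter> {
    @Override
    public ObjectInputFilter apply(ObjectInputFilter curr, ObjectInputFilter next) {
        // Print both parameters each time the factory is invoked.
        System.out.println("Current filter: " + curr + ", requested filter: " + next);
        if (curr == null) {
            return next;                              // nothing to combine yet
        }
        if (next == null) {
            return curr;
        }
        return ObjectInputFilter.merge(next, curr);   // combine both filters
    }
}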
ObjectInputFilter.Config.setSerialFilterFactory(contextFilterFactory);
ObjectInputFilter filter1 =
ObjectInputFilter.Config.createFilter("example.*;java.base/*;!*");
ObjectInputFilter.Config.setSerialFilter(filter1);
ObjectInputFilter.Status.UNDECIDED);
try {
Object obj = ois.readObject();
System.out.println("Read obj: " + obj);
} catch (ClassNotFoundException e) {
e.printStackTrace();
}
}
}
The apply method is invoked twice: when the ObjectInputStream ois is created and
when the method setObjectInputFilter is called.
The value of jdk.serialFilterFactory is the class name of the filter factory to be set
before the first deserialization. The class must be public and accessible to the
application class loader (which the method
java.lang.ClassLoader.getSystemClassLoader() returns).
You can set a JVM-wide filter factory that affects every application run with a Java
runtime from $JAVA_HOME by specifying it in a Security Property. Note that a system
property supersedes a Security Property value. Edit the file $JAVA_HOME/conf/
security/java.security and specify the filter factory's class name in the
jdk.serialFilterFactory Security Property.
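For example, to specify a hypothetical factory class with the system property on the command line:

java -Djdk.serialFilterFactory=com.example.SerialFilterFactory com.example.MyApplication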
Built-in Filters
The Java Remote Method Invocation (RMI) Registry, the RMI Distributed Garbage
Collector, and Java Management Extensions (JMX) all have filters that are included in
the JDK. You should specify your own filters for the RMI Registry and the RMI
Distributed Garbage Collector to add additional protection.
Note:
Use these built-in filters as starting points only. Edit the
sun.rmi.registry.registryFilter system property to configure reject-lists
and/or extend the allow-list to add additional protection for the RMI Registry.
To protect the whole application, add the patterns to the jdk.serialFilter
global system property to increase protection for other serialization users that
do not have their own custom filters.
The RMI Registry has a built-in allow-list filter that allows objects to be bound in the
registry. It includes instances of the java.rmi.Remote, java.lang.Number,
java.lang.reflect.Proxy, java.rmi.server.UnicastRef, java.rmi.server.UID,
java.rmi.server.RMIClientSocketFactory, and
java.rmi.server.RMIServerSocketFactory classes.
The built-in registry filter also limits array sizes and graph depth:

maxarray=1000000;maxdepth=20
Note:
Use these built-in filters as starting points only. Edit the
sun.rmi.transport.dgcFilter system property to configure reject-lists and/or
extend the allow-list to add additional protection for Distributed Garbage Collector.
To protect the whole application, add the patterns to the jdk.serialFilter global
system property to increase protection for other serialization users that do not have
their own custom filters.
The RMI Distributed Garbage Collector has a built-in allow-list filter that accepts a limited set
of classes. It includes instances of the java.rmi.server.ObjID, java.rmi.server.UID,
java.rmi.dgc.VMID, and java.rmi.dgc.Lease classes.
The built-in Distributed Garbage Collector filter also limits array sizes and graph depth:

maxarray=1000000;maxdepth=20
Note:
Use these built-in filters as starting points only. Edit the
jmx.remote.rmi.server.serial.filter.pattern management property to
configure reject-lists and/or extend the allow-list to add additional protection for
JMX. To protect the whole application, add the patterns to the jdk.serialFilter
global system property to increase protection for other serialization users that do
not have their own custom filters.
JMX has a built-in filter to limit the set of classes allowed to be sent as deserializing parameters over RMI to the server. That filter is disabled by default. To enable the filter, define the jmx.remote.rmi.server.serial.filter.pattern management property with a pattern.
The pattern must include the types that are allowed to be sent as parameters over RMI to the server and all types they depend on, plus the javax.management.ObjectName and java.rmi.MarshalledObject types. For example, to limit the allowed set of classes to Open MBean types and the types they depend on, add the following line to the management.properties file:

com.sun.management.jmxremote.serial.filter.pattern=java.lang.*;java.math.BigInteger;java.math.BigDecimal;java.util.*;javax.management.openmbean.*;javax.management.ObjectName;java.rmi.MarshalledObject;!*

Logging Filter Actions
Filter actions are logged to the java.io.serialization logger when logging is enabled. For example, you can set the level of this logger in the logging.properties file:

java.io.serialization.level = FINE

java.io.serialization.level = FINEST
3
Enhanced Deprecation
The semantics of deprecation include whether an API may be removed in the near future.
If you are a library maintainer, you can take advantage of the updated deprecation syntax to
inform users of your library about the status of APIs provided by your library.
If you are a library or application developer, you can use the jdeprscan tool to find uses of
deprecated JDK API elements in your applications or libraries.
Topics
• Deprecation in the JDK
• How to Deprecate APIs
• Notifications and Warnings
• Running jdeprscan
The @Deprecated annotation marks an API in a way that is recorded in the class file and is
available at runtime. This allows various tools, such as javac and jdeprscan, to detect and
flag usage of deprecated APIs. The @deprecated JavaDoc tag is used in documentation of
deprecated APIs, for example, to describe the reason for deprecation, and to suggest
alternative APIs.
Note the capitalization: the annotation starts with an uppercase D and the JavaDoc tag
starts with a lowercase d.
• @Deprecated(since="<version>")
– <version> identifies the version in which the API was deprecated. This is for
informational purposes. The default is the empty string ("").
• @Deprecated(forRemoval=<boolean>)
– forRemoval=true indicates that the API is subject to removal in a future
release.
– forRemoval=false recommends that code should no longer use this API;
however, there is no current intent to remove the API. This is the default value.
For example: @Deprecated(since="9", forRemoval=true)
Semantics of Deprecation
The two elements of the @Deprecated annotation give developers the opportunity to
clarify what deprecation means for their exported APIs (which are APIs that are
provided by a library that are accessible to code outside of that library, such as
applications or other libraries).
For the JDK platform:
• @Deprecated(forRemoval=true) indicates that the API is eligible to be
removed in a future release of the JDK platform.
Notifications and Warnings
When it encounters an @deprecated tag, the javadoc tool moves the text following the
@deprecated tag to the front of the description and precedes it with a warning. For example,
this source:
/**
* ...
* @deprecated This method does not properly convert bytes into
* characters. As of JDK 1.1, the preferred way to do this is via the
* {@code String} constructors that take a {@link
* java.nio.charset.Charset}, charset name, or that use the platform's
* default charset.
* ...
*/
@Deprecated(since="1.1")
public String(byte ascii[], int hibyte) {
...
@Deprecated(since="1.1")
public String(byte[] ascii,
int hibyte)
Deprecated. This method does not properly convert bytes into characters. As
of
JDK 1.1, the preferred way to do this is via the String constructors that
take a
Charset, charset name, or that use the platform's default charset.
If you use the @deprecated JavaDoc tag without the corresponding @Deprecated
annotation, a warning is generated.
The Java compiler generates warnings about deprecated APIs. There are options to
generate more information about warnings, and you can also suppress deprecation
warnings.
The warnings can also be turned off independently (note the "-"): -Xlint:-deprecation and -Xlint:-removal.
$ javac src/example/DeprecationExample.java
Note: src/example/DeprecationExample.java uses or overrides a
deprecated API.
Note: Recompile with -Xlint:deprecation for details.
You can use the @SuppressWarnings annotation to suppress warnings that you no longer want to see; the warnings are then suppressed whenever that code is compiled. Place the @SuppressWarnings annotation
at the declaration of the class, method, field, or local variable that uses a deprecated API.
The @SuppressWarnings options are:
• @SuppressWarnings("deprecation") — Suppresses only the ordinary deprecation
warnings.
• @SuppressWarnings("removal") — Suppresses only the removal warnings.
• @SuppressWarnings({"deprecation","removal"}) — Suppresses both types of
warnings.
Here’s an example of suppressing a warning.
@SuppressWarnings("deprecation")
Object[] values = jlist.getSelectedValues();
With the @SuppressWarnings annotation, no warnings are issued for this line, even if
warnings are enabled on the command line.
Running jdeprscan
jdeprscan is a static analysis tool that reports on an application’s use of deprecated JDK API
elements. Run jdeprscan to help identify possible issues in compiled class files or jar files.
You can find out about deprecated JDK APIs from the compiler notifications. However, if you
don’t recompile with every JDK release, or if the warnings were suppressed, or if you depend
on third-party libraries that are distributed as binary artifacts, then you should run jdeprscan.
It’s important to discover dependencies on deprecated APIs before the APIs are removed
from the JDK. If the binary uses an API that is deprecated for removal in the current JDK
release, and you don’t recompile, then you won’t get any notifications. When the API is
removed in a future JDK release, then the binary will simply fail at runtime. jdeprscan lets
you detect such usage now, well before the API is removed.
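For example, the following hypothetical invocation scans an application JAR (example.jar is a placeholder) for uses of JDK APIs that are deprecated for removal:

jdeprscan --release 17 --for-removal example.jar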
For the complete syntax of how to run the tool and how to interpret the output, see The
jdeprscan Command in the Java Development Kit Tool Specifications.
4
XML Catalog API
Use the XML Catalog API to implement a local XML catalog.
Java SE 9 introduced a new XML Catalog API to support the Organization for the
Advancement of Structured Information Standards (OASIS) XML Catalogs, OASIS Standard
V1.1, 7 October 2005. This chapter of the Core Libraries Guide describes the API, its support
by the Java XML processors, and usage patterns.
The XML Catalog API is a straightforward API for implementing a local catalog, and the
support by the JDK XML processors makes it easier to configure your processors or the
entire environment to take advantage of the feature.
XML Catalog API Interfaces
The following code creates a CatalogFeatures object with the RESOLVE property set to continue:

CatalogFeatures f = CatalogFeatures.builder().with(Feature.RESOLVE, "continue").build();
To set this continue functionality system-wide, use the Java command line or
System.setProperty method:
System.setProperty(Feature.RESOLVE.getPropertyName(), "continue");
To set this continue functionality for the whole JVM instance, enter a line in the
jaxp.properties file:
javax.xml.catalog.resolve = "continue"
The resolve property, as well as the prefer and defer properties, can be set as an attribute
of the catalog or group entry in a catalog file. For example, in the following catalog, the
resolve attribute is set with the value continue. The attribute can also be set on the group
entry as follows:
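The catalog itself isn't reproduced in this excerpt; a sketch of what it might look like, with the resolve attribute set on both the catalog and a group entry:

<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" resolve="continue">
  <group resolve="continue">
    <system systemId="http://openjdk.java.net/xml/catalog/dtd/example.dtd"
            uri="example.dtd"/>
  </group>
</catalog>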
Properties set in a narrower scope override those that are set in a wider one. Therefore, a
property set through the API always takes preference.
System Reference
Use a CatalogResolver object to locate a local resource.
<?xml version="1.0"?>
<!DOCTYPE catalogtest PUBLIC "-//OPENJDK//XML CATALOG DTD//1.0"
"http://openjdk.java.net/xml/catalog/dtd/example.dtd">
<catalogtest>
Test &example; entry
</catalogtest>
However, the URI to the example.dtd file in the XML file doesn't need to exist. The
purpose is to provide a unique identifier for the CatalogResolver object to locate a
local resource. To do this, create a catalog entry file called catalog.xml with a system
entry to refer to the local resource:
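The catalog.xml file isn't reproduced in this excerpt; a minimal sketch, assuming example.dtd is placed next to the catalog file:

<?xml version="1.0" encoding="UTF-8"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <system systemId="http://openjdk.java.net/xml/catalog/dtd/example.dtd"
          uri="example.dtd"/>
</catalog>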
With this catalog entry file and the system entry, all you need to do is get a default
CatalogFeatures object and set the URI to the catalog entry file to create a
CatalogResolver object:
URI catalogUri = URI.create("file:///users/auser/catalog/catalog.xml");
CatalogResolver cr =
    CatalogManager.catalogResolver(CatalogFeatures.defaults(), catalogUri);
The CatalogResolver object can now be used as a JDK XML resolver. In the following
example, it’s used as a SAX EntityResolver:
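That SAX example isn't included in this excerpt; a sketch of registering the resolver on an XMLReader (the handler and input file are placeholders, and the usual checked exceptions must be handled or declared):

SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true);
XMLReader reader = spf.newSAXParser().getXMLReader();
reader.setEntityResolver(cr);                     // cr is the CatalogResolver created above
reader.setContentHandler(new DefaultHandler());   // placeholder handler
reader.parse(new InputSource("file:///users/auser/catalog/catalogtest.xml"));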
Notice that in the example the system identifier is given an absolute URI. That makes
it easy for the resolver to find the match with exactly the same systemId in the
catalog's system entry.
If the system identifier in the XML is relative, then it may complicate the matching process
because the XML processor may have made it absolute with a specified base URI or the
source file's URI. In that situation, the systemId of the system entry would need to match the
anticipated absolute URI. An easier solution is to use the systemSuffix entry, for example:
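The entry itself isn't shown in this excerpt; it might look like this:

<systemSuffix systemIdSuffix="example.dtd" uri="example.dtd"/>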
The systemSuffix entry matches any reference that ends with example.dtd in an XML
source and resolves it to a local example.dtd file as specified in the uri attribute. You may
add more to the systemId to ensure that it’s unique or the correct reference. For example,
you may set the systemIdSuffix to xml/catalog/dtd/example.dtd, or rename the id in both
the XML source file and the systemSuffix entry to make it a unique match, for example
my_example.dtd.
The URI of the system entry can be absolute or relative. If the external resources have a fixed
location, then an absolute URI is more likely to guarantee uniqueness. If the external
resources are placed relative to your application or the catalog entry file, then a relative URI
may be more effective, allowing the deployment of your application without knowing where it’s
installed. Such a relative URI then is resolved using the base URI or the catalog file’s URI if
the base URI isn’t specified. In the previous example, example.dtd is assumed to have been
placed in the same directory as the catalog file.
Public Reference
Use a public entry instead of a system entry to find a desired resource.
If no system entry matches the desired resource, and the PREFER property is specified to
match public, then a public entry can do the same as a system entry. Note that public is
the default setting for the PREFER property.
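The entry file isn't shown in this excerpt; a public entry matching the DOCTYPE declaration in the earlier XML sample might look like this:

<public publicId="-//OPENJDK//XML CATALOG DTD//1.0" uri="example.dtd"/>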
When you create and use a CatalogResolver object with this entry file, the example.dtd
resolves through the publicId property. See System Reference for an example of creating a
CatalogResolver object.
URI Reference
Use a uri entry to find a desired resource.
The URI type entries, including uri, rewriteURI, and uriSuffix, can be used in a similar
way as the system type entries.
The existing Java XML resolvers, such as XMLResolver and LSResourceResolver, don't make that distinction between system and uri references, because their specifications don't give a preference. The uri elements are defined to associate an alternate URI reference with a URI reference; in the case of a system reference, this is the systemId property.
You may therefore replace the system entry with a uri entry in the following example,
although system entries are more generally used for DTD references.
<system
systemId="http://openjdk.java.net/xml/catalog/dtd/example.dtd"
uri="example.dtd"/>
<uri name="http://openjdk.java.net/xml/catalog/dtd/example.dtd"
uri="example.dtd"/>
While system entries are frequently used for DTDs, uri entries are preferred for URI
references such as XSD and XSL import and include. The next example uses a uri
entry to resolve an XSL import.
As described in XML Catalog API Interfaces, the XML Catalog API defines the
CatalogResolver interface that extends Java XML Resolvers including
EntityResolver, XMLResolver, URIResolver, and LSResourceResolver. Therefore, a
CatalogResolver object can be used by SAX, DOM, StAX, Schema Validation, as well
as XSLT Transform. The following code creates a CatalogResolver object with default
feature settings:
CatalogResolver cr =
CatalogManager.catalogResolver(CatalogFeatures.defaults(),
catalogUri);
Alternatively the code can register the CatalogResolver object on the Transformer
object:
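That registration code isn't included in this excerpt; a sketch, assuming xslSource is a Source for the stylesheet:

TransformerFactory tf = TransformerFactory.newInstance();
Transformer transformer = tf.newTransformer(xslSource);   // xslSource is assumed
transformer.setURIResolver(cr);                            // cr is the CatalogResolver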
Assuming the XSL source file contains an import element to import the
xslImport.xsl file into the XSL source:
<xsl:import href="pathto/xslImport.xsl"/>
To resolve the import reference to where the import file is actually located, a
CatalogResolver object should be set on the TransformerFactory class before creating the
Transformer object, and a uri entry such as the following must be added to the catalog entry
file:
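The catalog entry isn't shown in this excerpt; given the import statement above, it might look like this:

<uri name="pathto/xslImport.xsl" uri="xslImport.xsl"/>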
The discussion about absolute or relative URIs and the use of systemSuffix or uriSuffix
entries with the system reference applies to the uri entries as well.
Because the JDK XML processors support the XML Catalog API directly, you don't need to create a CatalogResolver object outside an XML processor. Catalog files can be registered directly with the Java XML processor, specified through system properties, or set in the jaxp.properties file. The XML processors perform the mappings through the catalogs automatically.
USE_CATALOG
A Java XML processor determines whether the XML Catalogs feature is supported based on
the value of the USE_CATALOG feature. By default, USE_CATALOG is set to true for all JDK XML
Processors. The Java XML processor further checks for the availability of a catalog file, and
attempts to use the XML Catalog API only when the USE_CATALOG feature is true and a
catalog is available.
The USE_CATALOG feature is supported by the XML Catalog API, the system property, and the
jaxp.properties file. For example, if USE_CATALOG is set to true and it’s desirable to disable
the catalog support for a particular processor, then this can be done by setting the
USE_CATALOG feature to false through the processor's setFeature method. The following
code sets the USE_CATALOG feature to the specified value useCatalog for an XMLReader object:
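That code isn't included in this excerpt; a sketch of what it might look like (useCatalog is a boolean, and the usual checked exceptions must be handled or declared):

SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true);
XMLReader reader = spf.newSAXParser().getXMLReader();
reader.setFeature(XMLConstants.USE_CATALOG, useCatalog);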
On the other hand, if the entire environment must have the catalog turned off, then this
can be done by configuring the jaxp.properties file with a line:
javax.xml.useCatalog = false;
javax.xml.catalog.files
The javax.xml.catalog.files property is defined by the XML Catalog API and
supported by the JDK XML processors, along with other catalog features. To employ
the catalog feature on a parsing, validating, or transforming process, all that’s needed
is to set the FILES property on the processor, through its system property or using the
jaxp.properties file.
Catalog URI
The catalog file reference must be a valid URI, such as file:///users/auser/
catalog/catalog.xml.
The URI reference in a system or a URI entry in the catalog file can be absolute or
relative. If they’re relative, then they are resolved using the catalog file's URI or a base
URI if specified.
Note that catalog is a URI to a catalog file. For example, it could be something like
"file:///users/auser/catalog/catalog.xml".
It’s best to deploy resolving target files along with the catalog entry file, so that the files can
be resolved relative to the catalog file. For example, if the following is a uri entry in the
catalog file, then the XSLImport_html.xsl file will be located at /users/auser/catalog/
XSLImport_html.xsl.
In the prior sample code, note the statement spf.setXIncludeAware(true). When this is
enabled, any XInclude is resolved using the catalog as well.
<simple>
<test xmlns:xinclude="http://www.w3.org/2001/XInclude">
<latin1>
<firstElement/>
<xinclude:include href="pathto/XI_text.xml" parse="text"/>
<insideChildren/>
<another>
<deeper>text</deeper>
</another>
</latin1>
<test2>
<xinclude:include href="pathto/XI_test2.xml"/>
</test2>
</test>
</simple>
<?xml version="1.0"?>
<!-- comment before root -->
<!DOCTYPE red SYSTEM "pathto/XI_red.dtd">
<red xmlns:xinclude="http://www.w3.org/2001/XInclude">
<blue>
<xinclude:include href="pathto/XI_text.xml" parse="text"/>
</blue>
</red>
Assume another text file, XI_text.xml, contains a simple string, and the file
XI_red.dtd is as follows:
In these XML files, there is an XInclude element inside an XInclude element, and a
reference to a DTD. Assuming they are located in the same folder along with the
catalog file CatalogSupport.xml, add the following catalog entries to map them:
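The entries themselves aren't shown in this excerpt; sketches that are consistent with the references in the XML files above:

<uri name="pathto/XI_test2.xml" uri="XI_test2.xml"/>
<uri name="pathto/XI_text.xml" uri="XI_text.xml"/>
<system systemId="pathto/XI_red.dtd" uri="XI_red.dtd"/>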
When the parser.parse method is called to parse the XI_simple.xml file, it’s able to
locate the XI_test2.xml file in the XI_simple.xml file, and the XI_text.xml file and
the XI_red.dtd file in the XI_test2.xml file through the specified catalog.
When the XMLStreamReader streamReader object is used to parse the XML source,
external references in the source are then resolved in accordance with the specified
entries in the catalog.
Note that unlike the DocumentBuilderFactory class that has both setFeature and
setAttribute methods, the XMLInputFactory class defines only a setProperty
method. The XML Catalog API features including XMLConstants.USE_CATALOG are all
set through this setProperty method. For example, to disable USE_CATALOG on a
XMLStreamReader object, you can do the following:
factory.setProperty(XMLConstants.USE_CATALOG, false);
SchemaFactory factory =
SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
factory.setProperty(CatalogFeatures.Feature.FILES.getPropertyName(),
catalog);
Schema schema = factory.newSchema(schemaFile);
<xs:import
namespace="http://www.w3.org/XML/1998/namespace"
schemaLocation="http://www.w3.org/2001/pathto/xml.xsd">
<xs:annotation>
<xs:documentation>
Get access to the xml: attribute groups for xml:lang
as declared on 'schema' and 'documentation' below
</xs:documentation>
</xs:annotation>
</xs:import>
Following along with this example, to use local resources to improve your application
performance by reducing calls to the W3C server:
• Include these entries in the catalog set on the SchemaFactory object (sketched after this list):
• Download the source files XMLSchema.dtd, datatypes.dtd, and xml.xsd and save them
along with the catalog file.
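The catalog entries themselves weren't captured in this excerpt; illustrative sketches of uri entries that map the three W3C resources to the downloaded local copies:

<uri name="http://www.w3.org/2001/pathto/xml.xsd" uri="xml.xsd"/>
<uri name="http://www.w3.org/2001/XMLSchema.dtd" uri="XMLSchema.dtd"/>
<uri name="http://www.w3.org/2001/datatypes.dtd" uri="datatypes.dtd"/>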
As already discussed, the XML Catalog API lets you use any of the entry types that you
prefer. In the prior case, instead of the uri entry, you could also use either one of the
following:
• A public entry (sketched below), because the namespace attribute in the import element is treated as the publicId element
• A system entry (sketched below)
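Those two alternative entries aren't reproduced in this excerpt; illustrative sketches based on the namespace and schemaLocation shown in the import element above:

<public publicId="http://www.w3.org/XML/1998/namespace" uri="xml.xsd"/>
<system systemId="http://www.w3.org/2001/pathto/xml.xsd" uri="xml.xsd"/>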
Note:
When experimenting with the XML Catalog API, it might be useful to ensure
that none of the URIs or system IDs used in your sample files points to any
actual resources on the internet, and especially not to the W3C server. This
lets you catch mistakes early should the catalog resolution fail, and avoids
putting a burden on W3C servers, thus freeing them from any unnecessary
connections. All the examples in this topic and other related topics about the
XML Catalog API, have an arbitrary string "pathto" added to any URI for
that purpose, so that no URI could possibly resolve to an external W3C
resource.
To use the catalog to resolve any external resources in an XML source to be validated,
set the catalog on the Validator object:
SchemaFactory schemaFactory =
SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = schemaFactory.newSchema();
Validator validator = schema.newValidator();
validator.setProperty(CatalogFeatures.Feature.FILES.getPropertyName(),
catalog);
StreamSource source = new StreamSource(new File(xml));
validator.validate(source);
If the XSL source that the factory is using to create the Transformer object contains
DTD, import, and include statements similar to these:
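The statements themselves aren't shown in this excerpt; based on the catalog entries that follow, they might look like this in the XSL source:

<!DOCTYPE xsl:stylesheet SYSTEM "http://openjdk.java.net/xml/catalog/dtd/XSLDTD.dtd">
...
<xsl:import href="pathto/XSLImport_html.xsl"/>
<xsl:include href="pathto/XSLInclude_header.xsl"/>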
Then the following catalog entries can be used to resolve these references:
<system
systemId="http://openjdk.java.net/xml/catalog/dtd/XSLDTD.dtd"
uri="XSLDTD.dtd"/>
<uri name="pathto/XSLImport_html.xsl" uri="XSLImport_html.xsl"/>
<uri name="pathto/XSLInclude_header.xsl" uri="XSLInclude_header.xsl"/>
Detecting Errors
Detect configuration issues by isolating the problem.
The XML Catalogs standard requires that processors recover from any resource failures and continue; therefore, the XML Catalog API ignores any failed catalog entry files without issuing an error. This can make configuration issues harder to detect.
5
Creating Unmodifiable Lists, Sets, and Maps
Convenience static factory methods on the List, Set, and Map interfaces let you easily
create unmodifiable lists, sets, and maps.
A collection is considered unmodifiable if elements cannot be added, removed, or replaced.
After you create an unmodifiable instance of a collection, it holds the same data as long as a
reference to it exists.
A collection that is modifiable must maintain bookkeeping data to support future
modifications. This adds overhead to the data that is stored in the modifiable collection. A
collection that is unmodifiable does not need this extra bookkeeping data. Because the
collection never needs to be modified, the data contained in the collection can be packed
much more densely. Unmodifiable collection instances generally consume much less memory
than modifiable collection instances that contain the same data.
Topics
• Use Cases
• Syntax
• Creating Unmodifiable Copies of Collections
• Creating Unmodifiable Collections from Streams
• Randomized Iteration Order
• About Unmodifiable Collections
• Space Efficiency
• Thread Safety
Use Cases
Whether to use an unmodifiable collection or a modifiable collection depends on the data in
the collection.
An unmodifiable collection provides space efficiency benefits and prevents the collection from
accidentally being modified, which might cause the program to work incorrectly. An
unmodifiable collection is recommended for the following cases:
• Collections that are initialized from constants that are known when the program is written
• Collections that are initialized at the beginning of a program from data that is computed or
is read from something such as a configuration file
For a collection that holds data that is modified throughout the course of the program, a
modifiable collection is the best choice. Modifications are performed in-place, so that
incremental additions or deletions of data elements are quite inexpensive. If this were done
with an unmodifiable collection, a complete copy would have to be made to add or remove a
single element, which usually has unacceptable overhead.
Syntax
The API for these collections is simple, especially for small numbers of elements.
Topics
• Unmodifiable List Static Factory Methods
• Unmodifiable Set Static Factory Methods
• Unmodifiable Map Static Factory Methods
List.of()
List.of(e1)
List.of(e1, e2)        // fixed-argument form overloads up to 10 elements
List.of(elements...)   // varargs form supports an arbitrary number of elements or an array

Set.of()
Set.of(e1)
Set.of(e1, e2)         // fixed-argument form overloads up to 10 elements
Set.of(elements...)    // varargs form supports an arbitrary number of elements or an array

Map.of()
Map.of(k1, v1)
Map.of(k1, v1, k2, v2)                             // fixed-argument form overloads up to 10 key-value pairs
Map.ofEntries(entry(k1, v1), entry(k2, v2), ...)   // varargs form supports an arbitrary number of Entry objects or an array
Creating Unmodifiable Copies of Collections
For example, suppose you have some code that gathers elements from several
places:
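The gathering code itself isn't included in this excerpt; a sketch of the idea as a jshell-style fragment (the sources are illustrative):

List<String> fromDefaults = List.of("red", "green");
List<String> fromConfigFile = List.of("blue");

List<String> gathered = new ArrayList<>(fromDefaults);
gathered.addAll(fromConfigFile);

// Snapshot the gathered elements into an unmodifiable List.
List<String> colors = List.copyOf(gathered);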
There are corresponding static factory methods for Set and Map called Set.copyOf
and Map.copyOf. Because the parameter of List.copyOf and Set.copyOf is
Collection, you can create an unmodifiable List that contains the elements of a
Set and an unmodifiable Set that contains the elements of a List. If you use
Set.copyOf to create a Set from a List, and the List contains duplicate elements,
an exception is not thrown. Instead, an arbitrary one of the duplicate elements is
included in the resulting Set.
If the collection you want to copy is modifiable, then the copyOf method creates an
unmodifiable collection that is a copy of the original. That is, the result contains all the
same elements as the original. If elements are added to or removed from the original
collection, that won't affect the copy.
If the original collection is already unmodifiable, then the copyOf method simply
returns a reference to the original collection. The point of making a copy is to isolate
the returned collection from changes to the original one. But if the original collection cannot
be changed, there is no need to make a copy of it.
In both of these cases, if the elements are mutable, and an element is modified, that change
causes both the original collection and the copy to appear to have changed.
Collectors.toUnmodifiableList()
Collectors.toUnmodifiableSet()
Collectors.toUnmodifiableMap(keyMapper, valueMapper)
Collectors.toUnmodifiableMap(keyMapper, valueMapper, mergeFunction)
For example, to transform the elements of a source collection and place the results into an
unmodifiable set, you can do the following:
Set<Item> unmodifiableSet =
sourceCollection.stream()
.map(...)
.collect(Collectors.toUnmodifiableSet());
The following example shows how the iteration order is different after jshell is
restarted.
jshell> /exit
| Goodbye
C:\Program Files\Java\jdk\bin>jshell
Randomized iteration order applies to the collection instances created by the Set.of,
Map.of, and Map.ofEntries methods and the toUnmodifiableSet and
toUnmodifiableMap collectors. The iteration ordering of collection implementations
such as HashMap and HashSet is unchanged.
However, if the contained elements are mutable, then this may cause the collection to behave inconsistently or make its contents appear to change.
Let’s look at an example where an unmodifiable collection contains mutable elements.
Using jshell, create two lists of String objects using the ArrayList class, where
the second list is a copy of the first. Trivial jshell output was removed.
Next, using the List.of method, create unmodlist1 and unmodlist2, which hold references to the first and second lists, respectively. If you try to modify unmodlist1, then you see an exception, because unmodlist1 is unmodifiable; any modification attempt throws an UnsupportedOperationException.
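The jshell transcript that creates these lists wasn't captured in this excerpt; a sketch of what it might look like (values chosen to match the output shown below), ending with the modification attempt that produces the exception that follows:

jshell> List<String> list1 = new ArrayList<>(List.of("a", "b"))
list1 ==> [a, b]

jshell> List<String> list2 = new ArrayList<>(list1)
list2 ==> [a, b]

jshell> var unmodlist1 = List.of(list1, list1)
unmodlist1 ==> [[a, b], [a, b]]

jshell> var unmodlist2 = List.of(list2, list2)
unmodlist2 ==> [[a, b], [a, b]]

jshell> unmodlist1.add(new ArrayList<>())
|  Exception java.lang.UnsupportedOperationException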
| at ImmutableCollections.uoe (ImmutableCollections.java:71)
| at ImmutableCollections$AbstractImmutableList.add
(ImmutableCollections
.java:75)
| at (#8:1)
But if you modify the original list1, the contents of unmodlist1 changes, even though
unmodlist1 is unmodifiable.
jshell> list1.add("c")
jshell> list1
list1 ==> [a, b, c]
jshell> unmodlist1
unmodlist1 ==> [[a, b, c], [a, b, c]]
jshell> unmodlist2
unmodlist2 ==> [[a, b], [a, b]]
jshell> unmodlist1.equals(unmodlist2)
$14 ==> false
jshell> unmodlist1.add("c")
| Exception java.lang.UnsupportedOperationException
| at Collections$UnmodifiableCollection.add (Collections.java:1058)
| at (#8:1)
Note that unmodlist1 is a view of list1. You cannot change the view directly, but you
can change the original list, which changes the view. If you change the original list1,
no error is generated, and the unmodlist1 list has been modified.
jshell> list1.add("c")
$19 ==> true
jshell> list1
list1 ==> [a, b, c]
jshell> unmodlist1
unmodlist1 ==> [a, b, c]
The reason for an unmodifiable view is that the collection cannot be modified by calling
methods on the view. However, anyone with a reference to the underlying collection,
and the ability to modify it, can cause the unmodifiable view to change.
Space Efficiency
The collections returned by the convenience factory methods are more space efficient
than their modifiable equivalents.
All of the implementations of these collections are private classes hidden behind a
static factory method. When it is called, the static factory method chooses the
implementation class based on the size of the collection. The data may be stored in a
compact field-based or array-based layout.
Let’s look at the heap space consumed by two alternative implementations. First,
here’s an unmodifiable HashSet that contains two strings:
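That code isn't reproduced in this excerpt; a sketch of such a set (the element values are illustrative):

Set<String> set = Collections.unmodifiableSet(new HashSet<>(List.of("silly", "string")));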
The set includes six objects: the unmodifiable wrapper; the HashSet, which contains a
HashMap; the table of buckets (an array); and two Node instances (one for each
element). On a typical VM, with a 12-byte header per object, the total overhead comes
to 96 bytes + 28 * 2 = 152 bytes for the set. This is a large amount of overhead
compared to the amount of data stored. Plus, access to the data unavoidably requires
multiple method calls and pointer dereferences.
Instead, we can implement the set using Set.of:
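That code isn't shown in this excerpt either; the equivalent Set.of call would look like this (assuming the same two illustrative strings):

Set<String> set = Set.of("silly", "string");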
Because this is a field-based implementation, the set contains one object and two
fields. The overhead is 20 bytes. The new collections consume less heap space, both
in terms of fixed overhead and on a per-element basis.
Not needing to support mutation also contributes to space savings. In addition, the
locality of reference is improved, because there are fewer objects required to hold the
data.
Thread Safety
If multiple threads share a modifiable data structure, steps must be taken to ensure that
modifications made by one thread do not cause unexpected side effects for other threads.
However, because an immutable object cannot be changed, it is considered thread safe
without requiring any additional effort.
When several parts of a program share data structures, a modification to a structure made by
one part of the program is visible to the other parts. If the other parts of the program aren't
prepared for changes to the data, then bugs, crashes, or other unexpected behavior could
occur. However, if different parts of a program share an immutable data structure, such
unexpected behavior can never happen, because the shared structure cannot be changed.
Similarly, when multiple threads share a data structure, each thread must take precautions
when modifying that data structure. Typically, threads must hold a lock while reading from or
writing to any shared data structure. Failing to lock properly can lead to race conditions or
inconsistencies in the data structure, which can result in bugs, crashes, or other unexpected
behavior. However, if multiple threads share an immutable data structure, these problems
cannot occur, even in the absence of locking. Therefore, an immutable data structure is said
to be thread safe without requiring any additional effort such as adding locking code.
A collection is considered unmodifiable if elements cannot be added, removed, or replaced.
However, an unmodifiable collection is only immutable if the elements contained in the
collection are immutable. To be considered thread safe, collections created using the static
factory methods and toUnmodifiable- collectors must contain only immutable elements.
6
Process API
The Process API lets you start, retrieve information about, and manage native operating
system processes.
With this API, you can work with operating system processes as follows:
• Run arbitrary commands:
– Filter running processes
– Redirect output
– Connect heterogeneous commands and shells by scheduling tasks to start when
another ends
– Clean up leftover processes
• Test the running of commands:
– Run a series of tests
– Log output
• Monitor commands:
– Monitor long-running processes and restart them if they terminate
– Collect usage statistics
Topics
• Process API Classes and Interfaces
• Creating a Process
• Getting Information About a Process
• Redirecting Output from a Process
• Filtering Processes with Streams
• Handling Processes When They Terminate with the onExit Method
• Controlling Access to Sensitive Process Information
Topics
• ProcessBuilder Class
• Process Class
• ProcessHandle Interface
• ProcessHandle.Info Interface
ProcessBuilder Class
The ProcessBuilder class lets you create and start operating system processes.
See Creating a Process for examples on how to create and start a process. The ProcessBuilder class manages various process attributes, such as the command to run and its arguments, the environment, the working directory, and where to redirect the process's input, output, and error streams.
Process Class
The methods in the Process class let you control processes started by the methods ProcessBuilder.start and Runtime.exec.
ProcessHandle Interface
The ProcessHandle interface lets you identify and control native processes. It differs from the Process class: Process lets you control only processes started by the methods ProcessBuilder.start and Runtime.exec, but it also gives you access to the process's input, output, and error streams, which ProcessHandle does not.
See Filtering Processes with Streams for an example of the ProcessHandle interface.
ProcessHandle.Info Interface
The ProcessHandle.Info interface lets you retrieve information about a process,
including processes created by the ProcessBuilder.start method and native
processes.
See Getting Information About a Process for an example of the
ProcessHandle.Info interface. The following table summarizes the methods in this
interface:
Method               Description
arguments()          Returns the arguments of the process as a String array.
command()            Returns the executable path name of the process.
commandLine()        Returns the command line of the process.
startInstant()       Returns the start time of the process.
totalCpuDuration()   Returns the process's total accumulated CPU time.
user()               Returns the user of the process.
Creating a Process
To create a process, first specify the attributes of the process, such as the command's
name and its arguments, with the ProcessBuilder class. Then, start the process
with the ProcessBuilder.start method, which returns a Process instance.
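For example, the following sketch (the echo command is illustrative) builds a command, starts
it as an operating system process, and waits for it to complete:
import java.io.IOException;

public class CreateProcessExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Specify the command and its arguments.
        ProcessBuilder pb = new ProcessBuilder("echo", "Hello World!");
        pb.inheritIO();                          // use the current process's standard I/O
        Process p = pb.start();                  // start the operating system process
        System.out.println("Exit value: " + p.waitFor());
    }
}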
In the following excerpt, the setEnvTest method sets two environment variables, horse and
dog, then prints the value of these environment variables (as well as the system environment
variable HOME) with the echo command:
This method prints the following (assuming that your home directory is /home/admin):
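A minimal sketch of such a method, assuming a POSIX shell at /bin/sh and using oats and
treats as illustrative values, might look like this:
public static void setEnvTest() throws IOException, InterruptedException {
    ProcessBuilder pb =
        new ProcessBuilder("/bin/sh", "-c", "echo $horse $dog $HOME").inheritIO();
    // Set two environment variables for the subprocess.
    pb.environment().put("horse", "oats");
    pb.environment().put("dog", "treats");
    pb.start().waitFor();
}
With these illustrative values, the method would print:
oats treats /home/admin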
System.out.printf("Arguments: %s%n",
info.arguments().map(
(String[] a) -> Stream.of(a).collect(Collectors.joining("
")))
.orElse(na));
Redirecting Output from a Process
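The following sketch shows one way to perform such redirection; the ls command is
illustrative, and java.io.File and ProcessBuilder.Redirect are assumed to be imported:
ProcessBuilder pb = new ProcessBuilder("ls", "-la");
pb.redirectOutput(new File("out.tmp"));     // standard output goes to a file
pb.redirectError(Redirect.INHERIT);         // standard error goes to this process's stderr
Process p = pb.start();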
Note:
The excerpt redirects standard output to the file out.tmp. It redirects standard error to
the standard error of the invoking process; the value Redirect.INHERIT specifies
that the subprocess I/O source or destination is the same as that of the current process.
Filtering Processes with Streams
// ...
// ...
}
Note that the allProcesses method is limited by native operating system access controls.
Also, because all processes are created and terminated asynchronously, there is no
guarantee that a process in the stream is alive or that no other processes may have been
created since the call to the allProcesses method.
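For example, the following sketch uses the allProcesses method to print the PID and command
of up to ten processes whose command is known:
ProcessHandle.allProcesses()
    .filter(ph -> ph.info().command().isPresent())
    .limit(10)
    .forEach(ph -> System.out.printf("%d: %s%n",
        ph.pid(), ph.info().command().get()));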
Handling Processes When They Terminate with the onExit Method
In the following excerpt, the method startProcessesTest creates three processes and
then starts them. Afterward, it calls onExit().thenAccept(onExitMethod) on each of
the processes; onExitMethod prints the process ID (PID), exit status, and output of the
process.
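A sketch of such a method, assuming a POSIX shell at /bin/sh and omitting the part of the
original excerpt that collects each process's output, might look like this:
public static void startProcessesTest() throws IOException {
    List<ProcessBuilder> greps = List.of(
        new ProcessBuilder("/bin/sh", "-c", "grep -c \"java\" *"),
        new ProcessBuilder("/bin/sh", "-c", "grep -c \"Process\" *"),
        new ProcessBuilder("/bin/sh", "-c", "grep -c \"onExit\" *"));
    System.out.println("Number of processes: " + greps.size());
    for (ProcessBuilder pb : greps) {
        Process p = pb.start();
        System.out.printf("Start %d, %s%n",
            p.pid(), p.info().commandLine().orElse("<na>"));
        // Print the PID and exit status when the process terminates.
        p.onExit().thenAccept(done ->
            System.out.printf("Exit %d, status %d%n", done.pid(), done.exitValue()));
    }
    System.in.read();   // wait here so the program doesn't exit before the processes do
}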
// ...
// ...
}
The output of the method startProcessesTest is similar to the following. Note that the
processes might exit in a different order than the order in which they were started.
Number of processes: 3
Start 12401, /bin/sh -c grep -c "java" *
Start 12403, /bin/sh -c grep -c "Process" *
Start 12404, /bin/sh -c grep -c "onExit" *
This method calls the System.in.read() method to prevent the program from terminating
before all the processes have exited (and have run the method specified by the thenAccept
method).
If you want to wait for a process to terminate before proceeding with the rest of the program,
then call onExit().get():
pBList.stream().forEach(
pb -> {
try {
Process p = pb.start();
System.out.printf("Start %d, %s%n",
p.pid(), p.info().commandLine().orElse("<na>"));
p.onExit().get();
printGrepResults(p);
} catch (IOException | InterruptedException | ExecutionException e) {
System.err.println("Exception caught");
e.printStackTrace();
}
}
);
}
The CompletableFuture class contains a variety of methods that you can call to
schedule tasks when a process exits, including the following (see the sketch after this list):
• thenApply: Similar to thenAccept, except that it takes a lambda expression of
type Function (a lambda expression that returns a value).
• thenRun: Takes a lambda expression of type Runnable (no formal parameters or
return value).
• thenApplyAsync: Runs the specified Function with a thread from
ForkJoinPool.commonPool().
Because CompletableFuture implements the Future interface, this class also contains
synchronous methods:
• get(long timeout, TimeUnit unit): Waits, if necessary, at most the time
specified by its arguments for the process to complete.
• isDone: Returns true if the process is completed.
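For example, the following sketch, assuming p is a Process that has already been started and
that the checked exceptions are handled by the caller, uses thenApply to obtain the exit value
and get with a timeout to wait for it:
CompletableFuture<Integer> exitStatus = p.onExit().thenApply(Process::exitValue);
System.out.println("Exit status: " + exitStatus.get(30, TimeUnit.SECONDS));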
Controlling Access to Sensitive Process Information
WARNING:
The Security Manager and APIs related to it have been deprecated and are subject
to removal in a future release. There is no replacement for the Security Manager.
See JEP 411 for discussion and alternatives.
7
Preferences API
The Preferences API enables applications to manage preference and configuration data.
Applications require preference and configuration data to adapt to the needs of different users
and environments. The java.util.prefs package provides a way for applications to store
and retrieve user and system preference and configuration data. The data is stored
persistently in an implementation-dependent backing store. There are two separate trees of
preference nodes: one for user preferences and one for system preferences.
All of the methods that modify preference data are permitted to operate asynchronously. They
may return immediately, and changes will eventually propagate to the persistent backing
store. The flush method can be used to force changes to the backing store.
The methods in the Preferences class may be invoked concurrently by multiple threads in
a single JVM without the need for external synchronization, and the results will be equivalent
to some serial execution. If this class is used concurrently by multiple JVMs that store their
preference data in the same backing store, the data store will not be corrupted, but no other
guarantees are made concerning the consistency of the preference data.
Topics:
• Comparing the Preferences API to Other Mechanisms
• Usage Notes
• Design FAQ
policy to prevent name clashes, foster consistency, and encourage robustness in the
face of inaccessibility of the backing data store.
Usage Notes
The information in this section is not part of the Preferences API specification. It is
intended to provide some examples of how the Preferences API might be used.
Topics:
• Obtain Preferences Objects for an Enclosing Class
• Obtain Preferences Objects for a Static Method
• Atomic Updates
• Determine Backing Store Status
Obtain Preferences Objects for an Enclosing Class
package com.greencorp.widget;
import java.util.prefs.*;
public class Gadget {
    // Obtain the user preferences node for the package containing Gadget.
    void getPrefs() {
        Preferences prefs = Preferences.userNodeForPackage(Gadget.class);
        ...
    }
}
It is always acceptable to obtain a system preferences object once, in a static initializer, and
use it whenever system preferences are required:
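For example (reusing the Gadget class from the earlier example):
static final Preferences systemPrefs =
    Preferences.systemNodeForPackage(Gadget.class);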
In general, it is acceptable to do the same thing for a user preferences object, but not if the
code in question is to be used in a server, wherein multiple users are running concurrently or
serially. In such a system, userNodeForPackage and userRoot return the appropriate node for
the calling user, thus it's critical that calls to userNodeForPackage or userRoot be made from
the appropriate thread at the appropriate time. If a piece of code may eventually be used in
such a server environment, it is a good, conservative practice to obtain user preferences
objects immediately before they are used, as in the prior example.
Atomic Updates
The Preferences API does not provide database-like "transactions" wherein multiple
preferences are modified atomically. Occasionally, it is necessary to modify two or more
preferences as a unit.
For example, suppose you are storing the x and y coordinates where a window is to be
placed. The only way to achieve atomicity is to store both values in a single preference. Many
encodings are possible. Here's a simple one:
int x, y;
...
prefs.put(POSITION, x + "," + y);
When the window position is needed, the single stored value is retrieved and decoded:
String position = prefs.get(POSITION, X_DEFAULT + "," + Y_DEFAULT);
int i = position.indexOf(',');
try {
    x = Integer.parseInt(position.substring(0, i));
    y = Integer.parseInt(position.substring(i + 1));
} catch (Exception e) {
    // Value was corrupt, just use defaults
    x = X_DEFAULT;
    y = Y_DEFAULT;
}
...
}
Design FAQ
This section provides answers to frequently asked questions about the design of the
Preferences API.
Topics:
• How does this Preferences API relate to the Properties API?
• How does the Preferences API relate to JNDI?
• Why do all of the get methods require the caller to pass in a default?
• How was it decided which methods should throw BackingStoreException?
• Why doesn't this API provide stronger guarantees concerning concurrent access
by multiple VMs? Similarly, why doesn't the API allow multiple Preferences
updates to be combined into a single "transaction", with all or nothing semantics?
• Why does this API have case-sensitive keys and node-names, while other APIs playing
in a similar space (such as the Microsoft Windows Registry and LDAP) do not?
• Why doesn't this API use the Java 2 Collections Framework?
• Why don't the put and remove methods return the old values?
• Why does the API permit, but not require, stored defaults?
• Why doesn't this API contain methods to read and write arbitrary serializable objects?
• Why is Preferences an abstract class rather than an interface?
• Where is the default backing store?
Why do all of the get methods require the caller to pass in a default?
This forces the application authors to provide reasonable default values, so that applications
have a reasonable chance of running even if the repository is unavailable.
Why doesn't this API provide stronger guarantees concerning concurrent access by
multiple VMs? Similarly, why doesn't the API allow multiple Preferences updates to be
combined into a single "transaction", with all or nothing semantics?
While the API does provide rudimentary persistent data storage, it is not intended as a
substitute for a database. It is critical that it be possible to implement this API atop standard
preference/configuration repositories, most of which do not provide database-like guarantees
and functionality. Such repositories have proven adequate for the purposes for which this API
is intended.
Why does this API have case-sensitive keys and node-names, while other APIs playing
in a similar space (such as the Microsoft Windows Registry and LDAP) do not?
In the Java programming universe, case-sensitive String keys are ubiquitous. In particular,
they are provided by the Properties class, which this API is intended to replace. It is not
uncommon for people to use Properties in a fashion that demands case-sensitivity. For
example, Java package names (which are case-sensitive) are sometimes used as
keys. It is recognized that this design decision complicates the life of the systems
programmer who implements Preferences atop a backing store with case-insensitive
keys, but this is considered an acceptable price to pay, as far more programmers will
use the Preferences API than will implement it.
Why don't the put and remove methods return the old values?
It is desirable that both of these methods be executable even if the backing store is
unavailable. This would not be possible if they were required to return the old value.
Further, it would have negative performance impact if the API were implemented atop
some common back-end data stores.
Why does the API permit, but not require, stored defaults?
This functionality is required in enterprise settings for scalable, cost-effective
administration of preferences across the enterprise, but would be overkill in a self-
administered single-user setting.
Why doesn't this API contain methods to read and write arbitrary serializable
objects?
Serialized objects are somewhat fragile: if the version of the program that reads such a
property differs from the version that wrote it, the object may not deserialize properly
(or at all). It is not impossible to store serialized objects using this API, but we do not
encourage it, and have not provided a convenience method.
Where is the default backing store?
The preference data is stored persistently in an implementation-dependent backing store whose
location is platform specific. The system preferences location can be overridden by setting the system
property java.util.prefs.systemRoot. The user preferences are typically stored at user-
home/.java/.userPrefs. The user preferences location can be overridden by setting the
system property java.util.prefs.userRoot.
8
Java Logging Overview
The Java Logging APIs, contained in the package java.util.logging, facilitate software
servicing and maintenance at customer sites by producing log reports suitable for analysis by
end users, system administrators, field service engineers, and software development teams.
The Logging APIs capture information such as security failures, configuration errors,
performance bottlenecks, and/or bugs in the application or platform.
The core package includes support for delivering plain text or XML-formatted log records to
memory, output streams, consoles, files, and sockets. In addition, the logging APIs are
capable of interacting with logging services that already exist on the host operating system.
Topics
• Overview of Control Flow
• Log Levels
• Loggers
• Logging Methods
• Handlers
• Formatters
• The LogManager
• Configuration File
• Default Configuration
• Dynamic Configuration Updates
• Native Methods
• XML DTD
• Unique Message IDs
• Security
• Configuration Management
• Packaging
• Localization
• Remote Access and Serialization
• Java Logging Examples
• Appendix A: DTD for XMLFormatter Output
Overview of Control Flow
Applications make logging calls on Logger objects.
These Logger objects allocate LogRecord objects which are passed to Handler
objects for publication. Both Logger and Handler objects may use logging Level
objects and (optionally) Filter objects to decide if they are interested in a particular
LogRecord object. When it is necessary to publish a LogRecord object externally, a
Handler object can (optionally) use a Formatter object to localize and format the
message before publishing it to an I/O stream.
Each Logger object keeps track of a set of output Handler objects. By default all
Logger objects also send their output to their parent Logger. But Logger objects
may also be configured to ignore Handler objects higher up the tree.
Some Handler objects may direct output to other Handler objects. For example, the
MemoryHandler maintains an internal ring buffer of LogRecord objects, and on
trigger events, it publishes its LogRecord object through a target Handler. In such
cases, any formatting is done by the last Handler in the chain.
The APIs are structured so that calls on the Logger APIs can be cheap when logging
is disabled. If logging is disabled for a given log level, then the Logger can make a
cheap comparison test and return. If logging is enabled for a given log level, the
Logger is still careful to minimize costs before passing the LogRecord to the
Handler. In particular, localization and formatting (which are relatively expensive) are
deferred until the Handler requests them. For example, a MemoryHandler can
maintain a circular buffer of LogRecord objects without having to pay formatting
costs.
Log Levels
Each log message has an associated log Level object. The Level gives a rough
guide to the importance and urgency of a log message. Log Level objects
encapsulate an integer value, with higher values indicating higher priorities.
The Level class defines seven standard log levels, ranging from FINEST (the lowest
priority, with the lowest value) to SEVERE (the highest priority, with the highest value).
Loggers
As stated earlier, client code sends log requests to Logger objects. Each logger keeps track
of a log level that it is interested in, and discards log requests that are below this level.
Logger objects are normally named entities, using dot-separated names such as java.awt.
The namespace is hierarchical and is managed by the LogManager. The namespace should
typically be aligned with the Java packaging namespace, but is not required to follow it
exactly. For example, a Logger called java.awt might handle logging requests for classes in
the java.awt package, but it might also handle logging for classes in sun.awt that support
the client-visible abstractions defined in the java.awt package.
Logging Methods
The Logger class provides a large set of convenience methods for generating log
messages. For convenience, there are methods for each logging level, corresponding to the
logging level name. Thus rather than calling logger.log(Level.WARNING, ...), a
developer can simply call the convenience method logger.warning(...).
There are two different styles of logging methods, to meet the needs of different communities
of users.
First, there are methods that take an explicit source class name and source method name.
These methods are intended for developers who want to be able to quickly locate the source
of any given logging message. An example of this style is:
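For example, the logp methods take an explicit source class name and source method name;
the names and message below are illustrative:
logger.logp(Level.WARNING, "com.wombat.Nose", "sneeze", "trouble sneezing");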
Second, there are a set of methods that do not take explicit source class or source method
names. These are intended for developers who want easy-to-use logging and do not require
detailed source information.
For this second set of methods, the Logging framework will make a "best effort" to determine
which class and method called into the logging framework and will add this information into
the LogRecord. However, it is important to realize that this automatically inferred information
may only be approximate. Virtual machines perform extensive optimizations when just-in-time
compiling and may entirely remove stack frames, making it impossible to reliably
locate the calling class and method.
Handlers
Java SE provides the following Handler classes:
• StreamHandler: A simple handler for writing formatted records to an OutputStream.
• ConsoleHandler: A simple handler for writing formatted records to System.err.
• FileHandler: A handler that writes formatted log records either to a single file or to a set of rotating log files.
• SocketHandler: A handler that writes formatted log records to remote TCP ports.
• MemoryHandler: A handler that buffers log records in memory.
Formatters
Java SE also includes two standard Formatter classes:
• SimpleFormatter: Writes brief "human-readable" summaries of log records.
• XMLFormatter: Writes detailed XML-structured information.
The LogManager
There is a global LogManager object that keeps track of global logging information.
This includes:
• A hierarchical namespace of named Loggers.
• A set of logging control properties read from the configuration file. See the section
Configuration File.
There is a single LogManager object that can be retrieved using the static
LogManager.getLogManager method. This is created during LogManager
initialization, based on a system property. This property allows container applications
(such as EJB containers) to substitute their own subclass of LogManager in place of
the default class.
Configuration File
The logging configuration can be initialized using a logging configuration file that will
be read at startup. This logging configuration file is in standard
java.util.Properties format.
Alternatively, the logging configuration can be initialized by specifying a class that can
be used for reading initialization properties. This mechanism allows configuration data
to be read from arbitrary sources, such as LDAP and JDBC.
There is a small set of global configuration information. This is specified in the description of
the LogManager class and includes a list of root-level handlers to install during startup.
The initial configuration may specify levels for particular loggers. These levels are applied to
the named logger and any loggers below it in the naming hierarchy. The levels are applied in
the order they are defined in the configuration file.
The initial configuration may contain arbitrary properties for use by handlers or by
subsystems doing logging. By convention, these properties should use names starting with
the name of the handler class or the name of the main Logger for the subsystem.
Default Configuration
The default logging configuration that ships with the JDK is only a default and can be
overridden by ISVs, system administrators, and end users. This file is located at java-
home/conf/logging.properties.
The default configuration makes only limited use of disk space. It doesn't flood the user with
information, but does make sure to always capture key failure information.
The default configuration establishes a single handler on the root logger for sending output to
the console.
Native Methods
There are no native APIs for logging.
Native code that wishes to use the Java Logging mechanisms should make normal JNI calls
into the Java Logging APIs.
XML DTD
The XML DTD used by the XMLFormatter is specified in Appendix A: DTD for
XMLFormatter Output.
The DTD is designed with a <log> element as the top-level document. Individual log records
are then written as <record> elements.
Note that in the event of JVM crashes it may not be possible to cleanly terminate an
XMLFormatter stream with the appropriate closing </log>. Therefore, tools that are
analyzing log records should be prepared to cope with un-terminated streams.
Security
The principal security requirement is that untrusted code should not be able to change
the logging configuration. Specifically, if the logging configuration has been set up to
log a particular category of information to a particular Handler, then untrusted code
should not be able to prevent or disrupt that logging.
The security permission LoggingPermission controls updates to the logging
configuration.
Trusted applications are given the appropriate LoggingPermission so they can call
any of the logging configuration APIs. Untrusted applets are a different story. Untrusted
applets can create and use named loggers in the normal way, but they are not allowed
to change logging control settings, such as adding or removing handlers, or changing
log levels. However, untrusted applets are able to create and use their own
"anonymous" loggers, using Logger.getAnonymousLogger. These anonymous
loggers are not registered in the global namespace, and their methods are not access-
checked, allowing even untrusted code to change their logging control settings.
The logging framework does not attempt to prevent spoofing. The sources of logging
calls cannot be determined reliably, so when a LogRecord is published that claims to
be from a particular source class and source method, it may be a fabrication. Similarly,
formatters such as the XMLFormatter do not attempt to protect themselves against
nested log messages inside message strings. Thus, a spoof LogRecord might
contain a spoof set of XML inside its message string to make it look as if there was an
additional XML record in the output.
In addition, the logging framework does not attempt to protect itself against denial of
service attacks. Any given logging client can flood the logging framework with
meaningless messages in an attempt to conceal some important log message.
Configuration Management
The APIs are structured so that an initial set of configuration information is read as
properties from a configuration file. The configuration information may then be
changed programmatically by calls on the various logging classes and objects.
In addition, there are methods on LogManager that allow the configuration file to be
re-read. When this happens, the configuration file values will override any changes
that have been made programmatically.
Packaging
All of the logging classes are in the java.* part of the namespace, in the
java.util.logging package.
Localization
Log messages may need to be localized.
Each logger may have a ResourceBundle name associated with it. The corresponding
ResourceBundle can be used to map between raw message strings and localized
message strings.
Normally, formatters perform localization. As a convenience, the Formatter class provides a
formatMessage method that provides some basic localization and formatting support.
Remote Access and Serialization
Most of the logging classes are not intended to be serializable. Both loggers and handlers are
stateful classes that are tied into a specific virtual machine. In this respect they are analogous
to the java.io classes, which are also not serializable.
Java Logging Examples
The following minimal sketch shows a typical use of the Logging APIs; the class name and log
messages are illustrative:
package com.wombat;
import java.util.logging.*;
public class Nose {
    // Obtain a logger named after the subsystem.
    private static final Logger logger = Logger.getLogger("com.wombat.nose");
    public static void main(String[] args) {
        logger.fine("doing stuff");           // FINE tracing message
        logger.warning("trouble sneezing");   // WARNING message
        logger.fine("done");
    }
}
Appendix A: DTD for XMLFormatter Output
<nanos>562000</nanos>
<sequence>1256</sequence>
<logger>kgh.test.fred</logger>
<level>INFO</level>
<class>kgh.test.XMLTest</class>
<method>writeLog</method>
<thread>10</thread>
<message>Hello world!</message>
</record>
</log>
<!-- Date and time when LogRecord was created in ISO 8601 format -->
<!ELEMENT date (#PCDATA)>
<!-- The message element contains the text string of a log message. -->
<!ELEMENT message (#PCDATA)>
<!-- If the message string was localized, the key element provides
the original localization message key. -->
<!ELEMENT key (#PCDATA)>
<!-- If the message string was localized, the catalog element provides
the logger's localization resource bundle name. -->
<!ELEMENT catalog (#PCDATA)>
<!-- If the message string was localized, each of the param elements
provides the String value (obtained using Object.toString())
of the corresponding LogRecord parameter. -->
<!ELEMENT param (#PCDATA)>
9
Java NIO
The Java NIO (New Input/Output) API defines buffers, which are containers for data, and
other structures, such as charsets, channels, and selectable channels. Charsets are
mappings between bytes and Unicode characters. Channels represent connections to entities
capable of performing I/O operations. Selectable channels are those that can be multiplexed
with a selector, which means that a single thread can monitor and service I/O operations on
many channels.
Buffers
They are containers for a fixed amount of data of a specific primitive type. See the java.nio
package and Table 9-1.
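For example, the following sketch (assuming java.nio.ByteBuffer is imported) writes two
values into a buffer and then reads them back:
ByteBuffer buf = ByteBuffer.allocate(16);    // a buffer with a fixed capacity of 16 bytes
buf.putInt(42);
buf.putDouble(3.14);
buf.flip();                                  // switch the buffer from writing to reading
System.out.println(buf.getInt() + " " + buf.getDouble());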
Charsets
They are named mappings between sequences of 16-bit Unicode characters and
sequences of bytes. Support for charsets includes decoders and encoders, which
translate between bytes and Unicode characters. See the java.nio.charset
package and Table 9-2.
Channels
They represent an open connection to an entity such as a hardware device, a file, a
network socket, or a program component that is capable of performing one or more
distinct I/O operations, for example reading or writing. See the java.nio.channels
package and Table 9-3.
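For example, the following sketch (the file name is illustrative) opens a channel to a file and
reads its first bytes into a buffer:
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

public class ReadChannelExample {
    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocate(64);
        try (FileChannel fc = FileChannel.open(Path.of("example.txt"))) {
            int n = fc.read(buf);            // read up to 64 bytes from the channel
            System.out.println(n + " bytes read");
        }
    }
}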
Grep NIO Example
// Open the file and then get a channel from the stream
try (FileInputStream fis = new FileInputStream(f);
FileChannel fc = fis.getChannel()) {
Checksum NIO Example
// Open the file and then get a channel from the stream
try (
FileInputStream fis = new FileInputStream(f);
FileChannel fc = fis.getChannel()) {
sum(f);
} catch (IOException e) {
System.err.println(f + ": " + e);
}
}
}
}
Time Query NIO Example
// Connect
sc.connect(isa);
// Read the time from the remote host. For simplicity we assume
// that the time comes back to us in a single packet, so that we
// only need to read once.
dbuf.clear();
sc.read(dbuf);
}
}
if (args.length < 1) {
System.err.println("Usage: java TimeQuery [port] host...");
return;
}
int firstArg = 0;
Time Server NIO Example
InetAddress.getLocalHost(), port);
ssc.socket().bind(isa);
return ssc;
}
try {
ServerSocketChannel ssc = setup();
for (;;) {
serve(ssc);
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Non-Blocking Time Server NIO Example
InetAddress lh = InetAddress.getLocalHost();
InetSocketAddress isa = new InetSocketAddress(lh, port);
ssc.socket().bind(isa);
int keysAdded = 0;
true);
Date now = new Date();
out.println(now);
out.close();
}
}
}
// Entry point.
public static void main(String[] args) {
// Parse command line arguments and
// create a new time server (no arguments yet)
try {
NBTimeServer nbt = new NBTimeServer();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Internet Protocol and UNIX Domain Sockets NIO Example
Special handling is only required for the different address types at initialization. For the server
side, once a listener is created and bound to an address, the code managing the selector can
handle the different address families identically.
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import jdk.net.ExtendedSocketOptions;
import jdk.net.UnixDomainPrincipal;
java Socat -h
UNIX:{path}
INET:{host}:port
INET6:{host}:port
System.out.println(ustring);
}
// Return the token between braces, that is, a domain name or UNIX path.
// Obtaining the peer credentials is supported only for UNIX
// channels; it's not required behavior.
UnixDomainPrincipal pr =
ch.getOption(ExtendedSocketOptions.SO_PEERCRED);
userid = "user: " + pr.user().toString() +
" group: " +
pr.group().toString();
}
ch.configureBlocking(false);
byteCounter.put(ch, 0);
System.out.printf("Server: new
connection\n\tfrom {%s}\n", ch.getRemoteAddress());
System.out.printf("\tConnection id: %s\n",
nextConnectionId);
if (userid.length() > 0) {
    System.out.printf("\tpeer credentials: %s\n", userid);
}
System.out.printf("\tConnection count: %d\n",
byteCounter.size());
ch.register(selector, SelectionKey.OP_READ,
nextConnectionId++);
} else {
var ch = (SocketChannel) c;
int id = (Integer)key.attachment();
int bytes = byteCounter.get(ch);
readBuf.clear();
int n = ch.read(readBuf);
if (n < 0) {
    String remote = ch.getRemoteAddress().toString();
    System.out.printf("Server: closing connection\n\tfrom: {%s} Id: %d\n",
        remote, id);
    System.out.printf("\tBytes received: %d\n", bytes);
byteCounter.remove(ch);
ch.close();
} else {
readBuf.flip();
bytes += n;
byteCounter.put(ch, bytes);
display(ch, readBuf, id);
}
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
};
keys.clear();
}
}
sendBuf.limit(sendBuf.capacity());
sendBuf.position(0);
}
}
In the first command-line shell, you'll see output similar to the following:
If you don't specify a file name when you create a UNIX domain socket, then the JVM
creates a socket file and automatically binds the socket to it:
Chmod File NIO Example
int pos = 0;
// who
boolean u = false;
boolean g = false;
boolean o = false;
boolean done = false;
for (;;) {
switch (expr.charAt(pos)) {
case 'u' : u = true; break;
case 'g' : g = true; break;
// operator
boolean add = (op == '+');
boolean remove = (op == '-');
boolean assign = (op == '=');
if (!add && !remove && !assign)
throw new IllegalArgumentException("Invalid mode");
// permissions
boolean r = false;
boolean w = false;
boolean x = false;
for (int i=0; i<mask.length(); i++) {
switch (mask.charAt(i)) {
case 'r' : r = true; break;
case 'w' : w = true; break;
case 'x' : x = true; break;
default:
    throw new IllegalArgumentException("Invalid mode");
}
}
if (x) toAdd.add(GROUP_EXECUTE);
}
if (o) {
if (r) toAdd.add(OTHERS_READ);
if (w) toAdd.add(OTHERS_WRITE);
if (x) toAdd.add(OTHERS_EXECUTE);
}
}
if (remove) {
if (u) {
if (r) toRemove.add(OWNER_READ);
if (w) toRemove.add(OWNER_WRITE);
if (x) toRemove.add(OWNER_EXECUTE);
}
if (g) {
if (r) toRemove.add(GROUP_READ);
if (w) toRemove.add(GROUP_WRITE);
if (x) toRemove.add(GROUP_EXECUTE);
}
if (o) {
if (r) toRemove.add(OTHERS_READ);
if (w) toRemove.add(OTHERS_WRITE);
if (x) toRemove.add(OTHERS_EXECUTE);
}
}
if (assign) {
if (u) {
if (r) toAdd.add(OWNER_READ);
else toRemove.add(OWNER_READ);
if (w) toAdd.add(OWNER_WRITE);
else toRemove.add(OWNER_WRITE);
if (x) toAdd.add(OWNER_EXECUTE);
else toRemove.add(OWNER_EXECUTE);
}
if (g) {
if (r) toAdd.add(GROUP_READ);
else toRemove.add(GROUP_READ);
if (w) toAdd.add(GROUP_WRITE);
else toRemove.add(GROUP_WRITE);
if (x) toAdd.add(GROUP_EXECUTE);
else toRemove.add(GROUP_EXECUTE);
}
if (o) {
if (r) toAdd.add(OTHERS_READ);
else toRemove.add(OTHERS_READ);
if (w) toAdd.add(OTHERS_WRITE);
else toRemove.add(OTHERS_WRITE);
if (x) toAdd.add(OTHERS_EXECUTE);
else toRemove.add(OTHERS_EXECUTE);
}
}
}
// return changer
return new Changer() {
@Override
public Set<PosixFilePermission> change(Set<PosixFilePermission> perms) {
perms.addAll(toAdd);
perms.removeAll(toRemove);
return perms;
}
};
}
/**
* A task that <i>changes</i> a set of {@link PosixFilePermission} elements.
*/
public interface Changer {
/**
* Applies the changes to the given set of permissions.
*
* @param perms
* The set of permissions to change
*
* @return The {@code perms} parameter
*/
Set<PosixFilePermission> change(Set<PosixFilePermission> perms);
}
/**
* Changes the permissions of the file using the given Changer.
*/
static void chmod(Path file, Changer changer) {
try {
Set<PosixFilePermission> perms = Files
.getPosixFilePermissions(file);
Files.setPosixFilePermissions(file, changer.change(perms));
} catch (IOException x) {
System.err.println(x);
}
}
/**
* Changes the permission of each file and directory visited
*/
static class TreeVisitor implements FileVisitor<Path> {
private final Changer changer;
TreeVisitor(Changer changer) {
this.changer = changer;
}
@Override
public FileVisitResult preVisitDirectory(Path dir,
BasicFileAttributes attrs) {
chmod(dir, changer);
return CONTINUE;
}
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes
attrs) {
chmod(file, changer);
return CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path dir, IOException exc)
{
if (exc != null)
System.err.println("WARNING: " + exc);
return CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path file, IOException exc) {
System.err.println("WARNING: " + exc);
return CONTINUE;
}
}
Copy File NIO Example
/**
* Returns {@code true} if okay to overwrite a file ("cp -i")
*/
static boolean okayToOverwrite(Path file) {
String answer = System.console().readLine("overwrite %s (yes/no)? ", file);
return (answer.equalsIgnoreCase("y") || answer.equalsIgnoreCase("yes"));
}
/**
* Copy source file to target location. If {@code prompt} is true then
* prompt user to overwrite target if it exists. The {@code preserve}
* parameter determines if file attributes should be copied/preserved.
*/
static void copyFile(Path source, Path target, boolean prompt,
boolean preserve) {
CopyOption[] options = (preserve) ?
new CopyOption[] { COPY_ATTRIBUTES, REPLACE_EXISTING } :
new CopyOption[] { REPLACE_EXISTING };
if (!prompt || Files.notExists(target) ||
okayToOverwrite(target)) {
try {
Files.copy(source, target, options);
} catch (IOException x) {
System.err.format("Unable to copy: %s: %s%n", source,
x);
}
}
}
/**
* A {@code FileVisitor} that copies a file-tree ("cp -r")
*/
static class TreeCopier implements FileVisitor<Path> {
private final Path source;
private final Path target;
private final boolean prompt;
private final boolean preserve;
TreeCopier(Path source, Path target, boolean prompt, boolean preserve) {
    this.source = source;
    this.target = target;
this.prompt = prompt;
this.preserve = preserve;
}
@Override
public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
    // before visiting entries in a directory we copy the directory
    // (okay if directory already exists)
    CopyOption[] options = (preserve) ?
        new CopyOption[] { COPY_ATTRIBUTES } : new CopyOption[0];
    try {
        Files.copy(dir, target.resolve(source.relativize(dir)), options);
    } catch (IOException x) {
        // ignore if the directory already exists; report anything else
        if (!(x instanceof FileAlreadyExistsException))
            System.err.format("Unable to create: %s: %s%n", dir, x);
    }
    return CONTINUE;
}
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes
attrs) {
copyFile(file, target.resolve(source.relativize(file)),
prompt, preserve);
return CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path dir, IOException exc)
{
// fix up modification time of directory when done
if (exc == null && preserve) {
Path newdir = target.resolve(source.relativize(dir));
try {
FileTime time = Files.getLastModifiedTime(dir);
Files.setLastModifiedTime(newdir, time);
} catch (IOException x) {
System.err.format("Unable to copy all attributes to: %s:
%s%n", newdir, x);
}
}
return CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path file, IOException exc) {
if (exc instanceof FileSystemLoopException) {
System.err.println("cycle detected: " + file);
} else {
System.err.format("Unable to copy: %s: %s%n", file, exc);
}
return CONTINUE;
}
}
// process options
int argi = 0;
while (argi < args.length) {
String arg = args[argi];
if (!arg.startsWith("-"))
break;
if (arg.length() < 2)
usage();
for (int i=1; i<arg.length(); i++) {
char c = arg.charAt(i);
switch (c) {
case 'r' : recursive = true; break;
case 'i' : prompt = true; break;
case 'p' : preserve = true; break;
default : usage();
}
}
argi++;
}
target.resolve(source[i].getFileName()) : target;
if (recursive) {
// follow links when copying files
EnumSet<FileVisitOption> opts =
EnumSet.of(FileVisitOption.FOLLOW_LINKS);
TreeCopier tc = new TreeCopier(source[i], dest, prompt,
preserve);
Files.walkFileTree(source[i], opts, Integer.MAX_VALUE, tc);
} else {
// not recursive so source must not be a directory
if (Files.isDirectory(source[i])) {
System.err.format("%s: is a directory%n", source[i]);
continue;
}
copyFile(source[i], dest, prompt, preserve);
}
}
}
}
Disk Usage File NIO Example
String s = store.toString();
if (s.length() > 20) {
System.out.println(s);
s = "";
}
System.out.format("%-20s %12d %12d %12d\n", s, total, used, avail);
}
printFileStore(store);
}
}
}
}
}
User-Defined File Attributes File NIO Example
UserDefinedFileAttributeView view =
Files.getFileAttributeView(file,
UserDefinedFileAttributeView.class);
return;
}
System.out.println(Charset.defaultCharset().decode(buf).toString());
return;
}
10
Java Networking
The Java networking API provides classes for networking functionality, including addressing,
classes for using URLs and URIs, socket classes for connecting to servers, networking
security functionality, and more. It consists of these packages:
• java.net: Classes for implementing networking applications.
• java.net.http: Contains the API for the HTTP Client, which provides high-level client
interfaces to HTTP (versions 1.1 and 2) and low-level client interfaces to WebSocket
instances. See Java HTTP Client for more information about this API, including videos
and sample code.
• javax.net: Classes for creating sockets.
• javax.net.ssl: Secure socket classes.
• jdk.net: Platform specific socket options for the java.net and java.nio.channels
socket classes.
Networking System Properties
Note:
The HTTPS protocol handler uses the same http.nonProxyHosts property
as the HTTP protocol.
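For example, the proxy-related properties http.proxyHost, http.proxyPort, and
http.nonProxyHosts can be set on the java command line with -D options or programmatically
before the first HTTP connection is made; the host names in this sketch are illustrative:
System.setProperty("http.proxyHost", "proxy.example.com");
System.setProperty("http.proxyPort", "8080");
System.setProperty("http.nonProxyHosts", "localhost|*.example.com");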
Note:
If a client socket is connected to a remote destination without calling bind
first, then the socket is implicitly bound. In this case, UNIX domain sockets
are unnamed (that is, their path is empty). This behavior is not affected by
any system or networking properties.
11
Pseudorandom Number Generators
Random number generators included in Java SE are more accurately called pseudorandom
number generators (PRNGs). They create a series of numbers based on a deterministic
algorithm.
The most important interfaces and classes are RandomGenerator, which enables you to
generate random numbers of various primitive types given a PRNG algorithm, and
RandomGeneratorFactory, which enables you to create PRNGs based on characteristics
other than the algorithm's name.
See the java.util.random package for more detailed information about the PRNGs
implemented in Java SE.
Topics
• Characteristics of PRNGs
• Generating Pseudorandom Numbers with RandomGenerator Interface
• Generating Pseudorandom Numbers in Multithreaded Applications
– Dynamically Creating New Generators
– Creating Stream of Generators
• Choosing a PRNG Algorithm
Characteristics of PRNGs
Because PRNGs generate a sequence of values based on an algorithm instead of a
“random” physical source, this sequence will eventually restart. The number of values a
PRNG generates before it restarts is called a period.
The state cycle of a PRNG consists of the sequence of all possible values a PRNG can
generate. The state of a PRNG is the position of the last generated value in its state cycle.
In general, to generate a value, the PRNG bases it on the previously generated value.
However, some PRNGs can generate a value many values further down the sequence
without calculating any intermediate values. These are called jumpable PRNGs because they
could jump far ahead in the sequence of values, usually by a fixed distance, typically 2^64. A
leapable PRNG can jump even further, typically 2^128 values. An arbitrarily jumpable PRNG
can jump to any value in the generated sequence of values.
Generating Pseudorandom Numbers with RandomGenerator Interface
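The RandomGenerator.of method returns a PRNG that implements the algorithm you name;
for example, the following sketch uses the L64X128MixRandom algorithm to generate a long value:
RandomGenerator random1 = RandomGenerator.of("L64X128MixRandom");
long value1 = random1.nextLong();
System.out.println(value1);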
RandomGeneratorFactory<RandomGenerator> factory2 =
RandomGeneratorFactory.of("SecureRandom");
RandomGenerator random2 = factory2.create();
long value2 = random2.nextLong();
System.out.println(value2);
RandomGeneratorFactory.all()
.map(f -> f.name())
.sorted()
.forEach(n -> System.out.println(n));
RandomGeneratorFactory<RandomGenerator> greatest =
RandomGeneratorFactory
.all()
.sorted((f, g) -> g.period().compareTo(f.period()))
.findFirst()
.orElse(RandomGeneratorFactory.of("Random"));
System.out.println(greatest.name());
System.out.println(greatest.group());
System.out.println(greatest.create().nextLong());
Generating Pseudorandom Numbers in Multithreaded Applications
RandomGeneratorFactory<SplittableGenerator> factory =
RandomGeneratorFactory.of("L128X1024MixRandom");
SplittableGenerator random = factory.create();
processes.parallel().forEach(p -> {
RandomGenerator r = random.split();
System.out.println(p + ": " + r.nextLong());
});
Splittable PRNGs generally have large periods to ensure that new objects resulting from a
split use different state cycles. But even if two instances "accidentally" use the same state
cycle, they are highly likely to traverse different regions of that shared state cycle.
RandomGeneratorFactory<JumpableGenerator> factory =
RandomGeneratorFactory.of("Xoshiro256PlusPlus");
JumpableGenerator random = factory.create();
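A sketch of how the jumpable generator might then be used to give each task its own
generator, reusing the processes stream from the earlier splittable example, is:
processes.parallel().forEach(p -> {
    RandomGenerator r = random.copyAndJump();
    System.out.println(p + ": " + r.nextLong());
});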
Choosing a PRNG Algorithm
Note:
As PRNG algorithms evolve, Java SE may add new PRNG algorithms and
deprecate older ones. It's recommended that you don't use deprecated algorithms;
they may be removed from a future Java SE release. Check if an algorithm has
been deprecated by calling either the RandomGenerator.isDeprecated() or
RandomGeneratorFactory.isDeprecated() method.
Cryptographically Secure
For applications that require a random number generator algorithm that is cryptographically
secure, use the SecureRandom class in the java.security package.
See The SecureRandom Class in Java Platform, Standard Edition Security Developer's
Guide for more information.
General Purpose
For applications with no special requirements, L64X128MixRandom balances speed, space,
and period well. It's suitable for both single-threaded and multithreaded applications when
used properly (a separate instance for each thread).
32-Bit Applications
For applications running in a 32-bit environment and using only one or a small number of
threads, L32X64StarStarRandom or L32X64MixRandom are good choices.
Large Permutations
For applications that generate large permutations, consider a generator whose period
is much larger than the total number of possible permutations; otherwise, it will be
impossible to generate some of the intended permutations. For example, if the goal is
to shuffle a deck of 52 cards, the number of possible permutations is 52! (52 factorial),
which is approximately 2^225.58, so it may be best to use a generator whose period is
roughly 2^256 or larger, such as L64X256MixRandom, L64X1024MixRandom,
L128X256MixRandom, or L128X1024MixRandom.