Node configuration
From JPPF 6.3 Documentation
1 Server discovery
In a JPPF node, the server discovery mechanisms are all implemented as node connection strategy plugins. JPPF provides several such built-in strategies, including the default strategy, which uses the node configuration to find a JPPF driver to connect to. This mechanism allows both automatic discovery via UDP multicast and manual configuration of the connection. These two features are described in the following sections.
1.1 Discovery through UDP multicast
By default, JPPF nodes are configured to automatically discover active servers on the network. As we have seen in the server discovery section of the JPPF driver configuration, this is possible thanks to the UDP broadcast mechanism of the server. On its end, the node needs to join the same UDP group in order to subscribe to the broadcasts from the server.
1.1.1 Enabling and disabling UDP multicast
This is done with the following property, which defaults to true (enabled):
# Enable or disable automatic discovery of JPPF drivers via UDP multicast
jppf.discovery.enabled = true
1.1.2 Configuration of UDP multicast
The configuration is performed by defining a multicast group and port number, as in this example showing their default values:
# UDP multicast group to which drivers broadcast their connection parameters
jppf.discovery.group = 230.0.0.1
# UDP multicast port to which drivers broadcast their connection parameters
jppf.discovery.port = 11111
1.1.3 Inclusion and exclusion patterns
The following four properties define inclusion and exclusion patterns for IPv4 and IPv6 addresses. They provide a means of controlling whether to connect to a server based on its IP address. Each of these properties defines a list of comma- or semicolon-separated patterns. The IPv4 patterns can be expressed either in CIDR notation or in the syntax defined in the Javadoc for the class IPv4AddressPattern. Similarly, IPv6 patterns can be expressed in CIDR notation or in the syntax defined for IPv6AddressPattern. This enables filtering out unwanted IP addresses: the discovery mechanism will only allow addresses that are included and not excluded.
# IPv4 address inclusion patterns
jppf.discovery.include.ipv4 =
# IPv4 address exclusion patterns
jppf.discovery.exclude.ipv4 =
# IPv6 address inclusion patterns
jppf.discovery.include.ipv6 =
# IPv6 address exclusion patterns
jppf.discovery.exclude.ipv6 =
Let's take for instance the following pattern specifications:
jppf.discovery.include.ipv4 = 192.168.1.
jppf.discovery.exclude.ipv4 = 192.168.1.128-
The equivalent patterns in CIDR notation would be:
jppf.discovery.include.ipv4 = 192.168.1.0/24
jppf.discovery.exclude.ipv4 = 192.168.1.128/25
The inclusion pattern only allows IP addresses in the range 192.168.1.0 ... 192.168.1.255. The exclusion pattern filters out IP addresses in the range 192.168.1.128 ... 192.168.1.255. Thus, we actually defined a filter that only accepts addresses in the range 192.168.1.0 ... 192.168.1.127.
These two patterns can in fact be rewritten as a single inclusion pattern:
jppf.discovery.include.ipv4 = 192.168.1.-127
or, in CIDR notation:
jppf.discovery.include.ipv4 = 192.168.1.0/25
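For quick experimentation, a pattern can also be evaluated programmatically against a given address. The sketch below is only an illustration: it assumes that the IPv4AddressPattern class mentioned above lives in the org.jppf.net package, accepts the pattern string in its constructor and exposes a matches(InetAddress) method, as per its Javadoc.
import java.net.InetAddress;
import org.jppf.net.IPv4AddressPattern;

public class DiscoveryPatternCheck {
  public static void main(String[] args) throws Exception {
    // same pattern as in jppf.discovery.include.ipv4 above
    IPv4AddressPattern include = new IPv4AddressPattern("192.168.1.-127");
    // an address inside the accepted range and one outside of it
    InetAddress accepted = InetAddress.getByName("192.168.1.10");
    InetAddress rejected = InetAddress.getByName("192.168.1.200");
    System.out.println(include.matches(accepted)); // expected: true
    System.out.println(include.matches(rejected)); // expected: false
  }
}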
1.2 Manual connection configuration
If server discovery is disabled, network access to a server must be configured manually. To this end, the node requires the IP address or host name of the machine on which the JPPF server is running, along with a TCP port, as shown in this example:
# IP address or host name of the server
jppf.server.host = my_host
# JPPF server port
jppf.server.port = 11111
Not defining these properties is equivalent to assigning them their default value (i.e. “localhost” for the host address, 11111 or 11143 for the port number, depending on whether secure connectivity is disabled or enabled, respectively).
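For instance, a minimal manual configuration for a secure connection to a server running on my_host could look like the sketch below; the port value 11143 simply reflects the secure default mentioned above.
# enable SSL/TLS for the connection to the server
jppf.ssl.enabled = true
# IP address or host name of the server
jppf.server.host = my_host
# JPPF server port for secure connections
jppf.server.port = 11143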
1.3 Enabling secure connectivity
To enable secure connectivity via SSL/TLS for the configured connections, simply set the following:
# enable SSL/TLS over the discovered connections; defaults to false (disabled)
jppf.ssl.enabled = true
1.4 Heartbeat-based connection failure detection
The heartbeat mechanism to recover from hardware failure is enabled in the node with the following configuration property:
# Enable recovery from hardware failures on the node. Default is false (disabled)
jppf.recovery.enabled = true
As described in its server configuration counterpart, when the node hasn't received any heartbeat message for a time greater than heartbeat_timeout * heartbeat_retries, it will close its connection to the server and attempt to reconnect.
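As an illustration, assume the driver keeps a heartbeat timeout of 6000 ms and 3 retries; the property names below are those of the driver configuration and are given here as an assumption, to be checked against the server configuration section.
# driver-side heartbeat settings (assumed names, shown with illustrative values)
jppf.recovery.read.timeout = 6000
jppf.recovery.max.retries = 3
# with these values the node closes its server connection and attempts to reconnect
# after 6000 * 3 = 18000 ms (18 seconds) without receiving any heartbeat message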
1.5 Interaction between connection recovery and server discovery
When discovery is enabled for the node (jppf.discovery.enabled = true) and the maximum reconnection time is not infinite (reconnect.max.time = <strictly_positive_value>), a sophisticated failover mechanism takes place, following the sequence of steps below (a configuration sketch combining these properties follows the list):
- the node attempts to reconnect to the driver to which it was previously connected (or attempted to connect), during a maximum time specified by the configuration property "reconnect.max.time"
- during this maximum time, it will make multiple attempts to connect to the same driver. This covers the case where the driver is restarted in the meantime.
- after this maximum time has elapsed, it will attempt to auto-discover another driver, for a maximum time specified via the configuration property "jppf.discovery.timeout" (in milliseconds)
- if the node still fails to reconnect after this timeout has expired, it will fall back to the driver manually specified in the node's configuration file
- the cycle starts again
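Putting it together, a node configuration enabling this failover cycle could look like the sketch below, which only combines properties referenced in this section; the values are purely illustrative.
# enable automatic discovery of drivers
jppf.discovery.enabled = true
# maximum time during which the node attempts to reconnect to the same driver
reconnect.max.time = 60
# maximum time (in milliseconds) spent attempting to auto-discover another driver
jppf.discovery.timeout = 5000
# manual fallback connection, used when auto-discovery times out
jppf.server.host = my_host
jppf.server.port = 11111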
2 Node JVM options
In the same way as for a server (see server JVM options), a node is made of two processes: a “controller” process and a “node” process. The controller launches the node as a separate process and watches its exit code. If the exit code has the pre-defined value of 2, the controller will restart the node process; otherwise it will simply terminate.
This mechanism allows the remote restart (possibly delayed) of a JPPF node using the management APIs or the management console. It is also designed so that, if either of the two processes dies unexpectedly, the other process will die as well, leaving no lingering Java process in the OS.
The node process inherits the following parameters from the controller process:
- location of jppf configuration (-Djppf.config or -Djppf.config.plugin)
- location of Log4j configuration (-Dlog4j.configuration)
- current directory
- environment variables
- Java class path
It is possible to specify additional JVM parameters for the node process, using the configuration property jppf.jvm.options, as in this example:
jppf.jvm.options = -Xms64m -Xmx512m
Here is another example with assertions enabled and remote debugging options:
jppf.jvm.options = -server -Xmx512m -ea -Xrunjdwp:transport=dt_socket,address=localhost:8000,server=y,suspend=n
Contrary to the Java command line, it is possible to specify multiple class path elements through this property, by adding one or more “-cp” or “-classpath” options. For example:
jppf.jvm.options = -cp lib/myJar1.jar:lib/myJar2.jar -Xmx512m -classpath lib/external/externalJar.jar
This syntax allows configuring multiple paths in an OS-independent way, in particular with regard to the path separator character (e.g. ':' on Linux, ';' on Windows).
If a classpath element contains one or more spaces, the path(s) it defines must be surrounded with double quotes:
jppf.jvm.options = -Xmx512m -cp "dir with spaces/myJar1.jar" -cp NoSpaces/myJar2.jar
3 Specifying the path to the JVM
It is possible to choose which JVM will run a node, by specifying the full path to the Java executable with the following property:
# Full path to the java executable
jppf.java.path = <path_to_java_executable>
# Linux example
jppf.java.path = /opt/jdk1.8.0/bin/java
# Windows example
jppf.java.path = C:/java/jdk1.7.0/bin/java.exe
This property is used in several situations:
- by the shell script from the node distribution that launches the node (startNode.sh or startNode.bat)
- by slave nodes, when the property is specified as a configuration override, making it possible to start a slave node with a different JVM than its master's
- when a node is restarted with one of the JPPFNodeAdminMBean.updateConfiguration() management methods, with jppf.java.path specified as an overridden property (see the sketch after this list)
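As an illustration of the last point, the sketch below restarts a node with a jppf.java.path override through the management APIs. It assumes the JMXNodeConnectionWrapper class from the management documentation, the updateConfiguration(Map, Boolean) signature, and an illustrative host name and JVM path; please verify against the node management section before relying on it.
import java.util.HashMap;
import java.util.Map;
import org.jppf.management.JMXNodeConnectionWrapper;

public class RestartNodeWithNewJvm {
  public static void main(String[] args) throws Exception {
    // connect to the node's JMX server (host name and default port 11198 are assumptions)
    JMXNodeConnectionWrapper node = new JMXNodeConnectionWrapper("node_host", 11198);
    node.connectAndWait(5000L);
    // override the path to the java executable, then request a restart of the node
    Map<Object, Object> overrides = new HashMap<>();
    overrides.put("jppf.java.path", "/opt/jdk1.8.0/bin/java");
    node.updateConfiguration(overrides, true);
    node.close();
  }
}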
4 JMX management configuration
JPPF uses JMX to provide remote management capabilities for the nodes, and uses its own JMX connector for communication. The management features are enabled by default; this behavior can be changed by setting the following property:
# Enable or disable management of this node
jppf.management.enabled = true
When management is enabled, the JPPF node runs its own JMX remote server. The port on which this JMX server will listen can be defined as follows:
# JMX management port; defaults to 11198
jppf.node.management.port = 11198
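Once the node's JMX server is up, a management client can connect to it and query the node's state. The following is a minimal sketch, assuming the JMXNodeConnectionWrapper and JPPFNodeState classes from the management documentation and an illustrative host name.
import org.jppf.management.JMXNodeConnectionWrapper;
import org.jppf.management.JPPFNodeState;

public class QueryNodeState {
  public static void main(String[] args) throws Exception {
    // the port matches the default value of jppf.node.management.port above
    JMXNodeConnectionWrapper node = new JMXNodeConnectionWrapper("node_host", 11198);
    node.connectAndWait(5000L);
    // fetch a snapshot of the node's current state
    JPPFNodeState state = node.state();
    System.out.println("node state: " + state);
    node.close();
  }
}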
5 Processing threads
A node can process multiple tasks concurrently, using a pool of threads. The size of this pool is configured as follows:
# number of threads running tasks in this node
jppf.processing.threads = 4
If this property is not defined, its value defaults to the number of processors or cores available to the JVM.
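When in doubt about what that default would be on a given machine, the following self-contained snippet prints the number of processors available to the JVM, which is the value the node is documented to fall back to.
public class DefaultProcessingThreads {
  public static void main(String[] args) {
    // number of processors/cores available to this JVM; this is the documented default
    // for jppf.processing.threads when the property is not defined
    int available = Runtime.getRuntime().availableProcessors();
    System.out.println("default jppf.processing.threads would be " + available);
  }
}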
6 Maximum number of concurrent jobs
A node can process multiple jobs concurrently. The maximum number of concurrent jobs is set with the following property:
# maximum number of jobs the node can process concurrently
jppf.node.max.jobs = 20
The actual value used for the node is determined as follows:
- when this property is unspecified in the node, it defaults to the value of the same property defined for the server to which the node is connected
- when the property is defined neither in the node nor in the server, it will default to Integer.MAX_VALUE, that is, 2^31 - 1 or 2147483647
- when this property is defined in both node and server, the value set in the node overrides the value set in the server, as shown in the example after this list
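For example, with the settings below the node-side value takes precedence and the node will process at most 20 concurrent jobs; the driver-side property name is assumed to be identical, as implied by the first bullet.
# in the driver's configuration
jppf.node.max.jobs = 10
# in the node's configuration: overrides the driver-side value
jppf.node.max.jobs = 20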
7 Class loader cache
Each node creates a specific class loader for each new client whose tasks are executed in that node. These class loaders are kept in a cache, managed as a bounded queue: the oldest class loader is evicted from the cache whenever the maximum size is reached. The evicted class loader then becomes unreachable and can be garbage collected. In most modern JDKs, this also results in its classes being unloaded.
If the class loader cache size is too large, this can lead to an out of memory condition in the node, especially in these two scenarios:
- if too many classes are loaded, the space reserved to the class definitions (permanent generation in Oracle JDK) will fill up and cause an “OutOfMemoryError: PermGen space”
- if the classes hold a large amount of static data (via static fields and static initializers), an “OutOfMemoryError: Heap Space” will be thrown
To mitigate this, the size of the class loader cache can be configured in the node as follows:
jppf.classloader.cache.size = 50
The default value for this property is 50, and the value must be at least equal to 1.
8 Class loader resources cache
To avoid unnecessary network round trips, the node class loaders can locally store the resources found in their extended classpath when one of their methods getResourceAsStream(), getResource(), getResources() or getMultipleResources() is called. This cache is enabled by default, and the type of storage and the location of the file-persisted cache can be configured as follows:
# whether the resource cache is enabled, defaults to 'true'
jppf.resource.cache.enabled = true
# type of storage: either 'file' (the default) or 'memory'
jppf.resource.cache.storage = file
# root location of the file-persisted caches
jppf.resource.cache.dir = some_directory
When “file” persistence is configured, the node will fall back to memory persistence if the resource cannot be saved to the file system for any reason. This could happen, for instance, when the file system runs out of space.
For more details, please refer to the local caching of network resources section of this documentation.
9 Offline mode
A node can be configured to run in “offline” mode. In this mode, there will be no class loader connection to the server (and thus no distributed dynamic class loading will occur), and remote management via JMX is disabled. For more details on this mode, please read the documentation section on offline nodes.
To turn the offline mode on:
# set the offline mode (false by default)
jppf.node.offline = true
10 Redirecting the console output
As for JPPF drivers, the output of System.out and System.err of a node can be redirected to files. This can be accomplished with the following properties:
# file on the file system where System.out is redirected
jppf.redirect.out = /some/path/someFile.out.log
# whether to append to an existing file or to create a new one
jppf.redirect.out.append = false
# file on the file system where System.err is redirected
jppf.redirect.err = /some/path/someFile.err.log
# whether to append to an existing file or to create a new one
jppf.redirect.err.append = false
By default, a new file is created each time the node is started, unless the properties “jppf.redirect.out.append = true” or “jppf.redirect.err.append = true” are specified. If a file path is not specified, then the output is not redirected.