Client and administration console configuration
From JPPF 6.1 Documentation
1 Server discovery in the client
Connection pools: when a JPPF client connects to one or more servers, its connections are organized by pools. All the connections in a pool share the same basic characteristics:
- name given to the pool, used as a prefix for individual connection names. A pool named "pool" will name its individual connections "pool-1", ..., "pool-N", where N is the number of connections in the pool.
- server host or IP address
- server port
- whether secure connectivity via SSL/TLS is enabled
- pool priority: allows defining a failover hierarchy of connection pools, where the client only uses the available pool(s) with the highest priority. Whenever these pools become unavailable for any reason, the client falls back to the pools with the next highest priority, and switches back to the higher-priority pools if they become available again
- associated JMX connections pool size: each connection pool also maintains one or more JMX connections to allow the administration and monitoring of the corresponding server
Pool size: in addition to the properties above, each connection pool has a size, which determines how many connections it manages.
The size of a connection pool can be defined statically in the configuration, but it can also be changed programmatically using the connection pools API.
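For instance, here is a minimal sketch of a programmatic resize, under the assumption that the connection pools API provides the JPPFClient.awaitWorkingConnectionPool() and JPPFConnectionPool.setSize(int) methods as described in its documentation:

import org.jppf.client.JPPFClient;
import org.jppf.client.JPPFConnectionPool;

public class ResizePoolExample {
  public static void main(String[] args) {
    // the client reads its configuration (discovery or manual) when it is created
    try (JPPFClient client = new JPPFClient()) {
      // wait until a connection pool has at least one working connection
      JPPFConnectionPool pool = client.awaitWorkingConnectionPool();
      // grow the pool to 5 connections; the client creates the missing connections
      pool.setSize(5);
    }
  }
}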
In a JPPF client, all server discovery strategies are implemented as discovery plugins. By default, JPPF provides a built-in discovery mechanism that uses the client configuration properties to find which servers to connect to.
This mechanism allows automatic discovery via UDP multicast, along with manual configuration of the connection pools. These two features are described in the following sections.
1.1 Discovery through UDP multicast
By default, JPPF clients are configured to automatically discover active servers on the network via UDP multicast. With this mechanism, the server broadcasts data packets on the network that contain sufficient information to establish a standard TCP/IP connection.
1.1.1 Enabling and disabling UDP multicast
This is done with the following property, which defaults to true (enabled):
# Enable or disable automatic discovery of JPPF drivers via UDP multicast
jppf.discovery.enabled = true
When discovery is enabled, the client never stops attempting to discover servers on the network. A client can also connect to multiple servers, and will effectively connect to every server it discovers.
1.1.2 Configuration of UDP multicast
The configuration is performed by defining a multicast group and port number, as in this example showing their default values:
# UDP multicast group to which drivers broadcast their connection parameters
jppf.discovery.group = 230.0.0.1
# UDP multicast port to which drivers broadcast their connection parameters
jppf.discovery.port = 11111
1.1.3 Connection pool size
The JPPF client will manage a pool of connections for each discovered server. The size of the connection pools is configured with the following property:
# connection pool size for each discovered server; defaults to 1 (single connection)
jppf.pool.size = 5
1.1.4 JMX Connection pool size
Each server connection pool has an associated pool of JMX connections, whose size is configured as follows:
# JMX connection pool size, defaults to 1
jppf.jmx.pool.size = 1
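These JMX connections are what the client uses for management and monitoring calls to the corresponding driver. As an illustration only, the sketch below assumes the JPPFConnectionPool.awaitWorkingJMXConnection() method and the JMXDriverConnectionWrapper.nbNodes() operation from the management API:

import org.jppf.client.JPPFClient;
import org.jppf.client.JPPFConnectionPool;
import org.jppf.management.JMXDriverConnectionWrapper;

public class JmxPoolExample {
  public static void main(String[] args) throws Exception {
    try (JPPFClient client = new JPPFClient()) {
      // wait for a server connection pool with at least one working connection
      JPPFConnectionPool pool = client.awaitWorkingConnectionPool();
      // borrow a JMX connection from the pool associated with this server
      JMXDriverConnectionWrapper jmx = pool.awaitWorkingJMXConnection();
      // example management call: query the number of nodes attached to the driver
      System.out.println("nodes attached to the driver: " + jmx.nbNodes());
    }
  }
}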
1.1.5 Jobs concurrency
Each connection in a pool can handle multiple jobs concurrently. The number of jobs each connection can handle is defined with the following property:
# Number of concurrent jobs each connection can handle. Defaults to Integer.MAX_VALUE
jppf.max.jobs = 100
By default, the maximum number of concurrent jobs is set to Integer.MAX_VALUE, that is, 2^31 - 1, or 2,147,483,647.
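Combined with the connection pool size, this property effectively caps the number of jobs the client can have in flight per server: with the illustrative values below, a pool of 2 connections each limited to 4 concurrent jobs allows at most 2 x 4 = 8 jobs in progress on a given server at any time.

# 2 connections in the pool of each discovered server
jppf.pool.size = 2
# each connection handles at most 4 jobs concurrently, i.e. at most 8 jobs per server
jppf.max.jobs = 4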
1.1.6 Connections naming
Each server connection has an assigned name, following the pattern “jppf_discovery-<n>-<p>”, where n is a driver number, in order of discovery, and p is the connection number within the corresponding connection pool. For instance, if jppf.pool.size = 2 is defined, then the first discovered driver will have 2 connections named “jppf_discovery-1-1” and “jppf_discovery-1-2”.
1.1.7 Enabling secure connectivity
To enable secure connectivity via SSL/TLS for the discovered connections, simply set the following:
# enable SSL/TLS over the discovered connections
jppf.ssl.enabled = true
1.1.8 Connection pool priority
It is also possible to specify the priority of all discovered server connections, so that they will easily fit into a failover or load-balancing strategy:
# priority assigned to all auto-discovered connections; defaults to 0
# this is equivalent to "<driver_name>.jppf.priority" in manual network configuration
jppf.discovery.priority = 10
1.1.9 Inclusion and exclusion patterns
The following four properties define inclusion and exclusion patterns for IPv4 and IPv6 addresses. They provide a means of controlling whether to connect to a server based on its IP address. Each of these properties defines a list of comma- or semicolon-separated patterns. The IPv4 patterns can be expressed either in CIDR notation or in the syntax defined in the Javadoc for the class IPv4AddressPattern. Similarly, IPv6 patterns can be expressed in CIDR notation or in the syntax defined in IPv6AddressPattern. This enables filtering out unwanted IP addresses: the discovery mechanism will only allow addresses that are included and not excluded.
# IPv4 address inclusion patterns
jppf.discovery.include.ipv4 =
# IPv4 address exclusion patterns
jppf.discovery.exclude.ipv4 =
# IPv6 address inclusion patterns
jppf.discovery.include.ipv6 =
# IPv6 address exclusion patterns
jppf.discovery.exclude.ipv6 =
Let's take for instance the following pattern specifications:
jppf.discovery.include.ipv4 = 192.168.1.
jppf.discovery.exclude.ipv4 = 192.168.1.128-
The equivalent patterns in CIDR notation would be:
jppf.discovery.include.ipv4 = 192.168.1.0/24
jppf.discovery.exclude.ipv4 = 192.168.1.128/25
The inclusion pattern only allows IP addresses in the range 192.168.1.0 ... 192.168.1.255. The exclusion pattern filters out IP addresses in the range 192.168.1.128 ... 192.168.1.255. Thus, we actually defined a filter that only accepts addresses in the range 192.168.1.0 ... 192.168.1.127.
These two patterns can in fact be rewritten as a single inclusion pattern:
jppf.discovery.include.ipv4 = 192.168.1.-127
or, in CIDR notation:
jppf.discovery.include.ipv4 = 192.168.1.0/25
1.1.10 Accepting multiple network interfaces per server
Additionally, you can specify the behavior to adopt when a driver broadcasts its connection information for multiple network interfaces. In this case, the client may end up creating multiple connections to the same driver, one for each advertised IP address. Whether this is allowed is controlled by the following property:
# enable or disable multiple network interfaces for each driver
jppf.pool.acceptMultipleInterfaces = false
This property defaults to false, meaning that only the first discovered interface for a driver will be taken into account.
1.1.11 Heartbeat-based connection failure detection
To enable the detection of connection failure through a heartbeat mechanism, set the following property:
# enable the heartbeat mechanism for all discovered drivers; defaults to false (disabled)
jppf.recovery.enabled = true
Note: this setting applies to all servers discovered via UDP multicast.
1.2 Manual network configuration
As we have seen, a JPPF client can connect to multiple drivers. The first step will thus be to list and name these drivers:
# space-separated list of drivers this client may connect to
# defaults to “default-driver”
jppf.drivers = driver-1 driver-2
Then, for each driver, we define the connection attributes, each of them prefixed with "driver-1." or "driver-2.".
1.2.1 Connection to the JPPF server
The host name (or IP address) and port of each named server are defined as follows:
# host name, or IP address, of the host the JPPF driver is running on
driver-1.jppf.server.host = localhost
# port number on which the driver accepts connections
driver-1.jppf.server.port = 11111
When left unspecified, they default to "localhost" and "11111", respectively.
1.2.2 Connection pool size
# size of the pool of connections to this driver; defaults to 1
driver-1.jppf.pool.size = 5
Note that, contrary to UDP multicast discovery, each manually configured connection pool can have a different size.
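For instance, with two manually configured drivers, each pool can be given its own size (the values here are purely illustrative):

jppf.drivers = driver-1 driver-2
# 5 connections to driver-1
driver-1.jppf.pool.size = 5
# a single connection to driver-2
driver-2.jppf.pool.size = 1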
1.2.3 JMX Connection pool size
The size of the associated JMX connection pool is configured as follows:
# JMX connection pool size, defaults to 1
driver-1.jppf.jmx.pool.size = 1
1.2.4 Jobs concurrency
Each connection in a pool can handle multiple jobs concurrently. The number of jobs each connection can handle is defined with the following property:
# Number of concurrent jobs each connection can handle. Defaults to Integer.MAX_VALUE
driver-1.jppf.max.jobs = 100
By default, the maximum number of concurrent jobs is set to Integer.MAX_VALUE, that is, 2^31 - 1, or 2,147,483,647.
1.2.5 Enabling secure connectivity
To enable secure connectivity via SSL/TLS for the configured connections, simply set the following:
# enable SSL/TLS for this connection pool; defaults to false (disabled)
driver-1.jppf.ssl.enabled = true
1.2.6 Priority
# assigned driver priority; defaults to 0
driver-1.jppf.priority = 10
The priority assigned to a server connection enables the definition of a fallback strategy for the client. In effect, the client will always use connections that have the highest priority. If the connection with the server is interrupted, then the client will use connections with the next highest priority in the remaining accessible server connection pools.
1.2.7 Heartbeat-based connection failure detection
To enable the detection of connection failure through a heartbeat mechanism, set the following property:
# enable the heartbeat mechanism for this driver; defaults to false (disabled)
driver-1.jppf.recovery.enabled = true
1.3 Using manual configuration and UDP multicast together
It is possible to use the manual server configuration simultaneously with UDP multicast discovery, by adding a special driver name, “jppf_discovery”, to the list of manually configured drivers:
# enable multicast discovery
jppf.discovery.enabled = true
# specify both multicast discovery and manually configured drivers
jppf.drivers = jppf_discovery driver-1
# host for this driver
driver-1.jppf.server.host = my_host
# port for this driver
driver-1.jppf.server.port = 11111
2 Load-balancing and failover of server connection pools
Connection pools can be organized in any combination of two modes, using the priority attribute of each pool:
Load-balancing mode occurs when the connection pools have the same priority. In this case, the JPPF client will balance the jobs between the connections of the pools, according to the load-balancer settings. Example configuration:
jppf.drivers = driver-1 driver-2
driver-1.jppf.server.host = my.host1.com
driver-1.jppf.server.port = 11111
driver-1.jppf.priority = 20
driver-2.jppf.server.host = my.host2.com
driver-2.jppf.server.port = 11111
driver-2.jppf.priority = 20
Failover mode occurs when connection pools have different priorities. In this case, the JPPF client will only send jobs through the pool(s) with the highest priority. If the highest priority pool fails for any reason, the JPPF client will then fall back to the highest priority among the remaining connection pools, and so on. The following example shows such a configuration:
jppf.drivers = primary-pool secondary-pool
primary-pool.jppf.server.host = my.host1.com
primary-pool.jppf.server.port = 11111
primary-pool.jppf.priority = 20
secondary-pool.jppf.server.host = my.host2.com
secondary-pool.jppf.server.port = 11111
secondary-pool.jppf.priority = 10
You can also combine load-balancing and failover modes, since there is no limit to the number of connection pools you can define. For example:
jppf.drivers = primary-pool-1 primary-pool-2 secondary-pool-1 secondary-pool-2
primary-pool-1.jppf.server.host = my.host1.com
primary-pool-1.jppf.server.port = 11111
primary-pool-1.jppf.priority = 20
primary-pool-2.jppf.server.host = my.host2.com
primary-pool-2.jppf.server.port = 11111
primary-pool-2.jppf.priority = 20
secondary-pool-1.jppf.server.host = my.host3.com
secondary-pool-1.jppf.server.port = 11111
secondary-pool-1.jppf.priority = 10
secondary-pool-2.jppf.server.host = my.host4.com
secondary-pool-2.jppf.server.port = 11111
secondary-pool-2.jppf.priority = 10
3 Local and remote execution
It is possible for a client to execute jobs locally (i.e. in the client JVM) rather than by submitting them to a server. This feature allows taking advantage of multiple CPUs or cores on the client machine, while using the exact same APIs as for distributed remote execution. It can also be used for local testing and debugging before performing the “real-life” execution of a job.
Local execution is disabled by default. To enable it, set the following configuration property:
# enable local job execution; defaults to false
jppf.local.execution.enabled = true
Local execution uses a pool of threads, whose size is configured as follows:
# number of threads to use for local execution
# the default value is the number of CPUs or cores available to the JVM
jppf.local.execution.threads = 4
A priority can be assigned to the local executor, so that it will easily fit into a failover strategy defined via the manual network configuration:
# priority assigned to the local executor; defaults to 0
# this is equivalent to "<driver_name>.jppf.priority" in manual network configuration
jppf.local.execution.priority = 10
It is also possible to mix local and remote execution. This will happen whenever the client is connected to a server and has local execution enabled. In this case, the JPPF client uses an adaptive load-balancing algorithm to balance the workload between local execution and node-side execution.
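As an illustration, the following configuration enables both UDP multicast discovery and local execution, so that the workload is balanced between the discovered servers and the client's own processing threads (the thread count is illustrative):

# discover remote servers via UDP multicast
jppf.discovery.enabled = true
# also execute jobs in the client JVM
jppf.local.execution.enabled = true
# number of local processing threads
jppf.local.execution.threads = 4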
Finally, the JPPF client also provides the ability to disable remote execution. This can be useful if you want to test the execution of jobs purely locally, even if the server discovery is enabled or the server connection properties would otherwise point to a live JPPF server. To achieve this, simply configure the following:
# enable remote job execution; defaults to true
jppf.remote.execution.enabled = false
4 Load-balancing in the client
The JPPF client allows load balancing between local and remote execution. The load balancing configuration is exactly the same as for the driver, which means it uses exactly the same configuration properties, algorithms, parameters, and so on. Please refer to the driver load-balancing configuration section for the configuration details. The default configuration, if none is provided, is equivalent to the following:
# name of the load balancing algorithm
jppf.load.balancing.algorithm = manual
# name of the set of parameter values (aka profile) to use for the algorithm
jppf.load.balancing.profile = jppf
# "jppf" profile
jppf.load.balancing.profile.jppf.size = 1000000
Also note that the load balancing is active even if only remote execution is available. This has an impact on how the tasks within a job are sent to the server. For instance, if the “manual” algorithm is configured with a size of 1, the tasks in a job will be sent one at a time, as in the example below.
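The following settings configure the "manual" algorithm with a size of 1, causing the tasks of each job to be dispatched one at a time:

# use the "manual" algorithm with the "jppf" profile
jppf.load.balancing.algorithm = manual
jppf.load.balancing.profile = jppf
# send tasks to the server (or local executor) one at a time
jppf.load.balancing.profile.jppf.size = 1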
5 Default execution policies
A JPPF client can have a default client-side and/or server-side execution policy. The default policies are applied only to submitted jobs that don't have an execution policy (i.e. whose execution policy is null). Default execution policies can be set in the configuration with the following properties:
# server-side default execution policy
jppf.job.sla.default.policy = xml_source_type | xml_source
# client-side default execution policy
jppf.job.client.sla.default.policy = xml_source_type | xml_source
The xml_source_type part of the value specifies where to read the XML policy from, and the meaning of xml_source depends on its value. The value of xml_source_type can be one of:
- inline: xml_source is the actual XML policy specified inline in the configuration
- file: xml_source represents a path, in either the file system or classpath, to an XML file or resource. The path is looked up first in the file system, then in the classpath if it is not present in the file system
- url: xml_source represents a URL to an XML file, including, but not limited to, http, https, ftp and file URLs.
Here is an example specifying an inline policy:
# by default, jobs only execute on nodes with at least 4 CPUs
jppf.job.sla.default.policy = inline | <jppf:ExecutionPolicy> \
  <AtLeast> \
    <Property>availableProcessors</Property> \
    <Value>4</Value> \
  </AtLeast> \
</jppf:ExecutionPolicy>
The above XML execution policy is equivalent to this Java expression:
new AtLeast("availableProcessors", 4).toXML();
This can be used in a scripted property value, which allows a much less cumbersome expression for the execution policy, as in this example also omitting the "inline" source type:
# server-side policy as an inline javascript expression
jppf.job.sla.default.policy = $s{ new org.jppf.node.policy.AtLeast("availableProcessors", 4).toXML(); }$
Other examples of XML execution policies taken from a file and a URL:
# default client-side policy from a file
jppf.job.client.sla.default.policy = file | ./config/defaultClientPolicy.xml
# default server-side policy from a URL
jppf.job.sla.default.policy = url | http://www.myhost.com/config/defaultServerPolicy.xml
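Since the default policies only apply to jobs whose execution policy is null, a job that sets its own policy keeps it. Here is a minimal sketch of such an override, assuming the JPPFJob.getSLA() and setExecutionPolicy() methods of the job SLA API:

import org.jppf.client.JPPFJob;
import org.jppf.node.policy.AtLeast;

public class DefaultPolicyOverrideExample {
  public static void main(String[] args) {
    JPPFJob job = new JPPFJob();
    // this job defines its own policy (nodes with at least 8 processors);
    // because the policy is non-null, the configured default policy does not apply to it
    job.getSLA().setExecutionPolicy(new AtLeast("availableProcessors", 8));
    // a job that never calls setExecutionPolicy() keeps a null policy
    // and therefore inherits the default policy from the configuration
  }
}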
6 Resolution of the drivers' IP addresses
You can switch DNS name resolution on or off for the drivers a client connects to, with the following property:
# whether to resolve the drivers' ip addresses into host names
# defaults to true (resolve the addresses)
org.jppf.resolve.addresses = true
7 Socket connections idle timeout
In some environments, a firewall may be configured to automatically close socket connections that have been idle for more than a specified time. This may lead to a situation where a server is unaware that a client was disconnected, and cause one or more jobs to never return. To remedy that situation, it is possible to configure an idle timeout on the client side of the connection, so that the connection can be closed cleanly and grid operations can continue unhindered. This is done via the following property:
jppf.socket.max-idle = timeout_in_seconds
If the timeout value is less than 10 seconds, it is treated as no timeout. The default value is -1.
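For example, for a firewall that closes connections after 30 minutes of inactivity, a timeout slightly below that threshold could be configured as follows (the value is illustrative):

# close idle client connections cleanly after 25 minutes (1500 seconds) of inactivity
jppf.socket.max-idle = 1500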
8 UI refresh intervals in the administration tool
You may change the values of these properties if the graphical administration and monitoring tool is having trouble displaying all the information received from the nodes and servers. This may happen when the number of nodes and servers becomes large and the UI cannot cope. Increasing the refresh intervals (or decreasing the frequency of the updates) in the UI resolves such situations. The available configuration properties are defined as follows:
# refresh interval for the statistics panel in millis; defaults to 1000
# this is the interval between 2 successive stats requests to a driver via JMX
jppf.admin.refresh.interval.stats = 1000
# refresh interval in millis for the topology panels: tree view and graph views
# this is the interval between 2 successive runs of the task that refreshes the
# topology via JMX requests; defaults to 1000
jppf.admin.refresh.interval.topology = 1000
# refresh interval for the JVM health panel in millis; defaults to 1000
# this is the interval between 2 successive runs of the task that refreshes
# the JVM health via JMX requests
jppf.admin.refresh.interval.health = 1000
# UI refresh interval for the job data panel in ms. Its meaning depends on the
# publish mode specified with property "jppf.gui.publish.mode" (see below):
# - in "immediate_notifications" mode, this is not used
# - in "deferred_notifications" mode, this is the interval between 2 publications
#   of updates as job monitoring events
# - in "polling" mode this is the interval between 2 polls of each driver
jppf.gui.publish.period = 1000
# UI refresh mode for the job data panel. The possible values are:
# - polling: the job data is polled at regular intervals and updates to the view are
#   computed as the differences with the previous poll. This mode generates less network
#   traffic than the other modes, but some updates, possibly entire jobs, may be missed
# - deferred_notifications: updates are received as jmx notifications and published at
#   regular intervals, possibly aggregated in the interval. This mode provides a more
#   accurate view of the jobs life cycle, at the cost of increased network traffic
# - immediate_notifications: updates are received as jmx notifications and are all
#   published immediately as job monitoring events, which are pushed to the UI. In this
#   mode, no event is missed, however this causes higher cpu and memory consumption
# The default value is immediate_notifications
jppf.gui.publish.mode = immediate_notifications
9 Customizing the administration console's splash screen
At startup, the desktop administration console displays a splash screen made of a sequence of rolling images with a fixed text at the center. The splash screen can be customized with the following properties:
# Whether to display the animated splash screen at console startup, defaults to false
jppf.ui.splash = true
# The fixed text displayed at center of the window
jppf.ui.splash.message = The JPPF Admin Console is starting ...
# The message's font color, expressed as an rgb or rgba value. If alpha is not
# specified, it is assumed to be 255 (fully opaque).
# Examples: 255, 233, 127 (opaque) | 255, 233, 127, 128 (semi-transparent)
jppf.ui.splash.message.color = 64, 64, 128
# One or more paths to the images displayed in a rolling sequence (like a slide show)
# The images may be either in the file system or in the classpath and are separated with
# '|' (pipe) characters
jppf.ui.splash.images = image_path_1 | ... | image_path_N
# interval between images in milliseconds
jppf.ui.splash.delay = 200