
Node provisioning

From JPPF 6.1 Documentation



Any JPPF node has the ability to start new nodes on the same physical or virtual machine, and to stop and monitor these nodes afterwards. This constitutes a node provisioning facility, which makes it possible to dynamically grow or shrink a JPPF grid based on workload requirements.

This provisioning ability establishes a master/slave relationship between a standard node (the master) and the nodes it starts (the slaves). Please note that a slave node cannot in turn be used as a master. Apart from this restriction, slave nodes can be managed and monitored like any other node, unless they are defined as offline nodes.

Please note that offline nodes cannot be used as master nodes, since they cannot be managed.

1 Provisioning with the JMX API

As seen in Node management > Node provisioning, provisioning can be performed with an MBean implementing the JPPFNodeProvisioningMBean interface, defined as follows:

public interface JPPFNodeProvisioningMBean {
  // The object name of this MBean
  String MBEAN_NAME = "org.jppf:name=provisioning,type=node";
  // Get the number of slave nodes started by this MBean
  int getNbSlaves();
  // Start or stop the required number of slaves to reach the specified number
  void provisionSlaveNodes(int nbNodes);
  // Same action, explicitly specifying the interrupt flag
  void provisionSlaveNodes(int nbNodes, boolean interruptIfRunning);
  // Start or stop the required number of slaves to reach the specified number,
  // using the specified configuration overrides
  void provisionSlaveNodes(int nbNodes, TypedProperties configOverrides);
  // Same action, explicitly specifying the interrupt flag
  void provisionSlaveNodes(
    int nbNodes, boolean interruptIfRunning, TypedProperties configOverrides);
}
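
For a single node, a proxy to this MBean can also be obtained directly from a JMX connection to that node. Here is a brief sketch of this approach; nodeHost and nodePort are illustrative values, and the generic getProxy() accessor of the node connection wrapper is assumed:

// connect to the node's JMX server (host and port are illustrative)
String nodeHost = "localhost";
int nodePort = 12001; // not a documented default, just an example
JMXNodeConnectionWrapper jmxNode = new JMXNodeConnectionWrapper(nodeHost, nodePort);
jmxNode.connectAndWait(5000L);
// obtain a dynamic proxy to the provisioning MBean, then start 4 slaves
JPPFNodeProvisioningMBean provisioner =
  jmxNode.getProxy(JPPFNodeProvisioningMBean.MBEAN_NAME, JPPFNodeProvisioningMBean.class);
provisioner.provisionSlaveNodes(4);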

Combined with the ability to manage and monitor nodes via the server to which they are attached, this provides a powerful and sophisticated way to grow, shrink and control a JPPF grid on demand. Let's look at the following example, which shows how to provision new nodes with specific memory requirements, execute a job on these nodes, then restore the grid to its initial topology.

The first thing that we do is to initialize a JPPF client, obtain a JMX connection from the JPPF server, then get a reference to the server MBean which forwards management requests to the nodes:

JPPFClient client = new JPPFClient();
// wait until a standard connection to the driver is established
JPPFConnectionPool pool = client.awaitWorkingPool();
// wait until a JMX connection to the driver is established
JMXDriverConnectionWrapper jmxDriver = pool.awaitWorkingJMXConnection();
// get a proxy to the mbean that forwards management requests to the nodes
JPPFNodeForwardingMBean forwarder = jmxDriver.getNodeForwarder();

In the next step, we will create a node selector, based on an execution policy which matches all master nodes:

// create a node selector which matches all master nodes
ExecutionPolicy masterPolicy = new Equal("jppf.node.provisioning.master", true);
NodeSelector masterSelector = new ExecutionPolicySelector(masterPolicy);

Note the use of the configuration property “jppf.node.provisioning.master = true”, which is present in every master node. Next, we define configuration overrides to fit our requirements:

TypedProperties overrides = new TypedProperties()
  // request 2 processing threads
  .setInt("jppf.processing.threads", 2)
  // specify a server JVM with 512 MB of heap
  .setString("jppf.jvm.options", "-server -Xmx512m");

Now we can provision the nodes we need:

// request that 2 slave nodes be provisioned, by invoking the provisionSlaveNodes()
// method on all nodes matched by the selector, with the configuration overrides
forwarder.provisionSlaveNodes(masterSelector, 2, overrides);

// give the nodes enough time to start the slaves
Thread.sleep(3000L);
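
As an alternative to a fixed delay, the number of slaves can be polled until every master reports the expected count. Here is a minimal sketch, reusing the forwarder and masterSelector defined above; the 30 second timeout and 500 ms polling interval are arbitrary choices:

// poll until every master reports 2 slaves, or the timeout expires
long deadline = System.currentTimeMillis() + 30000L;
boolean ready = false;
while (!ready && (System.currentTimeMillis() < deadline)) {
  Map<String, Object> slaveCounts = forwarder.getNbSlaves(masterSelector);
  // ready when every forwarded request succeeded and returned the expected count
  ready = slaveCounts.values().stream().allMatch(v -> Integer.valueOf(2).equals(v));
  if (!ready) Thread.sleep(500L);
}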

We then check that each of our master nodes actually has two slaves started:

// request the 'NbSlaves' MBean attribute for each master
Map<String, Object> resultsMap = forwarder.getNbSlaves(masterSelector);

// keys in the map are node UUIDs
// the values are either integers if the request succeeded, or a Throwable if it failed
for (Map.Entry<String, Object> entry: resultsMap.entrySet()) {
  if (entry.getValue() instanceof Throwable) {
    System.out.println("node " + entry.getKey() + " raised " +
      ExceptionUtils.getStackTrace((Throwable) entry.getValue()));
  } else {
    System.out.println("master node " + entry.getKey() + " has " +
      entry.getValue() + " slaves");
  }
}

Once we are satisfied with the topology we just set up, we can submit a job on the slave nodes:

// create the job and add tasks
JPPFJob job = new JPPFJob();
job.setName("Hello World");
for (int i=1; i<=20; i++) job.add(new ExampleTask(i)).setId("task " + i);

// set the policy to execute on slaves only
ExecutionPolicy slavePolicy = new Equal("jppf.node.provisioning.slave", true);
job.getSLA().setExecutionPolicy(slavePolicy);

// submit the job and get the results
List<Task<?>> results = client.submitJob(job);
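
The returned tasks can then be inspected in the usual way. Here is a short sketch, assuming each ExampleTask sets either a result or a throwable:

// inspect the outcome of each task
for (Task<?> task: results) {
  if (task.getThrowable() != null) {
    System.out.println(task.getId() + " raised: " + task.getThrowable());
  } else {
    System.out.println(task.getId() + " result: " + task.getResult());
  }
}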

Finally, we can terminate the slave nodes with a provisioning request for 0 slaves, and get back to the initial grid topology:

forwarder.provisionSlaveNodes(masterSelector, 0, null);

// again, give it some time
Thread.sleep(2000L);
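
Once the topology is back to its initial state, the JPPF client can be closed to release its driver connections:

// release all resources held by the client
client.close();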

2 Provisioning with the administration console

In the JPPF administration console, the provisioning facility is available as shown here:

[Screenshot: node provisioning in the JPPF administration console]

As this screenshot shows, the provisioning facility is integrated in the JPPF administration tool. To perform a provisioning operation, select any number of master nodes in the topology tree or graph view, then click on the provisioning button in the toolbar or in the mouse popup menu.

The topology tree view has a column indicating the number of slaves started for each master node. Non-master nodes have an empty value in this column, whereas master nodes have a value of zero or more. Furthermore, master and non-master nodes are displayed with distinctive icons in the tree view.

After clicking on the provisioning button, a dialog is displayed which allows you to specify the number of slave nodes to provision, whether to apply configuration overrides, and the overrides themselves as free-form text. When you click the “OK” button, the provisioning request is sent to all the selected master nodes.

Please note that the values entered in the provisioning dialog, including the state of the checkbox, are persisted by the administration tool, so that they are conveniently restored the next time you open the dialog.

3 Configuration

3.1 Provisioning under the hood

Before a slave node is started, the master node performs a number of operations to ensure that the slave is properly configured and that any output it produces can be captured and retrieved. These operations include:

1) Creating a root directory for the slave, in which log files and output capture files will be created, along with configuration files. By default, this directory is “${MASTER_ROOT}/slave_nodes/node_nn”, where the suffix nn is a sequence number assigned to the slave by the master node.

2) Copying the content of a user-specified directory, holding configuration files used by all the slaves, into the slave's “config” directory. For example, if “slave_config” is specified by the user, then all the files and sub-directories contained in the folder “${MASTER_ROOT}/slave_config” will be copied into “${SLAVE_ROOT}/config”. Note that the destination folder name “config” cannot be changed.

3) Saving the JPPF configuration properties of the master node into a properties file located at “${SLAVE_ROOT}/config/jppf-node.properties”, after applying first the overrides specified by the user (via the management API or the administration tool), then the following overrides:

# mark the node as a slave
jppf.node.provisioning.slave = true
# a slave node cannot be a master
jppf.node.provisioning.master = false
# redirect the output of System.out to a file
jppf.redirect.out = system_out.log
# redirect the output of System.err to a file
jppf.redirect.err = system_err.log

4) The classpath of the slave node will be exactly the same as for the master, with the addition of the slave's root directory. This means that any jar file or class directory specified in the master's start command will also be available to the slaves.

5) Additional JVM options for the slave process can be specified in two ways:

  • first by overriding the “jppf.jvm.options” configuration property when provisioning slave nodes
  • then, if the property “jppf.node.provisioning.slave.jvm.options” is defined in the master node, these options are added

For instance, setting “jppf.node.provisioning.slave.jvm.options = -Dlog4j.configuration=config/log4j.properties” will ensure that each slave will be able to find the Log4j configuration file in its “config” folder.
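
Combining this property with a provisioning-time override of “jppf.jvm.options” puts both sets of options on the slave's command line; the values and their relative order in this illustrative snippet are assumptions:

# defined in the master's configuration
jppf.node.provisioning.slave.jvm.options = -Dlog4j.configuration=config/log4j.properties
# override supplied at provisioning time (e.g. in a TypedProperties object)
jppf.jvm.options = -server -Xmx512m
# the slave JVM then starts with:
#   -server -Xmx512m -Dlog4j.configuration=config/log4j.properties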

6) The master node maintains a link with each of its slaves, based on a local TCP socket connection, which serves two essential purposes:

  • even when a slave is forcibly terminated, the master will know almost immediately about it and will be able to update its internal state, for instance the number of slaves
  • when a master is forcibly terminated (e.g. with 'kill -9 pid' on Linux, or with the Task Manager on Windows), all its slaves will know about it and terminate themselves, to avoid having hanging Java processes on the host

3.2 Configuration properties

The following properties are available:

# Define a node as master. Defaults to true
jppf.node.provisioning.master = true
# Define a node as a slave. Defaults to false
jppf.node.provisioning.slave = false
# Specify the path prefix used for the root directory of each slave node
# defaults to "slave_nodes/node_", relative to the master root directory
jppf.node.provisioning.slave.path.prefix = slave_nodes/node_
# Specify the directory where slave-specific configuration files are located
# Defaults to the "config" folder, relative to the master root directory
jppf.node.provisioning.slave.config.path = config
# A set of space-separated JVM options always added to the slave startup command
jppf.node.provisioning.slave.jvm.options = -Dlog4j.configuration=config/log4j.properties
# Number of slaves to start at master node startup. Defaults to 0
jppf.node.provisioning.startup.slaves = 5

Note that "jppf.node.provisioning.slave" is only used by slave nodes and is always ignored by master nodes.

Additionally, each slave node has a property "jppf.node.provisioning.master.uuid", whose value is the UUID of the master node that started it. This can be very useful when using a node selector or execution policy that only selects the slave nodes of one or more specific master nodes:

String masterUuid = ...;
JPPFJob job = new JPPFJob();
// execute this job only on slave nodes of the specified master
ExecutionPolicy policy = new Equal("jppf.node.provisioning.master.uuid", true, masterUuid);
job.getSLA().setExecutionPolicy(policy);
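
The same policy can also be wrapped in a node selector, for instance to target only the slaves of that master in management operations; a brief sketch:

// select only the slave nodes started by the specified master,
// e.g. for use with the node forwarding MBean or other management APIs
NodeSelector slavesOfMasterSelector = new ExecutionPolicySelector(
  new Equal("jppf.node.provisioning.master.uuid", true, masterUuid));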