JPPF - The open source grid computing solution

Recent Posts

1
Developers help / Re: Detect node Done
« Last post by lolo on February 14, 2018, 08:19:26 AM »
Hi Shiva,

To determine if a driver still has unfinished jobs, you can use the driver's job management MBean, via one of these two approaches:

1) Polling the driver to introspect the jobs it still has, as in this example that works from a client application:

Code: [Select]
try (JPPFClient client = new JPPFClient()) {
  JMXDriverConnectionWrapper driverJmx = client.awaitWorkingConnectionPool().awaitWorkingJMXConnection();
  // get a proxy to the remote job management MBean
  DriverJobManagementMBean jobManager = driverJmx.getJobManager();
  // obtain information on all jobs in the driver
  JobInformation[] jobInfos = jobManager.getJobInformation(JobSelector.ALL_JOBS);
  if (jobInfos.length <= 0) {
    // process the case where the driver has no job to process
  } else {
    // process the information on the jobs
  }
} catch (Exception e) {
  e.printStackTrace();
}

2) Subscribing to job status notifications, as in this example:

First you need to implement a notification listener that will process the job notifications, for example:

Code: [Select]
public class JobNotificationListener implements NotificationListener {
  // holds the uuids of the jobs remaining in the driver
  private final Set<String> jobUuids = new HashSet<>();

  @Override
  public void handleNotification(Notification notification, Object handback) {
    JobNotification jobNotif = (JobNotification) notification;
    // get the job uuid
    String uuid = jobNotif.getJobInformation().getJobUuid();
    synchronized(jobUuids) {
      switch (jobNotif.getEventType()) {
        case JOB_QUEUED:  // the job was just added to the driver
          jobUuids.add(uuid);
          break;

        case JOB_ENDED:  // the job just completed and is no longer in the driver
          jobUuids.remove(uuid);
          break;
      }
    }
  }

  public Set<String> getJobUuids() {
    synchronized(jobUuids) {
      // return a thread-safe copy
      return new HashSet<>(jobUuids);
    }
  }
}

Then, you have to register the notification listener with the job management MBean to start receiving the notifications:

Code: [Select]
try (JPPFClient client = new JPPFClient()) {
  JMXDriverConnectionWrapper driverJmx = client.awaitWorkingConnectionPool().awaitWorkingJMXConnection();
  // get a proxy to the remote job management MBean
  DriverJobManagementMBean jobManager = driverJmx.getJobManager();

  JobNotificationListener myListener = new JobNotificationListener();
  // register the job notification listener so we can receive the notifications
  jobManager.addNotificationListener(myListener, null, null);

  ...
} catch (Exception e) {
  e.printStackTrace();
}

I hope this helps,
-Laurent
2
Developers help / Re: Detect node Done
« Last post by shiva.verma on February 13, 2018, 09:51:33 PM »
Thanks, Laurent, for the suggestion. I will give it a try shortly and post an update if I encounter any issues.

I would also need a suggestion for checking whether the driver has any other jobs left to dispatch. I want to be doubly sure that I am not catching the node as idle in between two jobs. I would like to kill the node (and run the post-node work activities) once I am sure there is nothing left for this node to do.

Thanks again
3
Developers help / Re: Detect node Done
« Last post by lolo on February 08, 2018, 10:22:27 AM »
Hello Shiva,

There are multiple ways you can detect that a node is idle. Which one to use depends on what you intend to do with the information.

1) You can use a node life cycle listener from within the node. In particular, you would be interested in the beforeNextJob() notification, which is emitted just after the node has become idle.
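As an illustration only, here is a minimal sketch of such a listener (the class name is made up): it extends the NodeLifeCycleListenerAdapter convenience class, is deployed in the node's classpath and is registered via an SPI file named META-INF/services/org.jppf.node.event.NodeLifeCycleListener.

Code: [Select]
import org.jppf.node.event.NodeLifeCycleEvent;
import org.jppf.node.event.NodeLifeCycleListenerAdapter;

public class IdleDetectionListener extends NodeLifeCycleListenerAdapter {
  @Override
  public void beforeNextJob(NodeLifeCycleEvent event) {
    // invoked just after the node has become idle, before it requests its next job
    System.out.println("node is now idle");
    // any post-job processing, e.g. launching a shell command, could go here
  }
}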

2) You can also poll the node state remotely via the JMX-based management APIs.

a) either by connecting directly to the node (works only if you are in the same subnet as the node):

Code: [Select]
try (JMXNodeConnectionWrapper nodeJmx = new JMXNodeConnectionWrapper(host, port, isSecure)) {
  nodeJmx.connectAndWait(5000L); // wait up to 5s for the connection to be established
  if (nodeJmx.isConnected()) {
    // retrieve the node state and extract its execution status
    JPPFNodeState nodeState = nodeJmx.state();
    JPPFNodeState.ExecutionState executionState = nodeState.getExecutionStatus();
    switch(executionState) {
      case IDLE:
        // process idle node
        break;
      ...
    }
  }
} catch (Exception e) {
  e.printStackTrace();
}

b) by forwarding the JMX requests via the JPPF driver (this works even if the node is on a different subnet; it is what the admin console does):

Code: [Select]
String nodeIP = "192.168.1.12";
try (JMXDriverConnectionWrapper driverJmx = new JMXDriverConnectionWrapper(host, port, isSecure)) {
  driverJmx.connectAndWait(5000L); // wait up to 5s for the connection to be established
  if (driverJmx.isConnected()) {
    JPPFNodeForwardingMBean forwarder = driverJmx.getNodeForwarder();
    // filter the request to only include our node with its specific IP address
    ExecutionPolicy filter = new IsInIPv4Subnet(nodeIP);
    NodeSelector selector = new ExecutionPolicySelector(filter);
    // get the states of the selected nodes (there should be only one)
    Map<String, Object> result = forwarder.state(selector);
    JPPFNodeState nodeState = (JPPFNodeState) result.values().iterator().next();
    JPPFNodeState.ExecutionState executionState = nodeState.getExecutionStatus();
    switch(executionState) {
      case IDLE:
        // process idle node
        break;

      ...
    }
  }
} catch (Exception e) {
  e.printStackTrace();
}

c) Since the driver keeps track of which nodes are idle, you can query the driver directly by asking how many nodes are idle and by filtering the request to only include a specific node:

Code: [Select]
String nodeIP = "192.168.1.12";
try (JMXDriverConnectionWrapper driverJmx = new JMXDriverConnectionWrapper(host, port, isSecure)) {
  driverJmx.connectAndWait(5000L); // wait up to 5s for the connection to be established
  if (driverJmx.isConnected()) {
    // filter the request to only include our node with its specific IP address
    ExecutionPolicy filter = new IsInIPv4Subnet(nodeIP);
    NodeSelector selector = new ExecutionPolicySelector(filter);
    int n = driverJmx.nbIdleNodes(selector);
    if (n > 0) { // the node is idle
      // process idle node
    }
  }
} catch (Exception e) {
  e.printStackTrace();
}

I hope this answers your questions.

Sincerely,
-Laurent
4
Developers help / Detect node Done
« Last post by shiva.verma on February 07, 2018, 08:37:39 AM »
Hi,

Any suggestions on how to detect whether a node is done?

Basically, I am looking for a way to (on the node machine):
1. Detect whether the node is not doing anything.
2. Detect whether the server does not have any jobs remaining.

I am willing to run a shell command or a Java program to detect when a node is actually done.

Thanks in advance
-Shiva
5
Troubleshooting / Re: Task Hangs After Completion on Node Behind Firewall
« Last post by subes on January 22, 2018, 01:53:32 AM »
I encountered the same problem; increasing jppf.recovery.read.timeout to 30-60 seconds solved it for me. This ensures that the constraint "serverReaperInterval < nodeMaxRetries * nodeTimeout" is respected. The default configuration of 6 seconds just did not work for me. Maybe the default and/or the documentation should be changed?
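For reference, here is a sketch of the related recovery settings; the property names are taken from the configuration template shown further below, but the values are only illustrative. With these values the reaper interval (60 s) stays below max.retries * read.timeout (3 * 60 s = 180 s), so the constraint above holds:

Code: [Select]
# enable recovery from hard failures of the node connections
jppf.recovery.enabled = true
# max time in milliseconds allowed for each attempt to get a response from the node
jppf.recovery.read.timeout = 60000
# max number of attempts to get a response before the connection is considered broken
jppf.recovery.max.retries = 3
# interval in milliseconds between two runs of the connection reaper
jppf.recovery.reaper.run.interval = 60000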
6
Well, I finally worked around this issue by relaying the results over a private FTP server. Even though this involves more data operations, the actual bandwidth utilization is a lot higher and thus the results are transmitted a lot faster.
7
Hi,

I am using JPPF in my test scenario to run one calculation job with one task on a driver with a local node. The task result is a few hundred megabytes large. When the task is finished and the result is downloaded by the client, only a fraction of the available bandwidth (around 22 megabytes per second, measured via sftp to the host) is used, roughly 1-2 megabytes per second. Any idea what settings can influence the result download speed? Or do I have to use some other framework for faster transmission of large results and let the task result only coordinate the connection rendezvous?

Best regards,
subes
8
Developers help / Starting with JPPF and vinculate with Android
« Last post by dmpetrocelli on December 27, 2017, 05:08:40 PM »
Hi Team:

First of all, sorry, my English is not so good.

I'm trying to implement a grid with JPPF to do (at the beginning) image processing, using a master PC and Android nodes, as a tool for my research. I have some old, simple "standard Java" code that obtains a BufferedImage, converts it to a matrix and, after all that, applies a Sobel filter and reconstructs the image.

I think the idea for preparing the task is:
1. Port that code to Android.
2. Test it in Android Studio as a sample.
3. Convert it into a task to deliver with FFMPeg.

If you have some example code for native Java Android development, that would be perfect for me.
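For what it's worth, here is a rough sketch of how step 3 might look once the processing code exists (the class name, the byte[] result type and the applySobel() helper are purely illustrative, not part of any existing API):

Code: [Select]
import org.jppf.node.protocol.AbstractTask;

public class SobelFilterTask extends AbstractTask<byte[]> {
  // serialized input image, sent from the client to the node
  private final byte[] encodedImage;

  public SobelFilterTask(byte[] encodedImage) {
    this.encodedImage = encodedImage;
  }

  @Override
  public void run() {
    // decode the image, apply the Sobel filter, re-encode the result
    byte[] filtered = applySobel(encodedImage);
    setResult(filtered);
  }

  private byte[] applySobel(byte[] input) {
    // placeholder for the existing "standard Java" filtering code
    return input;
  }
}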

Thanks for all

David
9
Installation and Configuration / Re: Grid topology
« Last post by Deshazo on December 21, 2017, 04:01:24 PM »
Hi Laurent, how exactly do you co-locate multiple drivers on the same host? I've tried doing this, but I don't believe it worked.
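For context, this is a sketch of what I would expect two co-located driver instances to need, assuming each driver runs from its own directory with its own configuration and listens on distinct ports (the values below are only illustrative):

Code: [Select]
# driver instance 1
jppf.server.port = 11111
jppf.management.port = 11191

# driver instance 2, in a separate installation directory
jppf.server.port = 11112
jppf.management.port = 11192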
10
Installation and Configuration / Bundle size problem
« Last post by arefaydi on December 08, 2017, 09:11:30 AM »
Hello,
I am using the nodethreads algorithm with multiplicator = 1, and the servers have 16 cores, so the driver sends 16 tasks to each node at a time. If one of the tasks sent to a node in a bundle takes longer to complete, the node cannot take new tasks to fill its idle threads, because all of the tasks in the bundle must complete before the bundle is returned to the driver and new tasks are taken. I want all threads in the nodes to be in use as much as possible, even if some tasks require more time to complete. Is there a way to accomplish this?

jppf 5.2.8
client conf:
Code: [Select]
jppf.discovery.enabled =  false
jppf.drivers =  driver1 driver2 driver3 driver4 driver5

driver1.jppf.server.host =  10.254.101.210
driver1.jppf.server.port =  11113
driver1.jppf.pool.size = 20
driver1.jppf.ssl.enabled =  false
driver1.jppf.priority = 100

driver2.jppf.server.host =  10.254.101.211
driver2.jppf.server.port =  11113
driver2.jppf.pool.size = 20
driver2.jppf.ssl.enabled =  false
driver2.jppf.priority = 99

driver3.jppf.server.host =  10.254.101.212
driver3.jppf.server.port =  11113
driver3.jppf.pool.size = 20
driver3.jppf.ssl.enabled =  false
driver3.jppf.priority = 98

driver4.jppf.server.host =  10.254.101.213
driver4.jppf.server.port =  11113
driver4.jppf.pool.size = 20
driver4.jppf.ssl.enabled =  false
driver4.jppf.priority = 97

driver5.jppf.server.host =  10.254.101.217
driver5.jppf.server.port =  11113
driver5.jppf.pool.size = 20
driver5.jppf.ssl.enabled =  false
driver5.jppf.priority = 96

jppf.resolve.addresses =  false
jppf.load.balancing.algorithm =  manual
jppf.load.balancing.profile =  manual_profile
jppf.load.balancing.profile.manual_profile.size =  1000000
jppf.admin.refresh.interval.topology =  1000
jppf.admin.refresh.interval.health =  3000
jppf.socket.buffer.size =  131072
jppf.temp.buffer.size =  12288
jppf.temp.buffer.pool.size =  200
jppf.length.buffer.pool.size =  100
jppf.object.serialization.class =  org.jppf.serialization.DefaultJPPFSerialization


configuration of one of the peers (the others are the same except for the peer IPs)
Code: [Select]
#------------------------------------------------------------------------------#
# JPPF                                                                         #
# Copyright (C) 2005-2016 JPPF Team.                                           #
# http://www.jppf.org                                                          #
#                                                                              #
# Licensed under the Apache License, Version 2.0 (the "License");              #
# you may not use this file except in compliance with the License.             #
# You may obtain a copy of the License at                                      #
#                                                                              #
# http://www.apache.org/licenses/LICENSE-2.0                                #
#                                                                              #
# Unless required by applicable law or agreed to in writing, software          #
# distributed under the License is distributed on an "AS IS" BASIS,            #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.     #
# See the License for the specific language governing permissions and          #
# limitations under the License.                                               #
#------------------------------------------------------------------------------#
#jppf.transition.thread.pool.size=64
#------------------------------------------------------------------------------#
# port number to which the server listens for plain connections                #
# default value is 11111; uncomment to specify a different value               #
# to disable, specify a negative port number                                   #
#------------------------------------------------------------------------------#

jppf.transition.thread.pool.size=80

jppf.server.port = 11113
#jppf.server.class.cache.enabled = false

#------------------------------------------------------------------------------#
# port number to which the server listens for secure connections               #
# default value is 11443; uncomment to specify a different value               #
# to disable, specify a negative port number                                   #
#------------------------------------------------------------------------------#

#jppf.ssl.server.port = 11443
jppf.ssl.server.port = -1

#------------------------------------------------------------------------------#
#                          SSL Settings                                        #
#------------------------------------------------------------------------------#

# location of the SSL configuration on the file system
#jppf.ssl.configuration.file = config/ssl/ssl-server.properties

# SSL configuration as an arbitrary source. Value is the fully qualified name
# of an implementation of java.util.concurrent.Callable<InputStream>
# with optional space-separated arguments
#jppf.ssl.configuration.source = org.jppf.ssl.FileStoreSource config/ssl/ssl-server.properties

# enable secure communications with other servers; defaults to false (disabled)#
#jppf.peer.ssl.enabled = true

#------------------------------------------------------------------------------#
# Enabling and configuring JMX features                                        #
#------------------------------------------------------------------------------#

# non-secure JMX connections; default is true (enabled)
#jppf.management.enabled = true

# secure JMX connections via SSL/TLS; default is false (disabled)
#jppf.management.ssl.enabled = true

# JMX management host IP address. If not specified (recommended), the first non-local
# IP address (i.e. neither 127.0.0.1 nor localhost) on this machine will be used.
# If no non-local IP is found, localhost will be used
#jppf.management.host = localhost

# JMX management port. Defaults to 11198. If the port is already bound, the driver
# will scan for the first available port instead.
#jppf.management.port = 11199

#------------------------------------------------------------------------------#
# Configuration of the driver discovery broadcast service                      #
#------------------------------------------------------------------------------#

# Enable/Disable automatic discovery of this JPPF drivers; default to true
jppf.discovery.enabled = false

# UDP multicast group to which drivers broadcast their connection parameters
# and to which clients and nodes listen. Default value is 230.0.0.1
#jppf.discovery.group = 230.0.0.1

# UDP multicast port to which drivers broadcast their connection parameters
# and to which clients and nodes listen. Default value is 11111
jppf.discovery.port = 11113

# Time between 2 broadcasts, in milliseconds. Default value is 1000
#jppf.discovery.broadcast.interval = 1000

# IPv4 inclusion patterns: broadcast these ipv4 addresses
#jppf.discovery.broadcast.include.ipv4 = 10.92.50.200

# IPv4 exclusion patterns: do not broadcast these ipv4 addresses
#jppf.discovery.exclude.ipv4 = 192.168.1.128-; 192.168.1.0/25

# IPv6 inclusion patterns: broadcast these ipv6 addresses
#jppf.discovery.include.ipv6 = 1080:0:0:0:8:800:200C-20FF:-; ::1/80

# IPv6 exclusion patterns: do not broadcast these ipv6 addresses
#jppf.discovery.exclude.ipv6 = 1080:0:0:0:8:800:200C-20FF:0C00-0EFF; ::1/64

#------------------------------------------------------------------------------#
# Connection with other servers, enabling P2P communication                    #
#------------------------------------------------------------------------------#

# Enable/disable auto-discovery of remote peer drivers. Default value is false
jppf.peer.discovery.enabled = false

jppf.peer.allow.orphans = true

# manual configuration of peer servers, as a space-separated list of peers names to connect to
jppf.peers = server_2 server_3 server_4 server_5

# enable both automatic and manual discovery
#jppf.peers = jppf_discovery server_1 server_2

# connection to server_1
jppf.peer.server_1.server.host = 10.254.101.210
jppf.peer.server_1.server.port = 11113

# connection to server_2
jppf.peer.server_2.server.host = 10.254.101.211
jppf.peer.server_2.server.port = 11113

# connection to server_3
jppf.peer.server_3.server.host = 10.254.101.212
jppf.peer.server_3.server.port = 11113

# connection to server_4
jppf.peer.server_4.server.host = 10.254.101.213
jppf.peer.server_4.server.port = 11113

# connection to server_5
jppf.peer.server_5.server.host = 10.254.101.217
jppf.peer.server_5.server.port = 11113


#------------------------------------------------------------------------------#
# Load-balancing configuration                                                 #
#------------------------------------------------------------------------------#

# name of the load-balancing algorithm to use; pre-defined possible values are:
# manual | autotuned | proportional | rl | nodethreads
# it can also be the name of a user-defined algorithm. Default value is "manual"
jppf.load.balancing.algorithm = nodethreads

# name of the set of parameter values (aka profile) to use for the algorithm
jppf.load.balancing.profile = nodethreads_profile

# "manual" profile
jppf.load.balancing.profile.manual_profile.size = 1

# "autotuned" profile
jppf.load.balancing.profile.autotuned_profile.size = 5
jppf.load.balancing.profile.autotuned_profile.minSamplesToAnalyse = 100
jppf.load.balancing.profile.autotuned_profile.minSamplesToCheckConvergence = 50
jppf.load.balancing.profile.autotuned_profile.maxDeviation = 0.2
jppf.load.balancing.profile.autotuned_profile.maxGuessToStable = 50
jppf.load.balancing.profile.autotuned_profile.sizeRatioDeviation = 1.5
jppf.load.balancing.profile.autotuned_profile.decreaseRatio = 0.2

# "proportional" profile
jppf.load.balancing.profile.proportional_profile.size = 5
jppf.load.balancing.profile.proportional_profile.initialMeanTime = 1e10
jppf.load.balancing.profile.proportional_profile.performanceCacheSize = 20
jppf.load.balancing.profile.proportional_profile.proportionalityFactor = 1

# "rl" profile
jppf.load.balancing.profile.rl_profile.performanceCacheSize = 1000
jppf.load.balancing.profile.rl_profile.performanceVariationThreshold = 0.0001
jppf.load.balancing.profile.rl_profile.maxActionRange = 10

# "nodethreads" profile
jppf.load.balancing.profile.nodethreads_profile.multiplicator = 1

# "rl2" profile
jppf.load.balancing.profile.rl2_profile.performanceCacheSize = 1000
jppf.load.balancing.profile.rl2_profile.performanceVariationThreshold = 0.75
jppf.load.balancing.profile.rl2_profile.minSamples = 20
jppf.load.balancing.profile.rl2_profile.maxSamples = 100
jppf.load.balancing.profile.rl2_profile.maxRelativeSize = 0.5

#------------------------------------------------------------------------------#
# Other JVM options added to the java command line when the driver is started  #
# as a subprocess. Multiple options are separated by spaces.                   #
#------------------------------------------------------------------------------#

#jppf.jvm.options = -Xmx256m -Djava.util.logging.config.file=config/logging-driver.properties

# example with remote debugging options
#jppf.jvm.options = -server -Xmx256m -Xrunjdwp:transport=dt_socket,address=localhost:8000,server=y,suspend=n

#------------------------------------------------------------------------------#
# path to the Java executable. When defined, it is used by the launch script   #
# (startDriver.bat or startDriver.sh) instead of the default Java path.        #
# It is undefined by default, meaning that the script will use the "java"      #
# command, relying on Java being in the system PATH.                           #
#------------------------------------------------------------------------------#

# linux/unix example
#jppf.java.path = /opt/java/jdk1.8.0_x64/bin/java
# windows example
#jppf.java.path = C:/java/jdk1.8.0_x64/bin/java.exe

#------------------------------------------------------------------------------#
# Specify alternate serialization schemes.                                     #
# Defaults to org.jppf.serialization.DefaultJavaSerialization.                 #
#------------------------------------------------------------------------------#

# default
#jppf.object.serialization.class = org.jppf.serialization.DefaultJavaSerialization

# built-in object serialization schemes
jppf.object.serialization.class = org.jppf.serialization.DefaultJPPFSerialization
#jppf.object.serialization.class = org.jppf.serialization.XstreamSerialization

# defined in the "Kryo Serialization" sample
#jppf.object.serialization.class = org.jppf.serialization.kryo.KryoSerialization

#------------------------------------------------------------------------------#
# Specify a data transformation class. If unspecified, no transformation occurs#
#------------------------------------------------------------------------------#

# Defined in the "Network Data Encryption" sample
#jppf.data.transform.class = org.jppf.example.dataencryption.SecureKeyCipherTransform

#------------------------------------------------------------------------------#
# whether to resolve the nodes' ip addresses into host names                   #
# defaults to true (resolve the addresses)                                     #
#------------------------------------------------------------------------------#

org.jppf.resolve.addresses = false

#------------------------------------------------------------------------------#
# Local (in-JVM) node. When enabled, any node-specific properties will apply   #
#------------------------------------------------------------------------------#

# Enable/disable the local node. Default is false (disabled)
jppf.local.node.enabled = true
jppf.local.node.bias = true
# example node-specific setting
#jppf.processing.threads = 2

#------------------------------------------------------------------------------#
# In idle mode configuration. In this mode the server or node starts when no   #
# mouse or keyboard activity has occurred since the specified timeout, and is  #
# stopped when any new activity occurs.                                        #
#------------------------------------------------------------------------------#

# Idle mode enabled/disabled. Default is false (disabled)
#jppf.idle.mode.enabled = false

# Fully qualified class name of the factory object that instantiates a platform-specific idle state detector
#jppf.idle.detector.factory = org.jppf.example.idlesystem.IdleTimeDetectorFactoryImpl

# Time of keyboard and mouse inactivity to consider the system idle, in milliseconds
# Default value is 300000 (5 minutes)
#jppf.idle.timeout = 6000

# Interval between 2 successive calls to the native APIs to determine idle state changes
# Default value is 1000
#jppf.idle.poll.interval = 1000

#------------------------------------------------------------------------------#
# Automatic recovery from hard failure of the nodes connections. These         #
# parameters configure how the driver reacts when a node fails to respond to   #
# its heartbeat messages.                                                      #
#------------------------------------------------------------------------------#

# Enable recovery from failures on the nodes. Default to false (disabled)
#jppf.recovery.enabled = false

# Max number of attempts to get a response from the node before the connection
# is considered broken. Default value is 3
#jppf.recovery.max.retries = 3

# Max time in milliseconds allowed for each attempt to get a response from the node.
# Default value is 6000 (6 seconds)
#jppf.recovery.read.timeout = 6000

# Dedicated port number for the detection of node failure. Defaults to 22222.
# If server discovery is enabled on the nodes, this value will override the port number specified in the nodes
#jppf.recovery.server.port = 22222

# Interval in milliseconds between two runs of the connection reaper
# Default value is 60000 (1 minute)
#jppf.recovery.reaper.run.interval = 60000

# Number of threads allocated to the reaper. Default to the number of available CPUs
#jppf.recovery.reaper.pool.size = 8

#------------------------------------------------------------------------------#
# Redirecting System.out and System.err to files.                              #
#------------------------------------------------------------------------------#

# file path on the file system where System.out is redirected.
# if unspecified or invalid, then no redirection occurs
#jppf.redirect.out = /home/arge/jppf.log
# whether to append to an existing file or to create a new one
jppf.redirect.out.append = false

# file path on the file system where System.err is redirected
# if unspecified or invalid, then no redirection occurs
#jppf.redirect.err = /home/arge/jppf.err.log
# whether to append to an existing file or to create a new one
jppf.redirect.err.append = false

#------------------------------------------------------------------------------#
# Global performance tuning parameters. These affect the performance and       #
# throughput of I/O operations in JPPF. The values provided in the vanilla     #
# JPPF distribution are known to offer a good performance in most situations   #
# and environments.                                                            #
#------------------------------------------------------------------------------#

# Size of send and receive buffer for socket connections.
# Defaults to 32768 and must be in range [1024, 1024*1024]
# 128 * 1024 = 131072
#jppf.socket.buffer.size = 131072
jppf.socket.buffer.size = 1048576
# Size of temporary buffers (including direct buffers) used in I/O transfers.
# Defaults to 32768 and must be in range [1024, 1024*1024]
#jppf.temp.buffer.size = 12288
jppf.temp.buffer.size = 1048576
# Maximum size of temporary buffers pool (excluding direct buffers). When this size
# is reached, new buffers are still created, but not released into the pool, so they
# can be quickly garbage-collected. The size of each buffer is defined with ${jppf.temp.buffer.size}
# Defaults to 10 and must be in range [1, 2048]
#jppf.temp.buffer.pool.size = 200
jppf.temp.buffer.pool.size = 1000
# Size of temporary buffer pool for reading lengths as ints (size of each buffer is 4).
# Defaults to 100 and must be in range [1, 2048]
#jppf.length.buffer.pool.size = 100
jppf.length.buffer.pool.size = 1000

#------------------------------------------------------------------------------#
# Enabling or disabling the lookup of classpath resources in the file system   #
# Defaults to true (enabled)                                                   #
#------------------------------------------------------------------------------#

#jppf.classloader.file.lookup = true

#------------------------------------------------------------------------------#
# Timeout in millis for JMX requests. Defaults to Long.MAX_VALUE (2^63 - 1)    #
#------------------------------------------------------------------------------#

#jppf.jmx.request.timeout = $script{ java.lang.Long.MAX_VALUE }$



#--------------------------------- NODE CONFIGURATION -------------------------------------#

# JMX management port, defaults to 11198 (no SSL) or 11193 with SSL. If the port
# is already bound, the node will automatically scan for the next available port.
jppf.node.management.port = 12003


# time in seconds after which the system stops trying to reconnect
# A value of zero or less means the system never stops trying. Defaults to 60
jppf.reconnect.max.time = -1

jppf.reconnect.interval=5

#------------------------------------------------------------------------------#
# Processing Threads: number of threads running tasks in this node.            #
# default value is the number of available CPUs; uncomment to specify a        #
# different value. Blocking tasks might benefit from a number larger than CPUs #
#------------------------------------------------------------------------------#
jppf.processing.threads = 16

# JPPF class loader delegation model. values: parent | url, defaults to parent
jppf.classloader.delegation = parent

# size of the class loader cache in the node, defaults to 50
jppf.classloader.cache.size = 50

# class loader resource cache enabled? defaults to true.
# jppf.resource.cache.enabled = false

# resource cache's type of storage: either "file" (the default) or "memory"
jppf.resource.cache.storage = file

# Define a node as master. Defaults to true
jppf.node.provisioning.master = true
# Define a node as a slave. Defaults to false
jppf.node.provisioning.slave = false
# Specify the path prefix used for the root directory of each slave node
# defaults to "slave_nodes/node_", relative to the master root directory
jppf.node.provisioning.slave.path.prefix = slave_nodes/node_
# Specify the directory where slave-specific configuration files are located
# Defaults to the "config" folder, relative to the master root directory
#jppf.node.provisioning.slave.config.path = config
# A set of space-separated JVM options always added to the slave startup command
#jppf.node.provisioning.slave.jvm.options = -Dlog4j.configuration=config/log4j-node.properties
# Specify the number of slaves to launch upon master node startup. Defaults to 0
jppf.node.provisioning.startup.slaves = 0