JPPF: The open source grid computing solution

Recent Posts

1
Troubleshooting / Re: jobStarting event in local execution
« Last post by jim_pb on July 10, 2017, 06:32:23 PM »
Thank you very much.  Your suggestion worked beautifully!

Jim
2
JPPF Vision / Re: Job execution statistics persistence
« Last post by lolo on July 08, 2017, 06:38:42 AM »
Hello,

I apologize for this late answer.
I believe that being able to persist and reuse the state of the adaptive load-balancers is an excellent idea. However, this feature is far from trivial and I cannot promise an ETA any time soon. I registered it as a feature request for the upcoming JPPF 6.0: JPPF-511 "Ability to persist and reuse the state of adaptive load-balancers".

Thank you very much,
-Laurent
3
Troubleshooting / Re: jobStarting event in local execution
« Last post by lolo on July 08, 2017, 06:16:06 AM »
Hi Jim,

Indeed, the client's local executor does not have a NodeLifeCycleListener mechanism. However, you can achieve something equivalent with a job listener, in particular by overriding its jobDispatched() method. In that method, the JobEvent carries information about the type of connection used to dispatch the job for execution, notably via its isRemoteExecution() method. You can do it with code like this:

Code: [Select]
import java.util.List;
import org.jppf.client.JPPFJob;
import org.jppf.client.event.JobEvent;
import org.jppf.client.event.JobListenerAdapter;
import org.jppf.node.protocol.Task;

JPPFJob job = new JPPFJob();
job.addJobListener(new JobListenerAdapter() {
  @Override
  public void jobDispatched(JobEvent event) {
    if (!event.isRemoteExecution()) {
      // get the locally dispatched tasks
      List<Task<?>> tasks = event.getJobTasks();
      // ... perform initialization for local execution ...
    }
  }
});

Also note that, while the isRemoteExecution() method is not mentioned in the JobEvent documentation, it is part of the public and supported API. I raised the bug JPPF-510 for this.

I hope this helps,
-Laurent
4
Troubleshooting / jobStarting event in local execution
« Last post by jim_pb on July 07, 2017, 10:54:40 PM »
Hi Laurent,

I'm currently using JPPF 5.2.1. My distributed application has an initialization step that occurs when a NodeLifeCycleListener captures a jobStarting event. If I run this application with only local execution enabled, the initialization does not occur. I assume the NodeLifeCycle events are different in the case of local execution?

I imagine I could do something like poll the jppfClient to see whether the local-execution-only property is set. If it is, I could call another method that mimics what my jobStarting() method does. Is this the right approach, or is there a more direct way to cause the jobStarting event to be fired, such that my listener captures it and executes the jobStarting method for that event when only local execution is enabled?
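
For reference, the polling approach I have in mind would look something like this (initializeForLocalExecution() is just a placeholder for my initialization logic, and I'm assuming the standard jppf.local.execution.enabled and jppf.remote.execution.enabled client properties):

Code: [Select]
// check whether this client is configured for local execution only
TypedProperties config = jppfClient.getConfig();
boolean localEnabled = config.getBoolean("jppf.local.execution.enabled", false);
boolean remoteEnabled = config.getBoolean("jppf.remote.execution.enabled", true);
if (localEnabled && !remoteEnabled) {
  // mimic what my NodeLifeCycleListener's jobStarting() method does
  initializeForLocalExecution();
}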

Thank you for your insight!

Jim
5
Troubleshooting / Re: p2p load balancing
« Last post by lolo on June 24, 2017, 06:56:50 AM »
Hello,

The problem here is that the configuration passed to the JPPFClient constructor is in fact not used by the load-balancer. Instead, the load-balancer uses the global configuration, obtained via the static call JPPFConfiguration.getProperties(). In your scenario, this causes the client to have the same load-balancer settings as the driver, which means the client will only send a maximum of 3 tasks at any time. This is why it looks like the driver is only sending tasks to a single node, when in fact the behavior is caused by the client.

I registered this as a bug: JPPF-506 "Client side load-balancer does not use the configuration passed to the JPPFClient constructor".

However, there is an easy workaround: dynamically set the JPPFClient's load-balancer settings after it has been initialized. You can do it in your getClient() method like this:

Code: [Select]
public static JPPFClient getClient() {
  if (jppfClient == null) {
    TypedProperties clientConfig = ...;
    jppfClient = new JPPFClient(null, clientConfig, (ConnectionPoolListener[]) null);
    // wait until at least one driver connection is established
    jppfClient.awaitWorkingConnectionPool();
    // change load-balancer settings
    try {
      jppfClient.setLoadBalancerSettings("manual", new TypedProperties().setInt("size", 1_000_000));
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
  return jppfClient;
}
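
As a side note, since the load-balancer reads the global configuration, you could also set the load-balancing properties there, before the client is created. This is only a sketch, assuming the global configuration is still modifiable at that point:

Code: [Select]
import org.jppf.utils.JPPFConfiguration;
import org.jppf.utils.TypedProperties;

// the client-side load-balancer reads its settings from the global configuration
TypedProperties global = JPPFConfiguration.getProperties();
global.setString("jppf.load.balancing.algorithm", "manual");
global.setString("jppf.load.balancing.profile", "manual_profile");
global.setInt("jppf.load.balancing.profile.manual_profile.size", 1000000);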

I hope this helps,
-Laurent
6
JPPF Vision / Job execution statistics persistence
« Last post by arefaydi on June 22, 2017, 03:54:14 PM »
Hello,
Adaptive algorithms rely on accumulated statistics, but when the driver restarts or the hardware fails, those statistics are lost and the load-balancing algorithm's adaptation starts over from the beginning.
  • Is it possible (and does it make sense?) to save job execution statistics periodically, and to load them back into the same driver on restart, or into another driver which is already running?
  • Another idea: maybe these statistics could be shared with peer drivers, so that when one of them goes down, the information still exists on the other peers, and when it restarts, or a new driver is added as a peer, it starts with the existing statistics.
We are planning to use p2p because of the risk of a single point of failure, but the progress of the algorithm's learning is important and it shouldn't be reset each time the server restarts. Maybe I'm wrong, but using common statistics shared between all peer drivers (even drivers added later) makes sense to me.
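
In the meantime, I am thinking of periodically snapshotting the driver's execution statistics over JMX, something like the sketch below. I know this only captures the server statistics, not the load-balancer's internal state, and I'm assuming management is enabled on the default port 11198:

Code: [Select]
import java.io.FileOutputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.jppf.management.JMXDriverConnectionWrapper;

public class DriverStatsSnapshotter {
    public static void main(String[] args) throws Exception {
        // connect to the driver's JMX server
        JMXDriverConnectionWrapper jmx = new JMXDriverConnectionWrapper("host_1", 11198, false);
        jmx.connectAndWait(5000L);
        // serialize the driver statistics to a file every minute
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("driver-stats.ser"))) {
                out.writeObject(jmx.statistics()); // JPPFStatistics is Serializable
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0L, 60L, TimeUnit.SECONDS);
    }
}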
7
Troubleshooting / p2p load balancing
« Last post by arefaydi on June 21, 2017, 10:32:26 AM »
Hello,
I am using JPPF 5.2.7 with patch 01 (http://www.jppf.org/patch_info.php?patch_id=61). I have 3 p2p-connected drivers with only local nodes; they are embedded in the client application (code below).

1. When I set the manual algorithm for the drivers, tasks are delivered to only one node and the other two stay idle until it finishes its tasks, so no parallelism occurs, because only one node is executing tasks at any given time. For example, when the client submits a job with 15 tasks, node1 executes 3 tasks (the size set on the profile) while node2 and node3 are idle; when it finishes, node2 executes 3 tasks while node1 and node3 are idle, and so on. Is this the normal behaviour of the manual algorithm (I expected 3 tasks to be delivered to each node at the same time), or is my configuration wrong?

2. With p2p drivers and any of the algorithms, are the tasks submitted to a driver by other drivers always executed on its local nodes, or does it try to deliver them to other drivers according to the algorithm settings and the drivers' load at that time? For example, client1 submits a job with 10 tasks to driver1, and driver1 delivers 4 to its local node and 6 to driver2. Does driver2 execute all six tasks on its local node, or does it try to deliver some of them to other drivers (even back to driver1)? If the second option is true, does it create conflicts and miscalculations, especially when using adaptive algorithms?

driver and local node configuration
Code: [Select]
#------------------------------------------------------------------------------#
# JPPF                                                                         #
# Copyright (C) 2005-2016 JPPF Team.                                           #
# http://www.jppf.org                                                          #
#                                                                              #
# Licensed under the Apache License, Version 2.0 (the "License");              #
# you may not use this file except in compliance with the License.             #
# You may obtain a copy of the License at                                      #
#                                                                              #
# http://www.apache.org/licenses/LICENSE-2.0                                   #
#                                                                              #
# Unless required by applicable law or agreed to in writing, software          #
# distributed under the License is distributed on an "AS IS" BASIS,            #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.     #
# See the License for the specific language governing permissions and          #
# limitations under the License.                                               #
#------------------------------------------------------------------------------#

#------------------------------------------------------------------------------#
# port number to which the server listens for plain connections                #
# default value is 11111; uncomment to specify a different value               #
# to disable, specify a negative port number                                   #
#------------------------------------------------------------------------------#

jppf.server.port = 11111

#------------------------------------------------------------------------------#
# port number to which the server listens for secure connections               #
# default value is 11443; uncomment to specify a different value               #
# to disable, specify a negative port number                                   #
#------------------------------------------------------------------------------#

jppf.ssl.server.port = 11443
#jppf.ssl.server.port = -1

#------------------------------------------------------------------------------#
#                          SSL Settings                                        #
#------------------------------------------------------------------------------#

# location of the SSL configuration on the file system
jppf.ssl.configuration.file = config/ssl/ssl-server.properties

# SSL configuration as an arbitrary source. Value is the fully qualified name
# of an implementation of java.util.concurrent.Callable<InputStream>
# with optional space-separated arguments
#jppf.ssl.configuration.source = org.jppf.ssl.FileStoreSource config/ssl/ssl-server.properties

# enable secure communications with other servers; defaults to false (disabled)#
#jppf.peer.ssl.enabled = true

#------------------------------------------------------------------------------#
# Enabling and configuring JMX features                                        #
#------------------------------------------------------------------------------#

# non-secure JMX connections; default is true (enabled)
#jppf.management.enabled = true

# secure JMX connections via SSL/TLS; default is false (disabled)
#jppf.management.ssl.enabled = true

# JMX management host IP address. If not specified (recommended), the first non-local
# IP address (i.e. neither 127.0.0.1 nor localhost) on this machine will be used.
# If no non-local IP is found, localhost will be used
#jppf.management.host = localhost

# JMX management port. Defaults to 11198. If the port is already bound, the driver
# will scan for the first available port instead.
#jppf.management.port = 11198

#------------------------------------------------------------------------------#
# Configuration of the driver discovery broadcast service                      #
#------------------------------------------------------------------------------#

# Enable/Disable automatic discovery of this JPPF driver; defaults to true
jppf.discovery.enabled = true

# UDP multicast group to which drivers broadcast their connection parameters
# and to which clients and nodes listen. Default value is 230.0.0.1
#jppf.discovery.group = 230.0.0.1

# UDP multicast port to which drivers broadcast their connection parameters
# and to which clients and nodes listen. Default value is 11111
#jppf.discovery.port = 11111

# Time between 2 broadcasts, in milliseconds. Default value is 1000
#jppf.discovery.broadcast.interval = 1000

# IPv4 inclusion patterns: broadcast these ipv4 addresses
#jppf.discovery.broadcast.include.ipv4 = 192.168.1.; 192.168.1.0/24

# IPv4 exclusion patterns: do not broadcast these ipv4 addresses
#jppf.discovery.exclude.ipv4 = 192.168.1.128-; 192.168.1.0/25

# IPv6 inclusion patterns: broadcast these ipv6 addresses
#jppf.discovery.include.ipv6 = 1080:0:0:0:8:800:200C-20FF:-; ::1/80

# IPv6 exclusion patterns: do not broadcast these ipv6 addresses
#jppf.discovery.exclude.ipv6 = 1080:0:0:0:8:800:200C-20FF:0C00-0EFF; ::1/64

#------------------------------------------------------------------------------#
# Connection with other servers, enabling P2P communication                    #
#------------------------------------------------------------------------------#

# Enable/disable auto-discovery of remote peer drivers. Default value is false
jppf.peer.discovery.enabled = true

# manual configuration of peer servers, as a space-separated list of peers names to connect to
#jppf.peers = server_1 server_2

# enable both automatic and manual discovery
#jppf.peers = jppf_discovery server_1 server_2

# connection to server_1
jppf.peer.server_1.server.host = host_1
jppf.peer.server_1.server.port = 11111
# connection to server_2
jppf.peer.server_2.server.host = host_2
jppf.peer.server_2.server.port = 11112

#------------------------------------------------------------------------------#
# Load-balancing configuration                                                 #
#------------------------------------------------------------------------------#

# name of the load-balancing algorithm to use; pre-defined possible values are:
# manual | autotuned | proportional | rl | nodethreads
# it can also be the name of a user-defined algorithm. Default value is "manual"
jppf.load.balancing.algorithm = manual

# name of the set of parameter values (aka profile) to use for the algorithm
jppf.load.balancing.profile = manual_profile

# "manual" profile
jppf.load.balancing.profile.manual_profile.size = 3

# "autotuned" profile
jppf.load.balancing.profile.autotuned_profile.size = 5
jppf.load.balancing.profile.autotuned_profile.minSamplesToAnalyse = 100
jppf.load.balancing.profile.autotuned_profile.minSamplesToCheckConvergence = 50
jppf.load.balancing.profile.autotuned_profile.maxDeviation = 0.2
jppf.load.balancing.profile.autotuned_profile.maxGuessToStable = 50
jppf.load.balancing.profile.autotuned_profile.sizeRatioDeviation = 1.5
jppf.load.balancing.profile.autotuned_profile.decreaseRatio = 0.2

# "proportional" profile
jppf.load.balancing.profile.proportional_profile.size = 5
jppf.load.balancing.profile.proportional_profile.initialMeanTime = 1e10
jppf.load.balancing.profile.proportional_profile.performanceCacheSize = 300
jppf.load.balancing.profile.proportional_profile.proportionalityFactor = 1

# "rl" profile
jppf.load.balancing.profile.rl_profile.performanceCacheSize = 1000
jppf.load.balancing.profile.rl_profile.performanceVariationThreshold = 0.0001
jppf.load.balancing.profile.rl_profile.maxActionRange = 10

# "nodethreads" profile
jppf.load.balancing.profile.nodethreads_profile.multiplicator = 1

# "rl2" profile
jppf.load.balancing.profile.rl2_profile.performanceCacheSize = 1000
jppf.load.balancing.profile.rl2_profile.performanceVariationThreshold = 0.75
jppf.load.balancing.profile.rl2_profile.minSamples = 20
jppf.load.balancing.profile.rl2_profile.maxSamples = 100
jppf.load.balancing.profile.rl2_profile.maxRelativeSize = 0.5

#------------------------------------------------------------------------------#
# Other JVM options added to the java command line when the driver is started  #
# as a subprocess. Multiple options are separated by spaces.                   #
#------------------------------------------------------------------------------#

jppf.jvm.options = -Xmx256m -Djava.util.logging.config.file=config/logging-driver.properties

# example with remote debugging options
#jppf.jvm.options = -server -Xmx256m -Xrunjdwp:transport=dt_socket,address=localhost:8000,server=y,suspend=n

#------------------------------------------------------------------------------#
# path to the Java executable. When defined, it is used by the launch script   #
# (startDriver.bat or startDriver.sh) instead of the default Java path.        #
# It is undefined by default, meaning that the script will use the "java"      #
# command, relying on Java being in the system PATH.                           #
#------------------------------------------------------------------------------#

# linux/unix example
#jppf.java.path = /opt/java/jdk1.8.0_x64/bin/java
# windows example
#jppf.java.path = C:/java/jdk1.8.0_x64/bin/java.exe

#------------------------------------------------------------------------------#
# Specify alternate serialization schemes.                                     #
# Defaults to org.jppf.serialization.DefaultJavaSerialization.                 #
#------------------------------------------------------------------------------#

# default
#jppf.object.serialization.class = org.jppf.serialization.DefaultJavaSerialization

# built-in object serialization schemes
jppf.object.serialization.class = org.jppf.serialization.DefaultJPPFSerialization
#jppf.object.serialization.class = org.jppf.serialization.XstreamSerialization

# defined in the "Kryo Serialization" sample
#jppf.object.serialization.class = org.jppf.serialization.kryo.KryoSerialization

#------------------------------------------------------------------------------#
# Specify a data transformation class. If unspecified, no transformation occurs#
#------------------------------------------------------------------------------#

# Defined in the "Network Data Encryption" sample
#jppf.data.transform.class = org.jppf.example.dataencryption.SecureKeyCipherTransform

#------------------------------------------------------------------------------#
# whether to resolve the nodes' ip addresses into host names                   #
# defaults to true (resolve the addresses)                                     #
#------------------------------------------------------------------------------#

org.jppf.resolve.addresses = true

#------------------------------------------------------------------------------#
# Local (in-JVM) node. When enabled, any node-specific properties will apply   #
#------------------------------------------------------------------------------#

# Enable/disable the local node. Default is false (disabled)
jppf.local.node.enabled = true
jppf.local.node.bias = false
# example node-specific setting
#jppf.processing.threads = 4

#------------------------------------------------------------------------------#
# In idle mode configuration. In this mode the server or node starts when no   #
# mouse or keyboard activity has occurred since the specified timeout, and is  #
# stopped when any new activity occurs.                                        #
#------------------------------------------------------------------------------#

# Idle mode enabled/disabled. Default is false (disabled)
#jppf.idle.mode.enabled = false

# Fully qualified class name of the factory object that instantiates a platform-specific idle state detector
#jppf.idle.detector.factory = org.jppf.example.idlesystem.IdleTimeDetectorFactoryImpl

# Time of keyboard and mouse inactivity to consider the system idle, in milliseconds
# Default value is 300000 (5 minutes)
#jppf.idle.timeout = 6000

# Interval between 2 successive calls to the native APIs to determine idle state changes
# Default value is 1000
#jppf.idle.poll.interval = 1000

#------------------------------------------------------------------------------#
# Automatic recovery from hard failure of the nodes connections. These         #
# parameters configure how the driver reacts when a node fails to respond to   #
# its heartbeat messages.                                                      #
#------------------------------------------------------------------------------#

# Enable recovery from failures on the nodes. Defaults to false (disabled)
#jppf.recovery.enabled = false

# Max number of attempts to get a response from the node before the connection
# is considered broken. Default value is 3
#jppf.recovery.max.retries = 3

# Max time in milliseconds allowed for each attempt to get a response from the node.
# Default value is 6000 (6 seconds)
#jppf.recovery.read.timeout = 6000

# Dedicated port number for the detection of node failure. Defaults to 22222.
# If server discovery is enabled on the nodes, this value will override the port number specified in the nodes
#jppf.recovery.server.port = 22222

# Interval in milliseconds between two runs of the connection reaper
# Default value is 60000 (1 minute)
#jppf.recovery.reaper.run.interval = 60000

# Number of threads allocated to the reaper. Defaults to the number of available CPUs
#jppf.recovery.reaper.pool.size = 8

#------------------------------------------------------------------------------#
# Redirecting System.out and System.err to files.                              #
#------------------------------------------------------------------------------#

# file path on the file system where System.out is redirected.
# if unspecified or invalid, then no redirection occurs
#jppf.redirect.out = System.out.log
# whether to append to an existing file or to create a new one
jppf.redirect.out.append = false

# file path on the file system where System.err is redirected
# if unspecified or invalid, then no redirection occurs
#jppf.redirect.err = System.err.log
# whether to append to an existing file or to create a new one
jppf.redirect.err.append = false

#------------------------------------------------------------------------------#
# Global performance tuning parameters. These affect the performance and       #
# throughput of I/O operations in JPPF. The values provided in the vanilla     #
# JPPF distribution are known to offer a good performance in most situations   #
# and environments.                                                            #
#------------------------------------------------------------------------------#

# Size of send and receive buffer for socket connections.
# Defaults to 32768 and must be in range [1024, 1024*1024]
# 128 * 1024 = 131072
jppf.socket.buffer.size = 131072
# Size of temporary buffers (including direct buffers) used in I/O transfers.
# Defaults to 32768 and must be in range [1024, 1024*1024]
jppf.temp.buffer.size = 12288
# Maximum size of temporary buffers pool (excluding direct buffers). When this size
# is reached, new buffers are still created, but not released into the pool, so they
# can be quickly garbage-collected. The size of each buffer is defined with ${jppf.temp.buffer.size}
# Defaults to 10 and must be in range [1, 2048]
jppf.temp.buffer.pool.size = 200
# Size of temporary buffer pool for reading lengths as ints (size of each buffer is 4).
# Defaults to 100 and must be in range [1, 2048]
jppf.length.buffer.pool.size = 100

#------------------------------------------------------------------------------#
# Enabling or disabling the lookup of classpath resources in the file system   #
# Defaults to true (enabled)                                                   #
#------------------------------------------------------------------------------#

#jppf.classloader.file.lookup = true

#------------------------------------------------------------------------------#
# Timeout in millis for JMX requests. Defaults to Long.MAX_VALUE (2^63 - 1)    #
#------------------------------------------------------------------------------#

#jppf.jmx.request.timeout = $script{ java.lang.Long.MAX_VALUE }$



#--------------------------------- NODE CONFIGURATION -------------------------------------#

# JMX management port, defaults to 11198 (no SSL) or 11193 with SSL. If the port
# is already bound, the node will automatically scan for the next available port.
jppf.node.management.port = 12001


# time in seconds after which the system stops trying to reconnect
# A value of zero or less means the system never stops trying. Defaults to 60
jppf.reconnect.max.time = 5

#------------------------------------------------------------------------------#
# Processing Threads: number of threads running tasks in this node.            #
# default value is the number of available CPUs; uncomment to specify a        #
# different value. Blocking tasks might benefit from a number larger than CPUs #
#------------------------------------------------------------------------------#
#jppf.processing.threads = 1

# JPPF class loader delegation model. values: parent | url, defaults to parent
jppf.classloader.delegation = parent

# size of the class loader cache in the node, defaults to 50
jppf.classloader.cache.size = 50

# class loader resource cache enabled? defaults to true.
jppf.resource.cache.enabled = true

# resource cache's type of storage: either "file" (the default) or "memory"
jppf.resource.cache.storage = file

# Define a node as master. Defaults to true
jppf.node.provisioning.master = true
# Define a node as a slave. Defaults to false
jppf.node.provisioning.slave = false
# Specify the path prefix used for the root directory of each slave node
# defaults to "slave_nodes/node_", relative to the master root directory
jppf.node.provisioning.slave.path.prefix = slave_nodes/node_
# Specify the directory where slave-specific configuration files are located
# Defaults to the "config" folder, relative to the master root directory
jppf.node.provisioning.slave.config.path = config
# A set of space-separated JVM options always added to the slave startup command
jppf.node.provisioning.slave.jvm.options = -Dlog4j.configuration=config/log4j-node.properties
# Specify the number of slaves to launch upon master node startup. Defaults to 0
jppf.node.provisioning.startup.slaves = 0

client configuration
Code: [Select]
public class JPPFClientProvider {
    private static JPPFClient jppfClient;

    public static JPPFClient getClient() {
        if (jppfClient == null) {
            TypedProperties clientConfig = new TypedProperties()
                    .setBoolean("jppf.discovery.enabled", false)
                    .setString("jppf.drivers", "driver1")
                    .setString("driver1.jppf.server.host", "localhost")
                    .setInt("driver1.jppf.server.port", 11111)
                    .setInt("driver1.jppf.pool.size", 1)
                    .setBoolean("driver1.jppf.ssl.enabled", false)
                    .setBoolean("jppf.resolve.addresses", true)
                    .setString("jppf.load.balancing.algorithm", "manual")
                    .setString("jppf.load.balancing.profile", "manual_profile")
                    .setInt("jppf.load.balancing.profile.manual_profile.size", 1000000)
                    .setInt("jppf.admin.refresh.interval.topology", 1000)
                    .setInt("jppf.admin.refresh.interval.health", 3000)
                    .setInt("jppf.socket.buffer.size", 131072)
                    .setInt("jppf.temp.buffer.size", 12288)
                    .setInt("jppf.temp.buffer.pool.size", 200)
                    .setInt("jppf.length.buffer.pool.size", 100)
                    .setString("jppf.object.serialization.class", "org.jppf.serialization.DefaultJPPFSerialization");

            jppfClient = new JPPFClient(null, clientConfig, new ConnectionPoolListener[0]);
        }
        return jppfClient;
    }
}

driver initialization
Code: [Select]
public class JPPFDriverProvider {
    private static JPPFDriver jppfDriver = null;

    public static void startJppfDriver() {
        if (jppfDriver == null) {
            JPPFDriver.main("noLauncher");
            jppfDriver = JPPFDriver.getInstance();
        }
    }

    public static JPPFDriver getJppfDriver() {
        if (jppfDriver == null) {
            startJppfDriver();
        }
        return jppfDriver;
    }
}
8
Installation and Configuration / Re: Embedded client, driver and node
« Last post by arefaydi on June 20, 2017, 01:58:42 PM »
It worked! Thanks again for the quick response and quick solution.
9
Installation and Configuration / Re: Embedded client, driver and node
« Last post by lolo on June 20, 2017, 07:17:46 AM »
Hello,

I have added the enhancement JPPF-505 "Ability to disable the bias towards local node in the driver" to patch 01 for JPPF 5.2.7. Please feel free to download it and let us know if it works for you.

Sincerely,
-Laurent
10
Installation and Configuration / Re: Peer driver communication
« Last post by lolo on June 20, 2017, 06:36:49 AM »
Hello,

The short answer is yes. If, for example, clientA sends a job to driverA, and local nodeA is already busy, driverA then sends the job to driverB, which executes it on local nodeB, and the resulting executed tasks are then sent all the way back to clientA. This is true for any level of indirection in the job execution path. In other words, the results of a job are always delivered to the client that submitted it. This is possible because the job always maintains a "uuid path" of the form "clientUuid / driver1Uuid / ... / driverNUuid", so JPPF always knows where the job comes from and where to send the results back.

I hope this clarifies,
-Laurent