
Built-in algorithms

From JPPF 6.3 Documentation



1 "manual"

As seen in the code example section, the "manual" algorithm is static, global and deterministic: it always uses the same bundle size for all channels, which makes it equivalent to a round-robin mechanism.

Here is an example configuration:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = manual
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = fixed_size
# "fixed_size" profile parameters
jppf.load.balancing.profile.fixed_size.size = 20
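
For illustration only, the following Java sketch (not JPPF code, all names are hypothetical) shows the dispatch pattern that a fixed bundle size of 20 produces: the job's tasks are dealt to the available channels in fixed-size chunks, in round-robin fashion, regardless of how fast each channel actually is.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the effect of the "manual" (fixed size) algorithm; not JPPF code.
public class ManualDispatchSketch {
  public static void main(String[] args) {
    int bundleSize = 20;   // jppf.load.balancing.profile.fixed_size.size
    int channelCount = 3;  // number of connected channels (hypothetical)
    int totalTasks = 95;   // number of tasks in the job

    // dispatches.get(i) holds the sizes of the bundles sent to channel i
    List<List<Integer>> dispatches = new ArrayList<>();
    for (int i = 0; i < channelCount; i++) dispatches.add(new ArrayList<>());

    int channel = 0;
    for (int start = 0; start < totalTasks; start += bundleSize) {
      int end = Math.min(start + bundleSize, totalTasks);
      // every dispatch has the same size (except possibly the last one)
      dispatches.get(channel).add(end - start);
      channel = (channel + 1) % channelCount;
    }
    System.out.println(dispatches); // prints [[20, 20], [20, 15], [20]]
  }
}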

2 "nodethreads"

The "nodethreads" algorithm is an adaptive, local and deterministic algorithm which computes the bundle size based on a channel's number of processing threads. It uses a single parameter named "multipllicator" such that:

bundle_size = multiplicator * processing_threads

Therefore, the bundle size is always a multiple of the channel's number of processing threads.

Since its computation is based on the number of processing threads, this algorithm can only effectively apply to nodes, because server channels do not have a notion of processing threads. It should therefore only be used on the server side. If used on the client side, JPPF will "force" a number of threads equal to 1 for all channels, making the algorithm equivalent to the "manual" algorithm.

This algorithm is adaptive because the number of processing threads in a node can be updated dynamically, and such updates are reported to the load-balancer on the server side. This is also why this algorithm implements the ChannelAwareness interface.
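
For illustration, here is a minimal sketch of the computation and of how a dynamic thread count update would feed into it. The class and method names are hypothetical, not JPPF API.

// Illustrative sketch of the "nodethreads" computation; not JPPF code.
public class NodeThreadsSketch {
  private final int multiplicator;    // jppf.load.balancing.profile.threads.multiplicator
  private int processingThreads = 1;  // updated dynamically from the node's channel information

  public NodeThreadsSketch(int multiplicator) { this.multiplicator = multiplicator; }

  // called whenever the node reports a new number of processing threads
  public void onThreadCountUpdate(int newThreadCount) { this.processingThreads = newThreadCount; }

  // the bundle size is always a multiple of the node's number of processing threads
  public int bundleSize() { return multiplicator * processingThreads; }
}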

Example configuration:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = nodethreads
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = threads
# "threads" profile parameters
jppf.load.balancing.profile.threads.multiplicator = 1

3 "autotuned"

This algorithm is heuristic in the sense that it finds a good solution for the number of tasks to send to each channel, though not necessarily the optimal one. It is loosely based on a simulated annealing algorithm. The heuristic part comes from the fact that the algorithm performs a random walk within the solution space.

It is a purely adaptive algorithm, based on the known past performance of each channel. It does not rely on, or know about, the characteristics of the channels (i.e. hardware and software configuration). This algorithm is local to each channel, meaning that a separate task bundle size is determined for each channel, independently of the other channels. In fact, the other channels will implicitly adjust, due to the performance changes generated each time the bundle size changes.

It starts with the bundle size defined in the configuration and changes it in search of better performance. The algorithm waits for enough execution results to compute a mean execution time, and then makes a change to the bundle size. Each change is made randomly, over a progressively smaller range (a loose sketch of this process is shown after the parameter list below).

This algorithm uses the following parameters:

  • "size": this is the initial bundle size to start with, to bootstrap the algorithm.
  • "minSamplesToAnalyse": the minimum number of samples that must be collected before an analysis is triggered.
  • "minSamplesToCheckConvergence": the minimum number of samples to be collected before checking if the performance profile has changed.
  • "maxDeviation": the percentage of deviation of the current mean to the mean when the system was considered stable.
  • "maxGuessToStable": the maximum number of guesses of number generated that were already tested for the algorithm to consider the current best solution stable.
  • "sizeRatioDeviation": this parameter defines the multiplicity used to define the range available to random generator, as the maximum.
  • "decreaseRatio": this parameter defines how fast it will stop generating random numbers. This is essential to define the size of the universe that is explored. Greater numbers make the algorithm stop sooner. Just as an example, if the best solution is between 0-100, the following might occur (unless maxGuessToStable is small):
    • 1 => 2 max guesses
    • 2 => 5 max guesses
    • 0.5 => 9 max guesses
    • 0.1 => 46 max guesses
    • 0.05 => 96 max guesses
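
To give a feel for the random walk described above, here is a loose Java sketch. It is not the actual JPPF heuristic (which is more elaborate), only an illustration of the idea that the bundle size is perturbed randomly around the best known solution, within a range that shrinks over time; all names and the shrinking formula are assumptions made for the example.

import java.util.Random;

// Loose illustration of a shrinking-range random walk; NOT the actual "autotuned" implementation.
public class AutoTunedSketch {
  private final Random rand = new Random();
  private final double sizeRatioDeviation = 1.5;  // scales the exploration range
  private final double decreaseRatio = 0.2;       // how fast the range shrinks
  private int size = 5;                           // "size": initial bundle size
  private int bestSize = size;
  private double bestMeanTime = Double.MAX_VALUE;
  private int iteration = 0;

  // called once enough samples have been collected to compute a mean execution time per task
  public void analyze(double meanTimePerTask) {
    if (meanTimePerTask < bestMeanTime) {  // remember the best known solution
      bestMeanTime = meanTimePerTask;
      bestSize = size;
    }
    iteration++;
    // the range explored around the best size shrinks as iterations accumulate;
    // a larger decreaseRatio makes it shrink faster (fewer guesses before stopping)
    int range = (int) Math.max(1, sizeRatioDeviation * bestSize / (1 + decreaseRatio * iteration));
    size = Math.max(1, bestSize + rand.nextInt(2 * range + 1) - range);
  }

  public int bundleSize() { return size; }
}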


Example configuration:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = autotuned
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = autotuned
# parameters
jppf.load.balancing.profile.autotuned.size = 5
jppf.load.balancing.profile.autotuned.minSamplesToAnalyse = 100
jppf.load.balancing.profile.autotuned.minSamplesToCheckConvergence = 50
jppf.load.balancing.profile.autotuned.maxDeviation = 0.2
jppf.load.balancing.profile.autotuned.maxGuessToStable = 50
jppf.load.balancing.profile.autotuned.sizeRatioDeviation = 1.5
jppf.load.balancing.profile.autotuned.decreaseRatio = 0.2

4 "proportional"

The "proportional" algorithm is an adaptive, global and deterministic load-balancing algorithm. As for the "autotuned" algorithm, it is purely adaptive and based solely on the known past performance of the channels.

The computation involves no random part. Each bundle size is determined in proportion to the inverse of the mean task execution time raised to the power of N, where N is one of the algorithm's parameters, called the "proportionality factor". The mean time is computed as a moving average over the last M executed tasks for a given channel, where M is another algorithm parameter, called the "performance cache size".

Also, and contrary to the "autotuned" approach, this algorithm is global: the bundle size for each channel depends on that of the other channels.

Example configuration with the default values:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = proportional
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = prop

# algorithm parameters
jppf.load.balancing.profile.prop.performanceCacheSize = 2000
jppf.load.balancing.profile.prop.proportionalityFactor = 1

# bootstrap parameters
jppf.load.balancing.profile.prop.initialSize = 1e9
jppf.load.balancing.profile.prop.initialMeanTime = 5

Note the "initialSize" and "initialMeanTime" parameters, which are used to bootstrap the algorithm the first time a bundler is used.

Description of this algorithm

First, let's define the following variables:

  • n = current number of channels
  • max = maximum number of tasks in a job in the current queue state
  • mean_i = mean execution time for channel i
  • s_i = number of tasks to send to channel i
  • p = proportionality factor parameter


We then define:

S = 1/(mean_1)^p + 1/(mean_2)^p + ... + 1/(mean_n)^p

s_i is then computed as:

s_i = max * (1/(mean_i)^p) / S

Here, we can see that the bundle size for each node is proportional to its contribution to sum S, hence the name of the algorithm. Every time performance data is fed back to the load-balancer, all channel bundle sizes are re-computed.
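
To make the computation concrete, here is a minimal, self-contained Java sketch (not JPPF's actual implementation) that applies the two formulas above to an example with three channels:

import java.util.Arrays;

// Illustrative computation of the "proportional" bundle sizes; not JPPF source code.
public class ProportionalSketch {
  // meanTimes[i] = mean execution time for channel i, max = maximum number of tasks
  // in a job in the current queue state, p = proportionality factor
  public static int[] computeBundleSizes(double[] meanTimes, int max, int p) {
    int n = meanTimes.length;
    double[] weights = new double[n];
    double sum = 0d;  // S = sum of 1 / (mean_i)^p
    for (int i = 0; i < n; i++) {
      weights[i] = 1d / Math.pow(meanTimes[i], p);
      sum += weights[i];
    }
    int[] sizes = new int[n];
    for (int i = 0; i < n; i++) {
      // s_i = max * (1 / (mean_i)^p) / S : faster channels get proportionally more tasks
      // (rounding may make the total differ slightly from max)
      sizes[i] = (int) Math.round(max * weights[i] / sum);
    }
    return sizes;
  }

  public static void main(String[] args) {
    // 3 channels with mean execution times of 1, 2 and 4 ms, a 700-task job, p = 1
    System.out.println(Arrays.toString(computeBundleSizes(new double[] { 1d, 2d, 4d }, 700, 1)));
    // prints [400, 200, 100]
  }
}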

Note: a consequence of the algorithm's implementation is that it will always attempt to dispatch all the tasks in a job at once, spread among the available channels.

5 "rl" (deprecated)

The "rl" algorithm is an adaptive, local and deterministic load-balancing algorithm, loosely based on a reinforcement learning technique (hence the name). As for the "proportional" algorithm, it keeps a bounded history of the channel's performance, from which it can compute a mean task execution time.

At each feedback cycle, the bundle size is adjusted by adding or subtracting a number called the "action". The action's value is proportional to the difference between the new mean execution time and the previous mean time (i.e. before the feedback occurred). Its sign depends on whether the performance has improved or worsened, and on the sign of the previous action.

To avoid spurious recomputations, the absolute value of the ratio (newMeanTime - previousMeanTime) / previousMeanTime must be greater than a threshold value given as a configuration parameter. The bundle adjustment is also bounded by another configuration parameter.

"rl" uses the following parameters:

  • "performanceCacheSize": the maximum size of the performance samples cache. The lower, the more sensistive the bundler is to changes in the tasks performance profile, i.e. it adapts faster at the potential risk of over-adjusting to non-significant changes
  • "performanceVariationThreshold": the minimum variation of the mean execution time that triggers a re-computation of the bundle size
  • "maxActionRange": the absolute value of the maximum increase of the the bundle size. The possible values of the "action" (or bundle size adjustment) are in [-maxActionRange, -1] Ë… [1, maxActionRange], that is, all posible integer values between -maxActionRange and +maxActionRange except 0.


Example configuration with default values:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = rl
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = rl

# "rl" parameters profile
jppf.load.balancing.profile.rl.performanceCacheSize = 2000
jppf.load.balancing.profile.rl.performanceVariationThreshold = 0.001
jppf.load.balancing.profile.rl.maxActionRange = 50

Deprecation notice: it was observed, in past performance tests, that the "rl" algorithm has a tendency to converge towards a very low bundle size (generally just 1). Consequently, even though it produces a reasonably balanced workload, the resulting overall throughput is significantly below optimal. The recommendation is to use a different algorithm. In particular, the "rl2" algorithm, described in the next section, represents an evolution of this one that is closer to the AI technique it is inspired by.

6 "rl2"

The "rl2" algorithm is an adaptive, local and heuristic algorithm which randomly explores the space of bundle sizes and builds a set of possible states of the system it represents over time, in order to determine the states and actions that provide a performance that is as close to optimal as possible. It is based on a reinforcement learning technique.

It uses the following parameters:

  • "performanceCacheSize": the maximum size of the performance samples cache. The lower, the more sensistive the bundler is to changes in the tasks performance profile, i.e. it adapts faster at the potential risk of over-adjusting to non-significant changes
  • "performanceVariationThreshold": the minimum variation of the mean execution time that triggers a reset of the states of the system, thereby causing the algorithm to restart its learning phase. It is used to detect when the performance has significantly changed, for example when a new type of jobs is submitted to the grid
  • "minSamples": the minimum number of states (bundle size associated with a mean execution time) to collect randomly before switching to the next phase of the algorithm
  • "maxSamples": the maximum number of randomly collected states before the random exploration phase ends
  • "maxRelativeSize": the maximum value of the bundle size, expressed as a fraction of the size of the current job being evaluated. This avoids sending all the tasks of a job to a single node, which would defeat the purpose of parallelization.


The algorithm performs the following actions when its feedback() method is invoked:

Let a state be a pair (bundleSize, meanTime), where meanTime is the best (i.e. lowest) known mean execution time for a bundle of size bundleSize.
Let nbStates be the number of known states.
Let maxSize be currentJobSize * maxRelativeSize.

1) Detect whether a significant performance change has occurred. This happens, for instance, when a new kind of job is submitted to the grid, whose tasks take significantly longer to execute. When this happens, the algorithm clears all its known states, to "forget" what it has learned about previous jobs' performance, and restarts its learning cycle. The condition for a significant performance change is expressed as:

(previousMean - currentMean) / previousMean < -performanceVariationThreshold

where previousMean and currentMean represent the mean execution time of the performance cache before and after it has been updated with the latest feedback data.

2) When nbStates < minSamples, a new bundle size is chosen randomly among the values in the range [1, maxSize] for which no state has been collected yet.

3) When minSamples <= nbStates < maxSamples, we compute a probability p that the next bundle size will be chosen randomly; p decreases linearly as nbStates grows. In other terms, the more known states, the less likely it is that the bundle size will be chosen randomly. The probability p is computed as:

p = (maxSamples - nbStates) / (1 + (maxSamples - minSamples))

If the bundle size is not chosen randomly, then it is set to the bundle size of the known state with the best performance.

4) When nbStates >= maxSamples then the next bundle size is the size of the known state with the best performance.

To summarize, we can say that the algorithm goes through 3 phases over time: it first learns the states of the system by randomly walking the space of possible bundle sizes, then continues to learn while increasingly applying its acquired knowledge, and finally considers learning as complete and uses the optimal value it converged to.
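
For illustration, the selection logic of steps 2) to 4) can be condensed into the following self-contained Java sketch. The class and its methods are hypothetical (not JPPF API), and the performance-change detection of step 1) is omitted:

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Illustrative sketch of the "rl2" bundle size selection; not JPPF source code.
public class RL2Sketch {
  private final Random rand = new Random();
  private final int minSamples = 20, maxSamples = 100;
  private final double maxRelativeSize = 0.5;
  // known states: bundle size -> best (i.e. lowest) known mean execution time
  private final Map<Integer, Double> states = new HashMap<>();

  public int nextBundleSize(int currentJobSize) {
    int maxSize = Math.max(1, (int) (currentJobSize * maxRelativeSize));
    int nbStates = states.size();
    if (nbStates < minSamples) return randomUnexploredSize(maxSize);  // phase 1: pure exploration
    if (nbStates < maxSamples) {                                      // phase 2: mixed
      double p = (double) (maxSamples - nbStates) / (1 + (maxSamples - minSamples));
      if (rand.nextDouble() < p) return randomUnexploredSize(maxSize);
    }
    return bestKnownSize();                                           // phase 3: exploitation
  }

  // record the mean time observed for a bundle size, keeping the best value per size
  public void feedback(int bundleSize, double meanTime) {
    states.merge(bundleSize, meanTime, Math::min);
  }

  private int randomUnexploredSize(int maxSize) {
    int size;
    do { size = 1 + rand.nextInt(maxSize); } while (states.containsKey(size) && states.size() < maxSize);
    return size;
  }

  private int bestKnownSize() {
    return states.entrySet().stream().min(Map.Entry.comparingByValue()).map(Map.Entry::getKey).orElse(1);
  }
}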

Example configuration with default values:

# name of the load-balancing algorithm
jppf.load.balancing.algorithm = rl2
# name of the set of parameters (profile) for the algorithm
jppf.load.balancing.profile = rl2_profile
# "rl2" parameters
jppf.load.balancing.profile.rl2_profile.performanceCacheSize = 1000
jppf.load.balancing.profile.rl2_profile.performanceVariationThreshold = 0.75
jppf.load.balancing.profile.rl2_profile.minSamples = 20
jppf.load.balancing.profile.rl2_profile.maxSamples = 100
jppf.load.balancing.profile.rl2_profile.maxRelativeSize = 0.5

7 Class hierarchy

All of JPPF's built-in bundlers are implemented in the org.jppf.load.balancer.impl package. Similarly, the corresponding bundler providers are found in the org.jppf.load.balancer.spi package.

[Figure: class hierarchy of the built-in load balancers (LoadBalancerClassHierarchy.gif)]



