The "jppf.node.provisioning.master.uuid" property, also represented as [http://jppf.org/javadoc/5.2/org/jppf/utils/configuration/JPPFProperties.html#PROVISIONING_MASTER_UUID '''JPPFProperties.PROVISIONING_MASTER_UUID'''] in the configuration API, is set only on slave nodes and contains the UUID of the master node that started them.
It appears this property is currently documented only in the Javadoc. We should describe it in the [http://www.jppf.org/doc/5.2/index.php?title=Node_provisioning '''provisioning'''] documentation as well.
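As a sketch of how the documentation might illustrate this property, the example below reads the master UUID from a plain java.util.Properties object standing in for a slave node's configuration. In actual node code the value would come from the JPPF configuration API; the class name and the sample UUID here are fabricated for illustration.

```java
import java.util.Properties;

public class MasterUuidExample {
    // Property name from the Javadoc: set on slave nodes only.
    static final String PROVISIONING_MASTER_UUID = "jppf.node.provisioning.master.uuid";

    public static void main(String[] args) {
        // Stand-in for a slave node's configuration; the UUID value is made up.
        Properties config = new Properties();
        config.setProperty(PROVISIONING_MASTER_UUID, "11111111-2222-3333-4444-555555555555");
        // On a master node the property is absent, so a null check distinguishes the two cases.
        String masterUuid = config.getProperty(PROVISIONING_MASTER_UUID);
        System.out.println(masterUuid == null ? "master node" : "slave of " + masterUuid);
    }
}
```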
The idea is to be able to add custom columns to the JVM health view of the desktop and web admin consoles, along with the ability to make the corresponding data available to the charts in the desktop console. This would be the client-side counterpart to the changes proposed in feature request JPPF-396.
This should be implemented as a plugin for the admin console(s), with the new data based on what is found in the refactored [http://www.jppf.org/javadoc/6.0/index.html?org/jppf/management/diagnostics/HealthSnapshot.html '''HealthSnapshot'''].
This is a bit cumbersome. We propose to relax the syntactic constraints and allow 'S' or 's' instead of 'script', as well as specifying only the first character of each possible script source type ('u' or 'U' for url, etc.).
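Assuming the scripted property value syntax has the general form `$script:language:source{...}$` described in the JPPF configuration documentation, the relaxed shorthand might look like the following configuration fragment (property names and the script contents are illustrative only):

```
# current, fully spelled-out form
some.property = $script:javascript:inline{ 2 + 3 }$

# proposed shorthand: 's' or 'S' for 'script', first letter of the source type
some.property = $s:javascript:i{ 2 + 3 }$

# a url source abbreviated to 'u' or 'U' (path is illustrative)
other.property = $S:javascript:u{ file:///some/path/script.js }$
```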
When using a POJO task where one of the methods or the constructor is annotated with @JPPFRunnable, the node executing the task throws a ClassNotFoundException stating that it cannot find the class of the POJO task.
The PeerAttributesHandler class uses a thread pool to handle JMX notifications from peer drivers when they update their number of nodes and total number of node threads. The pool size is set to Runtime.getRuntime().availableProcessors(), which seems wasteful since the tasks performed by these threads are very short-lived.
We should instead introduce a configuration property "jppf.peer.handler.threads", defaulting to 1, to configure this number of threads.
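A minimal sketch of the proposed behavior, assuming the property is read from a java.util.Properties object (the actual driver code would use its own TypedProperties configuration). The property name and default come from the proposal above; the class name and parsing details are illustrative.

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PeerHandlerPoolSize {
    static final String PEER_HANDLER_THREADS = "jppf.peer.handler.threads";

    // Parse the configured pool size, defaulting to 1 and guarding against invalid values.
    static int poolSize(Properties config) {
        try {
            int n = Integer.parseInt(config.getProperty(PEER_HANDLER_THREADS, "1"));
            return n > 0 ? n : 1;
        } catch (NumberFormatException e) {
            return 1;
        }
    }

    public static void main(String[] args) {
        Properties config = new Properties(); // property unset => default of 1 thread
        ExecutorService pool = Executors.newFixedThreadPool(poolSize(config));
        System.out.println("peer handler threads: " + poolSize(config));
        pool.shutdown();
    }
}
```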
From [http://www.jppf.org/forums/index.php/topic,7993.0.html this forum post]:
> Adaptive algorithms use statistics, but when the driver restarts or the hardware fails, the statistics are gone and the load-balancing algorithm's adaptation starts over from scratch.
> - Is it possible (and does it make sense?) to periodically save job execution statistics and load them back into the same driver after a restart, or into another driver that is already running?
> - Another idea: maybe these statistics could be shared with peer drivers, so that when one of them goes down, the information still exists on the other peers, and when it restarts, or a new driver is added as a peer, it starts with the existing statistics.
> We are planning to use p2p because of the risk of a single point of failure, but the progress of the algorithm's learning is important and it shouldn't be reset each time the server restarts.
The documentation on [http://www.jppf.org/doc/5.2/index.php?title=Jobs_runtime_behavior,_recovery_and_failover#Job_lifecycle_notifications:_JobListener '''job listeners'''] does not mention the '''isRemoteExecution()''' and '''getConnection()''' methods in the [http://www.jppf.org/javadoc/5.2/index.html?org/jppf/client/event/JobEvent.html '''JobEvent'''] class.
I've noticed in the admin console that peer-to-peer driver connections are no longer detected. Looking at the logs, I could see that the topology monitoring API never logs peer connections. I suspect this is due to the JPPFNodeForwardingMBean excluding peer nodes when retrieving the nodes specified with a NodeSelector.
The feature request JPPF-480 provides a pluggable way for the driver to persist jobs, to enable both job failover/recovery and the ability to execute jobs and retrieve their results offline. In particular, it provides a client-side API to administer persisted jobs.
We propose to add an administration interface to the web and desktop consoles to allow users to perform these tasks graphically in addition to programmatically.
When using the constructor JPPFClient(String uuid, TypedProperties config, ConnectionPoolListener... listeners), the load-balancer for this client does not use the supplied TypedProperties object; instead it uses the global configuration via a static call to JPPFConfiguration.getProperties(). As a result, the client load-balancer runs with the wrong settings.
A possible workaround is to set the load-balancer configuration dynamically, once the client is initialized, using JPPFClient.setLoadBalancerSettings(String algorithm, Properties config).
Currently, when a driver is configured with a local (same JVM) node, this local node is always given priority for job scheduling. We propose to give users the ability to disable this behavior via a driver configuration property such as "jppf.local.node.bias = false", with a default value of "true" to preserve compatibility with previous versions.
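With the proposed property, a driver that should not favor its embedded local node might be configured as follows (sketch only; the property name is the one proposed above and is not yet part of any release):

```
# disable the scheduling bias toward the driver's local node (proposed default: true)
jppf.local.node.bias = false
```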
When starting a JPPF driver with a local node, the local node does not complete its connection with the driver it is embedded in, even though it displays the message "Node successfully initialized". The node then behaves as if it had not been started at all, and does not appear in the administration console.