JPPF Issue Tracker
CLOSED  Feature request JPPF-436  -  Integration of JMX remote with NIO
Posted Feb 01, 2016 - updated May 07, 2018
This issue has been closed with status "Closed" and resolution "RESOLVED".
Issue details
  • Type of issue
    Feature request
  • Status
    Closed
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
  • Owned by
    Not owned by anyone
  • Category
    JMX connector
  • Resolution
    RESOLVED
  • Targeted for
    JPPF 6.0
Issue description
The goal is to make the JMX generic/JMXMP connector available via NIO (socket channels) rather than straight Sockets on the server side.

One of the main benefits is that it will allow us to use a shared thread pool for all JMX client connections. This should greatly reduce the number of threads running in the driver, which maintains a JMX connection to every node: instead of 2 threads per node, there would be at most the number of threads in the pool.
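As a rough illustration of the model described above (this is a hypothetical sketch, not JPPF code), a single NIO selector thread can service an arbitrary number of socket channels, where a blocking-I/O design would need dedicated reader/writer threads per connection. The class and method names below are invented for the demo:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

// Sketch: one selector thread handles accept + read for all connections,
// instead of 2 blocking-I/O threads per JMX connection.
public class SharedSelectorSketch {

  // Accepts one demo client, reads everything it sends on the selector
  // thread, and returns the received text.
  public static String demo() throws IOException {
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress("localhost", 0)); // ephemeral port for the demo
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);

    // demo client: connect, send one message, then signal end-of-stream
    int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
    SocketChannel client = SocketChannel.open(new InetSocketAddress("localhost", port));
    client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
    client.shutdownOutput();

    StringBuilder received = new StringBuilder();
    boolean done = false;
    while (!done) {
      selector.select(); // blocks until some channel is ready
      for (SelectionKey key : selector.selectedKeys()) {
        if (key.isAcceptable()) {
          // new connection: register it with the same shared selector
          SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
          ch.configureBlocking(false);
          ch.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
          ByteBuffer buf = ByteBuffer.allocate(64);
          int n = ((SocketChannel) key.channel()).read(buf);
          if (n < 0) { // client closed its side: demo is done
            key.channel().close();
            done = true;
          } else {
            buf.flip();
            received.append(StandardCharsets.UTF_8.decode(buf));
          }
        }
      }
      selector.selectedKeys().clear();
    }
    client.close();
    server.close();
    selector.close();
    return received.toString();
  }

  public static void main(String[] args) throws IOException {
    System.out.println(demo()); // prints "ping"
  }
}
```

The same event loop scales to many registered channels, which is why the driver's thread count becomes bounded by the pool size rather than by the node count.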

Along with scalability and performance improvements, this will allow us to get rid of the separate management port used in the JPPF configuration, since the connector will use the same port as the other services (e.g. the default port 11111).
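For illustration, a driver configuration today binds the management service to its own port; with the NIO-based connector that second property would become unnecessary. Treat the exact property names and default values below as assumptions for the sketch:

```properties
# main driver port, shared by all NIO-based services
jppf.server.port = 11111
# separate JMX management port - what this feature would make unnecessary
jppf.management.port = 11198
```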

Comment posted by
Nov 01, 10:38

Status

I currently have a first implementation that is integrated with JPPF management. So far it seems to keep its promises: the number of JMX threads stays low, and the JMX server can use the same port as the JPPF driver.

There are still many things to do:
  • iron out the bugs: possible connection leaks, incomplete exception handling, possible deadlocks
  • implement automated tests providing sufficient coverage
  • test with TLS connections
  • perform scalability tests, stress tests, endurance tests
  • integrate in the build process (both standalone and integrated in JPPF components)
  • document the connector