Author Topic: Migrating from JPPF 5.1.3 to 6.0  (Read 1490 times)

codemonkey

  • JPPF Council Member
  • Posts: 138
Migrating from JPPF 5.1.3 to 6.0
« on: December 17, 2018, 09:18:30 PM »

Hey everyone, I'm currently going through an upgrade from JPPF 5.1.3 to 6.0. The migration has mostly gone well, but I'm running into a couple of issues/challenges.

First, here's the version I've deployed:
Code:
INFO  [2018-12-17 15:02:20,265] --------------------------------------------------------------------------------
INFO  [2018-12-17 15:02:20,265] JPPF Version: 6.0, Build number: 2240, Build date: 2018-10-06 09:32 CEST
INFO  [2018-12-17 15:02:20,265] starting node with PID=16592, UUID=C139E8FF-0046-0267-D5E8-274B970AFE01
INFO  [2018-12-17 15:02:20,265] --------------------------------------------------------------------------------

1. The getAllJobIds management API is no longer available; from what I can tell, it has been replaced with getAllJobUuids. However, no such method exists on JMXDriverConnectionWrapper, which still has getAllJobIds, and calling it throws the following exception:
Code:
javax.management.AttributeNotFoundException: No such attribute: AllJobIds
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:81)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at org.jppf.jmxremote.nio.JMXMessageReader.handleRequest(JMXMessageReader.java:125)
at org.jppf.jmxremote.nio.JMXMessageReader.handleMessage(JMXMessageReader.java:98)
at org.jppf.jmxremote.nio.JMXMessageReader.access$0(JMXMessageReader.java:95)
at org.jppf.jmxremote.nio.JMXMessageReader$HandlingTask.run(JMXMessageReader.java:339)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Possibly a JPPF client issue?


2. I consistently get a java.lang.InterruptedException from my existing call that checks whether the driver is running.

Code:
try (JMXDriverConnectionWrapper jmxDriverConn = new JMXDriverConnectionWrapper("localhost", 11111)) {
    jmxDriverConn.connectAndWait(5000);

    if (jmxDriverConn.isConnected()) {
        // ......
    }
} catch (Exception e) {
    // ........................
}

This exception is swallowed inside JPPF and never reaches the catch block above:

Code:
WARN  [2018-12-17 15:02:14,468] java.lang.InterruptedException
DEBUG [2018-12-17 15:02:14,468] localhost:11111 JMX URL = service:jmx:jppf://localhost:11111
java.net.ConnectException: Connection refused: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.jppf.comm.socket.SocketChannelClient.open(SocketChannelClient.java:248)
at org.jppf.comm.socket.SocketInitializerImpl.initialize(SocketInitializerImpl.java:105)
at org.jppf.comm.socket.QueuingSocketInitializer.access$001(QueuingSocketInitializer.java:31)
at org.jppf.comm.socket.QueuingSocketInitializer$1.call(QueuingSocketInitializer.java:61)
at org.jppf.comm.socket.QueuingSocketInitializer$1.call(QueuingSocketInitializer.java:58)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
DEBUG [2018-12-17 15:02:14,468]
java.lang.NullPointerException
at org.jppf.jmxremote.JPPFJMXConnector.close(JPPFJMXConnector.java:140)
at org.jppf.management.JMXConnectionWrapper$1.run(JMXConnectionWrapper.java:137)
at java.lang.Thread.run(Thread.java:748)


Any assistance would be greatly appreciated.

CM

lolo

  • Administrator
  • JPPF Council Member
  • Posts: 2272
Re: Migrating from JPPF 5.1.3 to 6.0
« Reply #1 on: December 18, 2018, 07:46:32 AM »

Hello CM!

Thanks once again for this detailed report.

Regarding the first problem: this is a mistake on our side, for which I registered the bug JPPF-567 "JMXDriverConnectionWrapper.getAllJobIds still exists and raises an exception". Fortunately, there is an easy workaround:

Code:
// workaround: fetch the job uuids through the job management MBean
JMXDriverConnectionWrapper jmx = ...;
String[] jobUuids = jmx.getJobManager().getAllJobUuids();
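
Putting this together with the connection snippet from your first post, here is a minimal sketch of the workaround in context; it only uses calls already shown in this thread, and "localhost" / 11111 are placeholders for your driver's management host and port:

Code:
// connect to the driver's JMX server, then list the uuids of all jobs it currently holds
try (JMXDriverConnectionWrapper jmx = new JMXDriverConnectionWrapper("localhost", 11111)) {
  jmx.connectAndWait(5000);
  if (jmx.isConnected()) {
    String[] jobUuids = jmx.getJobManager().getAllJobUuids();
    for (String uuid : jobUuids) System.out.println("job uuid: " + uuid);
  }
} catch (Exception e) {
  e.printStackTrace();
}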

For the second problem, this is another bug: JPPF-568 "Exceptions shown in the log when JMXDriverConnectionWrapper fails to connect to the driver". I guarantee that these exceptions are harmless; however, I understand they can be an annoyance when they show up in the logs. The warning for InterruptedException can be hidden by setting the logging level to ERROR for the class org.jppf.comm.socket.QueuingSocketInitializer, while the other two exceptions can be hidden by setting the log level to INFO or above for the package org.jppf.management, as in the sketch below.
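
For example, assuming a log4j 1.x properties file (the configuration style used by the JPPF driver and node distributions; adapt the syntax if you use a different logging backend):

Code:
# silence the InterruptedException warning from the socket initializer
log4j.logger.org.jppf.comm.socket.QueuingSocketInitializer=ERROR
# hide the connect/close exceptions logged at DEBUG by the management classes
log4j.logger.org.jppf.management=INFO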

Sincerely,
-Laurent

codemonkey

  • JPPF Council Member
  • Posts: 138
Re: Migrating from JPPF 5.1.3 to 6.0
« Reply #2 on: December 18, 2018, 04:12:44 PM »

Hi Laurent! Thank you so much for the quick response!

Your suggestions worked perfectly. Thank you again!

CM

Piercy

  • JPPF Padawan
  • Posts: 1
Re: Migrating from JPPF 5.1.3 to 6.0
« Reply #3 on: March 21, 2019, 03:12:06 PM »

Hi Laurent, is there a way to somehow hide those harmless errors in the logs?

lolo

  • Administrator
  • JPPF Council Member
  • Posts: 2272
Re: Migrating from JPPF 5.1.3 to 6.0
« Reply #4 on: March 21, 2019, 04:59:45 PM »

Hello,

The bugs JPPF-567 and JPPF-568 were fixed in the JPPF 6.0.2 release. If you upgrade to this version, you shouldn't see those log messages anymore. Could you please try the upgrade and let us know the outcome?

Thanks,
-Laurent