JPPF - The open source grid computing solution

Recent Posts

1
Dear Laurent,

Hi, I'm coming back with a question on how to integrate Hazelcast into JPPF.

Following my previous case in this topic: http://www.jppf.org/forums/index.php/topic,7964.msg12613.html
I managed to use the FileUtils functions to transfer data between the node and the driver, and I tested it in a grid environment (with 3-4 PCs as nodes), but I am not quite satisfied with the results I have gotten so far.

To illustrate the case in more detail, here are my case study requirements:
1. I created three main classes: JobClient, JobRunner and JobTask (in this case JobTask extends CommandLineTask).
2. When the application starts, it executes JobClient to load a selected input file (containing a list of Windows commands); JobRunner then loops through the list and adds each entry as a JPPF task (a Windows CMD task) to a newly created JPPF job.
3. These tasks are picked up by whichever nodes are available, and each node generates a result to be transferred back to the driver once done.
4. Currently, since each node can perform more than one task, its results are saved into a temporary folder (on the node's PC) until all of its tasks are done; the node then performs an additional step to zip all of its results before sending them to the driver.
5. Once all of the tasks have completed, the driver loops through the zipped result files and extracts them into a final file (on the driver's PC).
6. All of this runs without problems for a small list in the input file (i.e., a smaller job), but fails for bigger lists.
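The zip step in point 4 can be sketched with plain JDK classes (the class, folder and file names below are illustrative only, not my actual code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipResultsDemo {
    // Zip every regular file in the node's temporary result folder into a
    // single archive, so only one file has to travel back to the driver.
    static Path zipFolder(Path folder, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile));
             DirectoryStream<Path> files = Files.newDirectoryStream(folder)) {
            for (Path f : files) {
                if (!Files.isRegularFile(f)) continue;
                zos.putNextEntry(new ZipEntry(f.getFileName().toString()));
                zos.write(Files.readAllBytes(f));
                zos.closeEntry();
            }
        }
        return zipFile;
    }

    public static void main(String[] args) throws IOException {
        Path folder = Files.createTempDirectory("node-results");
        Files.write(folder.resolve("task1.out"), "result 1".getBytes());
        Files.write(folder.resolve("task2.out"), "result 2".getBytes());
        Path zip = Files.createTempFile("results", ".zip");
        zipFolder(folder, zip);
        System.out.println("zipped " + Files.size(zip) + " bytes");
    }
}
```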

So overall, the time until the job finishes within the JPPF grid is not much different from doing it on one PC without JPPF. Maybe I have included some unnecessary steps in the logic; I am still not sure, as it already seems to work fine for some input files.

Now, for comparison purposes, I am trying to use Hazelcast to share the input file with the nodes and to send the result files from the nodes to the driver, but I am having difficulty understanding your code in the Data Dependency example.

Would you have time to give me some guidance on how to set up Hazelcast in JPPF, once I have added the jar to my library folder classpath?

Currently, as elaborated above, I call FileUtils.getFileAsBytes(fileTemp) and pass the bytes to the JPPF task's setResult() method to send the result file to the driver.
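That pattern can be sketched with java.nio.file.Files.readAllBytes as a plain-JDK stand-in for JPPF's FileUtils.getFileAsBytes (the class and method names here are illustrative, not the actual task code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResultBytesDemo {
    // Read the node's temporary result file into a byte array, so it can be
    // handed to Task.setResult() and serialized back to the driver.
    static byte[] readResultFile(Path fileTemp) throws IOException {
        return Files.readAllBytes(fileTemp);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("jppf-result", ".zip");
        Files.write(tmp, "dummy zipped result".getBytes());
        byte[] payload = readResultFile(tmp);
        // in the real task this would be: setResult(payload);
        System.out.println(payload.length); // prints 19
        Files.delete(tmp);
    }
}
```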

If I use Hazelcast in this case, can I use JPPFDriverStartupSPI instead of JPPFNodeStartupSPI (as in your example)?
And is it correct to simply populate a Hazelcast Map/MultiMap with the input and result files, to be accessed/updated by each node within the JPPF task, to achieve the same logic? Or is some additional configuration necessary for Hazelcast to run in JPPF for my case?


Really appreciate your attention and have a great day!

Goshlive  ;)
2
Installation and Configuration / Distribute the tasks to distinct nodes
« Last post by marco-polo on May 21, 2017, 10:37:20 AM »
Hello,

I have a question about the distribution of the tasks of a job across different nodes.
In my case, a job contains n tasks. I was wondering whether I can specify that the tasks are distributed one per node, so that each node processes exactly one task. Is that possible?

Thank you and have a great day!

Marco
3
Forums / Re: Please post a message when you register
« Last post by marco-polo on May 18, 2017, 01:08:59 PM »
Hello everyone!!
4
Troubleshooting / Re: application does not scale
« Last post by lolo on May 11, 2017, 08:11:42 AM »
Hello,

Unfortunately, I do not have any conclusive information on the scaling issue. I ran some experiments, on a much smaller time scale than yours (seconds, whereas yours is hours), and there is one thing I found out. I ran CPU-bound computations, scaled them to 10x and then 100x longer, and saw that they did not scale linearly. By running a CPU monitoring tool side by side with the nodes, I could see that the CPU (an Intel i7-4390) was constantly adjusting its frequency depending on the temperature of each core.

I ran the following code in the tasks:

Code: [Select]
public class MyTask extends AbstractTask<String> {
  private int arraySize, nbIterations;
  private long elapsed;

  public MyTask(final int arraySize, final int nbIterations) {
    this.arraySize = arraySize;
    this.nbIterations = nbIterations;
  }

  @Override
  public void run() {
    long start = System.nanoTime();
    // purely CPU-bound busy work: fill an array with values derived from
    // exp(log(j + 1)), repeated nbIterations times
    for (int i = 0; i < nbIterations; i++) {
      byte[] array = new byte[arraySize];
      for (int j = 0; j < arraySize; j++) {
        int n = (int) Math.exp(Math.log(j + 1));
        n = (n % 256) - 128;
        array[j] = (byte) n;
      }
    }
    elapsed = (System.nanoTime() - start) / 1_000_000L;
    setResult(String.format("completed in %,d ms on node '%s'", elapsed, JPPFConfiguration.getProperties().getString("id")));
  }

  public long getElapsed() { return elapsed; }
}

I used 2 nodes, one with a 128 MB heap, the other with 1 GB, but could not see any significant difference in performance.

Using an array size of 512 KB and varying the number of iterations, I got these results (times in milliseconds):
Code: [Select]
   170 iterations: job time =   6,375; avg task =   6,151; min task =   5,967; max task =   6,319
 1,700 iterations: job time =  68,742; avg task =  67,424; min task =  66,138; max task =  68,672
17,000 iterations: job time = 983,948; avg task = 964,960; min task = 950,358; max task = 983,877

Here I'm quite sure the discrepancy is due to the CPU's temperature control mechanism.

Quote
The application never completes. Why?

There isn't currently enough information to answer this. What I would suggest is to add logging so that the job's life cycle can be traced. Can you add the following:

In the driver's Log4j configuration:
Code: [Select]
log4j.logger.org.jppf.server.job.management.DriverJobManagement=DEBUG
This will allow us to see if and when a job is received from the application (JOB_QUEUED notification), dispatched to a node (JOB_DISPATCHED), results are received from a node (JOB_RETURNED), and the job completes in the driver (JOB_ENDED).

In the application's Log4j configuration:
Code: [Select]
log4j.logger.org.jppf.client.JPPFJob=DEBUG
Similarly, this will show when a job is submitted (JOB_START), tasks are sent to the driver (JOB_DISPATCH), results are received from the driver (JOB_RETURN), and the job completes in the client (JOB_END).

Can you perform a test with these log levels and let us know the outcome?

Thanks,
-Laurent
5
Troubleshooting / application does not scale
« Last post by broiyan on May 09, 2017, 08:30:22 AM »
My job has 55 tasks sent to 7 servers (8 processors in each server).
Most servers get 8 tasks and one server gets only 7; the total is the expected 55.
server0 runs the application, the driver and a node.
server1 through server6 each run a node.
With 20 units of data, the application execution completes in about 19 hours.
This works as expected.

With 200 units of data it is expected to complete in 190 hours, but execution actually completes around 210 hours, plus or minus 10 hours, because of some apparent non-linearity and because "identical" machines do not perform identically for unknown reasons. When a node completes the 8 tasks allocated to it, I wait several minutes and then terminate the node via Ctrl-C and shut down Linux. Sometimes I even use vmstat to verify that the unit is idling before I issue Ctrl-C. Nodes are terminated to save electricity. The application never completes. Why?

On server0, which runs the application, driver and a node, vmstat shows that the unit is 96% idle, performing very little block I/O and no swapping. For some reason, swpd memory is still occupied. Note that the second line of vmstat output is the important one: the first line is historical (since boot) and the second line is from the recent sampling period.
$ vmstat -Sm 8 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0  17372    257     13    271    0    0    20    22    5    0 96  0  2  2  0
 0  0  17372    258     13    271    0    0     0     2 1450 3351  3  0 96  0  0


The application's heap high-water mark is 20 GB on a 32 GB machine according to my log file. Ultimately, I don't think the heap high-water mark means much, because it tends to be higher on machines that have more memory and a higher Xmx, even with the same data. In other words, I believe the JVM just postpones garbage collection on "bigger" machines. Given that, I think vmstat is a better indicator of a constraint. The si, so, bi and bo columns show that not much is happening; the 96% idle reiterates that point.

I believe vmstat shows swpd memory is always 0 on server1 through server6.

Since the application never terminates, its console is stuck looking like this:
OptimizingGridWorker constructor
OptimizingGridWorker constructor
OptimizingGridWorker constructor
OptimizingGridWorker constructor
... OMITTING SOME MORE OF THE SAME...
OptimizingGridWorker constructor
Number of tasks to start 55.
client process id: 2524, uuid: 01EC117A-A58C-9ECE-402A-488AC1A36B09
[client: driver1-1 - ClassServer] Attempting connection to the class server at localhost:11111
[client: driver1-1 - ClassServer] Reconnected to the class server
[client: driver1-1 - TasksServer] Attempting connection to the task server at localhost:11111
[client: driver1-1 - TasksServer] Reconnected to the JPPF task server


The application log:
$ cat jppf.log
2017-04-29 15:58:32,713 [INFO ][org.jppf.utils.ManagementUtils.<clinit>(153)]: management successfully initialized
2017-04-29 15:58:32,722 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(80)]: --------------------------------------------------------------------------------
2017-04-29 15:58:32,722 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(81)]: JPPF Version: 5.1.2, Build number: 1758, Build date: 2016-02-05 06:24 CET
2017-04-29 15:58:32,722 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(82)]: starting client with PID=2524, UUID=01EC117A-A58C-9ECE-402A-488AC1A36B09
2017-04-29 15:58:32,723 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(83)]: --------------------------------------------------------------------------------
2017-04-29 15:58:32,817 [INFO ][org.jppf.client.AbstractGenericClient.newConnection(281)]: connection [driver1-1] created
2017-04-29 15:58:32,829 [INFO ][org.jppf.client.ClassServerDelegateImpl.init(76)]: [client: driver1-1 - ClassServer] Attempting connection to the class server at localhost:11111
2017-04-29 15:58:32,846 [INFO ][org.jppf.client.ClassServerDelegateImpl.init(84)]: [client: driver1-1 - ClassServer] Reconnected to the class server
2017-04-29 15:58:32,853 [INFO ][org.jppf.client.TaskServerConnectionHandler.init(78)]: [client: driver1-1 - TasksServer] Attempting connection to the task server at localhost:11111
2017-04-29 15:58:32,899 [INFO ][org.jppf.client.TaskServerConnectionHandler.init(91)]: [client: driver1-1 - TasksServer] Reconnected to the JPPF task server
2017-04-29 15:58:32,917 [INFO ][org.jppf.client.balancer.queue.TaskQueueChecker.dispatchJobToChannel(299)]: dispatching 1000000 tasks to remote channel


The driver log shows activity on April 29 and also on May 8. That represents about 200 hours:
$ cat jppf-driver.log
2017-04-29 15:50:12,368 [INFO ][org.jppf.utils.ManagementUtils.<clinit>(153)]: management successfully initialized
2017-04-29 15:50:12,398 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(80)]: --------------------------------------------------------------------------------
2017-04-29 15:50:12,398 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(81)]: JPPF Version: 5.1.2, Build number: 1758, Build date: 2016-02-05 06:24 CET
2017-04-29 15:50:12,399 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(82)]: starting driver with PID=2185, UUID=A89E415F-408D-1191-F6FF-B3434E2E60DC
2017-04-29 15:50:12,399 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(83)]: --------------------------------------------------------------------------------
2017-04-29 15:50:12,660 [INFO ][org.jppf.nio.NioConstants.getCheckConnection(80)]: NIO checks are enabled
2017-04-29 15:50:12,664 [INFO ][org.jppf.nio.StateTransitionManager.initExecutor(288)]: globalExecutor=java.util.concurrent.ThreadPoolExecutor@47f37ef1[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], maxSize=8
2017-05-08 01:56:43,138 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=16, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=16], state=IDLE, uuid=D88CC823-0CC4-7F74-3379-4822710ADA66, connectionUuid=null, peer=false, ssl=false]] : java.net.ConnectException: node SelectionKeyWrapper[id=16, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=16], state=IDLE, uuid=D88CC823-0CC4-7F74-3379-4822710ADA66, connectionUuid=null, peer=false, ssl=false]] has been disconnected
2017-05-08 01:56:43,138 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=14, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=14], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=D88CC823-0CC4-7F74-3379-4822710ADA66, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=12, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=12], state=IDLE, uuid=4AC689F0-DE3B-3A61-E354-D8925CB080B8, connectionUuid=null, peer=false, ssl=false]] : java.net.ConnectException: node SelectionKeyWrapper[id=12, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=12], state=IDLE, uuid=4AC689F0-DE3B-3A61-E354-D8925CB080B8, connectionUuid=null, peer=false, ssl=false]] has been disconnected
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=20, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=20], state=IDLE, uuid=7D744992-2E33-31EE-CBF5-DA2CD83CF2EA, connectionUuid=null, peer=false, ssl=false]] : java.net.ConnectException: node SelectionKeyWrapper[id=20, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=20], state=IDLE, uuid=7D744992-2E33-31EE-CBF5-DA2CD83CF2EA, connectionUuid=null, peer=false, ssl=false]] has been disconnected
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=26, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=26], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=282105AB-E026-DCA1-0A0A-74EE10F85586, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=6, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=6], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=9CD38BB6-A195-F74B-B0F2-A668C8E66939, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=10, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=10], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=4AC689F0-DE3B-3A61-E354-D8925CB080B8, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=18, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=18], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=7D744992-2E33-31EE-CBF5-DA2CD83CF2EA, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 16:35:09,349 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=8, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=8], state=IDLE, uuid=9CD38BB6-A195-F74B-B0F2-A668C8E66939, connectionUuid=null, peer=false, ssl=false]] : java.net.ConnectException: node SelectionKeyWrapper[id=8, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=8], state=IDLE, uuid=9CD38BB6-A195-F74B-B0F2-A668C8E66939, connectionUuid=null, peer=false, ssl=false]] has been disconnected
2017-05-08 22:00:50,674 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=22, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=22], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=1C643953-664F-5392-8902-3B73593EB596, secure=false, ssl=false]] : java.io.EOFException: null
2017-05-08 22:00:51,885 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=4, readyOps=1, interestOps=0, context=RemoteNodeContext[channel=SelectionKeyWrapper[id=4], state=WAITING_RESULTS, uuid=A9FE2297-7D2B-A04F-329D-A8B7AB61B376, connectionUuid=null, peer=false, ssl=false]] : java.io.IOException: Connection reset by peer
2017-05-08 22:00:50,674 [WARN ][org.jppf.nio.StateTransitionTask.run(89)]: error on channel SelectionKeyWrapper[id=2, readyOps=1, interestOps=0, context=NodeClassContext[channel=SelectionKeyWrapper[id=2], state=WAITING_NODE_REQUEST, resource=null, pendingResponses=0, type=node, peer=false, uuid=A9FE2297-7D2B-A04F-329D-A8B7AB61B376, secure=false, ssl=false]] : java.io.EOFException: null


An example of a node log:
$ cat jppf-node.log
2017-04-29 15:51:34,603 [INFO ][org.jppf.utils.ManagementUtils.<clinit>(153)]: management successfully initialized
2017-04-29 15:51:34,612 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(80)]: --------------------------------------------------------------------------------
2017-04-29 15:51:34,612 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(81)]: JPPF Version: 5.1.2, Build number: 1758, Build date: 2016-02-05 06:24 CET
2017-04-29 15:51:34,612 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(82)]: starting node with PID=2307, UUID=A9FE2297-7D2B-A04F-329D-A8B7AB61B376
2017-04-29 15:51:34,613 [INFO ][org.jppf.utils.VersionUtils.logVersionInformation(83)]: --------------------------------------------------------------------------------
2017-04-29 15:51:35,185 [INFO ][org.jppf.classloader.ClassLoaderRequestHandler.run(155)]: maxBatchSize = 1
2017-04-29 15:51:35,240 [INFO ][org.jppf.execute.AbstractExecutionManager.<init>(114)]: running 8 processing threads
2017-04-29 15:51:35,241 [INFO ][org.jppf.execute.AbstractExecutionManager.createThreadManager(140)]: Using default thread manager

6
Hello,

This behavior is normal and is due to the serialization and deserialization that occur between the client application and the nodes. Keep in mind that the "output" processed in the node is a copy-by-serialization of the output created in the client. Similarly, the output returned as the result of jppfClient.submitJob(job) is a copy-by-serialization of the one processed by the node. This is important because the states of these 3 instances are different. What you want to do is display the output that is returned, rather than the one initially created in the client, as you are doing now.

Thus, I would suggest the following:
- add a getOutput() method to GenericTask, so the output with a processed state is accessible
- rewrite the loop that displays the output like this:
Code: [Select]
for (int i = 0; i < 10; i++) {
  GenericTask task = (GenericTask) results.get(i).getTaskObject();
  Claim processedOutput = task.getOutput();
  System.out.println(processedOutput.get("FirstNameEdit"));
}
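For clarity, here is a self-contained sketch of what the suggested change could look like. Claim here is a hypothetical minimal stand-in for the actual class, just enough to compile; the real processing in call() is elided:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.concurrent.Callable;

// hypothetical minimal stand-in for the poster's Claim type
class Claim extends HashMap<Object, Object> {}

public class GenericTask implements Callable<String>, Serializable {
    private final Claim input;
    private final Claim output;

    public GenericTask(Claim input, Claim output) {
        this.input = input;
        this.output = output;
    }

    @Override
    public String call() {
        // processing elided; the node fills 'output' here
        return null;
    }

    // the new accessor: after submitJob() returns, the client calls this on
    // the task instance obtained from results.get(i).getTaskObject()
    public Claim getOutput() {
        return output;
    }

    public static void main(String[] args) {
        Claim out = new Claim();
        out.put("FirstNameEdit", "true");
        GenericTask t = new GenericTask(new Claim(), out);
        System.out.println(t.getOutput().get("FirstNameEdit")); // prints true
    }
}
```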

Sincerely,
-Laurent
7
Hi,

I have a modified problem of similar nature.

I have three global variables -

Code: [Select]
public List<Claim> output;
public ManageCache cache;
public List<Claim> input;



My TemplateRunner code has this -

Code: [Select]
// Create a job
JPPFJob job = createJob("Claims Edit Loop");

for (int i = 0; i < 10; i++) {
  GenericTask t = new GenericTask(input.get(i), output.get(i));
  job.add(t);
}

List<Task<?>> results = jppfClient.submitJob(job);

for (int i = 0; i < 10; i++) {
  System.out.println(output.get(i).get("FirstNameEdit"));
}

My GenericTask has the following code -

Code: [Select]
public class GenericTask implements Callable<String>, Serializable {

    Claim input;
    Claim output;

    public GenericTask(Claim input, Claim output) {
        this.input = input;
        this.output = output;
    }

    @Override
    public String call() {
        for (Object key : input.keySet()) {
            // output booleans & error messages for this claim and its value
            // does a lot of processing to get the results which are as below
            output.put(key.toString() + "Edit", ((Boolean) m1.invoke(edit, input) ? true : m2.invoke(edit)).toString());
            System.out.println(output.get(key) + " " + output.get(key + "Edit"));
        }
        return null;
    }
}


Problem - on the node's terminal, the values of output.get(key) and output.get(key + "Edit") are what I want.
On the TemplateRunner side, I'm getting null for output.get(key + "Edit"). I'm guessing this is a pass-by-reference vs. pass-by-value problem, but I'm not sure how to solve it.

Thanks,
8
Installation and Configuration / Re: Interface grid computing project
« Last post by steve4j on April 28, 2017, 08:50:07 AM »
Hello Laurent,

thanks for your super-fast response.
In my project, interfaces are "connectors" between two systems.
For example, say a system provides business data as a CSV-formatted file on an FTP server. This file needs to be picked up every 15 minutes, then iterated over, mapped and written to a database.
My project has 75 different interfaces with a total of 150 versions.
My historical data tells me that 15-80 interfaces are started at the same time during the day.
Only one instance of a given interface version can run at a time.

The current model of this project is a stand-alone rich client (Swing).
But since we don't want to be bound to hardware / OS / other problems, we want to submit the jobs to a cluster and use the scheduler from www.quartz-scheduler.org.

Thanks for your help,
Steve
9
Installation and Configuration / Re: Interface grid computing project
« Last post by lolo on April 28, 2017, 08:08:05 AM »
Hello,

You do not have to have as many nodes as there are jobs at any given time. When all nodes are busy, the server's queuing mechanism ensures that unexecuted jobs are preserved until a node becomes available. Similar queuing occurs in the client as well, since the maximum number of jobs that can be submitted concurrently to a server is determined by the number of connections in the connection pool.
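For example, with manual driver discovery, the client-side pool size can be set in the JPPF client configuration. The property names below are from memory of the JPPF 5.x documentation; please check them against your version:

Code: [Select]
# disable automatic discovery and declare one driver
jppf.discovery.enabled = false
jppf.drivers = driver1
driver1.jppf.server.host = my-driver-host
driver1.jppf.server.port = 11111
# number of connections in the pool, i.e. max jobs submitted concurrently to this driver
driver1.jppf.pool.size = 5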

The one-job-per-node model (or one job per connection in the client) is a design choice which significantly simplifies the communication model, makes the grid more robust and less prone to crashes, allows better control of resource usage, enables load-balancing which adapts to the workload, etc. The benefits are too important to abandon them now.

I'll be happy to provide suggestions on how the performance or topology can be optimized; however, I'd need a little more information. In particular, it is not clear to me what you mean by "interface". What does it represent in terms of JPPF jobs and tasks? How many tasks do your jobs have? Remember that the granularity of jobs and tasks is a critical factor for performance, and it should be given some consideration at design time when building a distributed application.

Thanks for your time,
-Laurent
10
Installation and Configuration / Interface grid computing project
« Last post by steve4j on April 27, 2017, 12:22:37 PM »
Hello Laurent,

thanks for your help in the issue tracker so far: http://www.jppf.org/tracker/tbg/jppf/issues/JPPF-500.
Now that I have DBCP & Hibernate working in my project "grid computing for interfaces", I came across this thread: http://www.jppf.org/forums/index.php/topic,1961.0.html
where you said (4 years ago) that one node can only handle one job at a time.

Since my project starts a minimum of 15 to a maximum of 80 interfaces at a time, I have to ask if I really need at least 100 nodes, each with its own JVM.
This would produce so much per-JVM overhead that I could not use JPPF as a lightweight parallel processing framework.

Is there a way to start, say, 20 nodes with a "job pool" of 5 each, so that I need only 20 JVMs instead of 100?
The nodes are needed for failover, but I cannot start that many JVMs.

Could there be a feature request for a new release, or something else that could help with this?

Thank you for your efforts
JPPF Powered by SMF 2.0 RC5 | SMF © 2006–2011, Simple Machines LLC