JPPF Issue Tracker
CLOSED  Enhancement JPPF-183  -  Add relevant classes cache lookups in the driver
Posted Aug 29, 2013 - updated Sep 19, 2013
This issue has been closed with status "Closed" and resolution "RESOLVED".
Issue details
  • Type of issue
    Enhancement
  • Status
    Closed
  • Assigned to
     lolo4j
  • Progress
    100%
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
  • Posted by
     lolo4j
  • Owned by
    Not owned by anyone
  • Category
    Server
  • Resolution
    RESOLVED
  • Priority
    Normal
  • Targeted for
    JPPF 3.3.6
Issue description
Currently, the driver does not handle concurrent requests for the same class in an optimized way. The only synchronization occurs at the cache level. Consequently, since requests are never truly simultaneous, multiple requests for the same class may be sent to the same client before a corresponding cache entry is created.

When a class loading request comes from a node, it is added to a queue in the client channel, which then processes the requests sequentially. Currently, once a request is in the client channel's queue, it can no longer take advantage of the cache, even if a cache entry was created while it was waiting in the queue. This is very inefficient.

What we propose is that, when the disconnection of the client is detected, instead of just sending a null response for each pending class loading request, we make a "last chance" lookup in the cache, in the hope that the corresponding entry exists by that time. In fact, we could perform a cache lookup for the pending requests each time an operation is performed on the request queue; this should provide a nice improvement without changing much code.
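The "last chance" lookup described above could be sketched as follows. This is an illustrative sketch only, not the actual JPPF driver code: the names (`ClassCacheSketch`, `flushFromCache`, the cache and queue fields) are hypothetical, and the idea is simply to scan the pending-request queue on every queue operation and answer any request whose class bytes have since appeared in the cache, instead of forwarding it to the client.

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed "last chance" cache lookup.
// All names are illustrative; this is not the JPPF API.
public class ClassCacheSketch {
    /** In-driver cache: resource name -> class definition bytes. */
    static final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    /** Requests still waiting to be forwarded to the client channel. */
    static final Queue<String> pending = new ArrayDeque<>();

    /** Called on every queue operation: serve from the cache whatever we can. */
    static int flushFromCache() {
        int served = 0;
        for (var it = pending.iterator(); it.hasNext(); ) {
            String resource = it.next();
            if (cache.get(resource) != null) { // entry appeared while the request was queued
                it.remove();                   // answer the node directly, skip the client
                served++;
            }
        }
        return served;
    }

    public static void main(String[] args) {
        pending.add("com.example.Task");
        pending.add("com.example.Helper");
        // meanwhile, an earlier request completes and populates the cache
        cache.put("com.example.Task", new byte[] {1, 2, 3});
        int served = flushFromCache();
        // prints: 1 request(s) served from cache, 1 still pending
        System.out.println(served + " request(s) served from cache, " + pending.size() + " still pending");
    }
}
```

The same scan would run when a client disconnects, so pending requests get a cache hit where possible instead of a null response.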

#3
Comment posted by
 lolo4j
Sep 19, 08:21
Since additional class cache lookups did not bring any satisfying performance improvement, I implemented a mechanism that groups requests for the same resource sent to the same client. This mechanism allows the server to add requests from new nodes to an existing group, even while waiting for the client's response. I observed a performance gain ranging from 8% with 4 nodes up to 27% with 50 nodes, testing with a broadcast job that explicitly loads 1,100 classes on each node.
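The grouping mechanism described above can be sketched roughly as follows. This is a hedged illustration, not the committed JPPF code: `RequestGrouper` and its methods are invented names. The idea is that only the first request for a given resource is forwarded to the client; requests from other nodes arriving while the client response is pending simply join the group, and all group members are answered in one pass when the response arrives.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of grouping class loading requests per resource,
// so a single client round trip answers every waiting node.
public class RequestGrouper {
    /** resource name -> ids of nodes waiting for that resource. */
    private final Map<String, List<Integer>> groups = new HashMap<>();

    /** Returns true if this request must be forwarded to the client (first of its group). */
    public boolean request(String resource, int nodeId) {
        List<Integer> waiters = groups.computeIfAbsent(resource, r -> new ArrayList<>());
        waiters.add(nodeId);
        return waiters.size() == 1; // only the first request goes to the client
    }

    /** The client response arrived: return every waiting node so all are answered at once. */
    public List<Integer> respond(String resource) {
        List<Integer> waiters = groups.remove(resource);
        return waiters == null ? List.of() : waiters;
    }
}
```

With many nodes requesting the same classes, this turns N client round trips into one, which is consistent with the gain growing from 8% at 4 nodes to 27% at 50 nodes.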

Changes committed to SVN:

The issue was updated with the following change(s):
  • This issue has been closed
  • The status has been updated, from Confirmed to Closed.
  • This issue's progression has been updated to 100 percent completed.
  • The resolution has been updated, from Not determined to RESOLVED.
  • Information about the user working on this issue has been changed, from lolo4j to Not being worked on.