Saturday, July 6, 2013

Grid Computing vs. Distributed Cache Locking

If you want to change a piece of data in a distributed cache, you could lock the data, change it, and then release the lock. But there is a more efficient way.

The idea is to leverage grid computing to execute a unit of work on the cache without any locking. The advantage of this approach is that it requires fewer network calls: one call to ship and execute the work, rather than the three needed to lock, mutate, and unlock.
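The difference can be sketched in plain Java, using a local `ConcurrentHashMap` as a stand-in for the distributed cache (the class and key names here are made up, and `compute` plays the role of the single "execute work" call):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class LockVsInvoke {
    static final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
    static final ReentrantLock lock = new ReentrantLock(); // stands in for a distributed lock

    // Lock-based approach: three distinct operations. In a distributed
    // cache, each one is a separate network call.
    static int incrementWithLock(String key) {
        lock.lock();                              // call 1: acquire the lock
        try {
            Integer v = cache.get(key);
            int next = (v == null ? 0 : v) + 1;
            cache.put(key, next);                 // call 2: mutate
            return next;
        } finally {
            lock.unlock();                        // call 3: release the lock
        }
    }

    // Work-shipping approach: one operation that carries the unit of work.
    // compute() runs the function atomically for the key, analogous to
    // executing a processor on the node that owns the entry.
    static int incrementWithWork(String key) {
        return cache.compute(key, (k, v) -> (v == null ? 0 : v) + 1);
    }

    public static void main(String[] args) {
        System.out.println(incrementWithLock("hits"));  // 1
        System.out.println(incrementWithWork("hits"));  // 2
    }
}
```

Both methods produce the same result; the point is the number of round trips, not the arithmetic.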

This architectural pattern works for two main reasons:

  1. Key affinity - requests for a given piece of data get routed to the same server in the cluster.
  2. One thread per key - although the host node may have a thread pool, only one of its threads can execute work against a particular key at any one time.

In Oracle's Coherence, this is achieved by implementing the EntryProcessor interface. "Within the process method, you don't need to worry about concurrency - Coherence guarantees that the individual entry processors against the same entry will execute atomically and in the order of arrival, which greatly simplifies the processing logic. It also guarantees that the processor will be executed in the case of server failure, by failing it over to the node that becomes the new owner of the entry it needs to process." - Oracle Coherence 3.5.
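A minimal processor might look like the sketch below (assuming Coherence is on the classpath; the cache name "counters" and key "page-hits" are invented for illustration):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Increments an Integer entry in place. process() runs on the cluster
// member that owns the key, so no explicit locking is needed.
public class IncrementProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        Integer current = (Integer) entry.getValue();
        int next = (current == null ? 0 : current) + 1;
        entry.setValue(Integer.valueOf(next));
        return Integer.valueOf(next);
    }
}
```

Invoking it is a single network call; Coherence routes it to the node that owns the entry:

```java
NamedCache cache = CacheFactory.getCache("counters");
Integer result = (Integer) cache.invoke("page-hits", new IncrementProcessor());
```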
