Because REST services are invoked remotely, concurrent requests can easily run into race conditions with each other, and one of the most common resources they race for is the session. To be practical, you need to be able to put a lock on the resource at the beginning of your process and release it when you are done.
My question is: does Spring Session have any feature to deal with race conditions over session entries?
Or any other Java library / framework?
2 Answers
Answer 1
If you're using Spring controllers, then you can use
RequestMappingHandlerAdapter.setSynchronizeOnSession(boolean)
This will make every controller method synchronized in the presence of a session.
HttpSession.setAttribute
is thread safe. However, a getAttribute
followed by a setAttribute
has to be made thread safe manually:

synchronized (session) {
    Object value = session.getAttribute("foo");
    session.setAttribute("foo", "bar");
}
The same can be done for Spring session-scoped beans:

synchronized (session) {
    // do something with the session bean
}
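To see why the get-then-set sequence needs the lock, here is a minimal, self-contained sketch in plain Java, with a HashMap standing in for the session (the map, key, and thread counts are illustrative, not part of any framework API). Without the synchronized block, concurrent increments would be lost; with it, the read-modify-write is atomic.

```java
import java.util.HashMap;
import java.util.Map;

public class SessionRaceDemo {
    // A plain HashMap plays the role of the session's attribute store.
    static final Map<String, Integer> session = new HashMap<>();

    static void increment() {
        // Synchronizing on the session object makes get-then-set atomic.
        synchronized (session) {
            Integer count = session.get("count");
            session.put("count", count == null ? 1 : count + 1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        session.put("count", 0);
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // 8 threads x 1000 increments: no updates are lost.
        System.out.println(session.get("count")); // prints 8000
    }
}
```

In real Spring MVC code, instead of synchronizing on the raw session you may prefer Spring's WebUtils.getSessionMutex(session), which returns the same mutex the framework uses when synchronizeOnSession is enabled.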
Edit
In the case of multiple containers with normal Spring session beans, you would have to use sticky sessions. That ensures that a given session's state is stored on one container and that the same container is accessed every time that session is requested. This has to be done on the load balancer, with the help of something like BigIP cookies. The rest works the same way as for a single container: since each session lives on a single container, locking the session suffices.
If you would like to use session sharing across instances, there is support in containers like Tomcat and Jetty.
These approaches use a back-end database or some other persistence mechanism to store state.
For the same purpose you can try using Spring Session, which is trivial to configure with Redis. Since Redis is single threaded, it ensures that a given entry is accessed atomically.
The above approaches are non-invasive. Both the database and Redis based approaches support transactions.
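A minimal configuration sketch for the Spring Session + Redis approach, assuming spring-session-data-redis is on the classpath and a Redis server is reachable at the Lettuce default of localhost:6379 (both are assumptions, not given in the answer):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Replaces the container's HttpSession with one backed by Redis,
// so all application instances see the same session entries.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public RedisConnectionFactory connectionFactory() {
        // Lettuce client; with no arguments it targets localhost:6379.
        return new LettuceConnectionFactory();
    }
}
```

With this in place, the HttpSession injected into controllers is transparently stored in Redis; no other application code changes are needed.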
However, if you want more control over the distributed state and locking, you can try using distributed data grids like Hazelcast and GemFire.
I have personally worked with Hazelcast, and it does provide methods to lock entries in a map.
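A hedged sketch of Hazelcast's per-entry locking (the map name, key, and stored values are illustrative; the IMap package shown is the Hazelcast 4.x layout). IMap.lock(key) blocks every other cluster member from locking the same key until unlock is called, which is exactly the "lock the session entry, work, then release" pattern the question asks about:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class SessionLockExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> sessions = hz.getMap("sessions");

        // Cluster-wide lock scoped to this one key, not the whole map.
        sessions.lock("session-123");
        try {
            String state = sessions.get("session-123");
            sessions.put("session-123", state == null ? "init" : state + ";updated");
        } finally {
            // Always release in a finally block, or the entry stays locked.
            sessions.unlock("session-123");
        }
        hz.shutdown();
    }
}
```

Hazelcast also offers tryLock with a timeout, which is safer than a bare lock when a competing node may hold the entry for a long time.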
Answer 2
As a previous answer stated, if you are using Spring Session and you are concerned about thread safety on concurrent access of a session, you should set:
RequestMappingHandlerAdapter.setSynchronizeOnSession(true);
One example can be found here EnableSynchronizeOnSessionPostProcessor :
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;

public class EnableSynchronizeOnSessionPostProcessor implements BeanPostProcessor {

    private static final Logger logger = LoggerFactory
            .getLogger(EnableSynchronizeOnSessionPostProcessor.class);

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // NO-OP
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter adapter = (RequestMappingHandlerAdapter) bean;
            logger.info("enable synchronizeOnSession => {}", adapter);
            adapter.setSynchronizeOnSession(true);
        }
        return bean;
    }
}
Sticky Sessions and Session Replication
With regards to a clustered application and Sessions, there is a very good post here on SO, that discusses this topic: Sticky Sessions and Session Replication
In my experience, you would want both Sticky Session and Session replication. You use sticky session to eliminate the concurrent Session access across nodes, because sticky session will pin a session to a single node and each subsequent request for the same session will always be directed to that node. This eliminates the cross-node session access concern.
Replicated sessions are helpful mainly in case a node goes down. By replicating sessions, when a node goes down, future requests for existing sessions are directed to another node that holds a copy of the original session, making the failover transparent to the user.
There are many frameworks that support session replication. The one I use for large projects is the open-source Hazelcast.
In response to your comments made on @11thdimension post:
I think you are in a bit of a challenging area. Basically, you want to enforce all session operations to be atomic across nodes in a cluster. This leads me to lean towards a common session store across nodes, where access is synchronized (or something similar).
Multiple session store / replication frameworks surely support an external store concept, and I am sure Redis does. I am most familiar with Hazelcast and will use that as an example.
Hazelcast allows you to configure the session persistence to use a common database. If you look at the Map Persistence section, it shows an example and a description of the options.
The description for the concept states:
Hazelcast allows you to load and store the distributed map entries from/to a persistent data store such as a relational database. To do this, you can use Hazelcast's MapStore and MapLoader interfaces.
Data store needs to be a centralized system that is accessible from all Hazelcast Nodes. Persistence to local file system is not supported.
Hazelcast supports read-through, write-through, and write-behind persistence modes which are explained in below subsections.
The interesting mode is write-through:
Write-Through
MapStore can be configured to be write-through by setting the write-delay-seconds property to 0. This means the entries will be put to the data store synchronously.
In this mode, when the map.put(key, value) call returns:

- MapStore.store(key, value) has been called successfully, so the entry is persisted.
- The in-memory entry is updated.
- In-memory backup copies are successfully created on other JVMs (if backup-count is greater than 0).

The same behavior goes for a map.remove(key) call; the only difference is that MapStore.delete(key) is called when the entry is deleted.
I think, using this concept, plus setting up your database tables for the store properly to lock entries on insert/update/deletes, you can accomplish what you want.
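To make the MapStore contract concrete, here is a hedged skeleton of an implementation (the generic types, the sessions table, and the SQL hinted at in the comments are hypothetical; the package shown is the Hazelcast 4.x layout). Configured with write-delay-seconds set to 0, its store method runs synchronously inside map.put, which is where the database-level locking described above would take effect:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import com.hazelcast.map.MapStore;

public class SessionMapStore implements MapStore<String, byte[]> {

    @Override
    public void store(String key, byte[] value) {
        // e.g. an upsert into a central sessions table, relying on
        // row-level locking to serialize concurrent writers.
    }

    @Override
    public void storeAll(Map<String, byte[]> entries) {
        entries.forEach(this::store); // a real store would batch this
    }

    @Override
    public void delete(String key) {
        // e.g. DELETE FROM sessions WHERE id = ?
    }

    @Override
    public void deleteAll(Collection<String> keys) {
        keys.forEach(this::delete);
    }

    @Override
    public byte[] load(String key) {
        return null; // e.g. SELECT data FROM sessions WHERE id = ?
    }

    @Override
    public Map<String, byte[]> loadAll(Collection<String> keys) {
        return new HashMap<>(); // batch SELECT in a real implementation
    }

    @Override
    public Iterable<String> loadAllKeys() {
        return null; // returning null skips eager pre-loading at startup
    }
}
```

The class is then registered in the map's map-store configuration (with write-delay-seconds = 0 for write-through), so every session-map mutation flows through the central database.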
Good Luck!