This site hosts historical documentation. Visit www.terracotta.org for recent product information.
This page contains further information on configuring and troubleshooting Terracotta Web Sessions.
The following diagram shows the architecture of a typical Terracotta-enabled web application.
The load balancer parcels out HTTP requests from the Internet to each application server. To maximize the locality of reference of the clustered HTTP session data, the load balancer uses HTTP session affinity so all requests corresponding to the same HTTP session are routed to the same application server. However, with a Terracotta-enabled web application, any application server can process any request. Terracotta Web Sessions clusters the sessions, allowing sessions to survive node hops and failures.
The application servers run both your web application and the Terracotta client software, and are called "clients" in a Terracotta cluster. You can deploy as many application servers as needed to handle your site's load.
For more information about Terracotta clusters, refer to the pages in the Terracotta Server Array section.
While Terracotta Web Sessions is designed for optimum performance with the configuration you set at installation, in some cases it may be necessary to use the configuration attributes described in the following sections.
By default, session locking is off in Terracotta Web Sessions. If your application requires that concurrent requests to the same session not run simultaneously, you can enable session locking.
To enable session locking, add an <init-param> block as follows:
<filter>
  <filter-name>terracotta-filter</filter-name>
  <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
  <init-param>
    <param-name>tcConfigUrl</param-name>
    <param-value>localhost:9510</param-value>
  </init-param>
  <init-param>
    <param-name>sessionLocking</param-name>
    <param-value>true</param-value>
  </init-param>
</filter>
If you enable session locking, see Deadlocks When Session Locking Is Enabled.
Synchronous write locks provide an extra layer of data protection: a client node waits until it receives acknowledgement from the Terracotta Server Array that its changes have been committed, and releases the write lock only after receiving that acknowledgement. Note that enabling synchronous write locks can substantially increase latency and thus degrade cluster performance.
To enable synchronous writes, add an <init-param> block as follows:
<filter>
  <filter-name>terracotta-filter</filter-name>
  <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
  <init-param>
    <param-name>tcConfigUrl</param-name>
    <param-value>localhost:9510</param-value>
  </init-param>
  <init-param>
    <param-name>synchronousWrite</param-name>
    <param-value>true</param-value>
  </init-param>
</filter>
Web Sessions gives you the option to configure both heap and off-heap memory tiers.
To set the sizing attributes, add one or both <init-param> blocks to your web.xml as follows:
<filter>
  <filter-name>terracotta-filter</filter-name>
  <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
  <init-param>
    <param-name>tcConfigUrl</param-name>
    <param-value>localhost:9510</param-value>
  </init-param>
  <init-param>
    <param-name>maxBytesOnHeap</param-name>
    <param-value>128M</param-value>
  </init-param>
  <init-param>
    <param-name>maxBytesOffHeap</param-name>
    <param-value>2G</param-value>
  </init-param>
</filter>
The nonstop timeout is the number of milliseconds an application waits for any cache operation to return before timing out. Nonstop allows certain operations to proceed on clients that have become disconnected from the cluster. One way clients go into nonstop mode is when they receive a "cluster offline" event. Note that a nonstop cache can go into nonstop mode even if the node is not disconnected, such as when a cache operation is unable to complete within the timeout allotted by the nonstop configuration.
To set the nonstop timeout, add an <init-param> block to your web.xml as follows:
<filter>
  <filter-name>terracotta-filter</filter-name>
  <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
  <init-param>
    <param-name>tcConfigUrl</param-name>
    <param-value>localhost:9510</param-value>
  </init-param>
  <init-param>
    <param-name>nonStopTimeout</param-name>
    <param-value>30000</param-value>
  </init-param>
</filter>
You can tune the timeout value to fit your environment. The following information provides additional guidance for choosing a nonStopTimeout value:
The concurrency attribute allows you to set the number of segments for the map backing the underlying server store managed by the Terracotta Server Array. If concurrency is not explicitly set (or set to "0"), the system selects an optimized value.
To configure or tune concurrency, add an <init-param> block to your web.xml as follows:
<filter>
  <filter-name>terracotta-filter</filter-name>
  <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
  <init-param>
    <param-name>tcConfigUrl</param-name>
    <param-value>localhost:9510</param-value>
  </init-param>
  <init-param>
    <param-name>concurrency</param-name>
    <param-value>256</param-value>
  </init-param>
</filter>
The server map underlying the Terracotta Server Array contains the data used by clients in the cluster and is segmented to improve performance through added concurrency. Under most circumstances, the concurrency value is optimized by the Terracotta Server Array and does not require tuning.
If an explicit and fixed segmentation value must be set, use the concurrency attribute, making sure to set an appropriate concurrency value. A too-low concurrency value could cause unexpected eviction of elements. A too-high concurrency value may create many empty segments on the Terracotta Server Array (or many segments holding a few or just one element).
The following information provides additional guidance for choosing a concurrency value:
The following sections summarize common issues that can be encountered when clustering Web Sessions.
Sessions that are set to expire after a certain time instead expire sooner, at unexpected times. This problem can occur when sessions hop between nodes whose system clocks differ. A node that receives a request for a session that originated on another node validates the session against its own local time, not the time on the originating node. Running the Network Time Protocol (NTP) on all nodes helps avoid system-time drift. Note, however, that nodes set to different time zones can cause this problem even with NTP.
This problem can also cause sessions to time out later than expected, although this variation can have many other causes.
Terracotta Web Sessions must run in serialization mode. In serialization mode, sessions are clustered, and your application must follow the standard servlet convention of calling setAttribute() after modifying a mutable object stored in a replicated session.
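The convention matters because a clustering filter can only replicate attributes it sees being re-set; mutating an object already stored in the session, without calling setAttribute() again, leaves the change invisible to the session store. The sketch below illustrates the pattern with a hypothetical MockSession that tracks dirty attributes the way a clustering filter might; MockSession and Cart are illustrative stand-ins, not the servlet or Terracotta API.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for HttpSession: records which attribute names
// were re-set, as a clustering filter might when deciding what to replicate.
class MockSession {
    private final Map<String, Object> attrs = new HashMap<>();
    private final List<String> dirty = new ArrayList<>();

    Object getAttribute(String name) { return attrs.get(name); }

    void setAttribute(String name, Object value) {
        attrs.put(name, value);
        dirty.add(name); // only re-set attributes are seen as changed
    }

    List<String> dirtyAttributes() { return dirty; }
}

// Hypothetical mutable session attribute; must be Serializable to cluster.
class Cart implements Serializable {
    final List<String> items = new ArrayList<>();
    void addItem(String item) { items.add(item); }
}

public class SetAttributeConvention {
    public static List<String> demo() {
        MockSession session = new MockSession();
        session.setAttribute("cart", new Cart());

        Cart cart = (Cart) session.getAttribute("cart");
        cart.addItem("book");               // in-place mutation alone is NOT replicated
        session.setAttribute("cart", cart); // re-set so the change is visible to the cluster
        return session.dirtyAttributes();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Without the second setAttribute() call, only the initial put would be recorded, and the added item would exist only on the node that handled the request.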
In some containers or frameworks, it is possible to see deadlocks when session locking is in effect. This happens when an external request is made from inside the locked session to access that same session. This type of request fails because the session is locked.
Most Servlet spec-defined events will work with Terracotta clustering, but the events are generated on the node where they occur. For example, if a session is created on one node and destroyed on a second node, the event is received on the second node, not on the first node.