This site hosts historical documentation. Visit www.terracotta.org for recent product information.

Web Sessions Reference Guide

This page contains further information on configuring and troubleshooting Terracotta Web Sessions.

Architecture of a Terracotta Cluster

The following diagram shows the architecture of a typical Terracotta-enabled web application.

Terracotta cluster connected to the cloud using load balancers.

The load balancer parcels out HTTP requests from the Internet to each application server. To maximize the locality of reference of the clustered HTTP session data, the load balancer uses HTTP session affinity so all requests corresponding to the same HTTP session are routed to the same application server. However, with a Terracotta-enabled web application, any application server can process any request. Terracotta Web Sessions clusters the sessions, allowing sessions to survive node hops and failures.

The application servers run both your web application and the Terracotta client software, and are called "clients" in a Terracotta cluster. You can deploy as many application servers as needed to handle your site's load.

For more information about Terracotta clusters, refer to the pages in the Terracotta Server Array section.

Optional Configuration Attributes

While Terracotta Web Sessions is designed for optimum performance with the configuration you set at installation, in some cases it may be necessary to use the configuration attributes described in the following sections.

Session Locking

By default, session locking is off in Terracotta Web Sessions. If your application requires that concurrent requests to the same session be serialized, you can enable session locking.

To enable session locking, add an <init-param> block as follows:

<filter>
 <filter-name>terracotta-filter</filter-name>
 <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
 <init-param>
   <param-name>tcConfigUrl</param-name>
   <param-value>localhost:9510</param-value>
 </init-param>
 <init-param>
   <param-name>sessionLocking</param-name>
   <param-value>true</param-value>
 </init-param>
</filter>

If you enable session locking, see Deadlocks When Session Locking Is Enabled.

Synchronous Writes

Synchronous write locks provide an extra layer of data protection by having a client node wait until it receives acknowledgement from the Terracotta Server Array that the changes have been committed. The client releases the write lock after receiving the acknowledgement. Note that enabling synchronous write locks can substantially increase latency and thus degrade cluster performance.

To enable synchronous writes, add an <init-param> block as follows:

<filter>
 <filter-name>terracotta-filter</filter-name>
 <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
 <init-param>
   <param-name>tcConfigUrl</param-name>
   <param-value>localhost:9510</param-value>
 </init-param>
 <init-param>
   <param-name>synchronousWrite</param-name>
   <param-value>true</param-value>
 </init-param>
</filter>

Sizing Options

Web Sessions gives you the option to configure both heap and off-heap memory tiers.

  • Memory store – Heap memory that holds a copy of the hottest subset of data from the off-heap store. Subject to Java garbage collection (GC).
  • Off-heap store – Limited in size only by available RAM. Not subject to Java GC. Can store serialized data only. Provides overflow capacity to the memory store. Note: If using off-heap, refer to Allocating direct memory in the JVM.
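Because the off-heap store is backed by direct (off-heap) memory, the JVM running the application server must be started with enough direct memory to hold it. A common way to do this (the exact value and startup-script variable are illustrative, not prescribed by Web Sessions) is to set -XX:MaxDirectMemorySize to at least the configured maxBytesOffHeap:

```shell
# Illustrative JVM options for an application server whose filter
# configures maxBytesOffHeap of 2G: allow at least that much direct
# memory, plus headroom for other direct-buffer users.
JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=2G"
```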

To set the sizing attributes, add one or both <init-param> blocks to your web.xml as follows:

<filter>
 <filter-name>terracotta-filter</filter-name>
 <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
 <init-param>
   <param-name>tcConfigUrl</param-name>
   <param-value>localhost:9510</param-value>
 </init-param>
 <init-param>
   <param-name>maxBytesOnHeap</param-name>
   <param-value>128M</param-value>
 </init-param>
 <init-param>
   <param-name>maxBytesOffHeap</param-name>
   <param-value>2G</param-value>
 </init-param>
</filter>

Nonstop and Rejoin Options

The nonstop timeout is the number of milliseconds an application waits for any cache operation to return before timing out. Nonstop allows certain operations to proceed on clients that have become disconnected from the cluster. One way clients go into nonstop mode is when they receive a "cluster offline" event. Note that a nonstop cache can go into nonstop mode even if the node is not disconnected, such as when a cache operation is unable to complete within the timeout allotted by the nonstop configuration.

To set the nonstop timeout, add an <init-param> block to your web.xml as follows:

<filter>
 <filter-name>terracotta-filter</filter-name>
 <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
 <init-param>
   <param-name>tcConfigUrl</param-name>
   <param-value>localhost:9510</param-value>
 </init-param>
 <init-param>
   <param-name>nonStopTimeout</param-name>
   <param-value>30000</param-value>
 </init-param>
</filter>

Tuning Nonstop Timeout

You can tune the timeout value to fit your environment. The following information provides additional guidance for choosing a nonStopTimeout value:

  • In an environment with regular network interruptions, consider increasing the timeout value so that most interruptions do not cause timeouts.
  • In an environment where cache operations can be slow to return and data must always be in sync, increase the timeout value to prevent frequent timeouts. For example, a locking operation may exceed the nonstop timeout while waiting for a lock, triggering nonstop mode only because the lock could not be acquired in time. Setting the lock operation's own timeout to less than the nonstop timeout avoids this problem.
  • If a nonstop cache employs bulk loading, be aware that a timeout multiplier may be applied by the bulk-loading method.

Concurrency

The concurrency attribute allows you to set the number of segments for the map backing the underlying server store managed by the Terracotta Server Array. If concurrency is not explicitly set (or set to "0"), the system selects an optimized value.

To configure or tune concurrency, add an <init-param> block to your web.xml as follows:

<filter>
 <filter-name>terracotta-filter</filter-name>
 <filter-class>org.terracotta.session.TerracottaContainerSpecificSessionFilter</filter-class>
 <init-param>
   <param-name>tcConfigUrl</param-name>
   <param-value>localhost:9510</param-value>
 </init-param>
 <init-param>
   <param-name>concurrency</param-name>
   <param-value>256</param-value>
 </init-param>
</filter>

Tuning Concurrency

The server map underlying the Terracotta Server Array contains the data used by clients in the cluster and is segmented to improve performance through added concurrency. Under most circumstances, the concurrency value is optimized by the Terracotta Server Array and does not require tuning.

If an explicit and fixed segmentation value must be set, use the concurrency attribute, making sure to set an appropriate concurrency value. A too-low concurrency value can cause unexpected eviction of elements. A too-high concurrency value may create many empty segments on the Terracotta Server Array (or many segments holding only a few elements, or just one).

The following information provides additional guidance for choosing a concurrency value:

  • In general, the concurrency value should be no less than the number of active servers in the Terracotta Server Array, and optimally at least twice the number of active Terracotta servers.
  • With extremely large data sets, a high concurrency value can improve performance by hashing the data into more segments, which reduces lock contention.
  • In environments with very few cache elements, set concurrency to a value close to the number of expected elements.

Troubleshooting

The following sections summarize common issues that can be encountered when clustering Web Sessions.

Sessions Time Out Unexpectedly

Sessions that are set to expire after a certain time instead expire sooner than expected. This problem can occur when sessions hop between nodes whose system times differ. A node that receives a request for a session that originated on a different node validates the session against its own local time, not the time on the originating node. Running the Network Time Protocol (NTP) on all nodes can help avoid system-time drift. Note, however, that nodes set to different time zones can cause this problem even with NTP.

This problem can also cause sessions to time out later than expected, although this variation can have many other causes.

Changes Not Replicated

Terracotta Web Sessions must run in serialization mode. In serialization mode, sessions are clustered, and your application must follow the standard servlet convention of calling setAttribute() after mutating an object stored in a replicated session; otherwise the change is not replicated.
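The reason the setAttribute() convention matters is that a serializing session store replicates a serialized snapshot of the value at the time setAttribute() is called; later in-place mutations are invisible to other nodes. The following sketch is a plain-Java simulation of that behavior (the Map-based "replicated store" is illustrative, not the Terracotta API):

```java
import java.io.*;
import java.util.*;

public class SetAttributeDemo {
    // Simulates what a serializing session store does: setAttribute()
    // replicates a serialized snapshot of the value at call time.
    static final Map<String, byte[]> replicatedStore = new HashMap<>();

    static void setAttribute(String name, Serializable value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value);
            }
            replicatedStore.put(name, bytes.toByteArray()); // snapshot "sent" to other nodes
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // What another node would see when it deserializes the attribute.
    @SuppressWarnings("unchecked")
    static <T> T getReplicated(String name) {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(replicatedStore.get(name)))) {
            return (T) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ArrayList<String> cart = new ArrayList<>();
        cart.add("book");
        setAttribute("cart", cart);             // snapshot contains [book]

        cart.add("pen");                        // in-place mutation: NOT replicated
        List<String> onOtherNode = getReplicated("cart");
        System.out.println(onOtherNode);        // prints [book] -- "pen" is missing

        setAttribute("cart", cart);             // replicate the change explicitly
        onOtherNode = getReplicated("cart");
        System.out.println(onOtherNode);        // prints [book, pen]
    }
}
```

Calling setAttribute() again after each mutation is what makes the change visible cluster-wide.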

Deadlocks When Session Locking Is Enabled

In some containers or frameworks, it is possible to see deadlocks when session locking is in effect. This happens when an external request is made from inside the locked session to access that same session. The external request blocks because the session is locked by the original request, which in turn is waiting on the external request to complete.
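The mechanics can be simulated with a plain ReentrantLock standing in for the per-session lock (a simulation only, not Terracotta's implementation): while the outer request holds the session lock, a second request on another thread cannot acquire it, so an outer request that synchronously waits for that second request never makes progress.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SessionLockDemo {
    // Stands in for the per-session lock a container holds for the
    // duration of a request when session locking is enabled.
    static final ReentrantLock sessionLock = new ReentrantLock();

    // Simulates an "external" request arriving on another thread and
    // trying to access the same session. Returns true only if it
    // obtained the session lock within a short wait.
    static boolean externalRequestCanAccessSession() {
        final boolean[] acquired = {false};
        Thread external = new Thread(() -> {
            try {
                acquired[0] = sessionLock.tryLock(200, TimeUnit.MILLISECONDS);
                if (acquired[0]) sessionLock.unlock();
            } catch (InterruptedException ignored) {}
        });
        external.start();
        try {
            external.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return acquired[0];
    }

    public static void main(String[] args) {
        sessionLock.lock();   // the outer request holds the session lock
        try {
            // If the outer request now blocked indefinitely waiting for
            // this call to succeed, neither side could proceed: a deadlock.
            System.out.println("external request got session: "
                    + externalRequestCanAccessSession());  // prints false
        } finally {
            sessionLock.unlock();
        }
    }
}
```

The fix is to avoid issuing a request back into the same session while that session is locked, or to disable session locking for such flows.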

Events Not Received on Node

Most Servlet spec-defined events will work with Terracotta clustering, but the events are generated on the node where they occur. For example, if a session is created on one node and destroyed on a second node, the event is received on the second node, not on the first node.