This site hosts historical documentation. Visit www.terracotta.org for recent product information.
The Terracotta Management Console (TMC) is a web-based administration and monitoring application for Terracotta products. TMC connections are managed through the Terracotta Management Server (TMS), which must be running for the TMC to function.
To confirm the version of the TMC you are running, and for other information about the TMC, click About on the toolbar.
When you first connect to the TMC, the authentication setup page appears, where you can choose to run the TMC with or without authentication. Authentication can also be enabled or disabled in the TMC Settings panel once the TMC is running.
If you do not enable authentication, you can connect to the TMC without being prompted for a login or password.
If you enable authentication, the following choices appear:
Instructions for setting up connections to LDAP and Active Directory are available with the form that appears when you select LDAP or Active Directory. Note that setting up authentication and authorization controls access to the TMC but does not affect connections, which must be secured separately. In addition, an appropriate Terracotta license file is needed to run the TMC with security.
Authentication based on built-in role-based accounts backed by a .ini file is the simplest scheme the TMC offers. When you choose .ini-file authentication, you must restart the TMC using the stop-tmc and start-tmc scripts. A setup page appears for initializing the two accounts that control access to the TMC:
Create a password for each account, then click Done to go to the login screen. The login screen appears each time a connection is made to the TMC.
Once a user logs in, there is no default inactivity timeout. To set an inactivity timeout, uncomment the following block in web.xml and set the timeout value (in minutes) using the <param-value> element:

<context-param>
  <description>
    After this amount of time has passed without user activity,
    the user will be automatically logged out.
  </description>
  <param-name>idleTimeoutMinutes</param-name>
  <param-value>30</param-value>
</context-param>
A view of the TMC is shown below. Note that display panels and the connection-groups drop-down menu appear if an active (connected) connection group is available and selected.
When you initially log on to the TMC, only default connection groups with default connections exist. If a node that can be monitored is running on localhost at the port specified by one of the default connections, then that default connection will appear as an active connection. Other default connections appear as unavailable (inactive) connections.
You can create and edit connections and connection groups using the Connections panel. To open the Connections panel, click Preferences on the tool bar. You can also create new connections directly by clicking New Connection on the tool bar. Connections are assigned to connection groups to simplify management tasks.
Connections allow you to monitor and administer nodes (both clustered and standalone) using the TMC. Connections from the TMS to agents are made using a location URI in the following form:
<scheme>://<host-address>:<port>
Note that the URIs showing "http:" are for non-secure connections.
If the URI is for a server in a Terracotta Server Array, all other nodes participating in the cluster are automatically found. It is not required to create separate connections for those other nodes. A typical URI for a server will appear similar to:
http://myServer:9530

where an IP address or resolvable hostname is followed by the tsa-group-port (9530 by default), which serves as the management port. This port is configured in tc-config.xml.
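As an illustration, the tsa-group-port is set per server in tc-config.xml. The following is a minimal sketch, not taken from this document: the server name, host, and surrounding layout are assumptions based on the standard tc-config structure.

```xml
<!-- Hypothetical tc-config.xml fragment; "myServer" and "server1"
     are placeholder values. -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="myServer" name="server1">
      <!-- Port the TMS uses for management connections (9530 is the default) -->
      <tsa-group-port>9530</tsa-group-port>
    </server>
  </servers>
</tc-config>
```

With this configuration, the corresponding connection URI would be http://myServer:9530.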
A typical URI for a Terracotta client or BigMemory Go will appear similar to:
http://myHost:9888
where an IP address or resolvable hostname is followed by the agent's management port (9888 by default), which has been set in the node's configuration file. For BigMemory Go, for example, use the managementRESTService element in ehcache.xml.
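For example, a BigMemory Go node might enable its management REST agent in ehcache.xml as sketched below. Only the managementRESTService element and the default port 9888 come from the text above; the CacheManager name, cache name, and bind address are illustrative assumptions.

```xml
<!-- Hypothetical ehcache.xml sketch; names and bind address are examples only. -->
<ehcache name="myCacheManager">
  <!-- Enables the embedded management REST agent that the TMS connects to;
       bind sets the interface and port (9888 is the default port) -->
  <managementRESTService enabled="true" bind="0.0.0.0:9888"/>
  <cache name="exampleCache" maxEntriesLocalHeap="1000"/>
</ehcache>
```

With this configuration, the corresponding connection URI would be http://myHost:9888.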
To add a new connection, follow these steps:
A screen appears confirming the agent found at the given location. If no agent is found, a warning appears, and no connection can be set up. Note that the location is relative to the machine running the Terracotta Management Server (TMS). The default location, "localhost", is the machine the TMS is running on, and may not be the machine your browser is running on.
The connection timeout limits the time for successfully establishing a connection to the node. This ensures that the TMC does not hang waiting for a connection in the case where the node is unreachable.
The read timeout limits the time the TMC waits for data from a connected node. This ensures that the TMC does not hang waiting for data in the case where the node is unresponsive.
Managed connections that appear in the connections list can be edited or deleted.
To delete an existing standalone connection, click Preferences on the toolbar to view the Connections panel. Locate the connection under its connection group in the Configured Connections list and click the red X next to that connection's name. A dialog allows you to confirm or cancel the delete operation.
To delete an existing cluster connection, click Preferences on the toolbar to view the Connections panel. Locate the connection group in the Configured Connections list and click Delete next to that group's name. A dialog allows you to confirm or cancel the delete operation.
To edit a standalone connection, follow these steps:
You can choose a group for the connection from the menu of existing groups, or create a new connection group for the new connection. If you create a new group, you must enter a name for the group in the provided field.
The connection timeout limits the time for successfully establishing a connection to the node. This ensures that the TMC does not hang waiting for a connection in the case where the node is unreachable.
The read timeout limits the time the TMC waits for data from a connected node. This ensures that the TMC does not hang waiting for data in the case where the node is unresponsive.
To edit a cluster connection, click Edit for the cluster group you want to edit, then edit the group name and connection URL. Click Save Changes to save the new values or Cancel to revert to the original values.
For every configured connection group, you can display a mini dashboard to view group status.
Each TSA connection-group dashboard displays the number of connected active (green) and mirror (blue) servers. It also displays the number of clients connected to that TSA. Certain other server states may also be indicated on the dashboard, including server starting or recovering (yellow) and server unreachable (red).
Each standalone connection group dashboard displays its number of configured connections and the number currently connected.
Each dashboard has a control drop-down menu with commands applicable to that dashboard and its associated connection group. For example, to hide a connection group's dashboard, choose Hide This Connection from the group's dashboard control menu. The connection group's connections are unaffected by hiding the dashboard. To restore the dashboard, click Preferences on the tool bar, then select the Show in Dashboard checkbox for that group.
To manage the application data of nodes in a connection group, select the group, then click the Application Data tab. Each Application Data panel has a CacheManager and Scope menu to select which CacheManagers and nodes supply the data for that panel.
The Overview panel displays health metrics for CacheManagers and their caches, including certain cache statistics to help you track performance and resource usage across all CacheManagers.
Real-time statistics are displayed in a table with the following columns:
To choose the types of statistics displayed in the table, click Configure Columns to open a list of available statistics. Choose statistics (or set the option to display all statistics), then click OK to accept the change. The table immediately begins to display the chosen statistics.
To sort the table by a specific statistic, click the column head for that statistic.
The Charts panel graphs the same statistics available in the Overview panel. This is useful for tracking performance trends and discovering potential issues.
In addition to being able to select a CacheManager and scope for the displayed data, you can also select a specific cache (or all caches) for the selected CacheManager.
Each historical real-time graph plots the appropriate metrics along the Y axis against system time (X axis). To view the value along a single point on a graph, float the mouse pointer over that point. This also displays the units used for the statistic being graphed.
To choose the type of statistic graphed by a particular chart, click the chart's corresponding Configure link to open a list of available statistics. Choose a statistic, then click OK to accept the change. The chart immediately begins to graph the chosen statistic.
The Sizing panel provides information on the usage of the heap, off-heap, and disk tiers by the caches of the selected CacheManager. To view tier usage by any active CacheManager, select that CacheManager from the CacheManager drop-down menu.
The Relative Cache Sizes by Tier table displays usage of the tier selected from the Tier drop-down menu. The table has the following columns:
Click a row in the table to set the cache-related tier graphs to display values for the named cache.
The panel shows the following bar graphs:
Float the mouse pointer over a bar to display an exact usage value. Click a tier's bar to display values for that tier in the Relative Cache Sizes by Tier table. The selected tier's bar is lighter in color than the other bars.
The Selected Cache drop-down menu determines which cache is shown in the cache-related tier graphs and highlighted in the Relative Cache Sizes by Tier table. The menu also indicates if the cache uses size-based (ARC) or entry-based sizing.
The Management panel displays a table listing information about the selected CacheManager by node (where the CacheManager exists) or by its caches. Choose the CacheManagers radio button to show a table with a node list, or the Caches radio button to show a table with a cache list. These tables (and any sublist tables) can be sorted by any column by clicking the column head.
Global cache disable/enable controls are at the top of the panel.
The cache list is a table of caches under the selected cache manager.
The table has the following columns:
If a cache listing is expanded using the arrow to the left of the cache name, a sublist appears with a table of all of the nodes that contain the cache. The table has the following columns:
The CacheManager list is a table of nodes under the selected cache manager.
The table has the following columns:
If a node listing is expanded using the arrow to the left of the connection name, a sublist appears with a table of all of the caches on that node. The table has the following columns:
The Content panel allows you to issue BigMemory SQL queries against your caches. For more information about BigMemory SQL, click the Query link to see help, or go to BigMemory SQL Queries.
The Monitoring tab is available only for cluster connection groups. You can use the features available under this tab to monitor the functioning of the cluster, as well as the functioning of individual cluster components.
Runtime statistics provide a continuous feed of sampled real-time data on a number of server and client metrics. The data is plotted on graphs. Sampling begins automatically when a runtime statistic panel is first viewed, but historical data is not saved.
Use the Select View menu to set the runtime statistics view to one of the following:
Specific runtime statistics are defined in the following sections. The cluster components for which the statistic is available are indicated in the text.
Shows the total number of live objects in the cluster, mirror group, server, or clients.
If the trend for the total number of live objects goes up continuously, clients in the cluster will eventually run out of memory and applications may fail. Upward trends indicate a problem with application logic, garbage collection, or a tuning issue on one or more clients.
Shows the number of entries being evicted from the cluster, mirror group, or server.
Shows the number of expired entries found (and being evicted) on the TSA, mirror group, or server.
Shows the number of completed writes (or mutations) in the TSA or selected server. Operations can include evictions and expirations, so that large-scale eviction or expiration operations can cause spikes in the operations rate (see the corresponding evictions and expirations statistical graphs). This rate is low in read-mostly setups, indicating that there are few writes and little data to evict. If it drops or deviates regularly from an established baseline, it may indicate issues with network connections or overloaded servers.
Note that when a client is (or all clients are) selected, then this statistic is reported as the Write Transaction Rate, tracking client-to-server write transactions.
A measure of how many objects (per second) are being faulted in from the TSA in response to application requests. Faults from off-heap or disk occur when an object is not available in a server's on-heap cache. Flushes occur when the heap or off-heap cache must clear data due to memory constraints. Objects being requested for the first time, or objects that have been flushed from off-heap memory before a request arrives, must be faulted in from disk. High rates could indicate inadequate memory allocation at the server.
BigMemory Max 4.1 supports a "Hybrid" mix of solid-state device (SSD) "flash drives" (an economical way to increase storage) alongside the standard DRAM-based offheap storage. When compared to the Offheap Usage graph, the Data Storage Usage graph shows that the hybrid maximum data storage, which includes both offheap memory and any "flash drives", can be on a much larger scale than offheap alone.
Shows, in megabytes or gigabytes, the maximum available off-heap memory (the configured limit), the off-heap memory reserved ("OffHeap Reserved", made available), and the off-heap memory in use (containing data). These statistics appear only if BigMemory is in effect.
The Events panel displays cluster events received by the Terracotta server array. You can use this panel to quickly view these events in one location in an easy-to-read format, without having to search the Terracotta logs.
The number of unread events is shown in a badge on each clustered connection's mini dashboard. The badge color indicates the severity of unread events: red for warnings and above, or gray if all unread events are of lower severity.
Note that, in addition to displaying events with the chosen severity level, all events with a higher severity level are also displayed. For example, if the INFO level is chosen, then all events with WARN and above are also displayed.
For more information on specific events, see this table.
The Administration panels provide information about the Terracotta cluster as well as tools for operations, including backing up cluster data.
Using subpanels, the Configuration panel displays the status, environment, and configuration information for the servers and clients selected in the Cluster Node menu. This information is useful for debugging and when reporting problems.
The Main subpanel displays the server status and a list of properties, including IP address, version, license (capabilities), and restartability and failover modes. A specific server must be selected to view this subpanel. Administrators can shut down servers from this panel.
The following additional subpanels are available:

The Logs panel displays live logs for the server selected in the Cluster Node menu. Scroll up to pause the live update (or click Pause). Scroll down to the end of the log to restart the live update (or click Resume).
The Backup panel provides a control for creating a backup of cluster data. The following server configuration elements control backup execution:

<restartable enabled="true"/> – Global setting that must be "true" for backups (for all servers) to be enabled. False by default.

<data-backup>terracotta/backups</data-backup> – Server-level element setting the path for storing the backup files. The default path is shown.

For more information on restoring from backups, see the Terracotta Server Array documentation.
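Put together, the two backup-related elements might appear in tc-config.xml as in the following sketch. The server name and host are placeholders, and the element placement is an assumption based on the standard tc-config schema rather than something stated in this document.

```xml
<!-- Hypothetical tc-config.xml sketch; "myServer" and "server1" are examples. -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="myServer" name="server1">
      <!-- Server-level path for storing backup files (default path shown) -->
      <data-backup>terracotta/backups</data-backup>
    </server>
    <!-- Global setting: must be "true" for backups to be enabled -->
    <restartable enabled="true"/>
  </servers>
</tc-config>
```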
You can reload the Terracotta configuration to add or remove servers. The configuration file must be edited and made available to every server and client before it can be reloaded successfully.
For more information on the Terracotta configuration and editing the servers section, see the Terracotta Server Array documentation.
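For instance, adding a server to the cluster might involve extending the servers section as sketched below. This is an illustrative assumption, not taken from this document: the hostnames, server names, and port value are placeholders.

```xml
<!-- Hypothetical tc-config.xml sketch showing a second server being added. -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="host1" name="server1">
      <tsa-port>9510</tsa-port>
    </server>
    <!-- Newly added server: the edited file must be made available to
         every server and client before the configuration is reloaded -->
    <server host="host2" name="server2">
      <tsa-port>9510</tsa-port>
    </server>
  </servers>
</tc-config>
```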
Data lifecycle operations have been added to the TMC for more control over, and visibility of, clustered data. These operations include the following capabilities: enumerating caches and cache managers on the server side even when no clients are connected, destroying a clustered cache when no clients are connected to it, and determining whether clients are connected to a cache.
Only the administrator can see the "Destroy" feature. Use of this feature is logged only in the TMC/TMS logs, not in the server logs.
Troubleshooting Terracotta clusters with the TMC includes both passive monitoring through viewing events and statistical trends using the monitoring panels as well as proactively investigating logs and thread dumps. In the case where a cluster crosses certain resource thresholds, it may enter a mode of limited functionality to prevent an all-out crash.
The TMC flashes warnings whenever the TSA enters throttled or restricted mode. These modes are initiated whenever memory resources drop below a certain threshold and endanger the operations of the cluster. The TSA can automatically recover from throttled mode (for example, once sufficient expired data is evicted), although under certain conditions recovery may fail and restricted mode is entered. You may provide temporary relief by clearing or disabling caches. However, if the TSA enters this mode, it is an indication that memory resources have been under-allocated. The cluster may need to be stopped and additional steps taken to ensure that enough memory is available to cover cluster operations.
You can get a snapshot of the state of each server and client in the Terracotta cluster using thread dumps. To display the console's thread-dumps feature, click Troubleshooting.
The thread-dump navigation pane lists completed thread dumps by date-time stamp. The contents of selected thread dumps are displayed in the right-side pane. To delete all shown thread dumps, click Clear All.
To generate a thread dump, follow these steps:
When complete, the thread dump appears in the thread-dumps navigation pane.
The entries correspond to servers and clients included in the thread dump.
Thread dumps are downloaded in the form of a zip file.
Servers that appear in the Scope menu but are not connected produce empty thread dumps.
To view the log of each server in the Terracotta cluster:
The logs will no longer update and will stop automatically scrolling. Click Resume (or scroll to the bottom) to restart the updating process.
Logs are downloaded in the form of a zip file.
Click Preferences on the toolbar to open a dialog where global TMC settings can be configured.
Click the Polling tab to set the Polling Interval Seconds, which controls the granularity of polled statistical data. Note that shorter polling intervals can have a greater effect on the overall performance of the nodes being polled. To reset to default values, click Reset to Defaults.
Click the Security tab to configure security. If you choose to change the type of security used by the TMS, note the following:
For SSL connections, you can choose to use a custom truststore instead of the default Java cacerts. The custom truststore must be located in the default directory specified in the Security panel.
See the account setup section and additional TMC documentation for more information on setting up security.