Apache Tomcat :: Apache Software Foundation

The Apache Tomcat Connector - Generic HowTo

LoadBalancer HowTo

Introduction

A load balancer is a worker that does not directly communicate with Tomcat. Instead it is responsible for the management of several "real" workers, called members or sub workers of the load balancer.

This management includes:

  • Instantiating the workers in the web server.
  • Performing weighted load balancing using the workers' load-balancing factors (distributing load according to the defined strengths of the targets).
  • Keeping requests belonging to the same session executing on the same Tomcat (session stickiness).
  • Identifying failed Tomcat workers, suspending requests to them and instead falling-back on other workers managed by the load balancer.
  • Providing status and load metrics for the load balancer itself and all members via the status worker interface.
  • Allowing dynamic reconfiguration of load balancing via the status worker interface.

Workers managed by the same load balancer worker are load-balanced (based on their configured balancing factors and current request or session load) and also secured against failure by providing failover to other members of the same load balancer. So a single Tomcat process death will not "kill" the entire site.

Some of the features provided by a load balancer are useful even when it manages only a single member worker (where load balancing is not possible).

Basic Load Balancer Properties

A worker is configured as a load balancer by setting its worker type to lb.

The following table specifies some properties used to configure a load balancer worker:

  • balance_workers is a comma separated list of names of the member workers of the load balancer. These workers are typically of type ajp13. The member workers do not need to appear in the worker.list property themselves, adding the load balancer to it suffices.
  • sticky_session specifies whether requests with session IDs should be routed back to the same Tomcat instance that created the session. You can set sticky_session to false when Tomcat is using a session manager which can share session data across multiple instances of Tomcat, or if your application is stateless. By default sticky_session is set to true.
  • lbfactor can be added to each member worker to configure individual strengths for the members. The factors must be integers, and load is distributed proportionally to them: a higher lbfactor leads to more requests being balanced to that worker.
  # The load balancer worker balance1 will distribute
  # load to the members worker1 and worker2
  worker.balance1.type=lb
  worker.balance1.balance_workers=worker1, worker2
  worker.worker1.type=ajp13
  worker.worker1.host=myhost1
  worker.worker1.port=8009
  worker.worker2.type=ajp13
  worker.worker2.host=myhost2
  worker.worker2.port=8009

Session stickiness is not implemented using a tracking table for sessions. Instead each Tomcat instance gets an individual name and appends it to the end of the session id. When the load balancer sees a session id, it extracts the instance name and sends the request via the corresponding member worker. For this to work you must set the name of the Tomcat instance as the value of the jvmRoute attribute in the Engine element of each Tomcat's server.xml. The name of the Tomcat instance needs to be equal to the name of the corresponding load balancer member. In the above example, Tomcat on host "myhost1" needs jvmRoute="worker1", and Tomcat on host "myhost2" needs jvmRoute="worker2".
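As a sketch matching the example above, the Engine element in server.xml on host "myhost1" would carry the member worker's name (other attributes shown with their usual defaults):

```xml
<!-- server.xml on myhost1: jvmRoute must equal the load balancer member name -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
  ...
</Engine>
```

On "myhost2" the same element would use jvmRoute="worker2".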

For a complete reference of all load balancer configuration attributes, please consult the worker reference.

Advanced Load Balancer Worker Properties

The load balancer supports complex topologies and failover configurations. Using the member attribute distance you can group members. The load balancer will always send a request to a member with the lowest distance. Only when all of those have failed will it balance to the members of the next higher configured distance. This lets you define priorities between Tomcat instances in different data center locations.
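A sketch of such a setup (hypothetical worker names): members in the local data center get a lower distance, so the remote members receive requests only when all local members have failed.

```properties
# Local data center: preferred members (lowest distance)
worker.worker1.distance=0
worker.worker2.distance=0
# Remote data center: used only if all distance-0 members fail
worker.worker3.distance=1
worker.worker4.distance=1
```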

When working with shared sessions, either by using session replication or a persistent session manager (e.g. via a database), one often splits the Tomcat farm into replication groups. In case a member fails, the load balancer needs to know which other members share its sessions. This is configured using the domain attribute. All workers with the same domain are assumed to share the sessions.
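For example (a sketch with hypothetical worker and domain names), a farm of four Tomcats split into two replication groups could be declared like this:

```properties
# worker1 and worker2 replicate sessions with each other
worker.worker1.domain=dom1
worker.worker2.domain=dom1
# worker3 and worker4 form a second, independent replication group
worker.worker3.domain=dom2
worker.worker4.domain=dom2
```

If worker1 fails, the load balancer will prefer worker2 for its sticky sessions, because only workers in dom1 hold copies of those sessions.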

For maintenance purposes you can tell the load balancer not to allow any new sessions on some members, or not to use them at all. This is controlled by the member attribute activation. The value active allows normal use of a member, disabled will not create new sessions on it but still allows sticky requests, and stopped will no longer send any requests to the member. Switching the activation from "active" to "disabled" some time before maintenance will drain the sessions on the worker and minimize disruption. Depending on the usage pattern of the application, draining will take from minutes to hours. Switching the worker to stopped immediately before maintenance will reduce logging of false errors by mod_jk.
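Draining a member ahead of maintenance could look like this (a sketch, reusing the worker name from the earlier example; the value would later be changed to stopped right before shutdown):

```properties
# Drain worker2: existing sessions stay sticky to it,
# but no new sessions will be created on this member
worker.worker2.activation=disabled
```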

Finally you can also configure hot spare workers by using activation set to disabled in combination with the attribute redirect added to the other workers:

  # The advanced router LB worker
  worker.list=router
  worker.router.type=lb
  worker.router.balance_workers=worker1,worker2

  # Define the first member worker
  worker.worker1.type=ajp13
  worker.worker1.host=myhost1
  worker.worker1.port=8009
  # Define preferred failover node for worker1
  worker.worker1.redirect=worker2

  # Define the second member worker
  worker.worker2.type=ajp13
  worker.worker2.host=myhost2
  worker.worker2.port=8009
  # Disable worker2 for all requests except failover
  worker.worker2.activation=disabled

The redirect flag on worker1 tells the load balancer to redirect requests to worker2 in case worker1 fails. In all other cases worker2 will not receive any requests, so it acts as a hot standby.

A final note about setting activation to disabled: The session id coming with a request is sent either as part of the request URL (;jsessionid=...) or via a cookie. When using bookmarks or browsers that have been running for a long time, it is possible to send a request carrying an old and invalid session id pointing at a disabled member. Since the load balancer does not have a list of valid sessions, it will forward the request to the disabled member. Thus draining takes longer than expected. To handle such cases, you can add a Servlet filter to your web application which checks the request attribute JK_LB_ACTIVATION. This attribute contains one of the strings "ACT", "DIS" or "STP". If you detect "DIS" and the session for the request is no longer active, delete the session cookie and redirect using a self-referential URL. The redirected request will then no longer carry session information, and thus the load balancer will not send it to the disabled worker. The request attribute JK_LB_ACTIVATION was added in version 1.2.32.
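The decision that filter has to make can be sketched as follows. This is a minimal, self-contained illustration with hypothetical class and method names, reduced to the pure logic so it runs without the Servlet API on the classpath; in a real Filter you would read JK_LB_ACTIVATION via request.getAttribute(...) and check HttpServletRequest.isRequestedSessionIdValid(), then delete the cookie and redirect.

```java
// Hypothetical helper mirroring the filter logic described above.
// "DIS" means mod_jk reports the target worker as disabled; if the
// request's session id is also no longer valid, the filter should
// strip the session cookie and redirect to a self-referential URL.
public class LbActivationCheck {

    /** Decide whether the filter should strip the stale session and redirect. */
    public static boolean shouldStripSessionAndRedirect(String jkLbActivation,
                                                        boolean sessionIdStillValid) {
        return "DIS".equals(jkLbActivation) && !sessionIdStillValid;
    }

    public static void main(String[] args) {
        // Disabled worker, stale session id: redirect without session info
        System.out.println(shouldStripSessionAndRedirect("DIS", false));
        // Active worker: pass the request through untouched
        System.out.println(shouldStripSessionAndRedirect("ACT", false));
    }
}
```

Valid sessions on a disabled worker are deliberately left alone, since sticky requests are exactly what "disabled" still permits while the member drains.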

Status Worker properties

The status worker does not communicate with Tomcat. Instead it is responsible for managing the workers. It is especially useful when combined with load balancer workers.

  # Add the status worker to the worker list
  worker.list=jkstatus
  # Define a 'jkstatus' worker using status
  worker.jkstatus.type=status

The next step is to mount requests to the jkstatus worker. For the Apache HTTP Server use:

  # Add the jkstatus mount point
  JkMount /jkmanager/* jkstatus 

To obtain a higher level of security, restrict access to the manager:

  # Enable the JK manager access from localhost only
  <Location /jkmanager/>
    JkMount jkstatus
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
  </Location>

Copyright © 1999-2014, Apache Software Foundation