Chapter 13. Production Setup
13.1. Introduction
We expect this document to evolve as more Immutant installations are put into production. Of course, "production" means something different to every organization, so it's difficult to be too specific, but we hope to present some general guidelines that can be successfully adapted to any environment.
13.2. Application Deployment
Immutant honors any active Leiningen profiles when your app is deployed. Certain features, e.g. an nREPL service and Ring reloading middleware, are enabled when the :dev profile is active, which is the default. We recommend deploying your applications to production with a non-dev profile activated, e.g.
$ lein with-profile production immutant deploy
# or
$ lein with-profile production immutant archive
Alternatively, you could manually add a :lein-profiles entry to your deployment descriptor. This is the file that gets created by the deploy task. See the initialization and deployment chapters for more details.
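For reference, a deployment descriptor carrying that entry might look something like this minimal sketch (the descriptor name and application path are placeholders for your own):
;; yourapp.clj in the Immutant deployments directory
{:root "/path/to/yourapp"
 :lein-profiles [:production]}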
13.3. Application Configuration
Often, applications require environment-specific configuration when deployed. Clojure makes this pretty simple. We recommend using Clojure syntax or EDN for your config files, storing them in a known location, and slurping them in during your application's initialization.
Use some sort of "dev ops" system, e.g. Pallet/Chef/Puppet, for transferring the config files along with your application archives to your target hosts, and in your initialization, do something along these lines:
(def config (read-string (slurp "/etc/yourapp/config.clj")))
This assumes the contents of /etc/yourapp/config.clj look something like this:
{:db-host "1.2.3.4"
 :db-user "myuser"
 :db-pass "mypass"}
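From there the map is ordinary Clojure data. As a minimal sketch, the keys above could feed a clojure.java.jdbc-style connection spec (the subprotocol and database name here are assumptions):
;; pull the values out of the config map read above
(def db-spec
  (let [{:keys [db-host db-user db-pass]} config]
    {:subprotocol "postgresql"                    ; assumed database
     :subname     (str "//" db-host "/yourapp")   ; assumed db name
     :user        db-user
     :password    db-pass}))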
Alternatively, you might take advantage of Immutant's registry namespace, through which you can access your Leiningen project map and your application's deployment descriptor. These are resolved according to whatever Leiningen profiles are active when you deploy the application (or are specified in the deployment descriptor).
(def deploy-descriptor (immutant.registry/get :config))
(def leiningen-project (immutant.registry/get :project))
See Initialization - Arbitrary Configuration Values for more on providing configuration values via the deployment descriptor or project.clj.
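As a sketch, if your deployment descriptor carried a hypothetical :cache-ttl entry, you could read it during initialization like so:
(require '[immutant.registry :as registry])

;; :cache-ttl is a hypothetical key added to the deployment descriptor;
;; 300 is the fallback used when it isn't present
(def cache-ttl (get (registry/get :config) :cache-ttl 300))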
13.4. Configuring JBoss as a Service
JBoss AS7 ships with an example System V style startup script: $IMMUTANT_HOME/jboss/bin/init.d/jboss-as-standalone.sh. You'll also find jboss-as.conf in that same directory. You may copy it to /etc/jboss-as/ and edit further configuration for your Immutant service there.
To have Immutant start when your server boots:
$ ln -s $IMMUTANT_HOME/jboss/bin/init.d/jboss-as-standalone.sh /etc/init.d/immutant
$ chkconfig --add immutant
$ service immutant start
Obviously, you'll need to tweak the above if you're using one of the more modern init daemon alternatives, e.g. upstart, systemd, etc.
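Under systemd, for example, a minimal unit file might look like the sketch below; the unit path, installation directory, and user are assumptions to adjust for your environment:
# /etc/systemd/system/immutant.service (hypothetical)
[Unit]
Description=Immutant application server
After=network.target

[Service]
# assumes Immutant lives under /opt/immutant/current;
# add -c standalone-ha.xml to come up in clustered mode
ExecStart=/opt/immutant/current/jboss/bin/standalone.sh
User=immutant
Restart=on-failure

[Install]
WantedBy=multi-user.target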
To have it come up in clustered mode, set the following in /etc/jboss-as/jboss-as.conf:
JBOSS_CONFIG=standalone-ha.xml
13.5. Clustering without Multicast
Immutant uses JGroups for clustering.
By default, a clustered Immutant will attempt to discover and connect to its peers using IP multicast. When it's enabled, forming an Immutant cluster is easy, peasy, lemon squeezy!
But often, especially in cloud environments, IP multicast is not available, or perhaps even undesired when a system administrator requires direct control over the members of a cluster. In these cases it's possible to configure Immutant to use a predefined set of cluster members. Dynamic peer discovery is possible in a non-multicast environment using a gossip router or, if on Amazon EC2, the S3_PING JGroups protocol (see below).
When you want to set the hosts for your cluster, replace the MPING protocol in the default configuration with TCPPING. You'll also want to change both the default-stack attribute of the JGroups subsystem and the jgroups-stack descendants of the hornetq-server element from "udp" to "tcp". Edit $IMMUTANT_HOME/jboss/standalone/configuration/standalone-ha.xml to make it look similar to this:
...
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
  ...
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPPING">
      <property name="initial_hosts">
        10.100.10.2[7600],10.100.10.3[7600]
      </property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    ...
  </stack>
</subsystem>
13.5.1. Clustering on Amazon EC2
Enabling clustering with dynamic discovery on EC2 amounts to replacing the MPING protocol element of the "tcp" stack configured in $IMMUTANT_HOME/jboss/standalone/configuration/standalone-ha.xml with S3_PING. And be sure to change the default-stack attribute of the subsystem to "tcp".
...
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="S3_PING">
      <property name="secret_access_key">YOUR_SECRET_ACCESS_KEY</property>
      <property name="access_key">YOUR_ACCESS_KEY</property>
      <property name="location">SOME_BUCKET_PATH</property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    ...
  </stack>
</subsystem>
13.5.2. HornetQ Configuration
Without multicast, you must change the HornetQ config to use the JGroups "tcp" stack instead of the default "udp" stack.
Search for jgroups-stack in $IMMUTANT_HOME/jboss/standalone/configuration/standalone-ha.xml, and you'll see this beneath both the broadcast-group and discovery-group elements:
<jgroups-stack>${msg.jgroups.stack:udp}</jgroups-stack>
This ${property:default} syntax refers to a Java system property called msg.jgroups.stack. If unset, the value following the colon is used, so you must either set this system property to "tcp" on the command line, e.g. -Dmsg.jgroups.stack=tcp, or replace "udp" with "tcp" in the config file for both the broadcast-group and discovery-group elements.
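For example, if you start the server directly rather than through the init script, the property can be passed on the standalone.sh command line (the -c flag selects the clustered configuration):
$ $IMMUTANT_HOME/jboss/bin/standalone.sh -c standalone-ha.xml -Dmsg.jgroups.stack=tcp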
13.6. Apache mod_cluster Reverse Proxy
An Immutant production server will typically have a request dispatching reverse proxy fronting the application, accepting web requests and handing them off to your application. There are many common reverse proxies, e.g. HAProxy and nginx, and they'll all work fine with Immutant, but the JBoss mod_cluster project is worth special mention because it is aware of the deployment state of each peer in your cluster. It is not enough for the node to simply be "up": requests won't be routed to nodes that don't have the corresponding application fully deployed.
Download and install mod_cluster using the instructions linked from its downloads page.
After installing it, check the configuration file, /etc/httpd/conf.d/mod_cluster.conf. With something similar to the following settings, you should have Apache's httpd daemon accepting web requests on your host and mod_cluster dispatching those requests to your Immutant[s]:
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
LoadModule manager_module modules/mod_manager.so

<Location /mod_cluster_manager>
  SetHandler mod_cluster-manager
  AllowDisplay On
</Location>

Listen 127.0.0.1:6666

<VirtualHost 127.0.0.1:6666>
  <Directory />
    Order deny,allow
    Deny from all
    Allow from all
  </Directory>

  KeepAliveTimeout 60
  MaxKeepAliveRequests 0

  EnableMCPMReceive

  ManagerBalancerName immutant-balancer
  AllowDisplay On
  AdvertiseFrequency 5
</VirtualHost>