Getting Started: Caching

This is the next tutorial in our getting started series: an exploration of Immutant's caching features. JBoss AS7 -- and therefore, Immutant -- comes with the Infinispan data grid baked right in, obviating the need to manage a separate caching service like Memcached for your applications.

Infinispan is a state-of-the-art, high-speed, low-latency, distributed data grid. It is capable of efficiently replicating key-value stores -- essentially souped-up ConcurrentMap implementations -- across a cluster. It can also serve as a capable in-memory data cache, providing features such as write-through/write-behind persistence, multiple eviction policies, and transactions.

Clustering Modes

Infinispan caches adhere to a particular mode of operation. In a non-clustered, standalone Immutant, :local is the only supported mode. But when clustered, you have other options.

  • :local -- This is what you get in non-clustered mode, roughly equivalent to a hash map with write-through/write-behind persistence, JTA/XA support, MVCC (non-blocking, thread-safe reads even during concurrent writes), and JMX manageability.
  • :invalidated -- This is the default clustered mode. It doesn't actually share any data at all, so it's very "bandwidth friendly". Whenever data is changed in a cache, other caches in the cluster are notified that their copies are now stale and should be evicted from memory.
  • :replicated -- In this mode, entries added to any cache instance will be copied to all other cache instances in the cluster, and can then be retrieved locally from any instance. Though simple, it's impractical for clusters of any significant size (>10), and its capacity is equal to the amount of RAM in its smallest peer.
  • :distributed -- This mode is what enables Infinispan clusters to achieve "linear scalability". Cache entries are copied to a fixed number of cluster nodes (default is 2) regardless of the cluster size. Distribution uses a consistent hashing algorithm to determine which nodes will store a given entry.
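To make the :distributed mode concrete, here's a small sketch of consistent hashing -- in Python, since the idea is independent of Clojure, and with made-up node names and a replica count of 2 for illustration. It is not Infinispan's actual algorithm, just the shape of the technique: each key lands at a position on a hash ring, and its owners are the first few distinct nodes found walking clockwise from there.

```python
import hashlib

def ring_position(value):
    """Map any string onto a 32-bit hash ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

def owners(key, nodes, copies=2):
    """Pick `copies` nodes to store `key`: walk clockwise around the
    ring from the key's position and take the first distinct nodes."""
    ring = sorted((ring_position(n), n) for n in nodes)
    start = ring_position(key)
    rotated = [n for pos, n in ring if pos >= start] + \
              [n for pos, n in ring if pos < start]
    return rotated[:copies]

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(owners("some-cache-key", nodes))  # always `copies` owners, however big the cluster
```

Because the owner count is fixed, total capacity grows with the cluster instead of being bounded by the smallest peer, which is where the "linear scalability" claim comes from.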


The first thing you must understand about Immutant's InfinispanCache is that it's mutable. This is sensible in a clustered environment, because the local process benefits from fast reads of data that may have been put there by a remote process. We effectively shift the responsibility of "sane data management", i.e. MVCC, from Clojure to Infinispan.

The second thing to know is that every Immutant cache has a cluster-scoped name and a mode. When you call immutant.cache/cache, the name is required, and it may refer to an existing cache that is already populated. The mode argument (one of :local, :invalidated, :replicated, or :distributed) is optional, defaulting to :invalidated if clustered and :local otherwise.

Because the cache implements many core Clojure interfaces, functions that typically return immutable copies will actually affect the cache contents:

UPDATE 3/22/2012: Due to feedback from our Clojure/West talk we no longer alter the Immutant caches through the core Clojure functions as shown below. See the latest docs for current info and examples.
  user> (def cache (immutant.cache/cache "test"))
  user> cache
  {}
  user> (assoc cache :a 1)
  {:a 1}
  user> (merge cache {:b 2, :c 3})
  {:c 3, :a 1, :b 2}
  user> (dissoc cache :c)
  {:a 1, :b 2}
  user> cache
  {:a 1, :b 2}
  user> (empty cache)
  user> cache
  {}

Further, the InfinispanCache supports a variety of put methods, some that expose the ConcurrentMap features of atomically storing entries based on their presence or absence. These all take lifespan options for time-to-live and max-idle expiration policies.
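The difference between the two expiration policies is easy to see in miniature. Below is a toy sketch of the semantics (in Python, with an explicit clock; the class and option names are ours for illustration, not Immutant's API): a time-to-live entry dies a fixed time after it was written no matter how often it's read, while a max-idle entry lives as long as reads keep arriving.

```python
class ExpiringCache:
    """Toy cache demonstrating time-to-live vs. max-idle expiration.
    `now` is passed explicitly so the behavior is easy to follow."""
    def __init__(self, ttl=None, idle=None):
        self.ttl, self.idle, self.data = ttl, idle, {}

    def put(self, key, value, now):
        self.data[key] = {"value": value, "written": now, "touched": now}

    def get(self, key, now):
        entry = self.data.get(key)
        if entry is None:
            return None
        expired = (self.ttl is not None and now - entry["written"] >= self.ttl) or \
                  (self.idle is not None and now - entry["touched"] >= self.idle)
        if expired:
            del self.data[key]
            return None
        entry["touched"] = now  # reading resets the idle clock, never the ttl clock
        return entry["value"]
```

So a ttl entry expires at `written + ttl` regardless of use, while an idle entry only expires after a quiet period of the given length.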


Memoization is an optimization technique associating a cache of calculated values with a potentially expensive function, incurring the expense only once, with subsequent calls retrieving the result from the cache. The keys of the cache are the arguments passed to the function.
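In miniature, the technique looks like this -- a generic Python sketch of memoization itself, nothing Immutant-specific:

```python
def memoize(f):
    cache = {}                      # keys are the argument tuples
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)  # pay the cost once per distinct argument list
        return cache[args]
    return wrapper

calls = []

@memoize
def slow_square(n):
    calls.append(n)                 # record each real invocation
    return n * n

slow_square(4)
slow_square(4)  # second call hits the cache; slow_square's body ran only once
```

Immutant's twist, covered next, is simply that the dictionary backing the cache can be a distributed Infinispan cache rather than a local map.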

Standards for caching and memoization in Clojure are emerging in the form of core.cache and core.memoize, respectively. Because the InfinispanCache implements clojure.core.cache/CacheProtocol it can act as an underlying implementation for clojure.core.memoize/PluggableMemoization. Immutant includes a higher-order memo function for doing exactly that:

(immutant.cache/memo a-slow-function "a name" :distributed)

An Example

Let's amend the example from our clustering tutorial to demonstrate a replicated cache. We'll create a simple web app with a single request to which we'll pass an integer. The request handler will pass that number to a very slow increment function: it'll sleep for that number of seconds before returning its increment. For us, this sleepy function represents a particularly time-consuming operation that will benefit from memoization.

Of course we'll need a project.clj

(defproject example "1.0.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]])

Next, the Immutant application bootstrap file, immutant.clj, into which we'll put all our code for this example.

(ns example.init
  (:use [ring.util.response]
        [ring.middleware.params])
  (:require [immutant.cache :as cache]
            [immutant.web :as web]))

;; Our slow function
(defn slow-inc [t]
  (Thread/sleep (* t 1000))
  (inc t))

;; Our memoized version of the slow function
(def memoized-inc (cache/memo slow-inc "sleepy" :replicated))

;; Our Ring handler
(defn handler [{params :params}]
  (let [t (Integer. (get params "t" 1))]
    (response (str "value=" (memoized-inc t) "\n"))))

;; Start up our web app
(web/start "/" (wrap-params handler))

Make sure you have a recent version of Immutant:

$ lein immutant install

And cd to the directory containing the above two files and deploy your app:

$ lein immutant deploy

Now bring up your simulated cluster. In one shell, run:

$ lein immutant run --clustered

In another shell, run:

$ lein immutant run --clustered -Djboss.socket.binding.port-offset=100

You should have one server listening on port 8080 and another on 8180. So in yet another shell, run this:

$ curl "http://localhost:8080/example/?t=5"

With any luck, that should return 6 after about 5 seconds. Now run it again and it should return 6 immediately. Now for the moment of truth: change the port to 8180. That should return 6 immediately, too! Each unique value of t should only sleep t seconds the first time called on any peer in the cluster.

Here's another trick. Fire off a request with t=20 or so, and wait a few seconds, but before it completes hit the same server again with the same t value. You'll notice that the second request will not have to sleep for the full 20 seconds; it returns immediately after the first completes.
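What makes this trick work is that the cache holds a handle to the in-flight computation rather than waiting for the finished value (core.memoize does this with a delay). Here's a sketch of the idea in Python, using one shared future per distinct argument; the names are ours, for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock
import time

def coalescing_memo(f, executor):
    """Memoize f so that concurrent calls with the same args share one computation."""
    futures, lock = {}, Lock()
    def wrapper(*args):
        with lock:                        # first caller creates the future...
            if args not in futures:
                futures[args] = executor.submit(f, *args)
        return futures[args].result()     # ...everyone else just waits on it
    return wrapper

calls = []
def slow_inc(t):
    calls.append(t)
    time.sleep(0.2)
    return t + 1

with ThreadPoolExecutor(max_workers=4) as pool:
    memoized = coalescing_memo(slow_inc, pool)
    a = pool.submit(memoized, 5)   # two overlapping requests
    b = pool.submit(memoized, 5)   # for the same argument
    assert a.result() == b.result() == 6
    assert calls == [5]            # the slow function only ran once
```

The second caller blocks on the same future the first caller created, so it returns the moment the first computation completes -- exactly the behavior you see with the overlapping curl requests.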


Though tested and toyed with, this is seriously Alpha code, and the API is still coagulating, especially with respect to the options for the cache and memo functions, which should probably include :ttl and :idle themselves, for example. Other options may be introduced as more of Infinispan's features are exposed, e.g. transactions and persistence.

By the way, we've recently gotten a decent start on our user manual and api docs, so take a gander there for more details on caching or any of the other Immutant components. And remember to pester us in the usual ways if you have any questions.

Happy hacking!

Getting Started: Simulated Clustering

For this installment of our getting started series we'll experiment a bit with clustering, one of the primary benefits provided by the JBoss AS7 application server, upon which Immutant is built. AS7 features a brand new way of configuring and managing clusters called Domain Mode, but unfortunately its documentation is still evolving.

We'll save Domain Mode with respect to Immutant for a future post. It's not required for clustering, but it is an option for easier cluster management. In this post, we'll reveal a trick to simulate a cluster on your development box so that you can experiment with Immutant clustering features, which we should probably enumerate now:

  • Automatic load balancing and failover of message consumers
  • HTTP session replication
  • Fine-grained, dynamic web-app configuration and control via mod_cluster
  • Efficiently-replicated distributed caching via Infinispan
  • Singleton scheduled jobs
  • Automatic failover of singleton daemons

Running an Immutant

As you know, installing Immutant is simple:

$ lein plugin install lein-immutant 0.4.1
$ lein immutant install

And running an Immutant is, too:

$ lein immutant run

By passing the --clustered option, you configure the Immutant as a node that will discover other nodes (via multicast, by default) to form a cluster:

$ lein immutant run --clustered

From the first line of its output, you can see what that command is really running:

$ $JBOSS_HOME/bin/ --server-config=standalone-ha.xml

Any options passed to lein immutant run are forwarded to that script, so run the following to see what those are:

$ lein immutant run --help

Simulating a Cluster


To run two immutant instances on a single machine, fire up two shells and...

In one shell, run:

$ lein immutant run --clustered

In another shell, run:

$ lein immutant run --clustered -Djboss.socket.binding.port-offset=100

Boom, you're a cluster!


Each cluster node requires a unique name, which is usually derived from the hostname, but since our Immutants are on the same host, we set the property uniquely.

Each Immutant will attempt to persist its runtime state to the same files. Hijinks will ensue. We prevent said hijinks by setting the property uniquely.

JBoss listens for various types of connections on a few ports. One obvious solution to the potential conflicts is to bind each Immutant to a different interface, which we could specify using the -b option.

But rather than go through a platform-specific example of creating an IP alias, I'm going to take advantage of another JBoss feature: the jboss.socket.binding.port-offset property will cause each default port number to be incremented by a specified amount.

So for the second Immutant, I set the offset to 100, resulting in its HTTP service, for example, listening on 8180 instead of the default 8080, on which the first Immutant is listening.

Deploy an Application

With any luck at all, you have two Immutants running locally, both hungry for an app to deploy, so let's create one.

We've been over how to deploy an application before, but this time we're gonna keep it real simple: create a new directory and add two files.

First, you'll need a project.clj

(defproject example "1.0.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]])

Next, the Immutant application bootstrap file, immutant.clj, into which we'll put all our code for this example.

(ns example.init
  (:require [immutant.messaging :as messaging]
            [immutant.daemons :as daemon]))

;; Create a message queue
(messaging/start "/queue/msg")
;; Define a consumer for our queue
(def listener (messaging/listen "/queue/msg" #(println "received:" %)))

;; Controls the state of our daemon
(def done (atom false))

;; Our daemon's start function
(defn start []
  (reset! done false)
  (loop [i 0]
    (Thread/sleep 1000)
    (when-not @done
      (println "sending:" i)
      (messaging/publish "/queue/msg" i)
      (recur (inc i)))))

;; Our daemon's stop function
(defn stop []
  (reset! done true))

;; Register the daemon
(daemon/start "counter" start stop :singleton true)

We've defined a message queue, a message listener, and a daemon service that, once started, publishes messages to the queue every second.

Daemons require a name (for referencing as a JMX MBean), a start function to be invoked asynchronously, and a stop function that will be automatically invoked when your app is undeployed, allowing you to cleanly tear down any resources used by your service. Optionally, you can declare the service to be a singleton. This means it will only be started on one node in your cluster, and should that node crash, it will be automatically started on another node, essentially giving you a robust, highly-available service.

In the same directory that contains your files, run this:

$ lein immutant deploy

Because both Immutants are monitoring the same deployment directory, this should trigger both to deploy the app.

Now watch the output of the shells in which your Immutants are running. You should see the daemon start up on only one of them, but both should be receiving messages. This is the automatic load balancing of message consumers.

Now kill the Immutant running the daemon. Watch the other one to see that the daemon will start there within seconds. There's your automatic failover. Restart the killed Immutant to see it start receiving messages again. It's fun, right? :)


So that's probably enough to show for now. Give it a try, and let us know if it worked for you the very first time. If it doesn't, please reach out to us in the usual ways and we'll be happy to get you going. Above all, have fun!

Getting Started: Scheduling Jobs

Note: this article is out of date. For more recent instructions on using scheduled jobs, see the tutorial.

This article covers job scheduling in Immutant, and is part of our getting started series of tutorials.

Jobs in Immutant are simply functions that execute on a recurring schedule. They fire asynchronously, outside of the thread where they are defined, and in the same runtime as the rest of the application, so they have access to any shared state.

Jobs are built on top of the Quartz library, and support scheduling via a cron-like specification.

Why would I use this over quartz-clj or calling Quartz directly?

I'm glad you asked! There are several reasons:

  • Immutant abstracts away the complexity of Quartz's internals, so you don't have to worry about managing Schedulers and creating JobDetails, and provides enough functionality for a majority of use cases. For cases where you need advanced scheduling functionality, you can still use quartz-clj or the Quartz classes directly.
  • If you are using Immutant in a cluster, jobs that should fire only once per cluster (aka 'singleton jobs') are handled automatically (see below).
  • When your application is undeployed, your jobs are automatically unscheduled. Note that if you use quartz-clj or Quartz directly from your application, you'll need to clean up after yourself so you don't leave jobs lingering around since Immutant can't automatically unschedule them for you.

Scheduling Jobs

Scheduling a job is as simple as calling the schedule function from the immutant.jobs namespace:

(require '[immutant.jobs :as jobs])
(jobs/schedule "my-job-name" "*/5 * * * * ?"
                #(println "I was called!"))

The schedule function requires three arguments:

  • name - the name of the job.
  • spec - the cron-style specification string (see below).
  • f - the zero argument function that will be invoked each time the job fires.

Job scheduling is dynamic, and can occur anywhere in your application code. Jobs that share the lifecycle of your application are idiomatically placed in immutant.clj.

You can safely call schedule multiple times with the same job name - the named job will be rescheduled.

Cron Syntax

The spec attribute should contain a crontab-like entry. This is similar to cron specifications used by Vixie cron, anacron and friends, but includes an additional field for specifying seconds. It is composed of 7 fields (6 are required):

  Field           Allowed values
  --------------  --------------------
  Seconds         0-59
  Minutes         0-59
  Hours           0-23
  Day of Month    1-31
  Month           1-12 or JAN-DEC
  Day of Week     1-7 or SUN-SAT
  Year            1970-2099 (optional)

For several fields, you may denote subdivision by using the forward-slash (/) character. For example, */5 in the minutes field executes the job every 5 minutes.

Spans may be indicated using the dash (-) character. To execute a job Monday through Friday, MON-FRI should be used in the day-of-week field.

Multiple values may be separated using the comma (,) character. The specification of 1,15 in the day-of-month field would result in the job firing on the 1st and 15th of each month.
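These operators compose mechanically. As a rough illustration (in Python, and only for numeric fields -- this is not the actual Quartz parser, which also handles names like JAN and MON), a single cron field can be expanded into the set of values it matches:

```python
def expand_field(spec, lo, hi):
    """Expand one numeric cron field ('*/5', '2-5', '1,15', '*')
    into the set of matching integers between lo and hi."""
    values = set()
    for part in spec.split(","):            # commas separate alternatives
        part, _, step = part.partition("/")
        step = int(step) if step else 1     # '/n' subdivides the range
        if part == "*":
            start, end = lo, hi
        elif "-" in part:                   # 'a-b' is a span
            a, b = part.split("-")
            start, end = int(a), int(b)
        else:
            start, end = int(part), int(part)
        values.update(range(start, end + 1, step))
    return values

print(expand_field("*/5", 0, 59))   # every fifth minute
print(expand_field("1,15", 1, 31))  # the 1st and the 15th of the month
```

A full cron match is then just the conjunction of the per-field sets against the current time.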

One of day-of-month or day-of-week must be set to the ? (no specific value) character, since specifying both is contradictory.

See the Quartz cron specification for additional details.

Unscheduling Jobs

Jobs can be unscheduled via the unschedule function:

(require '[immutant.jobs :as jobs])
(jobs/unschedule "my-job-name")

The unschedule function requires one argument:

  • name - the name of a previously scheduled job.

If the given name resolves to an existing job, that job will be unscheduled and the call will return true, otherwise nil is returned.

Jobs are automatically unscheduled when your application is undeployed.


When using Immutant in a cluster, you'll need to mark any jobs that should only be scheduled once for the entire cluster with the :singleton option:

(require '[immutant.jobs :as jobs])
(jobs/schedule "my-job-name" "*/5 * * * * ?"
                #(println "I only fire on one node")
                :singleton true)

If :singleton is true, the job will be scheduled to run on only one node in the cluster at a time. If that node goes down, the job will automatically be scheduled on another node, giving you failover. If :singleton is false or not provided, the job will be scheduled to run on all nodes where the schedule call is executed.

Look for a future post in our Getting Started series on using Immutant in a cluster.

The Future

Currently, jobs can only be scheduled using CronTrigger functionality. We plan to add support for SimpleTrigger functionality at some point in the future, allowing you to do something similar to:

(require '[immutant.jobs :as jobs])
(jobs/schedule "my-at-job" (jobs/every "3s" :times 5)
                #(println "I fire 5 times, every 3 seconds"))

Since Immutant is still in a pre-alpha state, none of what I said above is set in stone. If anything does change, we'll update this post to keep it accurate.

If you have any feedback or questions, get in touch!

Getting Started: Messaging

Happy 2012! For the next installment of our getting started series we'll explore the messaging abstractions available to your Clojure apps when deployed on Immutant. Because Immutant is built atop JBoss AS7, it includes the excellent HornetQ messaging service built right in. Hence, there is nothing extra to install or configure in order for your applications to benefit from asynchronous messaging.

Destinations are either Queues or Topics

Two types of message destinations, or endpoints, are supported: queues and topics. A queue exhibits point-to-point semantics: a message sent to a queue will be delivered to a single recipient. A topic provides publish-subscribe semantics: messages sent to a topic will be delivered to all subscribed recipients. In both cases, the message producers have no direct knowledge of the message consumers.
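The two delivery semantics are easy to see in miniature. Here's a concept sketch in Python (of the semantics only, not of HornetQ's implementation -- real brokers buffer, acknowledge, and load-balance far more carefully):

```python
class Queue:
    """Point-to-point: each message is delivered to exactly one consumer."""
    def __init__(self):
        self.consumers, self.next = [], 0
    def subscribe(self, consumer):
        self.consumers.append(consumer)
    def publish(self, message):
        consumer = self.consumers[self.next % len(self.consumers)]
        self.next += 1                  # naive round-robin among consumers
        consumer(message)

class Topic:
    """Publish-subscribe: each message is delivered to every subscriber."""
    def __init__(self):
        self.consumers = []
    def subscribe(self, consumer):
        self.consumers.append(consumer)
    def publish(self, message):
        for consumer in self.consumers:
            consumer(message)
```

In both classes the publisher never names a consumer, which is the decoupling the paragraph above describes.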

Use the start function to define a messaging destination. A simple naming convention designates an endpoint as either a queue or a topic: if its name begins with /queue, it's a queue; if it begins with /topic, it's a topic.

(require '[immutant.messaging :as msg])
(msg/start "/queue/work")   ; to start a queue
(msg/start "/topic/news")   ; to start a topic

You can invoke start from anywhere in your application, but typically it's done in the immutant.clj initialization file, as described in an earlier tutorial.

While start has a complement, stop, you needn't call it directly. It will be invoked when your application is undeployed. And it's important to note that start is idempotent: if an endpoint has already been started, likely by a cooperating application, the call is effectively a no-op. Similarly, a call to stop will silently fail if the endpoint is in use by any other application. The last to leave will turn the lights out.
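"The last to leave will turn the lights out" is plain reference counting. A sketch of the bookkeeping (our own illustration in Python, not Immutant's actual code) makes both behaviors obvious:

```python
class EndpointRegistry:
    """start is idempotent per application; stop only takes effect
    once the last application using the endpoint has released it."""
    def __init__(self):
        self.users = {}                    # endpoint name -> set of app names

    def start(self, name, app):
        if name not in self.users:
            self.users[name] = set()       # first user actually creates it
        self.users[name].add(app)          # repeat calls are no-ops

    def stop(self, name, app):
        apps = self.users.get(name, set())
        apps.discard(app)
        if not apps:                       # last one out turns off the lights
            self.users.pop(name, None)
            return True                    # endpoint really destroyed
        return False                       # silently ignored: still in use
```

Calling start twice from the same app changes nothing, and a stop from one app while another still holds the endpoint is a harmless no-op.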

One Way to Produce Messages


Messages are sent to a destination, whether queue or topic, via a single function, publish, to which is passed the destination name and the message content, which can be just about anything. A number of optional key-value parameters may be passed as well.

  • :encoding may be either :clojure (the default), :json (useful with non-clojure consumers) or :text (no encoding)
  • :priority may be an integer between 0-9, inclusive. Convenient keyword values :low, :normal, :high and :critical correspond to 0, 4, 7 and 9, respectively. 4 is the default.
  • :ttl time-to-live may be specified in milliseconds, after which time the message is discarded if not consumed. Default is 0, i.e. forever.
  • :properties is a hash of arbitrary message metadata upon which JMS selector expressions may be constructed to filter received messages.

Some examples:

;; A simple string
(msg/publish "/queue/work" "simple string")
;; Notify everyone something interesting just happened
(msg/publish "/topic/news" {:event "VISIT" :url "/sales-inquiry"})
;; Move this message to the front of the line
(msg/publish "/queue/work" some-message :priority :high :ttl 1000)
;; Make messages as complex as necessary
(msg/publish "/queue/work" {:a "b" :c [1 2 3 {:foo 42}]})
;; Make messages consumable by a Ruby app
(msg/publish "/queue/work" {:a "b" :c [1 2 3 {:foo 42}]} :encoding :json)

Three Ways to Consume Messages


Block on a call to receive, passing a destination name and optionally, the following:

  • :timeout an expiration in milliseconds, after which nil is returned. Default is 0, i.e. wait forever
  • :selector a JMS expression used to filter messages according to the values of arbitrary :properties. For documentation on JMS selector syntax please see the javadoc for javax.jms.Message.


Pass a destination name and function to listen and the decoded content of a message sent to that destination will be passed to the function. Options include:

  • :concurrency the maximum number of listening threads that can simultaneously call the function. Default is 1.
  • :selector the same as for receive


Create a lazy sequence of messages via message-seq, which accepts the same options as receive.

Some examples:

;; Wait on a task
(let [task (msg/receive "/queue/work")]
  (perform task))

;; Case-sensitive work queues?
(msg/listen "/queue/lower" #(msg/publish "/queue/upper" (.toUpperCase %)))

;; Contrived laziness
(let [messages (msg/message-seq "/queue/work")]
  (doseq [i (range 4)] (msg/publish "/queue/work" i))
  (= (range 4) (take 4 messages)))

Language Interop

One of our initial goals for Immutant messaging was simple interop between Ruby and Clojure applications deployed on a single platform. TorqueBox Ruby processors already grok the :json encoding and will automatically decode the message into the analogous Ruby data structures, so as long as you limit the content of your messages to standard collections and types, they are transparently interoperable between Clojure and Ruby in either direction. See the overlay post for more details on TorqueBox/Immutant integration.

Of course, the :json encoding enables other JVM-based languages -- anything you could conceivably cram into a war file -- to join in the fun, too. For non-JVM languages or external endpoints, something like the Pipes and Filters APIs provided by Clamq could be used, since we expose our JMS connection factory as immutant.messaging.core/connection-factory.

Anything Else?

Another advantage we get from AS7 is its clustering support. Once we work out some small integration bits, message distribution across a cluster of dynamic nodes will be automatically load-balanced and fault-tolerant, with minimal to no configuration required.

Of course, we still have other messaging features on our roadmap, e.g. XA transactions, durable subscribers and synchronous request/response, and we're looking for ways to make container-based deployment more developer-friendly, so there's still much to do. Feel free to follow along on Twitter, IRC, or the mailing list.

Getting Started: Installing Immutant v2

Greetings! This article covers the new, improved method for installing Immutant, and replaces the first in the getting started series of tutorials with Immutant. This entry covers setting up a development environment and installing Immutant. This tutorial assumes you are on a *nix system. It also assumes you have Leiningen installed. If not, follow these instructions, then come back here.

Installing the lein plugin

We provide a lein plugin for creating your Immutant applications and managing their life-cycles. As of this post, the latest version of the plugin is 0.3.1. Check clojars for the current version.

Let's install it as a global plugin:

 $ lein plugin install lein-immutant 0.3.1
Copying 3 files to /a/nice/long/tmp/path/lib
Including lein-immutant-0.3.1.jar
Including clojure-1.2.0.jar
Including clojure-contrib-1.2.0.jar
Including data.json-0.1.1.jar
Including fleet-0.9.5.jar
Including overlay-1.0.1.jar
Including progress-1.0.1.jar
Created lein-immutant-0.3.0.jar

Now, run lein immutant to see what tasks the plugin provides:

 $ lein immutant
Manage the deployment lifecycle of an Immutant application.

Subtasks available:
install    Downloads and installs Immutant
overlay    Overlays features onto ~/.lein/immutant/current or $IMMUTANT_HOME
env        Displays paths to the Immutant that the plugin can find
new        Creates a new project skeleton initialized for Immutant
init       Adds a sample immutant.clj configuration file to an existing project
deploy     Deploys the current project to the Immutant specified by ~/.lein/immutant/current or $IMMUTANT_HOME
undeploy   Undeploys the current project from the Immutant specified by ~/.lein/immutant/current or $IMMUTANT_HOME
run        Starts up the Immutant specified by ~/.lein/immutant/current or $IMMUTANT_HOME, displaying its console output

We'll only talk about the install and run tasks in this article - we covered the application specific management tasks in our second tutorial, and cover overlay in its own article.

Installing Immutant

Now we need to install an Immutant distribution. We've yet to make any official releases, but our CI server is set up to publish an incremental build every time we push to the git repo. The previous version of this tutorial walked you through downloading and installing the latest incremental build manually, but with the new install task that's no longer necessary - it will download and install the latest incremental build for you. Let's see that in action:

 $ lein immutant install
Extracting /a/nice/long/tmp/path/lib/
Extracted /Users/tobias/.lein/immutant/releases/immutant-1.x.incremental.51
Linking /Users/tobias/.lein/immutant/current to /Users/tobias/.lein/immutant/releases/immutant-1.x.incremental.51

Part of the install process links the most recently installed version to ~/.lein/immutant/current so the plugin can find the Immutant install without requiring you to set $IMMUTANT_HOME. If $IMMUTANT_HOME is set, it will override the current link.

If you want to install a specific incremental build, specify the build number (available from our builds page) as an argument to lein:

 $ lein immutant install 50

You can also have it install to a directory of your choosing (if you want the latest build, specify 'latest' as the version):

 $ lein immutant install latest /some/other/path

~/.lein/immutant/current will be linked to that location.

Running Immutant

To verify that Immutant is properly installed, let's fire it up. To do so, use the lein immutant run command. This is a convenient way to start the Immutant's JBoss server, and will run in the foreground displaying the console log. You'll see lots of log messages that you can ignore - the one to look for should be the last message, and will tell you the Immutant was properly started:

 $ lein immutant run
Starting Immutant via /Users/tobias/.lein/immutant/current/jboss/bin/
(a plethora of log messages deleted)
09:18:03,709 INFO  [] (Controller Boot Thread) JBoss AS 7.1.0.Beta1 "Tesla" started in 2143ms - Started 149 of 211 services (61 services are passive or on-demand)

You can kill the Immutant with Ctrl-C.

Wrapping up

If you've done all of the above, you're now ready to deploy an application. We covered that in our second tutorial.

Since Immutant is still in a pre-alpha state, none of what I said above is set in stone. If anything does change, we'll update this post to keep it accurate.

If you have any feedback or questions, get in touch!