Clustering Immutants on OpenShift

Lately I've been spending a lot of time on OpenShift, building and testing a cartridge for Immutant that will properly form a cluster across multiple OpenShift gears. In this post, I'll go through the steps of deploying a simple application that demonstrates all of the Immutant clustering features running on the three small gears you get for free on OpenShift.

Here are the features we'll be demonstrating:

  • Load-balanced message distribution with automatic peer discovery
  • Replicated caching
  • Highly-available (HA), long-running daemons
  • HA scheduled jobs
  • Web load balancing and session replication

If you haven't already, go set up an OpenShift account and update your rhc gem to the latest version. I used 1.12.4 for this article. Below you'll see references to $namespace -- this corresponds to your OpenShift domain name, set by running rhc setup.

Note: If this is the first time you've used OpenShift, you'll need to visit the console and accept the usage agreement before running the rhc command.

Create a scaled OpenShift app

The Immutant cartridge is available here: https://github.com/immutant/openshift-immutant-cart. As described in its README, we create our app using the following command:

rhc app-create -s demo https://raw.github.com/immutant/openshift-immutant-cart/master/metadata/manifest.yml

We're calling our app demo and we're passing the -s option to make our app scalable. Notice that we're passing a raw URL to the cartridge's manifest.yml.

Small gears are pretty slow, but when app-create finally completes, you'll have a bare-bones, standard Leiningen application beneath the demo/ directory. At this point, you might tail your app's logs or ssh into your gear:

rhc tail demo
rhc ssh demo

The critical log file for Immutant on OpenShift is immutant/logs/server.log. Monitor this file until you eventually see the line, Deployed "your-clojure-application.clj". Then point a browser at http://demo-$namespace.rhcloud.com to see a simple welcome page.

Now we'll put some meat on our bare-bones app!

Push Me, Pull You

Typically, you will add the remote git repository for your real application to the local OpenShift repository you just created. We're going to use https://github.com/immutant/cluster-demo as our "real" application.

git remote add upstream -m master git@github.com:immutant/cluster-demo.git

Deployment of your app to OpenShift amounts to pulling from your real repository and pushing to OpenShift's.

git pull -s recursive -X theirs upstream master
git push

While waiting for that to complete, run rhc tail demo in another shell to monitor your log. This time, the Deployed "your-clojure-application.clj" message is going to scroll off the screen as the cluster-demo app starts logging its output. Eventually, the app should settle into a steady state looking something like this:

[Screenshot: steady-state log output from the cluster-demo app]

If you can ignore the inconsistent thread identifiers in the above output, you'll notice there are exactly four types of messages: send, recv, web, and job. Noting the timestamps in the left column, a send is logged every 5 seconds, as is its corresponding recv; a web is logged every 2 seconds; and a job every 20 seconds.

The cluster-demo app comprises the following:

  • A message queue named /queue/msg
  • A distributed cache named counters
  • A listener for the queue that prints the received message and the current contents of the cache
  • An HA daemon named counter that queues a cached value and increments it every 5 seconds
  • An HA scheduled job named ajob that increments another counter in the cache every 20 seconds
  • A web request handler mounted at / that logs its :path-info and returns the current values of the two cached counters
  • Another request handler mounted at /count that increments a counter in the user's web session.

All the code (~60 lines) is contained in a single file; a rough sketch of the main pieces follows.
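Here's a hypothetical sketch of those pieces using Immutant 1.x's Clojure APIs. The names and intervals follow the list above, but the real cluster-demo source differs in its details, so treat the exact signatures as approximate:

(ns demo.init
  (:require [immutant.messaging :as msg]
            [immutant.cache :as cache]
            [immutant.daemons :as daemon]
            [immutant.jobs :as jobs]
            [immutant.web :as web]))

;; the message queue and the replicated cache
(def q "/queue/msg")
(msg/start q)
(def counters (cache/cache "counters"))

;; queue listener: print each message along with the current cache contents
(msg/listen q (fn [m] (println "recv" m (into {} counters))))

;; HA daemon (a singleton by default): every 5 seconds, queue the cached
;; value and increment it
(def done (atom false))
(daemon/daemonize "counter"
  (fn []
    (reset! done false)
    (while (not @done)
      (let [n (get counters :daemon 0)]
        (msg/publish q n)
        (cache/put counters :daemon (inc n)))
      (Thread/sleep 5000)))
  (fn [] (reset! done true)))

;; HA scheduled job: bump another counter every 20 seconds
(jobs/schedule "ajob" "*/20 * * * * ?"
  #(cache/put counters :job (inc (get counters :job 0))))

;; web handler at the root context: log :path-info, return the counters
(web/start "/"
  (fn [request]
    (println "web" (:path-info request))
    {:status 200
     :headers {"Content-Type" "text/plain"}
     :body (pr-str (into {} counters))}))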

Programming is hard, let's build a cluster!

Now we're ready to form a cluster by adding a gear to our app:

rhc scale-cartridge immutant -a demo 2

Again, this will take a few minutes, and it may return an error even though the operation actually succeeded. You can run the following to see the definitive state of your gears:

rhc show-app --gears

This also gives you the SSH URLs for your two gears. Fire up two shells and ssh into each of your gears using those SSH URLs. Then tail the log on each:

tail -f immutant/logs/server.log

When the dust settles, you'll eventually see the gears discover each other, and you should see both gears logging recv messages, one getting the even numbers and one getting the odd. This is your automatic load-balanced message distribution.

Note also that the counters cache logged in the recv message is correct on both gears, even though it's only being updated by one. This is our cache replication at work.

Let's break stuff!

And see how robust our cluster is.

High Availability Daemons and Jobs

Of course, the send and job log entries should still only appear on our original gear, because those are our HA singletons. If that gear crashes, our daemon and job should migrate to the other gear. While logged into the gear running your singletons, run this:

immutant/bin/control stop

And watch the other gear's log to verify the daemon and job pick up right where they left off, fetching their counters from the replicated cache. That gear should be consuming all the queued messages, too. Now start the original gear back up:

immutant/bin/control start

Eventually, it'll start receiving half the messages again.

Web

You may be wondering about those web entries showing up in both logs. They are "health check" requests from the HAProxy web load balancer, automatically installed on your primary gear. You can always check the state of your cluster from HAProxy's perspective by visiting http://demo-$namespace.rhcloud.com/haproxy-status. If you see that page without intending to, it means something about your app is broken, so check immutant/logs/server.log for errors and make sure your app responds to a request for the root context, i.e. "/".

Let's try some web stuff. Use curl to hit your app while observing the logs on both gears:

curl http://demo-$namespace.rhcloud.com/xxxxxxxxxxxxxxxxxxxx
curl http://demo-$namespace.rhcloud.com/yyyyyyyyyyyyyyyyyyyy
curl http://demo-$namespace.rhcloud.com/zzzzzzzzzzzzzzzzzzzz

Use an obnoxious path to distinguish your request from the health checks. Repeat the command a few times to observe the gears taking turns responding to your request. Now try it in a browser, and you'll see the same gear handling your request every time you reload. This is because HAProxy sets cookies in the response to enable session affinity; your browser sends them back, but curl doesn't.

Speaking of session affinity, let's break that while we're at it, by invoking our other web handler, the one that increments a counter in the user's web session: http://demo-$namespace.rhcloud.com/count

You should see the counter increment each time you reload your browser. (You'll need to give curl a cookie store to see it respond with anything other than "1 times"; see the example below.)
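Something like this should do it, using an arbitrary local file as curl's cookie jar:

curl -c cookies.txt -b cookies.txt http://demo-$namespace.rhcloud.com/count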

Pay attention to which gear is responding to the /count request. Now stop that gear like you did before. When you reload your browser, you should see the other gear return the expected value. This is the automatic session replication provided by immutant.web.session/servlet-store.
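A handler along those lines, with its session backed by the servlet store, might look something like this hypothetical sketch (the actual cluster-demo code may differ slightly):

(ns demo.count
  (:require [immutant.web :as web]
            [immutant.web.session :as immutant-session]
            [ring.middleware.session :as session]))

;; increment a counter stored in the user's web session
(defn counter [{session :session}]
  (let [n (inc (:count session 0))]
    {:status 200
     :headers {"Content-Type" "text/plain"}
     :body (str n " times")
     :session (assoc session :count n)}))

;; the servlet store keeps the session in the container, where Immutant
;; replicates it across the cluster
(web/start "/count"
  (session/wrap-session counter {:store (immutant-session/servlet-store)}))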

Don't forget to restart that gear.

The Hat Trick

Hey, OpenShift is giving us 3 free gears, we may as well use 'em all, right?

rhc scale-cartridge immutant -a demo 3

When the third one finally comes up, there are a couple of things you may notice:

  • The health checks will disappear from the primary gear as HAProxy takes it out of the rotation when 2 or more other gears are available, ostensibly to mitigate the observer effect of the health checks.
  • Each cache key will only show up in the recv log messages on 2 of the 3 gears. This is because Immutant caches default to Infinispan's :distributed replication mode in a cluster. This enables Infinispan clusters to achieve "linear scalability" as entries are copied to a fixed number of cluster nodes (default 2) regardless of the cluster size. Distribution uses a consistent hashing algorithm to determine which nodes will store a given entry.
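
For comparison, here's a hypothetical look at how a cache's replication mode is chosen when it's created (option names approximate for the Immutant 1.x immutant.cache API):

(require '[immutant.cache :as cache])

;; :distributed is the default in a cluster; entries live on a subset of
;; the nodes chosen by consistent hashing
(def counters (cache/cache "counters"))

;; :replicated copies every entry to every node in the cluster
(def everywhere (cache/cache "everywhere" :mode :replicated))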

Now what?

Well, that was a lot to cover. I doubt many apps will use all these features, but I think it's nice to have a free playground on which to try them out, even with the resources as constrained as they are on a small gear.

Regardless, I'm pretty happy that Immutant is finally feature-complete on OpenShift. :-)

Of course, I had a lot of help getting things to this point. Many folks on the OpenShift and JBoss teams were generous with their expertise, but the "three B's" deserve special mention: Ben, Bela, and Bill.

Thanks!

OpenShift, PostgreSQL and Poorsmatic

Today we'll get a Clojure application running in Immutant on OpenShift, persisting its data to a PostgreSQL database. We'll use Poorsmatic, the app I built in my recent talk at Clojure/Conj 2012.

Poorsmatic, a "poor man's Prismatic", is a truly awful content discovery service that merely returns URL's from Twitter that contain at least one occurrence of the search term used to find the tweets containing the URL's in the first place.

Got that? Don't worry. It doesn't matter.

Because Poorsmatic was contrived to be a pretty good example of many of Immutant's features, including topics, queues, XA transactions, HA services, and a few other things. In my talk I used Datomic as my database, but here we'll try a different approach, using Lobos for database migrations, the Korma DSL, and OpenShift's PostgreSQL cartridge for persistence.

Create an app on OpenShift

To get started on OpenShift you'll need an account, the command line tools installed, and a domain set up. Below you'll see references to $namespace -- this corresponds to your domain name.

Once you've set up your domain, create an app. Call it poorsmatic.

$ rhc app create -a poorsmatic -t jbossas-7

We're specifying the jbossas-7 OpenShift cartridge. That will create a sample Java application in the poorsmatic/ directory. But we don't want that. Instead, we'll use the Immutant Quickstart to add the Immutant modules to AS7 and replace the Java app with a Clojure app:

cd poorsmatic
rm -rf pom.xml src
git remote add quickstart -m master git://github.com/openshift-quickstart/immutant-quickstart.git
git pull --no-commit -s recursive -X theirs quickstart master
git add -A .
git commit -m "Add Immutant modules and setup Clojure project"

At this point, we could git push, and after a couple of minutes hit http://poorsmatic-$namespace.rhcloud.com to see a static welcome page. Instead, we'll configure our database and add the Poorsmatic source files before pushing.

Add the PostgreSQL cartridge

To add a PostgreSQL database to our app, we add a cartridge:

$ rhc cartridge add postgresql-8.4 -a poorsmatic

And boom, we have a database. We have to tweak it just a bit, though. So we're going to log into our app using the ssh URI from the output of the app create command (available via rhc app show -a poorsmatic or from the My Applications tab of the web UI). Here's the URI it gave me:

$ ssh a4117d5ebac04c5f8114f7a96eba2737@poorsmatic-jimi.rhcloud.com

Once logged in, we need to modify PostgreSQL's default configuration to enable distributed transactions, which Poorsmatic uses. We're going to set max_prepared_transactions to 10 and then restart the database:

$ perl -p -i -e 's/#(max_prepared_transactions).*/\1 = 10/' postgresql-8.4/data/postgresql.conf
$ pg_ctl restart -D $PWD/postgresql-8.4/data -m fast
$ exit

If you forget to do this, you'll see errors referencing max_prepared_transactions in the logs.
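
To double-check the new setting after the restart, you can ask PostgreSQL directly from the gear; assuming psql on the gear connects to your app's database by default, something like this should work:

$ psql -c 'SHOW max_prepared_transactions;'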

Add the Poorsmatic source to your app

We'll use git to pull in the Poorsmatic source code. You can use the same technique to get your own apps deployed to OpenShift:

$ git pull -s recursive -X theirs git://github.com/jcrossley3/poorsmatic.git korma-lobos

Note that we specified the korma-lobos branch.

Configure the app to use PostgreSQL

You'll see Leiningen profiles in project.clj that determine which database both the lobos and korma libraries will use. One of these profiles, :openshift, refers to the name of the PostgreSQL datasource configured in your .openshift/config/standalone.xml provided by the quickstart.

We'll activate the :openshift profile in deployments/your-clojure-application.clj:

{
 :root (System/getenv "OPENSHIFT_REPO_DIR") ; the project root is the git repo checkout on the gear
 :context-path "/"                          ; mount the app at the root context
 :swank-port 24005                          ; in-container Swank server, reachable only via ssh tunnel
 :nrepl-port 27888                          ; in-container nREPL server, reachable only via ssh tunnel

 :lein-profiles [:openshift]                ; activate the :openshift profile from project.clj
}

Add your Twitter credentials

Finally, because Poorsmatic accesses Twitter's streaming API, you must create an account at http://dev.twitter.com and add a file called resources/twitter-creds that contains your OAuth credentials in a simple Clojure vector:

["app-key" "app-secret" "user-token" "user-token-secret"]

You may be concerned about storing sensitive information with your app, but remember that OpenShift secures your git repo with ssh public/private key pairs, and only those people whose public keys you've associated with your app have access to it.

Push!

Now we can commit our changes and push:

$ git add -A .
$ git commit -m "Database config and twitter creds"
$ git push

And now we wait. The first push will take a few minutes. Immutant will be installed and started, your app deployed, the app's dependencies fetched, the database schema installed, etc. You should log into your app and watch the logs while it boots:

$ ssh a4117d5ebac04c5f8114f7a96eba2737@poorsmatic-jimi.rhcloud.com
$ tail_all

Eventually, you should see a log message saying Deployed "your-clojure-application.clj", at which point you can go to http://poorsmatic-$namespace.rhcloud.com, enter bieber and then watch your server.log fill up with more meaningless drivel than you ever dreamed possible.

And you may even see some bieber tweets. ;-)

Reload the web page to see the scraped URLs and their counts.
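
For a taste of the Korma side of things, a query of roughly that shape might look like this hypothetical sketch (entity and column names are illustrative, not Poorsmatic's actual schema):

(require '[korma.core :as k])

;; a table of scraped URLs and how many times each has been seen
;; (assumes a korma defdb connection is already configured elsewhere)
(k/defentity urls)

;; fetch them, most frequently seen first
(k/select urls
  (k/fields :url :count)
  (k/order :count :DESC))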

The REPL

You may have noticed the nREPL and Swank ports configured in the deployment descriptor above. They are not externally accessible. They can only be accessed via an ssh tunnel secured with your private key.

Run the following:

$ rhc port-forward -a poorsmatic

Depending on your OS, this may not work. If it doesn't, try the -L option:

$ ssh -L 27888:127.11.205.129:27888 a4117d5ebac04c5f8114f7a96eba2737@poorsmatic-jimi.rhcloud.com

But replace 127.11.205.129 with whatever rhc port-forward told you (or ssh to your instance and echo $OPENSHIFT_INTERNAL_IP). And obviously, you should use the ssh URI associated with your own app.

Once the tunnel is established, you can connect to the remote REPL at 127.0.0.1:27888 using whatever REPL client you prefer.
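With Leiningen 2, for example, connecting looks something like this:

$ lein repl :connect 127.0.0.1:27888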

Tune in next time...

Immutant's clustering capabilities yield some of its coolest features, e.g. load-balanced message distribution, highly-available services and scheduled jobs, etc. But clustering is a pain to configure when multicast is disabled. OpenShift aims to simplify that, but it's not quite there yet. In a future post, I hope to demonstrate those clustering features by creating a scaled OpenShift application, letting it deal with all the murky cluster configuration for you.

Stay tuned!