Unified Scala(tra) Deployments with SBT

If you use SBT, which you most probably do if you’re writing Scala, you’ve probably used the assembly fat-jar plugin to ship and run your code. And if that’s true, you’ve probably also hit jar merge conflicts and wrestled with a verbose build definition.

We have a few RESTful microservices in our “corner of some datacenter someplace”™ doing various things, and one of our go-to Scala frameworks for building them is Scalatra. Until now we’ve packaged our Scalatra projects, launchers and dependencies with the assembly sbt plugin, specifying the main class as the entry point. It’s worked great and will probably remain one of our strategies for packaging and shipping code. But we’ve just discovered another really elegant way to do the same thing with a different, and potentially simpler, SBT plugin.

We use Chef to configure our boxes and Capistrano to deploy. This lets us homogenize the process of spinning up boxes for a small single-machine Java service, deploying bytecode to them and starting applications. We prefer a uniform shell script name, directory structure and package layout for every shipped compiled-code unit, so that all of this stays reproducible and straightforward. With assembly, you simply java -jar your fat-jar in your script and all is well and good. We’ve just trialled the excellent scalatra-sbt Dist plugin to accomplish the same feat; here’s how we did it.
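For reference, the assembly side of things looks roughly like the sketch below. It’s illustrative only: the main class and jar name are placeholders, and the exact imports and keys depend on which sbt-assembly version you’re on.

// Sketch of the fat-jar approach we describe above (placeholder names;
// imports and keys assume the older sbtassembly.Plugin-style API).
import sbtassembly.Plugin._
import AssemblyKeys._

lazy val packagingSettings = assemblySettings ++ Seq(
  mainClass in assembly := Some("com.example.MainLaunch"), // entry point written into the manifest
  jarName in assembly := "service-assembly.jar"            // name of the generated fat jar
)

// The deploy script then just launches the fat jar:
//   java -jar service-assembly.jar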

Scalatra-SBT

In essence, this plugin defines another sbt task that gathers everything you need, e.g. class files, configuration files and dependency jars, into a zip archive. Inside it you’ll also find a shell script with the classpath defined (including all your dependencies) and your JVM runtime settings, all handily ready to go. We added a feature (since merged) that lets you rename the shell script to whatever you like. This gives us homogeneity across all of our Scalatra projects by always calling the run script the same thing. The same script and the same directory layout lead to an easy and reproducible (i.e. copy-and-pastable) cap deploy definition.

You can grab the plugin by adding this to the plugins.sbt file in the ./project subdirectory of your sbt project:

addSbtPlugin("org.scalatra.sbt" % "scalatra-sbt" % "0.3.5")

SBT Definitions

Your Scalatra project’s settings are probably a bunch of Seqs strung together, and adding Dist follows the same pattern. All you need to do is append DistPlugin.distSettings and specify the particulars:

settings = Defaults.defaultSettings ++ 
  ScalatraPlugin.scalatraWithJRebel ++ 
  scalateSettings ++ ...

and

DistPlugin.distSettings ++ Seq(
  mainClass in Dist := Some("com.example.MainLaunch"),
  memSetting in Dist := "2g",
  permGenSetting in Dist := "1024m",
  envExports in Dist := Seq("LC_CTYPE=en_US.UTF-8", "LC_ALL=en_US.utf-8"),
  javaOptions in Dist ++= Seq("-Xss4m", "-Dfile.encoding=UTF-8"),
  scriptName in Dist := "run_server",
  ...
)

Notice the run_server script definition: that’s the script that will appear in the bin directory of ./target/projectName.zip when the task finishes. If you don’t specify scriptName, it defaults to the project’s name.

But we’re not quite finished. There’s one more step: defining the zip as an artifact. After your Project(...) definition, wherein you might define your credentials, publishing rules etc., append the following settings:

.settings(addArtifact(artifact in (Compile, dist), dist): _*)
  .settings(addArtifact(Artifact("projectName", "zip", "zip"), dist): _*)
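Putting it all together, a project/Build.scala for one of these services might look roughly like the sketch below. It’s a sketch only: the project id and main class are placeholders, the imports assume scalatra-sbt 0.3.5’s package layout, and the scalate, credential and publishing settings mentioned earlier are elided.

// Sketch of a complete build definition combining the fragments above.
// "projectName" and com.example.MainLaunch are placeholders; import
// paths are assumed for scalatra-sbt 0.3.5.
import sbt._
import Keys._
import org.scalatra.sbt._
import org.scalatra.sbt.DistPlugin._

object ServiceBuild extends Build {
  lazy val service = Project(
    id = "projectName",
    base = file("."),
    settings = Defaults.defaultSettings ++
      ScalatraPlugin.scalatraWithJRebel ++
      DistPlugin.distSettings ++ Seq(
        mainClass in Dist := Some("com.example.MainLaunch"),
        scriptName in Dist := "run_server"
      )
  ).settings(addArtifact(artifact in (Compile, dist), dist): _*)
   .settings(addArtifact(Artifact("projectName", "zip", "zip"), dist): _*)
}

As a side effect of addArtifact, the zip is attached to publishing, so a publish should push it alongside your other artifacts.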

To run the plugin, you guessed it, just run dist at the sbt prompt and you’re away: the zip will appear in your ./target folder and you’re finished!

Further Reading

You can check out the scalatra-sbt plugin here: https://github.com/scalatra/scalatra-sbt