I have a friend who wants to test a tool I wrote in perl (yes yes, I know) but it seemed to me that it was going to be a big pain for him to have to install all the modules necessary to support it. And so, I thought, I had the perfect solution – I’d build a container with everything in it, and get him to run that instead. Simples!
The problem with this genius idea is that we both have MacBooks running OSX, and docker (which relies on Linux container technology, LXC, under the hood) requires, well, Linux. Thankfully, smarter people than I have attacked this problem and come up with the following workaround. I found the same thing in more than one place, but this particular quote is from David Gageot’s “Java Bien!” blog, “Setup Docker on OSX, The No-Brainer Way”. You can see why this would appeal to me. The steps basically are:
Install Homebrew if it’s not already installed:
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
Install VirtualBox (the VM solution):
brew update
brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox
Install boot2docker (sets up a minimal Linux VM in VirtualBox):
brew install boot2docker
boot2docker init
boot2docker up
At the end of the install process, boot2docker tells you the command to configure the DOCKER_HOST environment variable, for example:

export DOCKER_HOST=tcp://192.168.59.103:2375
Then finally, now that we have an environment ready to run docker, install docker!
brew install docker
docker version
Now you can run docker as if it’s local, but in fact it’s instantiating containers in the VirtualBox Linux VM.
If you just ran through the commands above, you will be ready to issue a docker command, as the VM has been spun up already. If you rebooted, however, it won’t be ready, in which case the “boot2docker up” command is the important one. For example:
$ boot2docker up
Waiting for VM and Docker daemon to start...
.....................
Started.
To connect the Docker client to the Docker daemon, please set:
    export DOCKER_HOST=tcp://192.168.59.103:2375
$ export DOCKER_HOST=tcp://192.168.59.103:2375
In this case, let’s grab an ubuntu image to test with.
$ docker pull ubuntu
Pulling repository ubuntu
[...]
As a point of interest, boot2docker installs things so that you don’t need to issue ‘sudo’ in front of all the docker commands. This is handy for my single user use case, so I’m not worried about it. Let’s now try to spin up the ubuntu container and test it with an interactive bash shell. I’ll also check for the presence of gcc and the JSON perl module, as I’ll be needing those later on:
$ docker run -i -t ubuntu /bin/bash
root@ed41e9ab29d8:/#
root@ed41e9ab29d8:/# uname -a
Linux ed41e9ab29d8 3.16.1-tinycore64 #1 SMP Fri Aug 22 06:40:10 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@ed41e9ab29d8:/#
root@ed41e9ab29d8:/# perl -e "use JSON;"
Can't locate JSON.pm in @INC (you may need to install the JSON module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.
root@ed41e9ab29d8:/#
root@ed41e9ab29d8:/# gcc
bash: gcc: command not found
Am I the only one who giggles with excitement the first time something cool happens, like a container starting up? I remain blown away by how fast containers are to use. Anyway, this is a base ubuntu container and it doesn’t have my gcc and perl modules; I need one with a bit more than that if I’m going to build my environment.
Building a Dockerfile
I’m no Docker expert, as will be evident, but I can fumble my way through a task. This might be a ridiculous way to set up a container, but it works. The format is pretty simple – base the image on ubuntu, add some software needed for the environment (or make sure it’s already there), install cpanminus, then use cpanminus to install a bunch of perl modules used by my tool-X script. All this gets put in a file called Dockerfile:
# Ubuntu image supporting the necessary libraries etc for tool-X
#
# Base image
FROM ubuntu

# Maintainer
MAINTAINER John Herbert

# Set stuff up
RUN apt-get update
RUN apt-get install -y perl
RUN apt-get install -y git
RUN apt-get install -y make
RUN apt-get install -y libyaml-appconfig-perl
RUN apt-get install -y build-essential

# CPAN Minus
RUN cpan App::cpanminus
RUN cpanm LWP::UserAgent
RUN cpanm JSON
RUN cpanm JSON::Path
RUN cpanm DateTime
RUN cpanm MIME::Base64
RUN cpanm Data::Dumper
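As an aside, each RUN line above creates a separate image layer. A common refinement – purely optional, and only a sketch of an alternative, not what my build depends on – is to collapse the apt-get steps into a single RUN so the package index update and the installs are always cached (and invalidated) together:

```Dockerfile
# Alternative: one layer for all the apt-get work
RUN apt-get update && apt-get install -y \
    perl \
    git \
    make \
    libyaml-appconfig-perl \
    build-essential
```

The trade-off is granularity: separate RUN lines give finer-grained caching while you iterate on the Dockerfile, while one combined line keeps the final image’s layer count down.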
Now I can ask Docker to build my container with all these commands already run so that they’ll be present and usable. This isn’t, initially, the fastest process – at least it’s no faster than when you do it yourself. Where it gets smart is that the base OS (ubuntu) is already cached, so it does not need to be downloaded again. Then each element that’s built can be cached for later use as well, so if you build a similar ubuntu container with these elements in it, they’ll be grabbed from the cache. That’s smart stuff. So let’s build that image and, give or take many many pages of snipped output, I should be able to see my container and run it to test:
$ docker build -t tool-x .
Sending build context to Docker daemon 4.096 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
 ---> 826544226fdc
Step 1 : MAINTAINER John Herbert
 ---> Using cache
 ---> 4397ae7a31d0
Step 2 : RUN apt-get update
 ---> Running in d9f1789d4433
Ign http://archive.ubuntu.com trusty InRelease
[...]
Fetched 20.3 MB in 22s (884 kB/s)
Reading package lists...
 ---> ce99abb82ea9
Removing intermediate container d9f1789d4433
Step 3 : RUN apt-get install -y perl
 ---> Running in 1639ff1360fb
Reading package lists...
Building dependency tree...
[...]
Step 8 : RUN cpan App::cpanminus
 ---> Running in 6a2b2b231980
[...]
Step 14 : RUN cpanm Data::Dumper
 ---> Running in 361f57139b12
--> Working on Data::Dumper
Fetching http://www.cpan.org/authors/id/S/SM/SMUELLER/Data-Dumper-2.151.tar.gz ... OK
Configuring Data-Dumper-2.151 ... OK
Building and testing Data-Dumper-2.151 ... OK
Successfully installed Data-Dumper-2.151 (upgraded from 2.145)
1 distribution installed
Successfully built ae8f3d1ec336
$
Let’s list the images first, and see if “tool-x” is indeed there:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
tool-x              latest              ae8f3d1ec336        43 seconds ago      472.3 MB
Now I’ll run it, and test for the JSON perl module and gcc (as examples) to confirm that they were installed. You may recall that they weren’t present in the base ubuntu image from earlier:
$ docker run -i -t tool-x /bin/bash
root@<container-id>:/#
root@<container-id>:/# perl -e "use JSON;"
root@<container-id>:/#
root@<container-id>:/# gcc
gcc: fatal error: no input files
compilation terminated.
root@<container-id>:/#
Bingo. Now I have a little container definition that sets up an environment for me. I can share my images, or even just the Dockerfile definition, with my coworker, and he’ll be able to mirror my dev environment on his MacBook.
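Sharing the Dockerfile means he rebuilds the image himself; sharing the built image skips even that. A minimal sketch of the latter, assuming the image is tagged tool-x as above (the tarball filename is my own invention):

```
# On my machine: export the built image to a tarball
docker save -o tool-x-image.tar tool-x

# Copy the tarball over however you like, then on his machine:
docker load -i tool-x-image.tar
docker run -i -t tool-x /bin/bash
```

The tarball route avoids needing a registry, at the cost of shipping a ~470 MB file around.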
This is, to be fair, all a little silly. I’ve jumped through a whole bunch of hoops here in order to do what? Right, to create an environment with some perl modules in it. OSX supports perl and CPAN, so it’s a breeze to simply install those modules directly. On the other hand, what if my coworker doesn’t really want to do that? I liked the concept of being able to hand over something prebuilt, and perhaps even scripting the cloning of the project’s git repo as well so that he can jump in there and start running tool-X. But then, if he’s not interested in downloading Xcode (to get git on OSX) and installing some perl modules, why on earth would he go through the pain of installing homebrew, boot2docker and docker? You’re right, he wouldn’t.
One of my favorite sources of good quotes is the late Professor Richard Feynman. He described how he ended up running the team operating a new IBM computer (we’re talking multi-stage punch card computing here, to do mathematical operations) at Los Alamos to do the calculations to support the development of the atomic bomb. He says:
[…] To figure out exactly what happened during the bomb’s explosion […] required much more calculating than we were capable of. A rather clever fellow by the name of Stanley Frankel realized that it could possibly be done on IBM machines. The IBM company had machines for business purposes, adding machines called tabulators for listing sums, and a multiplier that you put cards in and it would take two numbers from a card and multiply them.
Mr. Frankel […] began to suffer from the computer disease that anybody who works with computers now knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches – if it’s an even number you do this, if it’s an odd number you do that – and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
And so after a while the whole system broke down. Frankel wasn’t paying any attention; he wasn’t supervising anybody. The system was going very, very slowly – while he was sitting in a room figuring out how to make one tabulator automatically print arctangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
Absolutely useless. We had tables of arc-tangents. But if you’ve ever worked with computers, you understand the disease – the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.
And so I was asked to stop working on the stuff I was doing in my group and go down and take over the IBM group, and I tried to avoid the disease. And, although they had done only three problems in nine months, I had a very good group.
If you can see the parallel there, you’re probably right. So for now this stays as a purely academic exercise, and I’ll continue to play with containers in my own little world. Then maybe in the future I’ll find somebody who already has docker running, and I’ll start sharing things with them – whether they want it or not.