Microservices Gone Wild – Tech Dive Part 1

I’ve heard a lot of noise about microservices in the last couple of years, perhaps most notably when I attended ONUG in Spring 2015 and Adrian Cockcroft from Battery Ventures (previously from Netflix) was pushing the idea of building applications using container-based microservices very convincingly. In this short series of posts, I’ll look at what microservices are, why you might want them (particularly in containers) and — because it would be no fun if this was all just theory — I’ll run through a demonstration where I take a simple monolithic application and successfully break it out into containerized microservices. I’ll share the code I use because I just know you’ll enjoy playing along at home.

Monolithic Applications

In order to consider the benefits of microservices it’s important first to get some context by looking at what is arguably the polar opposite, the monolithic application. I should preface this by saying that defining what constitutes a monolithic application can be a rather nuanced task, depending on the perspective from which one looks. For my purposes though, a monolithic application is typically one where the entire application is delivered in a single release. Even if the application is logically deployed across multiple nodes, if a new release means that all those nodes have to be upgraded at the same time in order to work together, the application in my opinion is fundamentally monolithic.

A monolithic application may also be one that runs as a single executable. An example of this might be a web application where data passed in from the client is sanitized and validated, the application queries a database, then processes the returned data and generates HTML to send back to the user. I should note that there’s nothing wrong with having all those functions in a single application, but from a development and maintenance perspective – especially with larger applications – it might be less ideal for a number of reasons.
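To make that flow concrete, here’s a minimal sketch of the single-executable pattern described above: one process handles input validation, the database query, and the HTML generation. All the names here (the function, the schema, the data) are illustrative assumptions, not anything from a real application.

```python
# One process, three responsibilities: validate input, query the database,
# and render HTML. A change to any step means re-releasing the whole thing.
import html
import sqlite3


def handle_request(user_id: str) -> str:
    # 1. Sanitize and validate the incoming data
    if not user_id.isdigit():
        return "<p>Invalid request</p>"

    # 2. Query the database (an in-memory DB stands in for a real one)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'Alice')")
    row = db.execute(
        "SELECT name FROM users WHERE id = ?", (int(user_id),)
    ).fetchone()

    # 3. Process the result and generate HTML for the client
    name = html.escape(row[0]) if row else "unknown"
    return f"<p>Hello, {name}</p>"


print(handle_request("1"))
```

Fixing a bug in step 2 of this function still means rebuilding and re-testing steps 1 and 3 – which is exactly the release-management problem discussed next.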

Release Management

Let’s say that there’s a bug identified in the code that queries the database. In order to fix it, somebody makes the necessary corrections to the database code and submits the code for integration into the latest patch. Before releasing that code to the customer, it’s necessary to validate that:

  • the code builds without errors
  • the code functions correctly and produces correct results
  • the updated code does not interfere with the operation of any other part of the code, including the input data processing and the HTML creation.

That means that in order to release the code with the database fix, it is necessary to test every other function of the code in addition to the database functions. The same irritation applies to any code alteration anywhere in the application: every code change will require full testing of every application function before the code can be released, and as features are added and refactoring and optimizations are made, the effort of testing every change and every potential interaction becomes exponentially more challenging.

Operational Resistance

There’s one more release problem with monolithic applications, and that’s controlling the number of changes that are being implemented in each release. When troubleshooting, as engineers we tend to run by the rule of “change one thing at a time.” However, for application code with contributions being made across the code base it’s unlikely that a new release would ever affect only one feature (like the database bug fix), and with each release containing multiple changes affecting many areas of the code, the risk of the new code having problems is significantly higher than most of us would like.

In the network arena, when considering a software upgrade for a router or switch, I’ve often asked something like “Glad they fixed that issue, but I wonder what else they broke in the process?” and I’ll bet I’m not alone in doing so. The net result of this is a high resistance to change from the Operations teams. Each software upgrade becomes a high-risk event, and is thus often avoided, perhaps even for years. Why else would almost every network be running old versions of IOS and Junos, for example?

Modular Code

Writing code using a modular architecture does not necessarily mean the creation of a non-monolithic application. Having code split up into a number of source files by function (e.g. classes, packages and modules) is a very sensible organizational tool for development, but here’s the question: when the code is compiled, do all those files have to be compiled at the same time as part of the release? If so, then it’s still a monolithic application, and the same problems outlined above still apply – a change in one module can have an impact on all the others. Modular code management is a great development practice though, and the logical separation of functions as modules is in many ways an important precursor to a microservice-based application.

Microservices

There isn’t one formal definition for microservices, so the boundaries are a little bit fuzzy. Microservices are suspiciously similar in concept to a Service Oriented Architecture (SOA), and the differences between the two are the subject of much discussion which I’ll leave the reader to search for, as it’s way beyond the scope of this post to summarize them. For my purposes, however, I’m looking at microservices as being an application composed of a number of independently-developed and deployed services that communicate with each other over the network (typically using a lightweight protocol like REST/HTTP) to achieve the overall application’s goal. Why would anybody want to do that? Below, I’ve listed some good and bad points about using microservices.

The Good

  • Failure blast radius is reduced if a particular component crashes; it shouldn’t be able to take any other services down with it, versus a monolithic application where one crash can take down the whole application;
  • Each service has to assume that other components may be unavailable (monolithic apps don’t have bits ‘missing’ because something went down), which forces error handling to be designed in deliberately from the start;
  • Because each module is independent from the others, it’s a good fit for a Continuous Integration/Continuous Deployment (CI/CD) environment;
  • Highly scalable; if you need better performance from a module, spin up more of them and put them behind a load balanced VIP;
  • Highly distributable; latency aside, the components can reside anywhere, whether internal to a company, in the cloud, in multiple clouds, and so forth;
  • Allows developers specializing in each area to write the most relevant module(s);
  • Each service (if needed) can be written in a different language based on particular needs;
  • Very well suited to containerized deployment for scalability; even if all microservices are deployed on the same server, they can all be isolated in their own container (with a picky note that you’re never truly isolated in a container, at least in some ways).
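The independently-deployed, communicate-over-the-network pattern described above can be sketched in a few lines. This is a deliberately tiny example under my own assumptions – a single hypothetical “greeting” service exposed over HTTP, with another component consuming it via a network call rather than a function call – not anything from a real deployment.

```python
# A minimal "microservice": one small job, exposed over HTTP, consumed
# over the network by the rest of the application.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class GreetService(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service does exactly one thing: turn a name into a greeting.
        name = self.path.rstrip("/").split("/")[-1] or "world"
        body = json.dumps({"greeting": f"Hello, {name}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet


# Port 0 asks the OS for any free port; a real deployment would sit
# behind a load-balanced VIP instead.
server = HTTPServer(("127.0.0.1", 0), GreetService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The consumer knows only the API, not the implementation behind it.
with urlopen(f"http://127.0.0.1:{server.server_port}/greet/Alice") as resp:
    greeting = json.load(resp)["greeting"]

print(greeting)
server.shutdown()
```

Because the consumer only depends on the HTTP API, the service behind it could be rewritten in another language, scaled out behind a VIP, or redeployed independently without the caller ever knowing – which is the crux of the benefits listed above.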

The Bad

It wouldn’t be fair to discuss microservices without raising a few of the potential pitfalls, however:

  • Higher compute resource requirements because there are more independent systems running (though Docker helps reduce that need somewhat);
  • Programmers have to develop an effective API for communication to abstract the request from the implementation of that request (typical REST requirement);
  • The independence of data between microservices can make data exchange cumbersome, and where databases are involved it may make data integrity more challenging;
  • Code has to be able to cope with network failures, service failures, and degradation (e.g. higher latency of responses) and recover/handle it accordingly.
  • Sizing a microservice is critical. A service whose communication and operational overhead exceeds the function’s execution is considered too small to be a microservice. Some have labelled such functions nanoservices, a rather pejorative indication that the function is just too small to be worth bothering with. The demo I’ll give in these posts will of course be a textbook example of a nanoservice, where the level of effort required to access the services and process the result far outweighs the benefit of having that function separated from the main application process;
  • Scaling microservices brings additional complexity (e.g. configuring load balancers, container deployments and so forth).
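The point in the list above about coping with network and service failures deserves a concrete sketch. Here’s one minimal approach – bounded retries with backoff and a graceful fallback – using only the standard library. The URL, the retry counts and the fallback message are all illustrative assumptions; real systems would typically layer on circuit breakers, metrics and so on.

```python
# A caller that tolerates timeouts and transient network errors with
# bounded retries, then degrades gracefully instead of crashing the app.
import time
from urllib.error import URLError
from urllib.request import urlopen


def call_with_retries(url: str, attempts: int = 3, timeout: float = 2.0) -> str:
    last_error = None
    for attempt in range(attempts):
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.read().decode()
        except (URLError, OSError) as exc:
            last_error = exc
            time.sleep(0.1 * (2 ** attempt))  # simple exponential backoff
    # All attempts failed: return a degraded result rather than raising,
    # so one broken service doesn't take the whole application down.
    return f"service unavailable ({last_error})"


# Calling a port nothing listens on exercises the failure path.
print(call_with_retries("http://127.0.0.1:9/", attempts=2, timeout=0.2))
```

In a monolithic application a failed function call is an exception in-process; in a microservice architecture the same failure arrives as a timeout or a refused connection, and the caller has to decide – retry, degrade, or give up – every single time.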

Monolith or Microservices?

With all this information in hand, which is better, monolith or microservices? I hate to say it depends, but that’s really what it comes down to; it’s a case of determining what works best for a particular application within a particular organization. If you have nothing better to do with your time, I would recommend reading The Great Microservices Vs Monolithic Apps Twitter Melee which is a very good recounting of a battle of tweets between Adrian Cockcroft, a fan of microservices, and Etsy’s John Allspaw, a proponent of the monolithic application. The article also has a long list of further reading at the end, should you be interested. The obvious conclusion though is that there is a place for both types of application deployment and each has its own benefits and risks.


I promised a demonstration of microservices, so in the next post in this series I’ll set out the design for an annoyingly simple, pointless application, initially as a monolithic deployment and then as the series continues, I’ll rebuild the app using microservices and prove that it can be done even by an idiot like me. Stay with me and it will all shortly make sense, I hope. Please feel free to ask questions, share your thoughts, or disagree with me in the comments.




My attendance at ONUG NYC 2015 was sponsored by GestaltIT’s ONUG Spring 2015 Tech Talk Series.
