When to use the microservice architecture: part 5 - the monolithic architecture and rapid, frequent, reliable and sustainable software delivery

This is part 5 in a series of blog posts about why and when to use the microservice architecture. Other posts in the series are:

  • Part 1 - the need to deliver software rapidly, frequently, and reliably
  • Part 2 - the need for sustainable development
  • Part 3 - two thirds of the success triangle - process and organization
  • Part 4 - architectural requirements for rapid, frequent, reliable and sustainable development

In this post, I describe the monolithic architecture and discuss how well it supports rapid, frequent, reliable and sustainable software delivery.

The monolithic architecture is not an anti-pattern

The monolithic architecture is an architectural style that structures the application as a single executable or deployable unit. A Java application, for example, could consist of a WAR file or an executable JAR. A Go application would consist of a single executable.

The monolithic architecture has numerous benefits, including simplicity. All of the code is typically in a single repository and so making changes is generally straightforward. Modules collaborate using language-level method or function calls, which are simple and efficient. And, the application can maintain data consistency using ACID transactions, which are simple and familiar. Moreover, if you are using a statically typed language, the compiler will enforce interfaces.
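To make these benefits concrete, here’s a minimal sketch, assuming a Spring-based Java monolith; the OrderService and CustomerService modules are hypothetical names invented for this example. The modules collaborate through a compiler-enforced interface, a plain in-process method call, and a single ACID transaction:

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    // Hypothetical interface owned by the "customer" module; the compiler
    // enforces that callers respect this contract.
    interface CustomerService {
        void reserveCredit(long customerId, long amountInCents);
    }

    // Hypothetical "order" module collaborating with the customer module via a
    // plain, in-process method call - no network hop, no serialization.
    @Service
    class OrderService {
        private final CustomerService customerService;

        OrderService(CustomerService customerService) {
            this.customerService = customerService;
        }

        // A single ACID transaction can span both modules because they share
        // one database connection and one deployable unit.
        @Transactional
        public void placeOrder(long customerId, long amountInCents) {
            customerService.reserveCredit(customerId, amountInCents);
            // ... insert the order row here; both writes commit or roll back together
        }
    }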

Consequently, the monolithic architecture is certainly not an anti-pattern. It’s a valid choice for many applications. In particular, a small monolith is generally easy to develop, test, deploy and scale. It meets the architectural requirements for rapid, frequent, reliable and sustainable software delivery that I described in Part 4 of this series.

Consider, for example, a small, eight-person team doing trunk-based continuous deployment. Each developer typically commits to trunk at least once per day. Prior to committing, he or she runs the pre-commit tests. The commit triggers an automated deployment pipeline that builds, tests and updates production using a canary-based deployment strategy.

Since the team is small, the rate of change and the cost of coordinating changes are relatively low. Perhaps during each eight-hour workday there are eight commits, which works out to one per hour. Each developer adds, perhaps, around 300 LOC/week and so, at least initially, the code base is relatively small. The code compiles quickly and the relatively small number of automated tests execute quickly on a developer’s laptop. Similarly, the automated deployment pipeline is fast and reliable. What’s more, the cost of rewriting a small code base to use a different technology stack would not be prohibitively expensive.

Successful applications often outgrow their monolithic architectures

The problem, however, with the monolithic architecture is that successful applications have a habit of growing. Even when the team is small, the code base gradually gets larger and larger. What’s more, a single small team often grows into 10s or 100s of small, cross-functional teams - each one working on a particular area of the business. As a result, the growth rate of the application’s code base steadily increases over time. Since each developer is committing changes daily, there will eventually be a large number of commits each day. And, what was once a small monolith grows into a massive application. Some clients I’ve worked with have massive multi-gigabyte WAR files.

If the application and its team continue to grow then, sooner or later, the monolithic architecture becomes an obstacle to delivering software rapidly, frequently, reliably and sustainably. Let’s look at why.

The downwards spiral to a big ball of mud

In theory, you can preserve the modularity of an application over time. In practice, however, the application’s modularity often breaks down. Modern programming languages typically lack mechanisms for enforcing modularity. Or, perhaps deadline pressure causes a developer to take a shortcut and violate modularity. Over time, these changes accumulate and the application evolves into a Big Ball of Mud.
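As a hypothetical illustration (the BillingService and OrderFulfillment classes below are invented for this example), here is what such a shortcut can look like: a neighbouring module reaches past the billing module’s public interface and manipulates its internals directly, and nothing in the language prevents it:

    // Public interface of the (hypothetical) billing module.
    interface BillingService {
        void charge(long accountId, long amountInCents);
    }

    // Internal implementation detail of the billing module.
    class BillingLedger {
        void appendEntry(long accountId, long amountInCents) { /* ... */ }
    }

    class BillingServiceImpl implements BillingService {
        final BillingLedger ledger = new BillingLedger();  // package-visible "for convenience"

        @Override
        public void charge(long accountId, long amountInCents) {
            // validation, fraud checks, etc. belong here
            ledger.appendEntry(accountId, amountInCents);
        }
    }

    // Under deadline pressure, another module depends on the implementation
    // rather than the interface and writes to the ledger directly, skipping
    // the checks in charge(). The compiler is perfectly happy with this.
    class OrderFulfillment {
        private final BillingServiceImpl billing;

        OrderFulfillment(BillingServiceImpl billing) {
            this.billing = billing;
        }

        void expediteOrder(long accountId) {
            billing.ledger.appendEntry(accountId, 999);  // modularity violation
        }
    }

Each individual shortcut like this seems harmless, but accumulated over years of deadline pressure they produce the Big Ball of Mud described above.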

Once this happens the organization is in a world of pain. Developers become overwhelmed by the application’s complexity. The rate of change slows down. And, changes often result in unexpected bugs.

Increasingly obsolete technology stack

A key limitation of the monolithic architecture is that upgrading the application’s technology stack cannot be done incrementally. Since there is a single deployment unit, many technology decisions are global in nature, which prevents incremental upgrades. There is, for example, a single version of the language runtime. You cannot upgrade the language runtime version for just part of the application. Nor can you switch to a different runtime, one module at a time.

Also, you typically cannot use multiple versions of a library in a monolithic application. In a Java application, for example, there is a single class path and so there can be only a single version of each library. As a result, you cannot incrementally upgrade the application to a new version of a library. Instead, you must upgrade the entire application at once, which can be prohibitively time consuming if the newer version is not backwards compatible with the old version.

For example, let’s imagine that you want to implement a feature that requires a new library that has a transitive dependency on a newer major version of some other library that’s already being used by the application. Upgrading to a new major version potentially requires modifying numerous parts of the application - a major undertaking. This type of upgrade can’t be done incrementally. You need to change all parts of the application at the same time. And, to make matters worse, it’s possible that not every team will benefit from the upgrade and so it might be challenging to convince all teams to agree to do the upgrade work simultaneously. As a result, you are typically locked into the increasingly obsolete technology stack that you chose at the start of development.
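Here is a hypothetical sketch of that situation (the PaymentClient library and the two modules are invented for illustration): version 2 of a shared library changes a method signature, and because a Java monolith has a single class path, every team’s call sites must be migrated in the same change:

    // Hypothetical v2 of a shared library: in v1 the client exposed
    // charge(String accountId, long amountInCents); in v2 it only accepts a
    // request object. Only one version can be on the class path, so every
    // module that calls it must change in the same release.
    class ChargeRequest {
        final String accountId;
        final long amountInCents;
        ChargeRequest(String accountId, long amountInCents) {
            this.accountId = accountId;
            this.amountInCents = amountInCents;
        }
    }

    class PaymentClient {
        void charge(ChargeRequest request) { /* ... */ }
        // v1's charge(String, long) no longer exists
    }

    // Call sites owned by different teams, all forced to change together.
    class OrderModule {
        void checkout(PaymentClient client) {
            client.charge(new ChargeRequest("acct-42", 1999));   // was client.charge("acct-42", 1999)
        }
    }

    class SubscriptionModule {
        void renew(PaymentClient client) {
            client.charge(new ChargeRequest("acct-77", 499));    // was client.charge("acct-77", 499)
        }
    }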

No independent deployments

Another drawback of the monolithic architecture is that because there is a single executable/deployable, a team cannot deploy their changes independently. Instead, their code must first be packaged together with code developed by other teams. This lack of independent deployability requires a team to coordinate with other teams in order to deploy their changes.

Also, there is a risk that teams will interfere with, and slow down, each other. For example, as the application’s team grows, it becomes increasingly likely that a developer cannot deploy their changes because another developer has broken the build. The risk of broken builds is especially high if the application has a single repository.

Slow builds and deployments

Another limitation of the monolithic architecture is that as the application and its team grow in size, it’s likely that development will eventually slow down. Since the compilation, assembly, test and build times are proportional to the application’s size, the deployment pipeline’s execution time and, hence, the lead time will increase as the application grows. Also, while deployment frequency increases as the number of developers grows, eventually it will plateau and start to decline, primarily because the deployment pipeline will become a bottleneck.

As the next two posts in this series describe, the precise reasons for the slowdown depend on the nature of the path from a developer’s laptop to production. For example, the application’s code might reside in a single repository with a single deployment pipeline. Alternatively, each of the application’s top-level modules might reside in its own code repository with its own build pipeline. In both cases, however, the final stages of the application’s deployment pipeline must:

  1. Assemble the complete, deployable application, e.g. create a WAR file or executable JAR and possibly a Docker image
  2. Test the assembled application
  3. Deploy the application into production

The execution time of each of these steps is proportional to the size of the application. As a result, the execution time of each step will grow over time. Let’s look at each step in more detail.

Assembling the application gets slower

As you will see in the next two posts, the application’s modules (e.g. JAR files) might be built and tested concurrently. However, ultimately the deployment pipeline must assemble those modules into a single application (e.g. WAR file). The duration of this task is proportional to the size of the application. It’s not uncommon for an application to be tens or hundreds of megabytes. Some enterprises even have multi-gigabyte applications, which is a lot of data to move around the network. Consequently, the time to assemble the application will steadily increase as the application grows. Fortunately, however, assembling the application is likely to be much faster than the following two steps.

Slow test times

After assembling the application, the deployment pipeline must test it. In principle, the application-level test suite can assume that the modules have been thoroughly tested in isolation. As a result, it doesn’t need to comprehensively test the application. However, even superficially verifying that a large application works is likely to be time consuming. And as the application grows, testability declines and the test suite takes longer to execute.

To make matters worse, there is a risk that a large monolithic application will take a long time to start up, which further increases the test times. The initialization phase of a Spring application, for example, uses high-overhead mechanisms, such as reflection and class path scanning. As a result, it’s not uncommon for a large application to take several minutes to start. The startup time must be added to the test execution time and the overall build time.
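For example, here is a minimal sketch, assuming a Spring Boot monolith (MonolithApplication is a hypothetical name), of where that startup cost is incurred and how you might measure it; Spring Boot also logs its own startup time:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.ConfigurableApplicationContext;

    @SpringBootApplication  // triggers component scanning of the whole class path
    public class MonolithApplication {

        public static void main(String[] args) {
            long start = System.nanoTime();

            // Reflection-driven bean creation and class path scanning happen here;
            // for a large monolith this can take minutes.
            ConfigurableApplicationContext context =
                    SpringApplication.run(MonolithApplication.class, args);

            long seconds = (System.nanoTime() - start) / 1_000_000_000L;
            System.out.println("Started " + context.getBeanDefinitionCount()
                    + " beans in " + seconds + "s");
        }
    }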

Also, a long startup time has a couple of other drawbacks. It reduces developer productivity, since developers are blocked waiting for a local build on their laptop to complete. It can also reduce deployment frequency, since a long startup time slows down the deployment pipeline: the tests take longer to run, and it takes longer to deploy the application into production. As a result, a long startup time can limit the number of builds that can be done each day.

Deploying the application into production is potentially a bottleneck

As the application grows, not only will tests take a long time to run, but it’s also likely that the deployment frequency will plateau because step 3 (deployment) will become a bottleneck. That’s because while each commit could trigger a build that executes steps 1 (assembly) and 2 (test) concurrently with other builds, step 3 (deployment) typically needs to be serialized. For example, let’s imagine that the deployment pipeline uses a canary deployment strategy that gradually routes more and more traffic to the new version. Because you need to verify that the canary is healthy, the deployment process can take a significant amount of time. For example, the Flagger example takes 25 minutes. This would limit the deployment frequency to roughly 20 per day (8 hours / 25 minutes). Even with a 5-minute deployment, the number of daily deployments would be limited to about 100.
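The following back-of-envelope sketch (in Java, using the 8-hour workday and the deployment times mentioned above) shows how a serialized deployment step caps the deployment frequency:

    // Back-of-envelope calculation of the deployment-frequency ceiling when
    // the serialized production deployment step takes a fixed amount of time.
    public class DeploymentCeiling {
        public static void main(String[] args) {
            int workdayMinutes = 8 * 60;    // 480-minute working day, as above

            for (int deployMinutes : new int[] {25, 5}) {
                int maxDeploysPerDay = workdayMinutes / deployMinutes;
                System.out.printf("%d-minute deploys => at most %d deployments/day%n",
                        deployMinutes, maxDeploysPerDay);
            }
            // Prints: 25-minute deploys => at most 19 deployments/day
            //          5-minute deploys => at most 96 deployments/day
        }
    }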

Furthermore, if the deployment step becomes a bottleneck then this will also increase the lead time. That’s because the deployment step will behave like a queuing system. Once the time between commits is less than the deployment time, commits will wait in a queue. The higher the commit frequency and the longer the deployment time, the longer the wait time.
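Here is a minimal sketch of that queuing effect, using hypothetical numbers: commits arrive every 15 minutes while each serialized deployment takes 25 minutes, so each successive commit waits longer than the one before it:

    // Minimal sketch of the queuing effect: commits arrive faster than the
    // serialized deployment step can drain them, so the wait time grows.
    public class DeployQueueSketch {
        public static void main(String[] args) {
            double commitIntervalMinutes = 15;  // hypothetical: a commit every 15 minutes
            double deployTimeMinutes = 25;      // hypothetical: each deployment takes 25 minutes

            double deployerFreeAt = 0;          // when the (single) deploy step is next available
            for (int i = 1; i <= 5; i++) {
                double commitTime = i * commitIntervalMinutes;
                double startDeploy = Math.max(commitTime, deployerFreeAt);
                double waitMinutes = startDeploy - commitTime;
                deployerFreeAt = startDeploy + deployTimeMinutes;
                System.out.printf("commit %d waits %.0f minutes before its deployment starts%n",
                        i, waitMinutes);
            }
            // Each commit waits (deployTime - commitInterval) = 10 minutes longer
            // than the one before it: 0, 10, 20, 30, 40, ...
        }
    }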

Summary

The monolithic architecture, which is an architectural style that structures an application as a single executable/deployable, is not an anti-pattern. A small monolith meets the architectural requirements for rapid, frequent, reliable and sustainable software delivery that I described in Part 4 of this series. As the application and its team grow, however, it’s likely that the monolithic architecture will become an obstacle to delivering software rapidly, frequently, reliably and sustainably, which is a significant risk to the business.

That’s because a large monolith typically suffers from the following problems:

  • The monolithic architecture lacks the loose coupling and modularity that enable you to incrementally upgrade the application’s technology stack. As a result, sustainability declines because upgrading the technology stack becomes increasingly difficult.

  • Modularity often breaks down over time and developers become overwhelmed by the application’s complexity. As a result, making changes becomes slow and error prone.

  • A team cannot deploy their changes independently of other teams, which reduces their productivity.

  • As the application grows in size, the deployment pipeline slows down, eventually becomes a bottleneck, and causes the deployment frequency to plateau.

In the next two posts, I’ll look at how to reduce the impact of these problems by structuring a monolith’s code base.

Acknowledgements

I’d like to thank the following for their insightful comments/feedback:


