Microservices in Promise and Practice

Are microservices the cure for the ague of monolithic applications, or do they introduce problems of their own that monolithic architectures avoid? Are they capable of delivering applications that are easier to maintain and develop? How can they avoid the failings of service-oriented architectures? Once more, Robert Sheldon gets to the heart of the technical issues.

A technology that’s been generating its own share of buzz within the development community is microservices, an architecture and development methodology that aims to tear down the traditional monolithic application structure in favor of small and lightweight distributed services. Microservices are being praised in some quarters for their ability to simplify development, speed up implementation, and provide a way to structure the teams and their efforts that is more in line with specific business functions, rather than basing them on the application stack.

So promising are microservice technologies that companies such as Netflix, Amazon, and eBay have gone all-out to incorporate them into their development efforts. Fortunately, they have the resources necessary to make such an aggressive strategy work. As most companies will quickly discover, implementing an application as a set of distributed services is a significant undertaking, and organizations planning to go the microservices route would do well to understand the challenges before heading in that direction.

The monolithic conundrum

Until recently, development teams generally adhered to a monolithic architectural model that often resulted in massive, inflexible applications built by huge teams divided along functional lines, such as the data tier, server-side layer, or user interface. Although the advent of Agile methodologies changed how groups might be organized, the goal often remained a monolithic solution built upon a centralized data tier that provided the core around which one or more applications revolved.

To support this model, sophisticated integrated development environments (IDEs) and other tools emerged for managing such tasks as application building, file versioning, product testing, bug reporting, and deploying to large-scale environments. The more committed the development world became to these tools (and the processes they supported), the more advanced and efficient the tools became, making it easier every day to build, deploy, and scale solutions.

Despite their capabilities, the tools still generally worked best for smaller applications. As solutions grew, so did the awareness that the monolithic model had a number of limitations. The codebase became extremely complex and difficult to understand, requiring specialized documentation and training just to bring new developers onboard. The teams themselves grew large and cumbersome with communication often breaking down within and between teams. Modifying the code became a significant investment in time and resources, with one small change requiring the entire application to be rebuilt and redeployed. Introducing new technologies could be prohibitive, if not outright impossible, locking teams into existing technologies for the life of the application.

To make matters worse, the large and complex codebase could often bog down the very tools designed to improve the development process. The IDEs became less responsive, and the file syncing and resolution processes more laborious. Building the application took hours, running it in debug mode was like accessing the Internet via a dial-up modem, and scaling such large solutions could result in inefficient use of resources. These challenges, when pitted against the fast-paced world of the Internet, cloud computing, and the proliferation of mobile apps, have forced development teams far and wide to look for a better approach.

Microservices to the rescue

As a way to counter the challenges of monolithic development, many teams are turning to microservices to help better control and streamline their efforts. The microservice approach breaks an application into small, single-purpose services that each targets a discrete function. The services are implemented independently of one another, providing a loosely coupled structure that forms a cohesive application based on standardized interfaces that support inter-service communication.

A development team builds, deploys, manages, and scales a microservice separately from all other microservice development efforts, building the service according to its individual requirements. A microservice is not locked into a common technology and it can be updated independently of the other services, without requiring the entire application to be redeployed. For example, a sales-related application might include a microservice to handle customers, one to manage product inventory, another to take care of the shipping processes, and so on.

Each microservice serves a specific function that runs as a stand-alone sub-application, communicating with the other microservices via a common interface, but always remaining independent. The service is small, lightweight, self-contained, and runs in its own process (often on its own server), maintaining firm boundaries from the other services. It can use the programming language and technologies most appropriate to its function, without regard to what other microservices have done. A development team should be able to update and deploy its microservice without impacting other services and without being impacted by changes in those services.

The microservice difference

Discussions around microservices invariably raise the issue of service-oriented architecture (SOA) and how the microservice model might differ. Certainly there are similarities between the two, with microservices sometimes referred to as a fine-grained SOA, and both have emerged from a desire to address the limitations of a monolithic structure.

This comparison can be problematic for microservice advocates because of the issues that arose with SOA, in part due to a lack of a clear consensus on what SOA means and how it should be used, resulting in misapplied technologies, complex implementations, and inconsistent (and nonexistent) standards. Any behemoth legacy app could slap on a few APIs and claim SOA sovereignty.

In comparison, microservices strive for greater simplicity and clear-cut boundaries, with limitations in scope and size, emphasizing the self-contained, independent nature of the microservice architecture, without locking into specific technologies or vendors.

One area in particular where the microservice model makes a clean break with SOA is around communication protocols and the messaging layers between services. SOA relied heavily on the Simple Object Access Protocol (SOAP) and middleware that followed the enterprise service bus (ESB) model, resulting in complex systems just to manage the interdependence between services.

Microservices take a much simpler approach, removing the logic from the messaging layer and putting it within the microservices themselves (dumb pipes, smart services). The exact strategy depends on whether a team wants synchronous or asynchronous communication between the microservices. On the synchronous side, the favored approach is HTTP-based Representational State Transfer (REST), which is easier to implement and much more lightweight than SOAP. If asynchronous communication is required, the team might turn to a lightweight message broker such as RabbitMQ. The goal is to let the microservices do the work and use the communication mechanism simply to send messages back and forth, unlike the typical SOA implementation.
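The "dumb pipes, smart services" idea can be illustrated with a minimal in-process sketch. The `DumbPipe` broker and `InventoryService` below are hypothetical stand-ins (a real deployment would use HTTP or a broker such as RabbitMQ across processes); the point is that the pipe only routes messages, while all business logic lives inside the subscribing service.

```python
from collections import defaultdict
from typing import Callable

class DumbPipe:
    """A minimal message broker: it only routes messages by topic ("dumb pipe").
    It performs no transformation, orchestration, or business logic."""
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Routing only: deliver the message to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(message)

class InventoryService:
    """A "smart service": it owns its own state and its own decision-making."""
    def __init__(self, pipe: DumbPipe) -> None:
        self.stock = {"widget": 10}
        pipe.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, message: dict) -> None:
        # All the logic lives here, inside the service, not in the pipe.
        self.stock[message["sku"]] -= message["quantity"]

pipe = DumbPipe()
inventory = InventoryService(pipe)
pipe.publish("order.placed", {"sku": "widget", "quantity": 3})
print(inventory.stock["widget"])  # 7
```

Contrast this with an ESB, where the routing layer itself would carry transformation and orchestration logic.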

But it’s not just the messaging capabilities that set microservices apart from SOA. An important part of microservice thinking is the development team itself. A microservices team follows the service throughout its entire lifecycle, from development through to operations. The team builds the microservice, implements it, and runs it, updating or replacing it when the time comes. Microservice teams are small, cross-functional, and organized along business considerations. The team becomes intimately acquainted with its product and how it’s being used on a day-to-day basis. Team organization did not enter the SOA picture.

The microservice advantage

The lightweight and modular nature of a microservice offers a number of benefits over the traditional monolithic structure. One important advantage is the ability to streamline development. Because the codebase is not as massive as that of a monolithic solution, IDEs and other development tools can process and manage the code faster and easier. In addition, a microservice is quicker to load and run, particularly in debugging mode, and the code in general is simpler to understand, making it easier for new and existing developers to follow the code line by line as well as comprehend the microservice as a whole.

The development teams too are smaller and easier to manage. Each team focuses only on building, deploying, and maintaining its assigned microservice, remaining flexible enough to respond quickly to any issues that might arise. In this way, the team is more in tune with the user experience, rather than developers being divorced from the implementation, as is often the case with traditional solutions. Each team is a complete DevOps structure, independent of other teams, able to rewrite or replace a service whenever necessary, without being encumbered by system-wide dependencies.

A team also has the flexibility to use whatever technologies are most appropriate to the microservice, rather than having to adhere to a solution-wide or organization-wide technology stack. The team can use the languages and tools that deliver the best result. Even data management is treated as a microservice component, rather than as a colossal system to support all services.

The microservice team also faces fewer deployment issues, at least at the microservice level. Testing, implementing, and updating are all limited to the scope of that microservice, without having to coordinate efforts with other teams. A team deploys its microservice independently of all other services. This separation makes it easier to implement a more granular release schedule. A team can make changes at the service level without requiring the entire application to be redeployed. Small changes are easier to make and can be made often, with services simpler to update or replace. In this way, the application never really goes offline, allowing for continuous delivery while keeping the system as a whole up and running.

Another big bonus that comes with microservices is better fault isolation. Because each microservice runs in its own process, if one of the services fails, it won’t bring down the entire application. You might lose a feature or set of features, but the application keeps running. Teams can also implement redundant services that kick in automatically if failure is detected, and when a fault is detected, the source of the problem is much easier to track because the microservice is already isolated. With a monolithic application, even a simple failure can bring down the entire system, and the source of that failure can be exceedingly difficult to find.
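Fault isolation in practice often means graceful degradation: the caller substitutes a fallback when a dependency fails. The sketch below uses hypothetical service functions, with the failed service simulated by a raised exception, to show how a failure costs a feature rather than the whole application.

```python
def recommendation_service(user_id: str) -> list:
    # Simulate a failed microservice (e.g., its process has crashed).
    raise ConnectionError("recommendation service unavailable")

def product_catalog_service() -> list:
    # A healthy service, notionally running in its own process.
    return ["widget", "gadget", "gizmo"]

def render_storefront(user_id: str) -> dict:
    """Build the page from independent services; a failure in one
    costs a feature, not the whole application."""
    page = {"products": product_catalog_service()}
    try:
        page["recommendations"] = recommendation_service(user_id)
    except ConnectionError:
        # Degrade gracefully: lose the feature, keep the application running.
        page["recommendations"] = []
    return page

page = render_storefront("user-42")
print(page)  # {'products': ['widget', 'gadget', 'gizmo'], 'recommendations': []}
```

In a monolithic application, the equivalent failure might be an unhandled exception deep inside a shared process, taking the whole system down with it.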

Teams can also scale microservices and allocate resources to them on an individual basis, providing each one with exactly the resources it needs. With a monolithic solution, you can end up allocating resources to entire layers in the application stack, when only certain components require those resources. Microservices can also be shared with different applications, and their modular nature makes it easier for them to adopt new technologies and experiment with new features.

Not so fast, microservices

As good as microservices sound on paper, they are not without their problems, the biggest of which is the added complexity of deploying and managing a distributed system. Development teams now have many more working parts to contend with. The microservices must be managed, communications facilitated, and all parts continuously monitored to track abnormal behavior and possible failures. Add to this mix multiple instances of individual services, network dependencies, and other operational overhead, and the simple elegance of the microservice structure loses a fair amount of its sheen.

To complicate matters, many microservice solutions rely on an event-driven architecture based on an API matrix that drives communication through asynchronous messaging. Because microservices are independent modules, APIs must be coarser-grained, which can be more difficult to work with, especially when there are mismatches in granularity. Remote calls in general are more costly operations than in-process communications.

In addition, boundaries can be difficult to define between microservices, particularly where business functionality cannot be easily delineated. If those boundaries are ineffective, the services will have to be refactored, but it can be difficult to refactor independent services that rely on remote communications. Modifications that change boundaries must be carefully coordinated with all players and take into account the various moving pieces, of which there can be many.

Deploying and managing a microservices environment calls for a high level of coordination between teams as well as sophisticated release and deployment processes. This can be a significant challenge for microservices because most development tools have been designed for monolithic systems, with little support for distributed applications. At the microservice level, the tools work great. It’s the multi-service, distributed nature that makes them a challenge. Teams might need to develop custom scripts or other solutions to manage these processes.

For example, testing becomes much more difficult with a distributed system, especially when it comes to automated tests. The microservices architecture, with its reliance on remote messaging, can make it difficult to re-create environments in a consistent way. Testing individual services usually presents little problem, but when you pull all the pieces together in a dynamic environment, more subtle issues can emerge that can be difficult to capture in automated tests.

Microservices can also result in increased resource usage. Although you can scale individual microservices and allocate resources as needed, collectively the microservices can result in more resources being used overall than in a comparable monolithic system. For example, memory consumption can increase under the microservices model because of the distributed structure. Imagine having to contend with 20 services running in their own processes with their own failover and backup mechanisms, and that doesn’t even include the messaging, management, and load balancing layers.

What about the data?

A monolithic application often relies on a central data store for managing and protecting data as well as ensuring its integrity. Microservice proponents take a different view of data. Because microservices are all about modularity and loose coupling, development teams strive to eliminate dependencies on a centralized data store. Instead, each microservice is responsible for its own data solution.

According to microservice principles, the development team picks the storage technology best suited for its particular service, whether a relational database, a NoSQL data store, or another type of system, an approach sometimes referred to as polyglot persistence. The team owns the data model and its data and can update the data layout at any time.
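Polyglot persistence can be sketched with two hypothetical services, each owning its data behind its own interface. Here SQLite stands in for a relational store and a plain dictionary stands in for a key-value store such as Redis; neither service touches the other's data directly.

```python
import sqlite3

class OrderService:
    """Owns its data in a relational store (SQLite in memory as a stand-in)."""
    def __init__(self) -> None:
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT)")

    def place_order(self, sku: str) -> int:
        cur = self.db.execute("INSERT INTO orders (sku) VALUES (?)", (sku,))
        self.db.commit()
        return cur.lastrowid  # the new order's ID

class SessionService:
    """Owns its data in a key-value store (a dict standing in for, say, Redis)."""
    def __init__(self) -> None:
        self.store: dict = {}

    def set(self, key: str, value) -> None:
        self.store[key] = value

orders = OrderService()
order_id = orders.place_order("widget")   # first row, so ID 1
sessions = SessionService()
sessions.set("user-42", {"cart": ["widget"]})
```

Because each schema is private to its service, either team can change its data layout without coordinating a migration with the other.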

The benefits of this approach, like microservices in general, are realized through its decoupled, modular nature, which simplifies development, deployment, and maintenance at the service level. Where things get tricky, however, is when trying to coordinate access across multiple services that need to work with the same data.

Suppose you have a microservices application that includes one service to manage customers, one to manage products, and one to manage orders. Each service maintains its own data store with its own defined structure. To generate an order, the order service must verify details about the product, such as inventory levels or feature types, and about the customers, such as store credits or shipping addresses. The order service must also be able to update data, including product inventory levels and customer contact information.

To be able to read and write data across services in this way, development teams must turn to such solutions as distributed transactions, database replication, remote procedure calls (RPCs), or application-level events.

Distributed transactions can help ensure that the data remains in a consistent state, without having to replicate data between services. Unfortunately, distributed transactions can be difficult to implement, particularly if the microservices are using different data storage technologies or the data is defined differently between services. In addition, the services involved in the transaction must all be available for the transaction to complete correctly. These types of dependencies are inconsistent with the idea that services should be loosely coupled.

Another possibility is to replicate the data between services. This avoids having to synchronize calls; however, as with distributed transactions, database replication relies on the services sharing similar technologies and data definitions, again coupling the services in a way that violates microservice principles.

Some teams try to avoid this problem by simply using RPCs to access the data in the service that contains the source data. That service has complete control over how the data can be accessed and updated. The advantage to this approach is that it relies on only one set of data, helping to ensure the integrity of that data within the service. Unfortunately, this approach can also increase response times and requires that the target service be available, forming another type of dependency.

Because of the limitations of these various approaches, many teams turn to a transactionless solution, settling for eventual consistency, rather than imposing the type of dependencies the other approaches require. One way to implement a transactionless solution is to use event-driven asynchronous replication. When data changes in the service responsible for that data, the service publishes an event announcing that the data has changed. Other services subscribe to the event and then update their own copies of the data. Publishers and consumers are completely decoupled, with the data eventually becoming consistent at some point in time. That means services have to be written in a way to accommodate eventual consistency and be able to roll back changes if needed.
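Event-driven asynchronous replication can be illustrated in miniature. In this hypothetical sketch, a `deque` stands in for a broker such as RabbitMQ: the customer service owns the data and publishes change events, and the order service updates its local replica only when the events are delivered, so the two views are consistent eventually rather than immediately.

```python
from collections import deque

event_queue: deque = deque()  # stands in for a message broker such as RabbitMQ

class CustomerService:
    """Owns the customer data and publishes an event whenever it changes."""
    def __init__(self) -> None:
        self.customers: dict = {}

    def update_address(self, customer_id: str, address: str) -> None:
        self.customers[customer_id] = address
        event_queue.append({"type": "customer.updated",
                            "id": customer_id, "address": address})

class OrderService:
    """Keeps its own local replica of the customer data it needs."""
    def __init__(self) -> None:
        self.customer_replica: dict = {}

    def handle(self, event: dict) -> None:
        if event["type"] == "customer.updated":
            self.customer_replica[event["id"]] = event["address"]

customers = CustomerService()
orders = OrderService()

customers.update_address("c1", "12 Main St")
# Before delivery, the replica lags behind the source of truth.
stale = "c1" not in orders.customer_replica          # True

while event_queue:                                   # deliver pending events
    orders.handle(event_queue.popleft())

# After delivery, the replica has caught up: eventual consistency.
consistent = orders.customer_replica["c1"] == "12 Main St"  # True
```

The window between `stale` and `consistent` is exactly what services must be written to tolerate, including rolling back work if a later event invalidates it.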

If an application cannot tolerate the eventual consistency model, then it must look to one of the other solutions for handling data access, with RPC calls perhaps being the simplest method.

Regardless of which technologies are implemented, development teams must recognize that data management in a distributed environment is no small task and carries with it a number of pitfalls, not the least of which is the different ways each service can view the data. What is called a product in one service might be something very different for another service, and even if two services are talking about the same thing, the attributes themselves can be different or have slightly different meanings. Even the way IDs might be assigned to the products can vary among services, making it difficult to correlate the data across services.

When taking into account the differences in data storage technologies among services, the asynchronous nature of many systems, and the strategy of eventual consistency, data management can grow unimaginably complex. Even if services use the same storage systems, those systems can fall out of sync, leaving the data inconsistent. External tools (or additional development efforts) might be necessary to ensure that data remains consistent and viable among services. At the same time, each service must include the mechanisms necessary to ensure fault tolerance and availability and to be able to scale as necessary to support all read and write operations.

None of this is to say that microservices and complex data requirements are mutually exclusive, but rather that careful consideration must be given to how data will be treated within each service and across the entire application. The microservices ideal of loose coupling might prove beneficial on a number of levels, but only if data management is given the attention it requires.

Making microservices work

Despite the challenges that microservices present, many development teams think that microservices are still worth the effort. To help smooth the path to implementation, they can take a number of steps. One of the most important is to ensure that the services are partitioned in a way that provides the greatest benefit, with each one focused on a single business function, limited to a small set of responsibilities. The service should be independent enough so it can be easily upgraded or replaced, without impacting other services. Sometimes thinking in terms of resources (such as products or customers) or use cases (such as placing an order or adding a customer) can help define service boundaries.

Development teams should also design services to tolerate changes and failures from other services so the application remains viable. If a service should become unavailable, the other services should still be able to run, even if it means losing functionality. At the same time, teams must implement a system to monitor the services and log events so failures can be detected and addressed as quickly as possible.

Teams can also turn to such technologies as API gateways or application containers to address some of the microservice challenges. API gateways, for example, can help manage all the API calls that come with microservices, helping to mitigate issues such as network bottlenecks or resource contention. Containers, on the other hand, can make implementing and managing microservices a lot easier. The growing popularity and adoption of Docker containers might be coming just at the right time for the microservice world.

When deploying a microservices solution, teams must be careful that they’re not simply shifting the complexity from inside the monolithic application to the outside environment supporting the distributed application. They must also ensure that each team has the necessary DevOps expertise to manage the complexities of a microservices deployment and be willing to invest the time and resources necessary to do it right. Microservices are not about shortcuts or cutting corners. They’re about improved efficiency and manageability. The microservices architecture is a promising model, with plenty of potential, but it is a model still in its infancy, and there is much yet to learn about the best ways to fit all the pieces together.

  • Robert young

    I Can Still See The Emperor’s Vegetables
    Still siloed applications, it’s just that each silo is a sheaf of soda straws strapped together.

    — The service is small, lightweight, self-contained, and runs in its own process (often on its own server), maintaining firm boundaries from the other services. It can use the programming language and technologies most appropriate to its function, without regard to what other microservices have done.

    Anyone remember CORBA? Old wheel, meet new wheel.

    — A development team should be able to update and deploy its microservice without impacting other services and without being impacted by changes in those services.

    Gee. That’s what an Organic Normal Form™ schema does. And gives you data integrity without (much) code.

    — The goal is to let the microservices do the work and use the communication mechanism simply to send messages back and forth…

    OK. That’s precisely how Object Oriented Design/Programming was offered up. We’d (well, the best and brightest of us) build a set of Lego blocks, toss them in a box, and lesser of us would build applications from some set of said blocks. In reality, folks just make bespoke ValueObject and ActionObject as we go along, aka FORTRAN/COBOL in lower case letters. About -infinity reuse.

    — Because the codebase is not as massive as that of a monolithic solution

    Yeah, right. The codebase will be more massive, just as they are with java/web applications. Yes, a lot of the code is hidden behind the framework facades, but it’s still there, grinding away. One might note that Linus thought long about whether to build linux on a micro kernel or monolithic approach. Guess which way he went? Without a global state to rely on, the defensive programming requirements alone will double the size of a function point.

  • MartyP

    Same Models – Different Day
    This is just the pendulum swinging. This argument has gone back and forth over the years. Big App with a concentrated focus (where scope gets out of hand)? Or, lots of Small Apps to encompass ALL of the scope of the business (then just cobble them together)? Pick your poison.
    The one thing I disagree with is a decentralized database. In other words, many modules each with its own datastore. I have never seen complete separation of data. Has anyone ever seen two different depts that each maintain their own customer list? And, neither one will give up their tables/data. The coordination to keep data and related fields in sync would be enormous.
    Perhaps a hybrid with small apps and a central datastore would be the best way to go.

  • Hakim Ali

    Good Read
    Thanks for writing this, good read.

    BTW I also disagree with the decentralized database approach, but that too comes down to what is more important to the app: consistency or scalability?