Scaling Microservices Architectures in the Cloud

With the velocity of data growing at a rate of 50% per year, the issue of scaling a Microservices architecture is critical in today's demanding enterprise environments. Just creating the Microservices is not sufficient. Scaling a Microservices architecture requires careful choices with respect to the underlying infrastructure, as well as a strategy for how to orchestrate the Microservices after deployment.

Choosing the Right Infrastructure Topology

While designing an application composed of multiple Microservices, the architect has multiple deployment topology options with increasing levels of sophistication as discussed below:

1. Deployment on a single machine within the enterprise or cloud

Most legacy systems, and many existing systems today, are deployed using this simplest of topologies. A single, typically fairly powerful multi-core/multi-processor server is chosen as the hardware platform, and the user relies on symmetric multiprocessing on the hardware to execute as many operations concurrently as possible, while the Microservice client applications themselves may be hosted on different machines, possibly across multiple clouds. While this approach has worked for the first generation of emerging cloud applications, it will clearly not scale to meet increasing enterprise processing demands, since the single server becomes a processing and latency bottleneck.

2. Deployment across a cluster of machines in a single enterprise or cloud environment

A natural extension of the initial approach is to deploy the underlying infrastructure that hosts the Microservices across a cluster of machines within an enterprise or private cloud.  This organization provides greater scalability, since machines can be added to the cluster to pick up additional load as required.  However, it suffers from the drawback that if the Microservice client applications are themselves distributed across multiple cloud systems, then the single cluster becomes a latency bottleneck since all communication must flow through this cluster. Even though network bandwidth is abundant and cheap, the latency of communication can lead to both scaling and performance problems as the velocity of data increases.

3. Deployment across multiple machines across the enterprise, private and public clouds

The communications latency problem of the ‘single cluster in a cloud’ approach described above is overcome by deploying the software infrastructure on multiple machines/clusters distributed across the enterprise and public/private clouds as required. Such an organization is shown in the figure below. This architecture ensures linear scalability because local Microservices in a single cloud/enterprise environment can communicate efficiently via the local infrastructure (typically a messaging engine for efficient asynchronous communication or, if the requirement is simple orchestration, then a request/reply REST processing engine). When a Microservice needs to send data to another Microservice in a different cloud, the transfer is achieved via communication between the “peers” of the underlying infrastructure platform. This leads to the most general-purpose architecture for scaling Microservices in the cloud, since it minimizes latency and exploits all of the available parallelism within the overall computation.
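The peer-to-peer forwarding described above can be sketched in a few lines. This is purely illustrative: the service names, the `PEERS` table, and the `route` function are invented for this sketch, not part of any real platform API.

```python
# Hypothetical sketch of peer routing: deliver locally when the target
# Microservice lives in the same cloud, otherwise forward via the peer
# infrastructure node that hosts it. All names here are illustrative.

LOCAL_SERVICES = {"billing", "inventory"}      # services hosted in this cloud
PEERS = {"analytics": "peer-cloud-b"}          # remote service -> hosting peer

def route(message, target):
    """Deliver locally when possible; otherwise forward via the peer link."""
    if target in LOCAL_SERVICES:
        return f"local delivery of {message!r} to {target}"
    peer = PEERS[target]
    return f"forwarded {message!r} to {target} via {peer}"

print(route("order-123", "billing"))     # stays on the local infrastructure
print(route("order-123", "analytics"))   # crosses clouds via the peer link
```

The key property is that only cross-cloud traffic pays the inter-cloud latency cost; local traffic never leaves the local messaging engine.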

[Figure: Cloud Diagram — Microservices infrastructure distributed across enterprise, private, and public clouds]

Orchestration and Choreography: Synchronous vs. Asynchronous

In addition to the infrastructure architecture, the method of Orchestration/Choreography has a significant effect on the overall performance of the Microservices application. If the Microservices are orchestrated using a classic synchronous mechanism (blocking calls, each waiting for downstream calls to return), performance problems can occur as the call-chain increases in size. A more efficient mechanism is to use an asynchronous protocol, such as JMS or another enterprise-messaging protocol/tool (IBM MQ, MSMQ, etc.), to choreograph the Microservices. This approach ensures that there are no bottlenecks in the final application-system, since most of the communication is via non-blocking asynchronous calls, with blocking, synchronous calls limited to things like user interactions. A simple rule of thumb: avoid blocking calls wherever possible.
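The choreographed style can be sketched with in-process queues. In this sketch, Python's `queue.Queue` merely stands in for an enterprise messaging engine such as JMS or IBM MQ, and the two "services" (a validator and an enricher, both invented here) run as independent consumers that never block on a downstream reply:

```python
import queue
import threading

# Minimal sketch of asynchronous choreography. queue.Queue stands in for a
# messaging engine (JMS, IBM MQ, etc.). Each "service" consumes from one
# queue and publishes to the next; no service waits on a downstream reply.

validate_q, enrich_q, results = queue.Queue(), queue.Queue(), queue.Queue()

def validator():
    while True:
        order = validate_q.get()
        if order is None:                # sentinel: shut down the pipeline
            enrich_q.put(None)
            break
        enrich_q.put({**order, "valid": True})

def enricher():
    while True:
        order = enrich_q.get()
        if order is None:
            break
        results.put({**order, "region": "EU"})

threads = [threading.Thread(target=validator), threading.Thread(target=enricher)]
for t in threads:
    t.start()

for i in range(3):                       # publish three messages and a sentinel
    validate_q.put({"id": i})
validate_q.put(None)

for t in threads:
    t.join()

processed = [results.get() for _ in range(3)]
print(processed)
```

The producer returns immediately after publishing; each stage works at its own pace, which is exactly why a long call-chain does not become a chain of blocked callers.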


Microservices Architecture: Scalability, DevOps, Agile development

With the emergence of the digital economy, there's a big buzz around Cloud, Big Data and the API economy. In all the noise, we sometimes still forget that to deploy systems across these domains, one still has to create services and compose multiple services into working distributed applications. Microservices have emerged as the latest development trend, driven by these increasingly demanding requirements and by the perceived failure of Enterprise-wide SOA ("Service-Oriented Architecture") projects.

SOA was all the hype since the early 2000s but disappointed for many reasons, in spite of many successful projects. SOA was (and still is in many quarters) perceived as being too complex: developers spent months just deciding on the number and nature of interfaces of a given service. Often, services were too big and had hundreds of interfaces, making them difficult to use; at the other extreme, some developers designed services that had just a few lines of code, making them too small. For the most part, it was difficult for users to choose the right granularity for a service. Microservices solve these and several other problems with classic SOA.

Before getting into details, it is important to define the modern meaning of the term "Application". Applications are now "Collections of Components/Services, strung together via connections in the form of asynchronous message-flows and/or synchronous request/reply calls". The participating Services may be distributed across different machines and different clouds (on-premise, hybrid and public).

The Emergence of Microservices
Microservices emerged from the need to 'make SOA work', that is, to make SOA productive, fast and efficient, and from the need to deploy and modify systems quickly. In short, Microservices support agile development. Key concepts that have emerged over the past ten years are:

(a) Coarse-grained, process-centric Components: Each Microservice typically runs in a separate process as distinct from a thread within a larger process. This ensures the component is neither too small nor too large. There isn’t any hard and fast rule here, except to ensure that Microservices are not ‘thread-level bits of code’.

(b) Data-driven interfaces with a few inputs and outputs. In practice, most productive Microservices typically have only a few inputs and outputs (often less than 4 each). The complexity of the SOA world in specifying tens or even hundreds of interfaces has disappeared. Importantly, inputs and outputs are also ‘coarse grained’ – typically XML or JSON documents, or data in any other format decided by the developer. Communication between Microservices is document-centric – an important feature of Microservices architecture. For those experienced enough to appreciate the point, one can think of a Microservice as a “Unix pipe” with inputs and outputs.

(c) No external dependencies: The implementation of each Microservice contains all the necessary dependencies, such as libraries, database-access facilities, operating-system specific files, etc. This ensures that each Microservice can be deployed anywhere over a network without depending on external resource libraries being linked in. The sole communication of a Microservice with the external world is via its input and output ‘ports’.

(d) Focused functionality: A Microservice is typically organized around a single, focused capability: e.g. access/update a database, cache input data, update a bank account, notify patients, etc. The lesson from "complex SOA" is that the Services should not become too large, hence the term "Microservices".

(e) Independent interfaces: Each Microservice typically has a GUI associated with it, for end-user interaction and configuration. There is no relationship between the GUIs of different Microservices – they can all be arbitrarily different. Each Microservice is essentially a 'product' in its own right, in that it has defined inputs and outputs and defined, non-trivial functionality.
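The document-centric, "Unix pipe" style of point (b) can be made concrete with a tiny sketch. The service name (`notify_patients`, echoing the example in point (d)) and the document fields are invented for illustration; the point is simply one coarse-grained JSON document in, one out:

```python
import json

# Illustrative sketch of a document-centric Microservice: a single JSON
# document flows in, a single JSON document flows out, like a Unix pipe.
# The function name and fields are hypothetical.

def notify_patients(doc_in: str) -> str:
    """Input: JSON document of appointments; output: JSON list of notices."""
    appointments = json.loads(doc_in)
    notices = [
        {"patient": a["patient"], "message": f"Reminder: visit on {a['date']}"}
        for a in appointments
    ]
    return json.dumps(notices)

result = notify_patients('[{"patient": "Ann", "date": "2024-05-01"}]')
print(result)
```

Because the interface is just "document in, document out", such a service can be re-implemented in any language without its neighbours noticing, which is what makes points (b) and (c) work together.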

Benefits of Microservices and Microservices integration
In a Microservices architecture, Applications are composed by connecting instances of Microservices via message-queues (choreography) or via request/reply REST calls (orchestration). Compared to classical monolithic application design, this approach offers many benefits for development, extensibility, scalability and integration.

—> Easy application scalability: Since an application is composed of multiple Microservices which share no external dependencies, scaling a particular Microservice instance in the flow is greatly simplified: if a particular Microservice in a flow becomes a bottleneck due to slow execution, that Microservice can be run on more powerful hardware for increased performance, or one can run multiple instances of the Microservice on different machines to process data elements in parallel.

Contrast the easy Microservices scalability with monolithic systems, where scaling is nontrivial; if a module has a slow internal piece of code, there is no way to make that individual piece of code run faster. To scale a monolithic system, one has to run a copy of the complete system on a different machine and even doing that does not resolve the bottleneck of a slow internal-step within the monolith.

It should be noted that integration platforms based on synchronous, process-centric technology such as BPEL, BPMN, and equivalents also suffer from this scalability problem. For instance, if a single step within a BPEL process is slow, there's no way to scale that individual step in isolation. All steps within a process-flow need to be executed on the same machine by design, resulting in significant hardware and software costs to run replicated systems for limited scalability.
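The "multiple instances in parallel" remedy can be sketched as follows. Here threads stand in for instances on separate machines, a shared queue stands in for the message broker, and the doubling step is a placeholder for the slow transformation; all names are illustrative:

```python
import queue
import threading

# Sketch: scaling a slow Microservice by running N identical instances
# that pull work items from a shared queue. Threads stand in for
# instances deployed on separate machines.

work_q = queue.Queue()
done = []
done_lock = threading.Lock()

def slow_service(instance_id):
    while True:
        try:
            item = work_q.get_nowait()
        except queue.Empty:              # queue drained: instance exits
            return
        with done_lock:
            done.append((instance_id, item * 2))   # the "slow" transformation

for item in range(8):                    # enqueue the work before starting
    work_q.put(item)

instances = [threading.Thread(target=slow_service, args=(i,)) for i in range(3)]
for t in instances:
    t.start()
for t in instances:
    t.join()

print(sorted(value for _, value in done))
```

Nothing about the service changes when going from one instance to three; only the deployment does, which is the contrast with the monolith described above.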

—> Simplified Application Updates: Since any given Microservice in the Application can be upgraded/replaced (even at runtime, provided the runtime system provides the required support) without affecting the other 'parts' (i.e., Microservices) in the application, one can update a Microservices-based application in parts. This greatly aids agile development and continuous delivery.

—> Multi-Language development: Since each Microservice is an independent 'product' (it has no external implementation dependencies), it can be developed in any programming language; a single Microservices Application may thus include one or more Microservices developed in Java, C, C++, C#, Python and other languages supported by the Microservices platform.

—> Easier Governance: Since each Microservice runs in its own process and shares no dependencies with others, it can be monitored, metered and managed individually. One can thus examine the status, inputs, outputs and data flows through any Microservice instance in an application without affecting either the other Microservices or the running system.
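Per-service metering falls out naturally from this isolation. A minimal sketch, assuming document-in/document-out services as described earlier (the wrapper, service names and `metrics` counter are invented for illustration):

```python
from collections import Counter

# Sketch: because each Microservice is an isolated unit with defined
# inputs and outputs, a thin wrapper can meter one service without
# touching any other. The wrapper and services below are hypothetical.

metrics = Counter()

def metered(name, service_fn):
    def wrapper(doc):
        metrics[name] += 1               # count documents through this service
        return service_fn(doc)
    return wrapper

cache = metered("cache", lambda doc: doc)
notify = metered("notify", lambda doc: {**doc, "notified": True})

for d in [{"id": 1}, {"id": 2}]:
    notify(cache(d))

print(dict(metrics))
```

In a real deployment the counter would live in a monitoring system rather than in-process, but the principle is the same: observation happens at the service boundary, not inside shared code.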

—> Decentralized Data Management: In contrast to the classical three-tier development model, which mandates a central data repository and thereby restricts the service-developer, each Microservice can store data as it pleases: a given Microservice can use SQL, a second MongoDB, a third a mainframe database, etc. Just as there are no implementation dependencies between Microservices, there are also no "database use" dependencies or even guidelines. The developer is thus free to make choices that fit the design of the particular Microservice at hand.

—> Automated Deployment and Infrastructure Automation: Implementation independence allows assemblies of Microservices to be 'moved' from one deployment environment to another (for example, Development —> QA —> Staging —> Production), based just on profile settings. This 'one click' move between environments greatly aids DevOps and agile development.
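The profile-driven move between environments can be sketched as a lookup: the assembly stays fixed while only the profile changes. The profile keys, broker addresses and replica counts below are invented for illustration:

```python
# Sketch: the same Microservice assembly "moved" between environments
# purely by swapping a profile. All keys and values are hypothetical.

PROFILES = {
    "dev":     {"broker": "localhost:61616",  "replicas": 1},
    "qa":      {"broker": "qa-broker:61616",  "replicas": 2},
    "staging": {"broker": "stg-broker:61616", "replicas": 2},
    "prod":    {"broker": "prod-broker:61616", "replicas": 6},
}

def deploy(assembly, env):
    """Bind an unchanged assembly to the settings of one environment."""
    profile = PROFILES[env]
    return {"assembly": assembly, **profile}

print(deploy("order-flow", "dev"))
print(deploy("order-flow", "prod"))   # same assembly, different profile
```

Because nothing in the assembly itself names an environment, promotion from Development to Production is a configuration change, not a code change.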

Each of these topics can and likely will be the subject of a separate blog post. Until then, enjoy your development!