CTO and Principal Analyst, Intellyx
Nov 21, 2023 | 3 mins read
While Kubernetes can scale up to fleets of clusters, deploying many such fleets across different clouds in different geographies and connecting them all seamlessly remains a challenge.
The missing piece of this puzzle: automating event-driven microservices
To scale cloud native infrastructure across global, multi-cloud, hybrid deployments, enterprises must automatically generate the messages, queues (topics), and application code needed to connect the microservices they want to deploy in scalable Kubernetes clusters.
Event-driven microservices are particularly well-suited for applications that rely on a complex set of interconnected services, as they use asynchronous communication mechanisms for loose coupling, scalability, resiliency, and other benefits.
Chaining events together in a flow can greatly simplify many types of communication patterns. Instead of having to model each communication between microservices as a request/response pattern, it can be simpler for many applications to model a communication as a series of one-way events that don’t require a response.
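To make the contrast concrete, here is a minimal Python sketch of the one-way style, using a toy in-process event bus rather than a real broker; the event and handler names are purely illustrative.

```python
# A minimal sketch (plain Python, no broker) of one-way, fire-and-forget events.
# PaymentSubmitted and handle_validation are illustrative names, not product APIs.

from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, DefaultDict, List

@dataclass
class PaymentSubmitted:          # a one-way event: no response expected
    payment_id: str
    amount: float

# A toy in-process event bus standing in for a real broker.
_subscribers: DefaultDict[type, List[Callable]] = defaultdict(list)

def subscribe(event_type: type, handler: Callable) -> None:
    _subscribers[event_type].append(handler)

def publish(event) -> None:
    # Fire-and-forget: the publisher does not wait for a reply.
    for handler in _subscribers[type(event)]:
        handler(event)

def handle_validation(event: PaymentSubmitted) -> None:
    print(f"validating payment {event.payment_id} for {event.amount}")

subscribe(PaymentSubmitted, handle_validation)
publish(PaymentSubmitted(payment_id="p-1", amount=250.00))
```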
An event represents a state change to the business that the infrastructure must propagate to one or more microservices for processing. Such a business state change may require the execution of multiple microservices in a flow to process it correctly.
Having a business event trigger such a flow can greatly simplify the design and implementation of such an application.
Take a simple banking payment processing system, for example. Submitting a payment request is an event that changes the state of the bank’s business. The bank receives the funds, sends them on to the payee, and debits the payer’s account.
Executing this process, along with collecting the associated fees, changes the bank’s accounting balances.
A payment request typically requires multiple steps between the submission and confirmation events, as the figure below illustrates.
A bank validates the request and checks the account of the person or business requesting the payment to confirm they have sufficient funds (or credit).
The bank may check for a fraudulent payment request and potentially perform other checks depending on the type and size of the payment (e.g., an anti-money laundering or sanctions check). Each step in the process depends on the successful completion of the prior step.
Figure: Event Messages Executing a Simple Payment Processing Flow
The bank uses event messages to connect its microservices, instead of a request/response pattern (e.g., HTTP) or a remote procedure call (e.g., gRPC). This choice turns a complex connection pattern into a more manageable message flow across independently executable services. Using queued messages also improves reliability and scalability for this type of event processing.
Each stage in the payment process sends an event message to the microservice executing the next stage. There’s no need to send a reply message each time. Only at the end of the process, when all steps are complete, does the bank send a reply message to the payment submitter to confirm the payment.
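Here is a hedged sketch of what one such stage might look like, assuming a Kafka broker and the kafka-python client; the topic names (payments.submitted, payments.validated) are hypothetical stand-ins for whatever the bank actually provisions.

```python
# One stage in the chained payment flow, sketched with kafka-python.
# Topic names are hypothetical and must be provisioned ahead of time.

import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "payments.submitted",                      # this stage's input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    payment = message.value
    # Validate the request; on success, emit an event for the next stage.
    # No reply goes back to the submitter at this point in the flow.
    if payment.get("amount", 0) > 0:
        producer.send("payments.validated", payment)
```

The fraud-check and other downstream services would follow the same shape, each consuming the previous stage’s topic and producing to its own output topic, with only the final stage emitting a confirmation back to the submitter.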
Building such complex applications with event-driven microservices simplifies their design. Event messages propagate business state changes and trigger the actions that process them, rather than the application polling a database to discover such changes.
It’s important to model each microservice as a business domain function or subfunction. Input and output data drive the design of the function: the function accepts the input data, processes it, and creates the output data (if any).
Event messages carry input and output data between functions. Typically, an initial event triggers a processing flow in which the results of one microservice provide the data input to the next microservice, and so on.
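A minimal sketch of that modeling discipline, with illustrative event types that are not tied to any particular product: the handler is simply a function from an input event to an output event.

```python
# "Microservice as business function": accept the input event, apply the
# business rule, produce the output event (if any). Event types are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PaymentValidated:
    payment_id: str
    account_id: str
    amount: float

@dataclass
class FundsReserved:
    payment_id: str
    account_id: str
    amount: float

def reserve_funds(event: PaymentValidated, balance: float) -> Optional[FundsReserved]:
    """Input data in, business rule applied, output data out."""
    if balance >= event.amount:
        return FundsReserved(event.payment_id, event.account_id, event.amount)
    return None  # insufficient funds: no downstream event in this simplified sketch
```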
Such event-driven applications must also anticipate heavy traffic volumes: billions of payments processed daily, along with untold numbers of telemetry events. Cloud native computing environments such as distributed Kubernetes clusters are the perfect infrastructure for handling such volume.
Let’s say you wanted to use an event broker such as Kafka or RabbitMQ for your event-driven microservices implementation. You would have to configure and provision the necessary topics (queues) and include the topic names specifically in the application code for reading from and writing to them.
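For illustration, this is roughly what that manual step looks like with kafka-python’s admin client; the topic names echo the payment flow above and are hypothetical, and every service that uses them must then repeat those same names in its own code.

```python
# Manually provisioning the topics a payment flow depends on.
# Topic names, partition counts, and replication factors are illustrative.

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics(new_topics=[
    NewTopic(name="payments.submitted", num_partitions=3, replication_factor=1),
    NewTopic(name="payments.validated", num_partitions=3, replication_factor=1),
    NewTopic(name="payments.confirmed", num_partitions=3, replication_factor=1),
])
```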
You’ve automated everything else you need for developing and deploying event-driven microservices in the cloud: CI/CD pipelines for building, testing, and deploying containers, provisioning Kubernetes, executing security tests, configuring observability, etc.
You can provision Kafka as a service using APIs and automation, but the creation, management, and service interaction with topics remains a manual exercise.
Automating these actions is the missing piece of the cloud native computing puzzle for event-based microservices.
You select and connect your microservices using the Fiorano Event Process Composer, define the data model, and deploy the microservices to Kubernetes clusters.
Fiorano automatically generates the messages, queues, and the application code needed to write to and read from them.
The event-driven microservices pattern is gaining popularity, not only for its quantitative benefits such as scalability and reliability, but also because it’s a better way to design and implement complex application flows.
While cloud native infrastructure technologies offer comprehensive automation for application development, testing, deployment, and observability, provisioning the queues (topics) for event-driven communication typically remains a manual process, as does including the queue names in the application code.
A technology capable of abstracting away and automating this part of the process amplifies the advantages of the event-driven microservices pattern and provides the missing piece of the cloud native computing puzzle.
Copyright © Intellyx LLC. Fiorano is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article.