Why traditional ESBs are a mismatch for Cloud-based Integration

[Figure: Cloud ESB]

The explosive adoption of cloud-based applications by modern enterprises has created an increased demand for cloud-centric integration platforms. The cloud poses daunting architectural challenges for integration technology: decentralization, unlimited horizontal scalability, elasticity and automated recovery from failures. Traditional ESBs were never designed to solve these problems. Here are a few reasons why ESBs are not the best bet for cloud-based integration.

Performance and Scalability
Most ESBs do simplify integration, but they use a hub-and-spoke model that limits scalability, since the hub becomes a communication bottleneck. Scaling linearly in the cloud requires a more federated, distributed, peer-to-peer approach to integration, with automated failure recovery. Traditional ESBs lack this approach.

JSON and REST
ESBs evolved when XML was the dominant data-exchange format for inter-application communication and SOAP the standard protocol for exposing web services. The world has since moved on to JSON, and today mobile and enterprise APIs are exposed as RESTful services. ESBs that are natively based on XML and SOAP are less relevant in today’s cloud-centric architectures.
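To make the shift concrete, here is a minimal sketch of the kind of JSON-over-REST endpoint modern APIs favor, using only the JDK’s built-in HttpServer; the path and payload are invented for illustration and are not taken from any product discussed here.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrderStatusEndpoint {
    public static void main(String[] args) throws Exception {
        // One REST resource returning plain JSON: no SOAP envelope, no WSDL.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders/42/status", exchange -> {
            byte[] body = "{\"orderId\": 42, \"status\": \"SHIPPED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

The equivalent SOAP service would need an XML envelope, a WSDL contract and a SOAP stack on both ends, which is exactly the weight that mobile and cloud clients no longer want to carry.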

Security and Governance
These are key concerns for any enterprise moving to the cloud. With multiple applications in the cloud, enterprises are not always comfortable with centralized security hubs. Security and governance need to be decentralized to exploit the elasticity of the cloud. Old-guard middleware products were typically deployed within the firewall and were never architected to address decentralized security and governance.

Latency and Network connectivity
When your ESB lives in the external cloud, latency becomes a critical challenge as endpoints are increasingly distributed across multiple public and private clouds. Traversing a single hub in such an environment leads to unpredictable and significant performance problems, which can only be addressed with new designs built from the ground up with cloud challenges in mind.

Digitization can revolutionize customer experience

We live in a world where competitors and peers continually raise the bar for customer experience. Businesses are always looking to deepen engagement with their target audience, but that audience’s expectations have changed, thanks to the digital customer experience.

A common characteristic of successful businesses is the ability to adapt. Anything digital can and will be recorded, archived, analyzed, and shared. With the proliferation of digital channels, businesses are challenged to find better ways to authentically engage across channels, be it with customers, partners or even employees. To tap into new revenue-growth potential, companies must adopt new customer-centric practices, including offering an integrated customer experience across digital and analog channels to meet customer preferences.

Digital transformation at the customer-experience level is not just a matter of front-end, customer-facing functions; those are only one part of a transformational challenge that spans technology and processes. It is a matter for the whole organization and requires back-end transformation as well. It demands an enterprise-wide approach, or better, a roadmap toward such a holistic approach: a strategy with a fully integrated operating model and technology that can rapidly and scalably provision connections to proliferating cloud, mobile, Internet of Things (IoT) and business-partner APIs.

As businesses dive deeper into providing the best customer experience, there is a greater need to integrate systems to cope with fast-moving phenomena such as cloud and mobile, involving both cloud-to-cloud and cloud-to-on-premises integration. These complex connections can be established with a hybrid integration platform, a new way to connect cloud-based, mobile and on-premises resources. Hybrid integration platforms such as Fiorano Cloud can deal with the increasing volume, speed and variety of information that new digital channels bring, while supporting the multichannel architecture associated with mobile and other multichannel initiatives.

Here is an example of how digitization can significantly improve customer experience. Delaware North, a global leader in hospitality and food service, recently revolutionized its customer experience by deploying the Fiorano platform as a digital business backplane to efficiently track individual customer venues and provide relevant business intelligence to its customers, which is also a competitive edge for Delaware North. The business intelligence and real-time data Delaware North provides to its clients are critical to their marketing campaigns, allowing them to assess operational efficiency and make adjustments that improve top-line revenues. The new infrastructure of customer-centric, interconnected systems enables operational excellence, optimization and efficiency, and opens up new areas of opportunity. Delaware North intends to steadily extend the digital business backplane across its global locations, readying its systems for today’s highly connected, digitized economy.

Processes, data, agility, prioritization, technology, integration, information, business-IT alignment and digitization, among others, are all conditions for better customer experiences.

Scaling Microservices Architectures in the Cloud

With the velocity of data growing at a rate of 50% per year, scaling a microservices architecture is a critical issue in today’s demanding enterprise environments. Just creating the microservices is not sufficient. Scaling a microservices architecture requires careful choices about the underlying infrastructure, as well as a strategy for orchestrating the microservices after deployment.

Choosing the right Infrastructure topology

When designing an application composed of multiple microservices, the architect has several deployment-topology options, with increasing levels of sophistication, as discussed below:

1. Deployment on a single machine within the enterprise or cloud

Most legacy systems, and many existing systems today, are deployed using this simplest of topologies. A single, typically fairly powerful multi-core or multi-processor server is chosen as the hardware platform, and the user relies on symmetric multiprocessing to execute as many operations concurrently as possible, while the microservice client applications themselves may be hosted on different machines, possibly across multiple clouds. While this approach has worked for the first generation of emerging cloud applications, it clearly will not scale to meet increasing enterprise processing demands, since the single server becomes a processing and latency bottleneck.
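As an illustration of this single-machine model, the sketch below (class and request names are invented) uses a thread pool sized to the machine’s cores to execute work items concurrently; every request still competes for the resources of that one server.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleMachineDispatcher {
    public static void main(String[] args) {
        // Symmetric multiprocessing: one pool sized to the machine's cores.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<String> requests = List.of("order-1", "order-2", "order-3");
        for (String request : requests) {
            // Each request is handled concurrently, but all of them compete
            // for the CPU, memory and network interfaces of this one server.
            pool.submit(() -> handle(request));
        }
        pool.shutdown();
    }

    static void handle(String request) {
        System.out.println("Processed " + request
                + " on " + Thread.currentThread().getName());
    }
}
```

Adding cores raises the ceiling, but the ceiling is still the single box, which is precisely the limitation the next two topologies remove.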

2. Deployment across a cluster of machines in a single enterprise or cloud environment

A natural extension of the initial approach is to deploy the infrastructure that hosts the microservices across a cluster of machines within an enterprise or private cloud. This organization provides greater scalability, since machines can be added to the cluster to pick up additional load as required. However, it suffers from the drawback that if the microservice client applications are themselves distributed across multiple cloud systems, the single cluster becomes a latency bottleneck, since all communication must flow through it. Even though network bandwidth is abundant and cheap, the latency of communication can lead to both scaling and performance problems as the velocity of data increases.

3. Deployment across multiple machines across the enterprise, private and public clouds

The communications-latency problem of the ‘single cluster in a cloud’ approach described above is overcome by deploying the software infrastructure on multiple machines/clusters distributed across the enterprise and public/private clouds as required. Such an organization is shown in the figure below. This architecture ensures linear scalability because local microservices in a single cloud/enterprise environment communicate efficiently via the local infrastructure (typically a messaging engine for efficient asynchronous communication or, if the requirement is simple orchestration, a request/reply REST processing engine). When a microservice needs to send data to another microservice in a different cloud, the transfer is achieved via communication between the “peers” of the underlying infrastructure platform. This is the most general-purpose architecture for scaling microservices in the cloud, since it minimizes latency and exploits all of the available parallelism within the overall computation.

 

[Figure: Cloud Diagram]
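As a sketch of how a microservice might hand data to its local messaging peer in this topology, the following uses the standard JMS API; the JNDI names, queue and payload are hypothetical stand-ins, since the concrete wiring depends on whichever messaging engine the local peer runs.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class LocalPeerPublisher {
    public static void main(String[] args) throws Exception {
        // Look up the local peer's connection factory and queue via JNDI.
        // Both names are hypothetical and depend on local configuration.
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) jndi.lookup("LocalPeerConnectionFactory");
        Queue queue = (Queue) jndi.lookup("orders.inbound");

        Connection connection = factory.createConnection();
        try {
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // The send is local and asynchronous: if the consuming
            // microservice lives in another cloud, the infrastructure peers
            // relay the message between themselves, so the sender never
            // pays the cross-cloud latency synchronously.
            TextMessage message = session.createTextMessage("{\"orderId\": 42}");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```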

 

Orchestration and Choreography: Synchronous vs. Asynchronous

In addition to the infrastructure architecture, the method of orchestration/choreography has a significant effect on the overall performance of the microservices application. If the microservices are orchestrated using a classic synchronous mechanism (blocking calls, each waiting for downstream calls to return), performance problems can arise as the call-chain grows. A more efficient mechanism is to use an asynchronous protocol, such as JMS or another enterprise-messaging protocol/tool (IBM MQ, MSMQ, etc.), to choreograph the microservices. This approach ensures that there are no bottlenecks in the final application, since most of the communication is via non-blocking asynchronous calls, with blocking, synchronous calls limited to things like user interactions. A simple rule of thumb: avoid blocking calls wherever possible.
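To make the contrast concrete, here is a sketch of the asynchronous, choreographed style using the standard JMS API (the queue names and the processing step are invented for the example): rather than blocking on downstream replies, the service registers a listener and reacts as events arrive.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class ShippingService {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) jndi.lookup("LocalPeerConnectionFactory");
        Queue in = (Queue) jndi.lookup("orders.paid");      // upstream event
        Queue out = (Queue) jndi.lookup("orders.shipped");  // downstream event

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(out);

        // Choreography: no central coordinator blocks on this step. The
        // service reacts to "order paid" events and emits "order shipped"
        // events for whichever service listens next in the chain.
        MessageConsumer consumer = session.createConsumer(in);
        consumer.setMessageListener(message -> {
            try {
                String order = ((TextMessage) message).getText();
                producer.send(session.createTextMessage(order)); // non-blocking hand-off
            } catch (Exception e) {
                e.printStackTrace(); // in production, route to an error queue
            }
        });
        connection.start(); // begin asynchronous delivery
    }
}
```

Because each hop is a fire-and-forget hand-off, a slow service lengthens the chain’s end-to-end time but never stalls its upstream callers, which is the property the synchronous call-chain lacks.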

 

API Management for Everyone

[Figure: API Management]

Today people don’t like talking about ESBs anymore. Instead, the buzz is around cloud, big data, the application programming interface (API) economy, and digital transformation. Application integration is still a core enterprise IT competency, of course, but much of what we’re integrating and how we’re integrating it has shifted from the back office to the omnichannel digital world.

And here’s Fiorano, with one foot still in the traditional ESB space (especially in the developing world, where even basic integration is a challenge) and the other foot squarely in the modern digital world. Now they’re launching an API management tool into a reasonably mature market.

At first glance, this move might seem rather foolish, as this market is already crowded: the big platform incumbents all participate, as do CA, Axway, Intel, SOA Software, Apigee, WSO2, MuleSoft, and several others, all of whom have been hammering out the details for a few years now.

But there’s method to Fiorano’s madness. The critical architectural decision that enabled them to compete a dozen years ago has turned out to be extraordinarily prescient, as it makes their approach to API management both more cloud-friendly and more user-friendly than the rest of the pack.

Peer-to-Peer with Queues

The secret to Fiorano’s product success is its unique queue-based, peer-to-peer architecture. Queuing technology, of course, has been with us for decades, but it traditionally provided reliability only for point-to-point integrations.

The rise of ESBs in the 2000s saw many vendors building centralized queue-based buses that basically followed a star topology. To scale such architectures and avoid single points of failure required various complex (read: expensive and proprietary) machinations that limited the scalability of the approach.

By building a peer-to-peer architecture, in contrast, Fiorano never relied on a single centralized server to run its bus. Instead, the platform spawns peers as needed that know how to interact with each other directly, thus avoiding the central chokepoint inherent in competitors’ architectures. The queues connecting the peers to each other, as well as to other endpoints, provide the architecture’s reliability and fault tolerance.
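As a toy illustration of the idea (emphatically not Fiorano’s implementation), the sketch below connects two ‘peers’ using nothing but their own local queues: messages flow directly from one peer to another with no central hub in the path, and the queue buffers the hand-off.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PeerToPeerDemo {
    // Each peer owns a local inbound queue; delivery is peer-to-peer.
    static class Peer implements Runnable {
        final String name;
        final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(100);
        Peer target; // the peer this one sends to

        Peer(String name) { this.name = name; }

        void send(String message) throws InterruptedException {
            // Direct hand-off into the target peer's queue: no hub, so
            // there is no central chokepoint to saturate or to fail.
            target.inbox.put(message);
        }

        @Override
        public void run() {
            try {
                while (true) {
                    // The queue buffers messages, so a slow or briefly
                    // unavailable consumer does not lose work; that
                    // buffering is what gives the topology its reliability.
                    System.out.println(name + " received: " + inbox.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Peer a = new Peer("peer-A");
        Peer b = new Peer("peer-B");
        a.target = b;
        new Thread(b).start();
        a.send("hello from A"); // flows A -> B directly
    }
}
```

Scaling out is then a matter of spawning more peers, each with its own queues, rather than growing one central hub.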

The result is an approach that is inherently cloud-friendly, even though the minds at Fiorano built it before the cloud hit the marketplace. Each peer can run on premises or in a cloud instance, and thus scale elastically with the cloud.

Today, as the cloud becomes a supporting player in the digital world and user preferences drive an explosion of technology touchpoints, Fiorano has managed to put in place the underlying technology that now supports the API management needs of modern digital environments.

The API Management Story

I also covered the API Management market starting in 2002, when vendors called it the Web Services Management market. Then it transformed into SOA Management, then Runtime SOA Governance, and now API Management (although Gartner awkwardly uses the term Application Services Governance).

After all, Web Services are a type of API, and managing them is an aspect of governance. Today, we’d rather refer to services as APIs in any case, as our endpoints are more likely to be RESTful, HTTP-based interfaces than SOAP-based Web Services.

This rather convoluted evolutionary path for the API Management market explains why there are so many players – and why many of them are the old guard incumbents. But it also indicates that many of the products in the market are likely to have older technology under the covers, perhaps better suited for first-generation SOA technologies than the modern cloud/digital world.

Fiorano, however, has avoided this trap because of its cloud/digital-friendly architecture, as the diagram below illustrates. At the heart of the Fiorano API Management architecture are the Gateway Servers, which handle run-time management tasks, and the Management Servers, which support policy creation, publication, and deployment.

Both types of servers take advantage of Fiorano’s peer-to-peer architecture, providing cloud-based elasticity and fault tolerance, the flexibility to deploy on premises or in the cloud, and unlimited linear scalability.