Lukas Weber
Senior Enterprise Architect
Jan 16, 2024 | 2 min read

The Information Technology (IT) sector is often abuzz with “buzzwords” describing the latest trends, reflecting current innovations, and shaping how we think about and interact with technology. The most relevant examples include “cloud native computing,” “Big Data,” “Artificial Intelligence (AI),” and “machine learning,” among others.

Moreover, each of these terms encapsulates a set of complex ideas, technologies, and methodologies that are driving the evolution of digital landscapes across all economic contexts. For instance, cloud native computing describes the principles and practice of building software as “scalable applications in modern, dynamic environments” that run on public, private, and hybrid clouds.

The cloud native architecture, or stack, primarily comprises individual containerized microservices orchestrated by Kubernetes. Accompanying technologies include serverless functions, cloud native processors, and immutable infrastructure deployed via declarative code, all designed to automate the application’s deployment, scaling, monitoring, and maintenance, thereby minimizing an organization’s operational burden.
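The idea behind deployment via declarative code can be illustrated with a minimal sketch of the reconciliation loop that orchestrators like Kubernetes run internally: the operator declares a desired state, and the platform works out the actions needed to reach it. The data model and names below are purely illustrative assumptions, not the real Kubernetes API.

```python
# Minimal sketch of the declarative "desired state" idea behind
# orchestrators such as Kubernetes. All names are illustrative,
# not the real Kubernetes API.

def reconcile(desired, actual):
    """Return the actions an orchestrator would take to make the
    actual state (service -> running replicas) match the declared
    desired state (service -> wanted replicas)."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

desired_state = {"payments-api": 3, "fraud-check": 2}
actual_state = {"payments-api": 1}
print(reconcile(desired_state, actual_state))
# [('start', 'payments-api', 2), ('start', 'fraud-check', 2)]
```

Because the input is a declaration rather than a sequence of commands, the same loop also handles scaling and self-healing: whenever reality drifts from the declared state, the orchestrator corrects it.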

The Challenges Driving Cloud Native

This discussion would not be complete without touching on—at the very least—the challenges faced by modern organizations of all shapes and sizes, including:

  • The requirement to operate in multiple centers across the globe. Remote working, online shopping, global banking, and supply chain and logistics operations shipping goods across multiple geographic regions are all part of our post-pandemic world.
  • The need for software applications to be dynamic and flexible. Rapidly evolving customer and employee needs mandate that organizations build inherently dynamic, agile, and flexible applications.
  • The need for massive scalability across the organization’s software ecosystem to cater to an ever-increasing volume of users, data, and interactivity.
  • The requirement to provide a hybrid infrastructure, consisting of both on-premises and cloud resources, that caters to older and newer technologies alike.
  • The need to process transactions and analyze data in real time, or near real time, continuously providing management and decision-makers with up-to-date information.
  • The imperative to comply with geo-locational data security and regulatory requirements.

Why Cloud Native?

Succinctly stated, the cloud native paradigm solves all these challenges.

Therefore, the question that must be asked and answered is not whether cloud native will solve these challenges, but how the cloud native architecture resolves them.

Note: It is essential to remember that cloud native, as defined above, is a paradigm or framework for building software applications. These applications are not limited to running only on cloud-based servers.

As we have already defined the cloud native paradigm, let’s cite a use case that describes how this architecture—or framework—solves these challenges, with the principle of hybridism as its foundation.

Imagine a global financial payments platform, like PayPal, developed using a containerized microservices-based architecture, with customers across six continents, each region with its own set of requirements and complexities. For instance, processing and storing European Union customer data is bound by the GDPR. India, meanwhile, has its own requirements: its laws require the platform to operate as a domestic payment gateway.

Moreover, global time zones pose a significant challenge to real-time transaction processing. For instance, customers in New Zealand (UTC+12) are 22 hours ahead of customers in Hawaii, USA (UTC-10).
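The 22-hour gap quoted above can be verified with a few lines of standard-library Python, using the fixed offsets from the example (daylight-saving shifts are ignored for simplicity):

```python
from datetime import timedelta, timezone

# Fixed offsets from the example: New Zealand UTC+12, Hawaii UTC-10
# (daylight-saving adjustments are deliberately ignored).
nzt = timezone(timedelta(hours=12), "NZT")
hst = timezone(timedelta(hours=-10), "HST")

gap = nzt.utcoffset(None) - hst.utcoffset(None)
print(gap.total_seconds() / 3600)  # 22.0
```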

Therefore, the most prominent challenges in this example are massive scalability, globalism, dynamism, hybridism, global regulatory requirements, and real-time transaction processing.

Let’s see how the cloud native paradigm solves these problems, starting with the concept of a hybrid infrastructure as its foundation.

The principle of hybridism means that the application runs on a variety of servers located on-premises and in the cloud, spanning both private and public clouds. Partnered with the principle of globalism, this model describes a containerized microservices-based architecture running on servers located on-premises and in the cloud across all geographic regions.

For example, returning to our use case, this financial payments platform will run on servers located in data centers on Indian soil. Moreover, the microservices and databases processing EU customer data will be situated in the EU, meeting the requirements set out in the GDPR and ensuring that customer PII (Personally Identifiable Information) does not leave the European Union.
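A data-residency rule of this kind can be sketched as a simple routing table that sends each customer’s traffic to a region-local deployment. The region names and country-to-region mapping below are illustrative assumptions, not a real platform configuration.

```python
# Hypothetical sketch: route each customer's requests to the region
# whose microservices and databases may legally hold their data, so
# PII never leaves its jurisdiction. All names are illustrative.

RESIDENCY_RULES = {
    "DE": "eu-central",       # GDPR: EU customer data stays in the EU
    "FR": "eu-central",
    "IN": "india-domestic",   # India: domestic payment gateway requirement
    "US": "us-west",
    "NZ": "apac-southeast",
}

def route_request(customer_country):
    """Return the compliant region for a customer's country code."""
    try:
        return RESIDENCY_RULES[customer_country]
    except KeyError:
        raise ValueError(f"no compliant region configured for {customer_country}")

print(route_request("DE"))  # eu-central
print(route_request("IN"))  # india-domestic
```

In a real deployment the same mapping would typically live in a global load balancer or service mesh rather than application code, but the principle is identical.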

Not only do localized hybrid servers satisfy regional regulatory requirements; they also provide for real-time transaction processing and data analysis. Therefore, as noted above, New Zealand customers, 22 hours ahead of Hawaii, can transact in real time, or near real time, during their country’s office hours and need not wait nearly 22 hours for transaction processing, as they would if the organization’s servers were located only in Hawaii.

Note: These examples are possibly hyperbolic. However, their intention is merely to describe the principle of real-time transaction processing.

The same principles apply to the requirement of dynamism. Because the application is developed according to the cloud native formula, a containerized microservices-based architecture, it is easy to add new functionality as containerized microservices and to update existing microservices without affecting the application’s uptime metrics.
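Updating a microservice without affecting uptime is usually achieved with a rolling update: replicas are replaced one at a time so the remaining replicas keep serving traffic throughout. Kubernetes automates this; the sketch below is a deliberately simplified model, not the real mechanism.

```python
# Illustrative sketch of a rolling update: replicas of a microservice
# are swapped to the new version one at a time, so at every step the
# rest of the fleet is still up and serving traffic.

def rolling_update(replicas, new_version):
    """Yield the fleet state after each single-replica replacement."""
    for i in range(len(replicas)):
        replicas[i] = new_version  # swap one replica; the rest stay up
        yield list(replicas)

fleet = ["v1", "v1", "v1"]
steps = list(rolling_update(fleet, "v2"))
for step in steps:
    print(step)
# ['v2', 'v1', 'v1']
# ['v2', 'v2', 'v1']
# ['v2', 'v2', 'v2']
```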

In Conclusion…

Circling back to this article’s title, “When is Cloud Native not on the Cloud,” the answer to this question has been elucidated, and inferred, throughout this content: cloud native is a paradigm, an architecture that provides for the automated deployment (via declarative code), monitoring, and maintenance of containerized microservices, all orchestrated by a container orchestration platform like Kubernetes.

Fiorano takes this concept one step further by adding an abstraction layer above the Kubernetes platform, simplifying the architectural complexity of orchestrating globally distributed microservices. This abstraction layer is a cloud native integration platform that abstracts the API endpoints of each microservice, addressing the requirement of massive scalability by providing the functionality to scale resources in and out based on user demand.
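The scale-in/scale-out decision itself reduces to a simple calculation of the kind an orchestration or integration layer can automate. The thresholds, per-replica capacity, and names below are illustrative assumptions only, not Fiorano’s or Kubernetes’ actual scaling logic.

```python
import math

# Hypothetical autoscaling sketch: pick a replica count that covers
# the current load, clamped to configured bounds. All parameters are
# illustrative assumptions, not any platform's real algorithm.

def target_replicas(requests_per_sec, capacity_per_replica=100.0,
                    min_replicas=1, max_replicas=50):
    """Replicas needed to serve the given load, within [min, max]."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(target_replicas(850))  # scale out: 9 replicas
print(target_replicas(120))  # scale in: 2 replicas
print(target_replicas(0))    # idle: clamped to the minimum, 1 replica
```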
