
Scaling


Scalability

The Distributed Process model implemented by the Fiorano Platform automatically ensures that as many operations as possible run in parallel. The data flows between applications are represented by graphs, and all independent trees within the application graph represent concurrent computations. Within a single tree, operations are implicitly serialized based on built-in data-flow dependencies.
Scalability is critical to ensure that the platform scales both with current projects (likely in themselves to be highly distributed, probably across company boundaries) and with future growth. The Fiorano platform addresses scalability as explained in the sections below.
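As an illustration of the graph model described above, the following is a minimal sketch (not Fiorano's actual implementation; the class and method names are hypothetical) of how the independent trees in an application graph can be identified, each of which may then execute concurrently:

```java
import java.util.*;

// Hypothetical sketch: an application graph held as an adjacency list.
// Each connected component (an "independent tree" in the text above) can
// run concurrently, while routes inside a component impose serial
// data-flow ordering.
class FlowGraph {
    private final Map<String, Set<String>> adj = new HashMap<>();

    // Records a data route between two service instances.
    public void addRoute(String from, String to) {
        adj.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        adj.computeIfAbsent(to, k -> new HashSet<>()).add(from);
    }

    // Returns the independent sub-flows; each returned set is a group of
    // services with no data-flow dependency on any other group, so the
    // groups can be scheduled in parallel.
    public List<Set<String>> independentFlows() {
        List<Set<String>> flows = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        for (String node : adj.keySet()) {
            if (seen.contains(node)) continue;
            Set<String> flow = new HashSet<>();
            Deque<String> stack = new ArrayDeque<>(List.of(node));
            while (!stack.isEmpty()) {
                String n = stack.pop();
                if (!flow.add(n)) continue;   // already visited in this flow
                stack.addAll(adj.get(n));
            }
            seen.addAll(flow);
            flows.add(flow);
        }
        return flows;
    }
}
```

For example, a graph with routes HTTP→XSLT→DB and a separate FTP→Mail pair yields two independent flows that can run at the same time.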

Transparent Resource Addition

Fiorano Platform promotes a linear 'build as you grow' model, which allows an enterprise to add software resources in the form of Fiorano Peers at network end-points to absorb additional load on the platform. For instance, if the load on a given set of Peers processing data becomes too high, new Peers can be added incrementally to the network at runtime without disrupting existing services and distributed processes. Since Fiorano Peers reuse existing enterprise hardware, resource addition typically does not involve additional hardware deployment unless explicitly required.

Dynamic Change Support

Distributed processes and applications deployed on the Fiorano platform can be extended and modified dynamically at runtime by adding or removing services and data routes without stopping or disrupting existing processes in any way. Existing services within an application can be individually controlled via start/stop/upgrade/modify semantics, allowing incremental, dynamic, runtime changes to distributed processes.

Parallel Data Flow

With dispersed computation and parallel data flow between nodes, Fiorano Peers scale naturally and seamlessly as new Peer nodes and Enterprise Services are added across the network. Data flowing between distributed services does not have to pass through a central hub, because each Fiorano Peer incorporates a JMS-compliant messaging server that allows direct Peer-to-Peer connections to be set up on the fly between any set of Peers across the network. This on-demand creation of Peer-to-Peer data-flow connections is unique to the Fiorano platform and enables linear scalability and performance as new Peers are added to the system. Furthermore, since Peers can be hosted on existing (already reasonably powerful) hardware at the network end-points, enterprises do not have to purchase expensive hardware each time performance requirements increase.
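The scalability claim above can be put in back-of-the-envelope terms: with hub routing, every message crosses the central hub, so the hub's capacity caps the whole network; with direct Peer-to-Peer links, each added peer brings its own capacity. This is an illustrative model only, not Fiorano code; the class and parameter names are invented:

```java
// Hypothetical throughput model contrasting hub-and-spoke routing with
// direct peer-to-peer routing, as described in the text.
class ScalingModel {
    // Aggregate msgs/sec when all traffic funnels through one central hub:
    // capped by the hub, no matter how many peers are added.
    public static double hubThroughput(double hubCapacity,
                                       double perPeerCapacity, int peers) {
        return Math.min(hubCapacity, perPeerCapacity * peers);
    }

    // Aggregate msgs/sec with direct peer-to-peer links: each new peer
    // adds its own capacity, so throughput grows linearly with peers.
    public static double p2pThroughput(double perPeerCapacity, int peers) {
        return perPeerCapacity * peers;
    }
}
```

With 10 peers at 100 msgs/sec each and a hub rated at 500 msgs/sec, the hub-routed network saturates at 500 msgs/sec while the peer-to-peer network reaches 1000 msgs/sec.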

Server-level Load Balancing

Load balancing is supported at both the service and server levels.

Scaling by Adding More Peers to the Network

At the server level, the Fiorano peer-to-peer architecture enables increasing loads to be seamlessly distributed across the network through the dynamic addition of peer servers. Since data flows between distributed processes are not routed through a central hub, the Fiorano architecture avoids the load-related faults that plague most existing integration infrastructure solutions. The Peer-to-Peer architecture also allows dynamic load balancing to be added to applications that are already running.

To scale an Event Process across multiple peers, add more peers to the network and then redeploy some of the running components onto these new peers as follows:

  1. Add the new peers to the network using the Administration tools.
  2. Stop one or more of the components in the flow: right-click the component and select the Kill option from the drop-down menu. This stops the component's execution on its current peer.
  3. Change the Peer Server on which the component is to be redeployed by selecting the appropriate new Fiorano Peer Server target from the list of available peers in the Properties panel of the service instance.

Thread Count of Components

By default, a service component is single-threaded; that is, only one document is processed by a service component at any given time. If the CPU is not fully utilized during a load test, performance and throughput can be increased by raising the number of sessions/threads for a component. This linearly increases the number of data elements processed by the component concurrently. The number of threads per component can be set in the Properties window for the component's Input Port.
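The sessions-per-component idea can be sketched with a plain Java thread pool. This is a hypothetical stand-in for a real service component, not Fiorano's API; the session count bounds how many documents are processed concurrently:

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of a service component with N sessions: the thread
// pool size plays the role of the "number of sessions" property, so at
// most sessionCount documents are in flight at once.
class Component {
    private final ExecutorService sessions;

    public Component(int sessionCount) {
        sessions = Executors.newFixedThreadPool(sessionCount);
    }

    // Processes a batch of documents; up to sessionCount of them are
    // handled concurrently. Results come back in submission order.
    public List<String> processAll(List<String> documents) {
        List<Future<String>> futures = new ArrayList<>();
        for (String doc : documents) {
            // Stand-in for the component's real per-document work.
            futures.add(sessions.submit(() -> doc.toUpperCase()));
        }
        List<String> out = new ArrayList<>();
        for (Future<String> f : futures) {
            try {
                out.add(f.get());
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        sessions.shutdown();
        return out;
    }
}
```

A `Component` built with 5 sessions mirrors the configuration shown in Figure 1: five documents can be in process at once instead of one.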

While optimizing threads in this manner, always begin with the bottleneck component, since there is typically only one bottleneck component in a given process at a time. Once the first bottleneck is fixed, the bottleneck typically moves to another component in the flow. This technique helps optimize the number of sessions for each component based on the ability of the hardware to process the data flowing through the Event Process.
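The bottleneck-first rule follows from the fact that a serial flow's end-to-end throughput is capped by its slowest component, so tuning any other component cannot raise overall throughput. A small illustrative helper (hypothetical names, not part of the platform) that picks the component to tune first:

```java
import java.util.*;

// Hypothetical sketch: given per-component throughput measurements from a
// load test, the slowest component is the bottleneck and the place to add
// sessions first; the flow as a whole runs no faster than that component.
class Bottleneck {
    // docsPerSecond maps each component name to its measured throughput.
    public static String slowest(Map<String, Double> docsPerSecond) {
        return Collections.min(docsPerSecond.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    // End-to-end throughput of a serial flow = the minimum stage throughput.
    public static double flowThroughput(Map<String, Double> docsPerSecond) {
        return Collections.min(docsPerSecond.values());
    }
}
```

For a flow measuring HTTP at 900 docs/sec, XSLT at 120, and DB at 400, the flow delivers 120 docs/sec and XSLT is the component to tune first; once XSLT exceeds 400 docs/sec, DB becomes the new bottleneck.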


Figure 1: Input port properties of a DB component with the number of sessions set to 5


Refer to the Load Balancing section for service-level load balancing by distributing load across multiple service instances.
