
High Availability Server in Shared Mode

Figure 1: Sharing of servers

In this mode of High Availability, the primary-secondary broker pair shares a common database and does not replicate data over the network. If the primary fails, all Fiorano applications fail over from the primary and reconnect to the designated secondary backup broker. The primary and secondary brokers use the network channel between them to routinely exchange heartbeats and watch for any break in the connection, switching states when one is detected. A locking mechanism (as explained in the Working with a LockFile section) is also employed to determine the state of the servers. The database that is common to both servers is referred to as the shared database.

Shared HA Precondition

The shared database is critical for the servers to function, as the servers store all their data in it. The shared database must therefore be accessible to the servers at all times; unavailability of the shared database could lead to data loss and data corruption.

Server States

At any point in time, a server can be in one of the following states:

  • Active
  • Passive
  • Activating – The server switches to this state as soon as it acquires the lock. Once all its services are activated, it switches to Active State.

The following diagram explains the transition to various states:


Figure 2: Transition to various states

  • failure detected – the link between the servers is broken
  • Lock Lost – the lock over the LockFile is lost
  • Lock Obtained – the lock over the LockFile is obtained

When the server starts up, it tries to acquire a lock on the lock file. If it acquires the lock successfully, it switches to the ACTIVATING state, and then to the ACTIVE state once all its services have been activated. Unlike in replicated HA, where the servers wait for each other to come up (that is, in the WAITING state), a server in shared mode does not need to wait for its backup server to come up, because the two servers share a common database and no database synchronization is required, as it is for servers working in replicated mode.
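
The idea behind the locking mechanism can be pictured with the standard Linux flock utility: whichever process obtains an exclusive lock on a shared file proceeds, and the other does not. The sketch below is conceptual only, not the server's own mechanism, and the lock file path is illustrative.

CODE
# Conceptual sketch only; this is not how the Fiorano server acquires its lock.
# The process that obtains the exclusive lock would activate; the other stays passive.
flock -n /mnt/sharedDB/lock.txt -c "echo 'lock obtained: ACTIVATING'" \
  || echo "lock not obtained: staying PASSIVE"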

After switching to the ACTIVE state, the server keeps trying to connect to its backup server. When the backup server starts up, it switches to the PASSIVE state. At this point, if the ACTIVE server fails, the hot standby PASSIVE server is ready to move into the ACTIVE state and start accepting requests from the clients.

Configuring Fiorano 11 High Availability Servers

The Fiorano 11 installer comes with preconfigured profiles for shared mode that are ready to run on a single machine.

The default profiles to be used for Primary and Secondary servers are listed below.

Server                                      Location of Profile
Fiorano Enterprise Server HA Primary        $Fiorano_home/esb/server/profiles/haprofile_shared/primary/FES
Fiorano Enterprise Server HA Secondary      $Fiorano_home/esb/server/profiles/haprofile_shared/secondary/FES
Fiorano Peer Server HA Primary              $Fiorano_home/esb/server/profiles/haprofile_shared/primary/FPS
Fiorano Peer Server HA Secondary            $Fiorano_home/esb/server/profiles/haprofile_shared/secondary/FPS

To launch the server with one of these profiles, use the script below in $Fiorano_home/esb/server/bin:

CODE
server.bat/sh

Since the shared HA pair uses a common database, the location of the database has to be specified while starting each server. The -dbPath command line option is used to specify the location of the shared database.

Examples:

To launch the primary server of the Fiorano Enterprise shared HA profile:

CODE
server.bat -mode fes -profile haprofile_shared/primary -dbPath D:/sharedDb

To launch the secondary server of the Fiorano Peer shared HA profile:

CODE
server.bat -mode fps -profile haprofile_shared/secondary -dbPath E:/sharedDb

When running the servers on the same machine, both HA servers should have their databases pointing to the same physical directory.

Configuration Steps

  • Setting up the lock file
  • Setting up the shared database
  • Configuring the profile
  • Changing the location of log files

Setting up the Lock File

For more information, refer to the Working with a LockFile section.

Setting up the Shared Database

A directory is shared on a third machine with read/write permission using the NFS protocol. If the operating system supports NFS version 4, it is recommended that the shared database be shared using NFS version 4; otherwise, it should be shared using NFS version 3.

The user has to make sure that the operating system hosting the server supports the protocol used for sharing the shared database.

For more information on how to share a directory using NFS v4, refer to http://www.brennan.id.au/19-Network_File_System.html#nfs4.
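
As a minimal illustration of an NFSv4 export on Linux (the path and client subnet below are examples, not Fiorano defaults), an entry can be added to /etc/exports:

CODE
# /etc/exports - example entry; path and client subnet are illustrative
# fsid=0 marks this directory as the NFSv4 pseudo-root
/shared/db  192.168.1.0/24(rw,sync,fsid=0)

After editing /etc/exports, running exportfs -ra re-exports the entries without restarting the NFS service.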

To share a directory on Windows using NFS, download and install Windows Services for Unix / Subsystem for UNIX-based Applications, provided by Microsoft, on the machine. The Client for NFS and Server for NFS packages must be installed. Some Microsoft operating systems, such as Windows Server 2003 R2 and the Windows Vista Enterprise and Ultimate editions, include these packages by default.

Following are the steps to share the directory using NFS v3 on Windows OS:

  1. Create a Group of hosts that can access the share.
  2. Open the Services for Unix Administration dialog box and perform the following actions:
    1. Click the Server For NFS node.
    2. Click the Client Groups tab for options to add, remove, or rename groups.
    3. Add a group and add clients (IP addresses) to the group.
    4. Click Apply.


      Figure 3: Group "sharedHA" having two hosts

  3. Create a user mapping:
    For a Windows user "John S" to modify an NFS share, the uid of the user sharing the directory and the uid of the user accessing the share need to be the same. If the server is running on a UNIX OS, a mapping is needed between the UNIX user and the Windows user, since Windows users have no uid; that is, the user John S has to be mapped to a UNIX user.
    Following are the steps to create the mapping:
    1. Click the User Name Mapping node in the Services for Unix Administration dialog box and then click the Configuration tab.
    2. Configure the service either using NIS or using the password and group files.
    3. If you choose to create the mapping using the password and group files, copy the files /etc/passwd and /etc/group from the UNIX machine to the Windows machine, provide their paths in the corresponding boxes, and apply the changes.


Figure 4: Configuration section in the Services for Unix Administration dialog box
       
    4. Click the Maps node to create simple or advanced maps. In an advanced map, users can list the Windows and UNIX users and create a mapping between them. The figure below illustrates the User Name Mapping window with an advanced map between the Windows user John S and a UNIX user named 'hatest' with uid 510. With this mapping, users with uid 510 on machines that belong to the access group of a share are given read/write access to the share.

  4. Share the directory using the 'nfsshare' command. Open a command prompt and type the following command:

    CODE
    nfsshare sharedDB=C:/shared/db -o rw=sharedHA 
    • sharedDB refers to the share name.
    • C:/shared/db refers to the path of the shared directory.
    • rw=sharedHA specifies that read and write permissions should be given to the group sharedHA.
    • Type the command nfsshare /? for help on how to use the nfsshare command. A quick way to verify the share is shown below.
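
Running nfsshare with no arguments lists the NFS shares exported by Server for NFS, which is a quick way to confirm that sharedDB is visible:

CODE
nfsshare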
Mounting the shared database on the machine hosting the server:

The NFS share can be accessed on UNIX/Solaris OS by mounting it using the mount command. On Windows, the NFS share can be accessed by adding it as a network drive.

Examples:

  • If the shared database is on Windows and the HA server is on UNIX:
CODE
mount -t nfs -o rw 192.168.1.213:sharedDB /home/testUser/sharedDB

The IP address '192.168.1.213' refers to the Windows machine hosting the shared database, 'sharedDB' is the share name of the directory being shared on the Windows machine, and '/home/testUser/sharedDB' is the path on the local machine where the share will be mounted.
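
To remount the share automatically at boot, an entry can be added to /etc/fstab on the machine hosting the server (using the same host and mount point as the example above):

CODE
# /etc/fstab - example entry using the host and mount point from above
192.168.1.213:sharedDB  /home/testUser/sharedDB  nfs  rw  0  0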

The path on the local machine where the shared database is mounted should be given as the value for the '-dbPath' command line option while starting the server.

To start up the FES shared HA Primary profile, type the command:

CODE
server.sh -mode fes -profile haprofile_shared/primary -dbPath /home/testUser/sharedDB

  • If the shared database is on UNIX and the HA server is on UNIX:

CODE
mount -t nfs4 -o rw 192.168.1.213:/ /home/testUser/sharedDB

The IP address 192.168.1.213 refers to the UNIX machine hosting the shared database, which is shared using NFSv4, and /home/testUser/sharedDB is the path on the local machine where the share will be mounted.
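
The exports visible on the remote host can be checked with showmount before mounting (the IP address below is the example host above):

CODE
showmount -e 192.168.1.213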

The path on the local machine where the shared database is mounted should be given as the value for the -dbPath command line option while starting the server.

To start up the FPS shared HA Primary profile, use the command:

CODE
server.sh -mode fps -profile haprofile_shared/primary -dbPath /home/testUser/sharedDB

  • If the shared database is on UNIX/Windows and the HA server is on Windows:

Suppose the uid of the user sharing the database is 510. A mapping should be created between the Windows user and the UNIX user using the Services for Unix Administration software. Once the mapping is created, map the shared directory to a network drive. On adding a network drive, a confirmation dialog box appears, as shown in the figure below, displaying the mapping used to access the NFS share.


The added network drive path should be given as the value for the -dbPath command line option while starting the server.
If the shared database is mapped to a network drive letter, say Z:, type the following command to start the FES shared HA Primary profile:

CODE
server.bat -mode fes -profile haprofile_shared/primary -dbPath Z:/
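
Alternatively, with the Client for NFS installed, the share can be attached from a command prompt using the mount command it provides (the host and share name below are the examples used earlier):

CODE
mount \\192.168.1.213\sharedDB Z: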

Configuring the FES/FPS HA Profile

Fiorano 11 provides the ability to configure HA through Fiorano Studio, which simplifies configuration in offline mode.

To configure FES/FPS HA, perform the following steps:

  1. Open the HA profile (shared).
  2. Right-click the profile and select FES/FPS Shared HA from the pop-up menu.


  3. The FES/FPS Shared HA dialog box appears, displaying the various configurable parameters. These parameters are the same as the parameters specified for a profile in replicated mode, and both profiles can be configured in a single dialog box. Refer to the Configuring High Availability Servers section for a description of each parameter and for how to configure both profiles in a single dialog box.
  4. Save the profile.

Changing the location of log files

The log files for servers running in replicated mode are created in their respective databases. Since servers running in shared mode share a common database, by default the log files are also shared by both servers. This configuration has to be changed for the primary and secondary servers before startup.

This can be done by modifying the file 'Configs.xml', which is located under the 'conf' directory of the profile, that is, $Fiorano_Home/esb/server/profiles/<profile_name>/<profile_type>/<server_type>/conf

Where

  • <profile_name> is the name of the profile.
  • <profile_type> is the type of the profile, that is, primary or secondary.
  • <server_type> is the type of the server, that is, FES or FPS.

For example, for the primary FES profile the file is $Fiorano_Home/esb/server/profiles/haprofile_shared/primary/FES/conf/Configs.xml. Given below is a screenshot of the file Configs.xml. The value of the FileName attribute of the Appender child nodes (circled in the screenshot) decides the location of the log file.

It is recommended that the server have its log files on the same machine as the server itself. Set the value of the FileName attribute to the path of a log file on the local machine.

In the modified Configs.xml shown below, the log files are located at /home/sharedHA.
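
The fragment below sketches what such entries might look like. The element and attribute names (Appender, FileName) follow the description above, but the exact structure of Configs.xml may differ, so treat this purely as an illustration:

CODE
<!-- Illustrative fragment only; the actual Configs.xml contains more configuration -->
<Appender name="OutLogger" FileName="/home/sharedHA/fes.log" ... />
<Appender name="ErrLogger" FileName="/home/sharedHA/fesErr.log" ... />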


Save the file and start the server.

Verifying HA Setup

On starting the Fiorano 11 Server that is part of an HA pair, the server prints debug information about its own state (ACTIVE, PASSIVE, and ACTIVATING). It also prints information about its backup server state whenever it detects a change.

Sample of statements on the console

CODE
[Tue Apr 28 16:57:50 IST xxxx] New status of remote server = PASSIVE
[Tue Apr 28 16:58:07 IST xxxx] New status of remote server = ACTIVATING
[Tue Apr 28 16:58:23 IST xxxx] New status of remote server = ACTIVE
[Tue Apr 28 16:57:52 IST xxxx] Primary Server switched to ACTIVATING

The console includes statements such as "Primary Server switched to ACTIVE" or "Secondary Server switched to PASSIVE", which indicate that the pair has successfully connected. A statement is also printed on the active server's console when the lock is successfully acquired over the LockFile.

Example: Successfully acquired lock on: Z:\lock.txt.

The figure below illustrates a successfully started Fiorano HA Peer Server in shared mode:

Shutting down the HA Server

For details on how to shut down the servers, please refer to the following sections:

Troubleshooting Steps

"SocketBindException" indicating that the HA Port is already bound:

This exception indicates that some other program is running on the HA port or that the last instance of the server was not properly killed. Stop or kill the application holding up the port and restart the server, or choose a different HA port. Note that changing the HA port also requires a corresponding change in the backup server's configuration for its Backup Server port.
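
To identify which process is holding the port, standard operating system tools can be used; replace <ha_port> with the configured HA port. On Windows:

CODE
netstat -ano | findstr :<ha_port>

The last column of the output is the PID. On UNIX:

CODE
lsof -i :<ha_port>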

Both servers go to the ACTIVE state in shared mode if the network link between them is broken. This can happen if the servers do not refer to the same LockFile.

The server throws the exception "log4j: ERROR Failed to flush writer, java.io.IOException: Stale NFS file handle". This can occur if the log files are present on the machine hosting the shared database and they have rolled over; rolling over of log files sometimes results in stale (invalid) file handles. To avoid this, refer to the Changing the location of log files section.
