A cluster is a set of connected Smartswitch instances installed on a set of servers (physical or virtual). Each node of a cluster may have:
- private IP addresses (mandatory);
- shared IP addresses (optional, depends on the cluster scheme chosen).
There are different types of Smartswitch cluster, each aimed at a different purpose. They are listed below in ascending order of complexity. These schemes can also be combined to build even more complex setups.
Component-distributed cluster
In this scheme Smartswitch components can be distributed across several nodes.
The components are: the SBC (Session Border Controller), the web interface and the database engine. There is only one instance of each component in the cluster.
This scheme might be useful when some system components carry a significant load. For example, the SBC and the database engine consume a lot of CPU, so placing them on separate nodes gives a performance benefit.
Prerequisites:
- number of nodes: 2-3
- all nodes must be in the same IP network
- IP addresses required = number of nodes
Geographically distributed cluster
In this scheme the nodes of a cluster are located in different geographical locations, but management is performed on one central node. The SBC, database engine and web interface are installed on each node.
This mode might be useful if you provide services in different geographical regions and want to be close to your customers and providers, who may differ from region to region. Being close to your customers and providers means less jitter, lower delay and better voice quality.
This scheme may also suit those who wish to minimize dependency on a single co-location provider: if one provider fails, you still have the others.
Prerequisites:
- number of nodes: 2+
- your providers must allow all of your cluster's IPs in their IP ACLs
- IP addresses required = number of nodes
Enhancements over a 1-server installation:
- if any node fails, all other nodes keep working as usual
- you can manually spread load across the cluster nodes
Limitations:
- configuration changes can be committed only on the master (management) node; configuration, statistics and reports are available on every node
- if the management node fails, you cannot change configuration or view statistics until it is back up
Fail-safe cluster (shared IP)
This scheme is useful for those who wish to provide fail-safe service to their customers.
Fail-safe operation is achieved by shared-IP take-over when one of the cluster nodes fails. Customers should originate calls to the shared IP, while cluster nodes originate calls to your providers from their private IPs, so providers have to set up their IP ACLs accordingly.
Prerequisites:
- number of nodes: 2+
- your providers must allow all of your cluster nodes' IPs in their IP ACLs
- IP addresses required = number of nodes + 1
- all nodes of the cluster must be in the same IP network
Enhancements over a 1-server installation:
- if any node fails, all other nodes keep working as usual
- the shared IP automatically moves to one of the slave servers when the master node fails, so customers see no service denial
- you can still use the nodes' private IPs to manually spread load across the cluster
Limitations:
- outbound SIP registrations do not transfer to a slave node when the master node fails, so a scheme in which you register with your provider is not fail-safe
- if the management node fails, you cannot change configuration or view statistics until it is back up
- configuration changes can be committed only on the master (management) node; configuration, statistics and reports are available on every node
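The document does not specify how Smartswitch implements the take-over internally. As an illustration only, shared-IP take-over of this kind is commonly built on VRRP, e.g. with keepalived; the interface name, router ID and all addresses below are assumed values:

```
# keepalived.conf -- illustrative sketch only; interface,
# virtual_router_id and addresses are assumed values.
vrrp_instance CLUSTER_VIP {
    state MASTER            # BACKUP on the slave nodes
    interface eth0          # NIC on the shared cluster network
    virtual_router_id 51
    priority 100            # use a lower priority on slave nodes
    advert_int 1            # advertise every second
    virtual_ipaddress {
        203.0.113.10/24     # the shared IP customers call
    }
}
```

This also shows why all nodes must sit in the same IP network: the shared address can only be claimed by a host on that subnet.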
Fail-safe cluster (DNS SRV)
This scheme is useful for those who wish to provide fail-safe service to their customers.
Fail-safe operation is achieved by setting the server preference order in the DNS SRV records of your DNS zone. Customers must support DNS SRV lookup in this case and consult DNS before originating calls. Cluster nodes will originate calls to your providers from their private IPs, so providers have to set up their IP ACLs accordingly.
Prerequisites:
- number of nodes: 2+
- your providers must allow all of your cluster nodes' IPs in their IP ACLs
- IP addresses required = number of nodes
- your customers must support DNS SRV lookup
Enhancements over a 1-server installation:
- if any node fails, all other nodes keep working as usual
- customers automatically switch to the available nodes
Limitations:
- outbound SIP registrations do not transfer to a slave node when the master node fails, so a scheme in which you register with your provider is not fail-safe
- if the management node fails, you cannot change configuration or view statistics until it is back up
- configuration changes can be committed only on the master (management) node; configuration, statistics and reports are available on every node
- not all equipment/software supports DNS SRV lookup, so this method is unavailable to partners that lack this feature
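The preference order lives in SRV records: a client tries the record with the lowest priority value first and falls back to higher values only when it fails. A minimal zone sketch (the domain, host names, addresses and port 5060 are illustrative values, not taken from the document):

```
; _sip._udp SRV records for fail-over: clients prefer node1
; (priority 10) and use node2 (priority 20) only if node1 is down.
_sip._udp.example.com. 300 IN SRV 10 0 5060 node1.example.com.
_sip._udp.example.com. 300 IN SRV 20 0 5060 node2.example.com.
node1.example.com.     300 IN A   198.51.100.1
node2.example.com.     300 IN A   198.51.100.2
```

A short TTL (here 300 seconds) keeps the window during which clients cache a dead node's record small.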
Automatic load-balancing cluster (DNS round-robin)
This scheme is useful for those who handle significant call volumes (500+ simultaneous calls).
You can balance the load across several servers using DNS round-robin.
Customers should originate calls to the cluster's domain name, while cluster nodes originate calls to your providers from their private IPs, so providers have to set up their IP ACLs accordingly.
Prerequisites:
- number of nodes: 2+
- set up your DNS zone for round-robin load balancing
- give your customers the cluster's domain name instead of an IP; customers should resolve the domain name for every call
- your providers must allow all of your cluster nodes' IPs in their IP ACLs
- IP addresses required = number of nodes
Enhancements over a 1-server installation:
- load is automatically spread between nodes
- you can still use the nodes' private IPs to manually spread load
Limitations:
- configuration changes can be committed only on the master (management) node; configuration, statistics and reports are available on every node
- if the management node fails, you cannot change configuration or view statistics until it is back up
- if one of the nodes fails, calls will still be sent to it; the resulting ASR degradation will be noticed by your customers
- not all equipment/software supports DNS lookup, so this method is unavailable to partners that lack this feature
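Round-robin balancing needs no special record type: the zone simply holds several A records for the same name, and resolvers rotate the order of the answers between queries. A sketch with assumed names and addresses:

```
; Multiple A records for one name: each new lookup may return the
; addresses in a different order, spreading new calls across nodes.
; example.com and the addresses are illustrative values.
sip.example.com. 60 IN A 198.51.100.1
sip.example.com. 60 IN A 198.51.100.2
sip.example.com. 60 IN A 198.51.100.3
```

Note that DNS has no health awareness here, which is exactly the limitation above: a failed node's address keeps being handed out until you remove its record and the TTL expires.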
Automatic load-balancing cluster (DNS SRV)
This scheme is useful for those who handle significant call volumes (500+ simultaneous calls).
You can balance the load across several servers using DNS SRV lookup.
Originating customers should be configured with the cluster's domain name and use DNS SRV. Cluster nodes will originate calls to your providers from their private IPs, so providers have to set up their IP ACLs accordingly.
Prerequisites:
- number of nodes: 2+
- set up SRV records in your DNS zone for load balancing
- give your customers the cluster's domain name instead of an IP; customers should perform a DNS SRV lookup for every call
- your providers must allow all of your cluster nodes' IPs in their IP ACLs
- IP addresses required = number of nodes
Enhancements over a 1-server installation:
- load is automatically spread between nodes
- you can still use the nodes' private IPs to manually spread load
Limitations:
- configuration changes can be committed only on the master (management) node; configuration, statistics and reports are available on every node
- if the management node fails, you cannot change configuration or view statistics until it is back up
- if one of the nodes fails, calls will still be sent to it; the resulting ASR degradation will be noticed by your customers
- not all equipment/software supports DNS SRV lookup, so this method is unavailable to partners that lack this feature
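For load balancing, SRV records share the same priority and differ in weight: an RFC 2782 client picks among equal-priority records with probability proportional to their weights. A minimal Python sketch of that selection logic (node names and weights below are made up for illustration):

```python
import random

def pick_srv_target(records, rng=random):
    """Pick one SRV target from the records sharing the lowest
    priority, with probability proportional to each record's
    weight (RFC 2782). Records are (priority, weight, target)."""
    lowest = min(priority for priority, _, _ in records)
    candidates = [(w, t) for p, w, t in records if p == lowest]
    total = sum(weight for weight, _ in candidates)
    point = rng.uniform(0, total)   # random point on the weight line
    running = 0
    for weight, target in candidates:
        running += weight
        if point <= running:
            return target
    return candidates[-1][1]        # guard against float rounding

# Illustrative record set: two active nodes share the load 60/40,
# a backup node is used only when both priority-10 nodes are gone.
records = [
    (10, 60, "node1.example.com"),
    (10, 40, "node2.example.com"),
    (20, 0,  "backup.example.com"),
]
```

Over many calls this sends roughly 60% of new traffic to node1 and 40% to node2, which is how uneven hardware can be weighted without touching the clients.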
Since each scheme has its own benefits and limitations, it may be useful to combine them. For example, you may want a fail-safe cluster (DNS SRV + shared IP) with automatic load balancing (DNS SRV). This way, fail-safe operation is supported both for clients that support DNS SRV and for those that do not, while load balancing works for the clients that support DNS SRV.
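Such a combination can be expressed in a single DNS zone. A sketch under the assumptions above (all names and addresses are illustrative): SRV-capable clients balance across the node addresses, while clients without SRV support resolve the plain A record, which points at the shared IP that the cluster moves between nodes on failure.

```
; SRV-capable clients: weighted load balancing across the nodes.
_sip._udp.example.com. 300 IN SRV 10 50 5060 node1.example.com.
_sip._udp.example.com. 300 IN SRV 10 50 5060 node2.example.com.
; Clients without SRV support: plain A record for the shared IP.
sip.example.com.       300 IN A   203.0.113.10
node1.example.com.     300 IN A   198.51.100.1
node2.example.com.     300 IN A   198.51.100.2
```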