High Availability for Slave Shards
When you enable replication for a database, RS replicates your data to a slave node to make sure that your data is highly available. If the slave node fails, or if the master node fails and the slave is promoted to master, the remaining master node becomes a single point of failure.
You can configure high availability for slave shards (slave HA) so that the cluster automatically migrates the slave shards to an available node. An available node is a node that:
- Meets slave migration requirements, such as rack-awareness.
- Has enough available RAM to store the slave shard.
- Does not also contain the master shard.
In practice, slave migration creates a new slave shard and replicates the data from the master shard to the new slave shard. For example:
Node:2 has a master shard, and node:3 has the corresponding slave shard. If either node fails:
- Node:2 fails, and the slave shard on node:3 is promoted to master.
- Node:3 fails, and the master shard is no longer replicated to the slave shard on the failed node.
In either case, if slave HA is enabled, a new slave shard is created on an available node, and the data from the master shard is replicated to the new slave shard.
- Slave HA follows all prerequisites of slave migration, such as rack-awareness.
- Slave HA migrates as many shards as possible based on available DRAM in the target node. When no DRAM is available, slave HA stops migrating slave shards to that node.
Configuring High Availability for Slave Shards
You can control slave HA at the database level and at the cluster level using rladmin or the REST API. You can enable or disable slave HA for a database or for the entire cluster.
When slave HA is enabled for both the cluster and a database, slave shards for that database are automatically migrated to another node in the event of a master or slave shard failure. If slave HA is disabled at the cluster level, slave HA will not migrate slave shards even if slave HA is enabled for a database.
By default, slave HA is enabled for the cluster and disabled for each database. As a result, to use slave HA for a database, you must explicitly enable it for that database.
To enable slave HA for a cluster using rladmin, run:
```
rladmin tune cluster slave_ha enabled
```
To disable slave HA for a specific database using rladmin, run:
```
rladmin tune db <bdb_uid> slave_ha disabled
```
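Because slave HA is disabled per database by default, you must also enable it for each database you want protected. As a sketch, assuming a database with UID 5 (the UID here is illustrative):

```shell
# Enable slave HA for the cluster (enabled by default; shown for completeness)
rladmin tune cluster slave_ha enabled

# Enable slave HA for the database with UID 5
rladmin tune db 5 slave_ha enabled
```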
Slave HA Configuration Options
You can see the current configuration options for slave HA with:
```
rladmin info cluster
```
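Since this command prints many cluster parameters, you can filter the output down to the slave HA settings, assuming a Unix shell:

```shell
# Show only the lines of cluster configuration that mention slave HA
rladmin info cluster | grep slave_ha
```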
By default, slave HA has a 10-minute grace period after node failure and before new slave shards are created. To configure this grace period from rladmin, run:
```
rladmin tune cluster slave_ha_grace_period <time_in_seconds>
```
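The grace period is given in seconds. For example, to lengthen it from the 10-minute default to 15 minutes (a hypothetical value):

```shell
# 15 minutes x 60 seconds = 900 seconds
rladmin tune cluster slave_ha_grace_period 900
```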
Slave shard migration is based on priority so that, in the case of limited memory resources, the most important slave shards are migrated first. Slave HA migrates slave shards for databases according to this order of priority:
- slave_ha_priority - The slave shards of the database with the higher slave_ha_priority integer value are migrated first.
To assign priority to a database, run:
```
rladmin tune db <bdb_uid> slave_ha_priority <priority>
```
- CRDBs - CRDB synchronization uses slave shards to synchronize between the replicas, so the slave shards of CRDBs are migrated next.
- Database size - It is easier and more efficient to move slave shards of smaller databases.
- Database UID - The slave shards of databases with a higher UID are moved first.
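As a sketch of the slave_ha_priority setting described above, the following assigns a higher priority to one database so that its slave shards are migrated before those of another (the UIDs and priority values are illustrative):

```shell
# Database 1: higher value, so its slave shards are migrated first
rladmin tune db 1 slave_ha_priority 2

# Database 2: lower value, so its slave shards are migrated after database 1
rladmin tune db 2 slave_ha_priority 1
```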
Both the cluster and the database have cooldown periods. After node failure, the cluster cooldown period prevents another slave migration due to another node failure for any database in the cluster until the cooldown period ends (Default: 1 hour).
After a database is migrated with slave HA, it cannot go through another slave migration due to another node failure until the cooldown period for the database ends (Default: 2 hours).
To configure these cooldown periods from rladmin, run:
For the cluster:
```
rladmin tune cluster slave_ha_cooldown_period <time_in_seconds>
```
For all databases in the cluster:
```
rladmin tune cluster slave_ha_bdb_cooldown_period <time_in_seconds>
```
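Both cooldown periods are given in seconds. For example, to shorten the cluster cooldown to 30 minutes and the per-database cooldown to 1 hour (hypothetical values):

```shell
# Cluster-wide cooldown: 30 minutes = 1800 seconds
rladmin tune cluster slave_ha_cooldown_period 1800

# Per-database cooldown: 1 hour = 3600 seconds
rladmin tune cluster slave_ha_bdb_cooldown_period 3600
```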
The following alerts are sent during slave HA activation:
- Shard migration begins after the grace period
- Shard migration fails because there is no available node (Sent hourly)
- Shard migration is delayed because of the cooldown period