This chapter describes the procedure for configuring local-disk failover for a forest. For details about how failover works and the requirements for failover, see High Availability of Data Nodes With Failover. For details on configuring shared-disk failover, see Configuring Shared-Disk Failover for a Forest. This chapter includes the following sections:
For other failover administrative procedures that apply to both local-disk and shared-disk failover, see Other Failover Configuration Tasks.
On the forest configuration page, make sure the failover enable button is set to true; the failover hosts section appears at the bottom of the page. Note that failover enable must be set to true at both the forest level and the group level for failover to be active.
Set the forest replicas. You can set one or more replica forests.
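The same configuration can be scripted with the Admin API instead of the Admin Interface. The following is a minimal sketch, assuming a primary forest named myFailoverForest and a replica forest named replica1 (both names are placeholders for your own forests, which must already exist):

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

(: Enable failover on the primary forest and register a replica forest.
   Forest names are examples; substitute your own. :)
let $config := admin:get-configuration()
let $config := admin:forest-set-failover-enable(
    $config, xdmp:forest("myFailoverForest"), fn:true())
let $config := admin:forest-add-replica(
    $config, xdmp:forest("myFailoverForest"), xdmp:forest("replica1"))
return admin:save-configuration($config)
```

As with the Admin Interface procedure, failover enable must also be set to true at the group level for failover to be active.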
The forest is now configured with the specified replica forests and is set up for failover. Before you can use the primary forest, you must attach it to a database. You cannot attach the replica forests to a database; they are automatically kept up-to-date as you update content in the primary forest.
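The attach step above can also be done with the Admin API. A minimal sketch, assuming a database named myDatabase and the primary forest myFailoverForest (both names are placeholders):

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

(: Attach only the primary forest; replica forests must not be
   attached to a database themselves. :)
let $config := admin:get-configuration()
let $config := admin:database-attach-forest(
    $config, xdmp:database("myDatabase"), xdmp:forest("myFailoverForest"))
return admin:save-configuration($config)
```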
If a forest fails over to a failover host, causing a replica forest to take the role of the primary forest, the replica forest remains in the open state until the host unmounts the forest. If you have a failed-over forest and want to revert it to the original primary host (that is, unfailover the forest), you must either restart the forest that is open or restart the host on which that forest is open. Do this only if the original primary forest has a state of sync replicating, which indicates that it is up-to-date and ready to take over. After you restart the forest that is currently open, the forest automatically opens on the primary host (provided the original primary forest is in the sync replicating state). Make sure the primary host is back online and any problems are corrected before attempting to unfailover the forest. To check the status of the hosts in the cluster, see the Cluster Status Page in the Admin Interface. To check the status of the forest, see the Forest Status Pages in the Admin Interface.
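You can also check the forest state and restart the currently open forest with built-in functions. The following is a sketch, assuming the example forest names used earlier in this chapter (myFailoverForest as the original primary, replica1 as the forest currently open after failover):

```xquery
xquery version "1.0-ml";

(: Confirm the original primary forest reports "sync replicating"
   before restarting the forest that is currently open. :)
let $state := fn:string(
    xdmp:forest-status(xdmp:forest("myFailoverForest"))//*:state)
return
  if ($state eq "sync replicating")
  then xdmp:forest-restart(xdmp:forest("replica1"))
  else fn:error((), "NOT-READY",
    fn:concat("Original primary forest state is: ", $state))
```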
If the Mount State is unmounted, the forest might not have completed mounting. Refresh the page and the Mount State should indicate that the forest is open.
The forest is restarted, and if the primary host is available, the primary host will mount the forest. If the primary host is not available, the first failover host will try to mount the forest, and so on until there are no more failover hosts to try. If you look in the
ErrorLog.txt log file for the primary host, you will see messages similar to the following:
2010-09-13 20:16:47.751 Info: Mounted forest myFailoverForest locally on /space/marklogic/Forests/myFailoverForest
2010-09-13 20:16:47.751 Info: Forest replica accepts forest myFailoverForest as the master with timestamp 2564905551526239330
2010-09-13 20:16:47.751 Info: Forest failover1 accepts forest myFailover as the master with timestamp 2564905551526239330
2010-09-14 17:01:29.651 Info: Forest replica1 starting synchronization to forest failover1
2010-09-14 17:01:29.666 Info: Forest replica1 starting bulk replication to forest failover1
2010-09-14 17:01:29.776 Info: Forest replica1 needs to replicate 0 fragments to forest failover1
2010-09-14 17:01:29.776 Info: Forest replica1 finished bulk replicated 0 fragments to forest failover1
2010-09-14 17:01:29.807 Info: Forest replica1 finished bulk replication to forest failover1
2010-09-14 17:01:29.822 Info: Forest replica1 finished synchronizing to replica forest failover1
2010-09-14 17:09:26.638 Info: Forest replica1 accepts forest failover1 as the master with precise time 12845094147890000