Administrating MarkLogic Server

Example—Rolling Upgrade

The following procedure is a simplified, step-by-step process for a rolling upgrade on a small, three-host cluster. Here is the general outline: 1) Back up all of your hosts; 2) Make any changes to software applications; 3) Proceed with the rolling upgrade, failing over and upgrading each node; 4) Verify that you can commit the upgrade; 5) Change the cluster effective version to the new version; and 6) Do any necessary cleanup.

In addition, prior to starting the upgrade, you may need to modify some of your existing software to run in a mixed version cluster. See Interaction with Other MarkLogic Features for details.

Note

  • MarkLogic 9 will not work on Red Hat Enterprise Linux 6. See Supported Platforms in the Release Notes for more information.

  • When an OS upgrade is required, perform it as a rolling upgrade separate from the MarkLogic Server rolling upgrade.

  • Prior to any activity, for clusters configured with local-disk failover, it is recommended that you return the cluster to its normal state: primary forests acting as primary, replica forests acting as replica, and all replica forests in the sync replicating mount state. For clusters configured with shared-disk failover, it is recommended that forests be mounted on their primary hosts.

  • If you are upgrading from MarkLogic 8 or lower, you cannot use the Management REST API to read the cluster configuration. However, you may use the Admin API to read the cluster configuration.

To perform the rolling upgrade, follow these steps:

  1. Back up all hosts in your existing cluster. See Backing Up and Restoring a Database for details on backing up your hosts.

  2. Modify any code that needs to be modified. See Interaction with Other MarkLogic Features for a list of potential software issues.

  3. Start the rolling upgrade on the host that contains your primary security and schema forests.

  4. Configure your load balancer to stop routing new transactions to the host that is about to be upgraded.

  5. Wait for all in-flight and queued transactions on the target host to complete before proceeding with the next steps.
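
    The drain can also be checked programmatically. This is a minimal sketch, assuming that GET /manage/v2/transactions returns a JSON list view and that the property names unpacked below match it; the exact payload shape is an assumption, so verify it against your cluster's actual response before relying on it.

```python
# Sketch: poll until no transactions remain on the host to be upgraded.
# The /manage/v2/transactions response shape unpacked below is an
# assumption modeled on the other Management API JSON views on this page.
import json
import time
import urllib.request


def pending_transactions(payload: dict, host: str) -> int:
    """Count transactions in a Management API list view that mention `host`."""
    items = (payload.get("transaction-default-list", {})
                    .get("list-items", {})
                    .get("list-item", []))
    return sum(1 for item in items if host in json.dumps(item))


def wait_for_drain(host: str, opener, poll_seconds: int = 5) -> None:
    """Block until the target host reports no pending transactions."""
    url = f"http://{host}:8002/manage/v2/transactions?format=json"
    while True:
        with opener.open(url) as response:
            payload = json.load(response)
        if pending_transactions(payload, host) == 0:
            return
        time.sleep(poll_seconds)
```

    An opener built with urllib.request.HTTPDigestAuthHandler plays the role of curl's --anyauth --user admin:admin in the commands that follow.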

  6. [UPGRADING THE FIRST NODE ONLY] Trigger the forest failover for your security, schema, and other auxiliary forests.

    You can use this API:

    curl -X POST --anyauth --user admin:admin -d "state=restart" "http://node1:8002/manage/v2/forests/Security"
    curl -X POST --anyauth --user admin:admin -d "state=restart" "http://node1:8002/manage/v2/forests/Schemas"

    Using this API prioritizes the failover of your security and schema forests over the failover of your content forests. This priority order minimizes the impact of the time needed to remount the replica auxiliary forests.

    Note

    • While your replica security forest is remounting, no one can authenticate.

    • While your replica schema forest is remounting, no TDEs, redaction, etc. are available.

    • While your replica trigger forest is remounting, no pre- or post-commit triggers can occur.

    • While your replica module forest is remounting, no data services, REST API extensions, etc. are available.
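
    The priority order described above can be sketched as a small driver that issues the restart POSTs with auxiliary forests (security, schema, triggers, modules) ahead of content forests. The forest names, the host, and the two-group split are illustrative placeholders, not values read from your cluster:

```python
# Sketch: fail over auxiliary forests before content forests, matching the
# priority described in this step. Forest names and host are placeholders.
import urllib.request

AUXILIARY_FORESTS = ["Security", "Schemas", "Triggers", "Modules"]
CONTENT_FORESTS = ["content-1", "content-2"]


def restart_order(auxiliary, content):
    """Auxiliary forests first, then content forests, preserving order."""
    return list(auxiliary) + list(content)


def restart_request(host: str, forest: str) -> urllib.request.Request:
    """Build the POST mirroring: curl -X POST -d "state=restart" .../forests/{name}"""
    return urllib.request.Request(
        f"http://{host}:8002/manage/v2/forests/{forest}",
        data=b"state=restart",
        method="POST",
    )


def restart_all(host: str, opener) -> None:
    """Issue the restarts in priority order via a digest-authenticated opener."""
    for forest in restart_order(AUXILIARY_FORESTS, CONTENT_FORESTS):
        opener.open(restart_request(host, forest))
```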

  7. Take down the host and start the upgrade:

    1. Stop MarkLogic. Use this cURL command so that you can also take advantage of the fast failover feature:

      curl -X POST --anyauth --user admin:admin -d "state=shutdown&failover=true" "http://node1:8002/manage/v2/hosts/node1"

      Note

      • The failover parameter was added to POST:/manage/v2/hosts/{id|name} in MarkLogic version 9.0-5. The above call will fail in previous versions of MarkLogic.

      • "Fast" does not mean "instantaneous." It will still take some time to remount the replica forests as primary.

    2. Uninstall the existing RPM:

      sudo rpm -e MarkLogic

    3. Install the new RPM:

      sudo rpm -i MarkLogic-10.0-3.x86_64.rpm

    4. Bring the host back up, and start MarkLogic:

      sudo /sbin/service MarkLogic start

  8. Wait for the forests on this node to catch up with replication. The mount state of the local forests should be sync replicating:

    This command

    curl --anyauth --user admin:admin "http://{host}:8002/manage/LATEST/forests/{id|name}?view=status&format=json"

    should return a result including this property:

    {
      ...
       "status-properties": {
        "state": {
          "units": "enum",
          "value": "sync replicating"
        }
      }
      ...
    }
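
    A readiness check can be scripted against this status view. The sketch below follows the "status-properties"/"state"/"value" path shown in the sample response above; it also tolerates the payload being nested under a top-level "forest-status" property, which is an assumption about the full response rather than something shown in this step.

```python
# Sketch: decide readiness from the forest status JSON shown above.
def is_sync_replicating(status: dict) -> bool:
    """True when the forest's mount state is 'sync replicating'."""
    # Tolerate a top-level "forest-status" wrapper (an assumption about
    # the full response body; the doc sample shows only the inner part).
    body = status.get("forest-status", status)
    state = body.get("status-properties", {}).get("state", {})
    return state.get("value") == "sync replicating"
```
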

  9. [IMMEDIATELY AFTER UPGRADING FIRST NODE] Trigger the forest failover for your replica security, schema, and other auxiliary forests that are acting as primary using this REST API:

    curl -X POST --anyauth --user admin:admin -d "state=restart" "http://node1:8002/manage/v2/forests/Security-replica"
    curl -X POST --anyauth --user admin:admin -d "state=restart" "http://node1:8002/manage/v2/forests/Schemas-replica"

    Using this API prioritizes the failover of your security and schema forests over the failover of your content forests. This priority order minimizes the impact of the time needed to remount the primary auxiliary forests.

    Note

    • While your primary security forest is remounting, no one can authenticate.

    • While your primary schema forest is remounting, no TDEs, redaction, etc. are available.

    • While your primary trigger forest is remounting, no pre- or post-commit triggers can occur.

    • While your primary module forest is remounting, no data services, REST API extensions, etc. are available.

  10. Repeat Step 3 through Step 8 for each of the remaining hosts in the cluster. (Perform the upgrade process one node at a time.)

  11. Trigger a failover for the replica forests that are acting as primary, especially the security and schema forests:

    curl -X POST --anyauth --user admin:admin -d "state=restart" "http://{nodeX}:8002/manage/v2/forests/{content-replica-X}"
    
  12. When you have completed all of the host upgrades, check the software version and the effective version for the cluster, and then commit the upgrade:

    1. Use this query to check if the cluster is ready to commit the upgrade. It will return true when the cluster is ready:

      xquery version "1.0-ml"; 
      import module namespace admin = "http://marklogic.com/xdmp/admin" 
        at "/MarkLogic/admin.xqy";
      admin:can-commit-upgrade()
      
    2. Upgrade the security database on the local cluster:

      curl -X POST --anyauth --user admin:admin \
        --header "Content-Type:application/json" \
        -d '{"operation": "security-database-upgrade-local-cluster"}'\
        "http://localhost:8002/manage/v2"

    3. After committing the upgrade, verify it by retrieving the effective software version of the cluster:

      Use this query:

      xquery version "1.0-ml"; 
      import module namespace admin = "http://marklogic.com/xdmp/admin" 
        at "/MarkLogic/admin.xqy";
      let $config := admin:get-configuration()
      return 
        admin:cluster-get-effective-version($config)
      

      Or use this cURL command to read the local cluster properties, which include the effective version:

      curl -X GET --anyauth --user admin:admin "http://localhost:8002/manage/v2/properties?format=json"

      The cluster version, returned in the property effective-version, should match the intended version. For example, if your target version is 10.0-5, make sure that the effective-version is 10000500.
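
      The numeric form appears to encode major.minor-patch as major*1,000,000 + minor*10,000 + patch*100, which matches the documented example (10.0-5 becomes 10000500). This is a sanity-check sketch: the minor-version multiplier is an assumption inferred from that single example, since its minor component is zero.

```python
# Sketch: map a "major.minor-patch" version string to the numeric
# effective-version form. The minor-version multiplier (10_000) is an
# assumption; only the major and patch positions are confirmed by the
# documented example 10.0-5 -> 10000500.
def effective_version(version: str) -> int:
    """Convert a version string such as '10.0-5' to its numeric form."""
    major_minor, patch = version.split("-")
    major, minor = major_minor.split(".")
    return int(major) * 1_000_000 + int(minor) * 10_000 + int(patch) * 100
```

      Comparing effective_version("10.0-5") against the effective-version property returned by the cluster is one way to script the final verification.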