xdmp.databaseBackup( $forestIDs as String, $pathname as String, [$journal-archiving as Boolean?], [$journal-archive-path as String?], [$lag-limit as String?] ) as String
Starts an asynchronous backup of the specified list of forests to the backup data directory. Optionally starts journal archiving of the specified list of forests to the specified journal archive directory. Returns a job ID that uniquely identifies the backup task.
|$forestIDs||A sequence of forest IDs.|
|$pathname||A backup data directory pathname. The directory must exist and be writable by the operating system user under which MarkLogic Server is running. The directory cannot be the MarkLogic Server install directory or the MarkLogic Server data directory. The directory specified can be an operating system mounted directory path, an HDFS path, or an S3 path. For details on using HDFS and S3 storage in MarkLogic, see Disk Storage Considerations.|
|$journal-archiving||Whether or not to enable journal archiving. Defaults to false.|
|$journal-archive-path||Path to where archived journals are stored. Defaults to the backup data directory.|
|$lag-limit||Maximum difference in seconds that the archived journal can lag behind its forest's active journal. Defaults to 15.|
You cannot restore to a read-only forest.
Reindexing stops while a backup or restore is in progress.
The backup directory must exist on each host that has a forest specified in the database backup call (that is, on the d-nodes that host the forests being backed up).
If journal archiving is enabled, all of the specified forests must belong to the same database.
xdmp.databaseBackup([11183608861595735720,898513504988507762], "/backups/Data"); => 33030877979801813489
xdmp.databaseBackup(xdmp.databaseForests(xdmp.database("Documents")), "/backups/Data", true, "/backups/JournalArchiving", 15); => 437302857479804813287
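Because the backup runs asynchronously, the returned job ID is what you use to track the job afterward. A minimal sketch, assuming the companion functions xdmp.databaseBackupStatus and xdmp.databaseBackupCancel are available in your MarkLogic version (check the API reference for the exact return shape):

```javascript
// Start an asynchronous backup of all forests in the Documents database
// and keep the job ID that identifies the backup task.
const jobId = xdmp.databaseBackup(
  xdmp.databaseForests(xdmp.database("Documents")),
  "/backups/Data");

// Query the per-forest status of the running (or completed) job by ID.
const status = xdmp.databaseBackupStatus(jobId);

// If the job needs to be abandoned, cancel it by the same ID:
// xdmp.databaseBackupCancel(jobId);
```

Since the call returns before the backup finishes, any follow-up work that depends on the backup completing should poll the status rather than assume the data is already on disk.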