MarkLogic Server on Kubernetes

Troubleshooting

Note

For the commands below, specify the namespace if the chart is deployed to a namespace other than the one in the current kubectl context. Use -n <your-namespace> to run a command against a specific namespace, or use --all-namespaces (-A) to run it against all namespaces.
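
For example, assuming the chart was deployed to a namespace named marklogic (a placeholder; substitute your own namespace name), the commands can be scoped like this:

# List Helm releases in the marklogic namespace
helm list -n marklogic

# List pods in the marklogic namespace
kubectl get pods -n marklogic

# List pods across every namespace
kubectl get pods --all-namespaces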

Retrieve the status of deployed resources

To get the status of the Helm deployment, enter this command:

helm list
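
If more than one release is installed, helm status shows the state, revision, and install notes of a single release; <RELEASE-NAME> is the name that was passed to helm install:

helm status <RELEASE-NAME>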

To get the status of all the pods in the current namespace, enter this command:

kubectl get pods
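
Two commonly used variations: -o wide adds the node and pod IP to the output, and -w keeps watching so you can see pods move from Pending to Running:

kubectl get pods -o wide
kubectl get pods -w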

Note

The commands above list all the pods running in the current namespace, not just the MarkLogic pods.

To get the status of all the pods in a MarkLogic deployment, enter this command:

kubectl get pods --selector="app.kubernetes.io/name=marklogic" --all-namespaces

To list all the pods for a specific release, enter this command:

kubectl get pods --selector="app.kubernetes.io/instance=<RELEASE-NAME>"
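
For example, if the release was installed as my-release in the marklogic namespace (both placeholder names), the command becomes:

kubectl get pods --selector="app.kubernetes.io/instance=my-release" -n marklogic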

To get detailed information, use the kubectl describe command:

kubectl describe pods <POD-NAME>

Note

After entering this command, you can use the Events list at the bottom for debugging.
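
The events can also be queried directly with standard kubectl options; <POD-NAME> is the pod being investigated:

# Show recent events in the namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp

# Show only the events for a single pod
kubectl get events --field-selector involvedObject.name=<POD-NAME>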

Statuses for MarkLogic pods

Pending

This status indicates that the pod has been accepted by the Kubernetes system, but the container within the pod has not started yet. If a pod is stuck in this phase, use the kubectl describe command to get more information. Often, a detailed warning is listed in the Events list at the bottom. For example, if none of the nodes meet the scheduling requirements, a FailedScheduling warning event appears in the Events list.
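
To investigate a FailedScheduling warning, it can help to compare what the pod requests with what the nodes can offer. A sketch using standard kubectl commands:

# List pods that are stuck in the Pending phase
kubectl get pods --field-selector=status.phase=Pending

# Check allocatable CPU and memory, and current requests, on each node
kubectl describe nodes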

Running

This status indicates that the pod has been scheduled to a node and that all the containers in the pod are running.

Access logs

To access the container logs for a specific pod, use this command:

kubectl logs <pod-name>
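
A few useful variations of kubectl logs: -f streams the log, --previous shows the logs from the last terminated instance of the container (useful after a restart), and -c selects a container if the pod runs more than one:

kubectl logs -f <pod-name>
kubectl logs <pod-name> --previous
kubectl logs <pod-name> -c <container-name>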

To access all the MarkLogic Server logs, follow these steps:

  1. Use the kubectl exec command to get shell access to a specific MarkLogic container:

    kubectl exec -it <POD-NAME> -- /bin/bash

  2. Go to /var/opt/MarkLogic/Logs/ to view all the logs. (A non-interactive alternative is shown after these steps.)
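
If an interactive shell is not needed, the same information can be read with a single kubectl exec command. The main MarkLogic error log is typically ErrorLog.txt; adjust the file name for the log you want:

# List the available MarkLogic log files
kubectl exec <POD-NAME> -- ls /var/opt/MarkLogic/Logs/

# Show the last 100 lines of the main error log
kubectl exec <POD-NAME> -- tail -n 100 /var/opt/MarkLogic/Logs/ErrorLog.txt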

Note

It is recommended that you set up log forwarding in production environments.

Common issues

ImagePullBackOff

  • A pod enters the ImagePullBackOff status when Kubernetes is unable to download the container image for one of the pod's containers. This can be caused by a network issue or by an incorrect image name or tag.

  • By default, the image registry is Docker Hub. Test the connection from the node to Docker Hub to make sure that the Kubernetes node has access to the registry.

  • If you provided a customized value for the image repository or tag during the installation, use this command to test whether the image is valid (commands to verify and clean up the test pod follow this list):

    kubectl run marklogic --image=marklogicdb/marklogic-db:latest
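
The kubectl run command above creates a test pod named marklogic. If it reaches the Running status, the image reference and registry access are fine; delete the test pod afterward:

# Check whether the test pod was able to pull the image and start
kubectl get pod marklogic

# Remove the test pod once the check is done
kubectl delete pod marklogic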

CrashLoopBackOff

A pod enters the CrashLoopBackOff status when its containers repeatedly exit with an error, causing Kubernetes to restart them over and over.

This issue can have several causes:

  • Probe Failure - The MarkLogic container uses a liveness probe to perform a container health check. If the liveness probe fails a certain number of times, the container will restart.

  • Insufficient Resources, such as CPU or Memory - Double-check the resource limits and requests specified in the values.yaml file.

  • Application Failure - Check the container logs or MarkLogic Server logs to see if there are any errors or messages related to the crashes. (Commands that help with this investigation follow this list.)
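
A sketch of commands that are often useful when investigating a CrashLoopBackOff, using only standard kubectl options; <POD-NAME> and <RELEASE-NAME> are placeholders:

# Watch the restart counts for the release's pods
kubectl get pods --selector="app.kubernetes.io/instance=<RELEASE-NAME>"

# Inspect the last terminated state, exit code, and recent events
kubectl describe pod <POD-NAME>

# Read the logs from the previous, crashed container instance
kubectl logs <POD-NAME> --previous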

Note

To see MarkLogic Server logs for a crashed container, you need a log-forwarding solution. (FluentBit is enabled in the Helm chart.)

Common debugging practices

  1. Get pod statuses by using kubectl get pods.

  2. Get detailed information by using kubectl describe pods.

  3. Get container logs and MarkLogic logs.

Recommended guides for debugging in Kubernetes

For more information about troubleshooting in Kubernetes, see A visual guide on troubleshooting Kubernetes deployments.