Administrator action
1. Determine whether the node is in the minority or majority group by running the following command from the node that is reporting the error:
sysctl efs.gmp.has_quorum
● If the command returns 0, the error occurred on the minority group. The message might continue until the cluster is healthy. No further action is required.
● If the command returns 1, the node is in the majority group. Proceed to step 2.
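For scripting, the quorum check in step 1 can be wrapped in a small helper. The `interpret_quorum` function below is a hypothetical sketch, not part of OneFS; it only maps the sysctl value to the action described above.

```shell
# Hypothetical helper (not a OneFS command): map the value reported by
# `sysctl -n efs.gmp.has_quorum` to the recommended next action.
interpret_quorum() {
  case "$1" in
    0) echo "minority group: message clears when the cluster is healthy; no action required" ;;
    1) echo "majority group: proceed to step 2" ;;
    *) echo "unexpected value: $1" >&2; return 1 ;;
  esac
}

# On the node reporting the error, you would run something like:
#   interpret_quorum "$(sysctl -n efs.gmp.has_quorum)"
```

The `-n` flag assumes the FreeBSD-style sysctl that prints only the value; without it, strip the `efs.gmp.has_quorum: ` prefix before passing the result in.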
2. Determine whether the number of snapshots exceeds the system-wide and per-directory limits. (The system-wide limit is 20,000 snapshots; the per-directory limit is 1,000.)
3. If the number of snapshots meets or exceeds either limit, delete the extraneous snapshots. If the snapshots are within system limits, the event clears automatically.
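The limit check in steps 2 and 3 amounts to a simple comparison. The sketch below hard-codes the limits quoted above; obtaining the actual counts is left to your OneFS version's snapshot CLI, since command names vary across releases.

```shell
# Sketch only: compare snapshot counts against the documented limits
# (20,000 cluster-wide, 1,000 per directory).
SYSTEM_LIMIT=20000
DIR_LIMIT=1000

check_snapshot_counts() {
  # $1 = total snapshots on the cluster, $2 = snapshots in one directory
  if [ "$1" -ge "$SYSTEM_LIMIT" ] || [ "$2" -ge "$DIR_LIMIT" ]; then
    echo "over limit: delete extraneous snapshots"
  else
    echo "within limits: event clears automatically"
  fi
}
```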
If the event persists, gather logs, and then contact Technical Support for additional troubleshooting. For instructions on how to gather cluster logs, see .
600010002
The snapshot daemon failed to delete an expired snapshot.
Description
The system cannot remove an expired snapshot lock. This error can occur when a disk is unwritable.
If the cluster is split, this error might occur on the minority group, or the group that contains fewer than half of the nodes. In
this case, the message persists until the cluster is healthy, and you can safely ignore the error.
Administrator action
1. Determine whether the node is in the minority or majority group by running the following command from the node that is reporting the error:
sysctl efs.gmp.has_quorum
● If the command returns 0, the error occurred on the minority group. The message might continue until the cluster is healthy. No further action is required.
● If the command returns 1, the node is in the majority group. Proceed to step 2.
2. If the error occurred on the majority group, perform the following tasks:
● Confirm that the cluster contains free disk space. If the cluster is more than 99 percent full, delete files or add storage capacity to the cluster. You can view the percentage of available disk space on the cluster by running the following command:
isi status -q
● Verify that the isi_job_d process is running on all nodes by first logging in to any node through a secure shell (SSH) connection or the serial console, and then running the following command:
isi_for_array -s 'pgrep isi_job_d|wc -l'|grep '[^0-9]0$'
Nodes that are listed in the output do not have the isi_job_d process running and cannot run any system jobs.
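The `grep '[^0-9]0$'` filter in the command above keeps only the lines whose per-node count is 0, so each remaining line names a node without a running isi_job_d. The helper below is a hypothetical post-processing step, not a OneFS command; it assumes the usual `node-name: value` output format of `isi_for_array -s` and extracts just the node names for follow-up.

```shell
# Hypothetical post-processing (not a OneFS command): read lines of the
# form "node-name: 0" on stdin and print only the node name.
nodes_without_jobd() {
  awk -F': ' 'NF >= 2 { print $1 }'
}

# Example: feeding it one line of captured output:
#   printf 'clusternode-3: 0\n' | nodes_without_jobd
# prints "clusternode-3".
```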
If the event persists, gather logs, and then contact Technical Support for additional troubleshooting. For instructions on how to
gather cluster logs, see
.
Software events