Issue:
Admins have seen the following event while describing a pod and want to know whether it is a sign of a problem:
Warning FailedMount 55s (x2 over 1m2s) kubelet MountVolume.MountDevice failed for volume "domino-shared-store-domino-compute" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name efs.csi.aws.com not found in the list of registered CSI drivers
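For reference, this event appears in the Events section of the pod's describe output. A minimal sketch, with placeholder pod name and namespace:

    kubectl describe pod <pod-name> -n <namespace>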
Root Cause:
This is often a short-term warning that can be safely ignored. The event above is only seen in EKS (Amazon) based clusters, where the EFS CSI driver (efs.csi.aws.com) is used. If the FailedMount warnings stop being issued to the events and the volume mounts successfully, the problem can be disregarded: it is a temporary condition that occurs while the CSI driver is still registering on the node and exposing the file system to the node's pods.
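To confirm the warning is transient, you can watch the pod's events and status and verify that new FailedMount warnings stop and the pod reaches Running. A sketch, using placeholder pod name and namespace:

    kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name> -w
    kubectl get pod <pod-name> -n <namespace> -w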
Resolution:
If the problem is not temporary and your pod continues to report FailedMount events, run:
kubectl get csidrivers.storage.k8s.io
to confirm that the efs.csi.aws.com driver appears in the list and is actually installed. If it does not, your cluster may not have been provisioned correctly.
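On a healthy cluster the driver should be listed. Illustrative output (the exact columns vary by Kubernetes version):

    kubectl get csidrivers.storage.k8s.io
    NAME              ATTACHREQUIRED   PODINFOONMOUNT   MODES        AGE
    efs.csi.aws.com   false            false            Persistent   200d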
If the above command does show efs.csi.aws.com in the list, then inspect the pods whose names contain 'csi':
kubectl get po -A | grep csi
Review the describe output and logs of those pods, as shown in the sketch below. Make sure your search includes the kube-system namespace, and review the "controller" pods as well as the CSI pod running on the node to which your failing pod has been assigned. If you need help, contact Technical Support and include your screenshots, describe output, logs, etc.
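A minimal sketch of those inspection commands, assuming the EFS CSI driver is installed in kube-system; the pod names are hypothetical placeholders, and -o wide shows which node each CSI pod runs on so you can match it to the node hosting your failing pod:

    kubectl get po -A -o wide | grep csi
    kubectl -n kube-system describe pod efs-csi-node-xxxxx
    kubectl -n kube-system logs efs-csi-node-xxxxx --all-containers
    kubectl -n kube-system describe pod efs-csi-controller-xxxxx
    kubectl -n kube-system logs efs-csi-controller-xxxxx --all-containers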