Bad magic number in super-block #486

@bernardgut

Not sure if this is the right place to post, since this might be a lower-level issue. If I need to post this somewhere else, let me know.

I was migrating my existing storage to a new encrypted StorageClass when this happened in the middle of a Velero backup. The pod never comes online, and its status shows:

 k describe pod -n velero enc-mariadb-0-kfgnf                                 
Name:             enc-mariadb-0-kfgnf
Namespace:        velero
Priority:         0
Service Account:  velero-server
Node:             n3/2a02:169:25d8:0:eee7:a7ff:fe10:bbb8
Start Time:       Fri, 27 Feb 2026 11:18:01 +0100
Labels:           velero.io/data-upload=enc-mariadb-0-kfgnf
                  velero.io/exposer-pod-group=snapshot-exposer
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    DataUpload/enc-mariadb-0-kfgnf
Containers:
  0c3999fa-896a-4fd5-99b2-707481c15b2a:
    Container ID:  
    Image:         velero/velero:v1.17.1
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /velero
      data-mover
      backup
    Args:
      --volume-path=/0c3999fa-896a-4fd5-99b2-707481c15b2a
      --volume-mode=Filesystem
      --data-upload=enc-mariadb-0-kfgnf
      --resource-timeout=10m0s
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      VELERO_NAMESPACE:                velero (v1:metadata.namespace)
      NODE_NAME:                        (v1:spec.nodeName)
      VELERO_SCRATCH_DIR:              /scratch
      AWS_SHARED_CREDENTIALS_FILE:     /credentials/cloud
      GOOGLE_APPLICATION_CREDENTIALS:  /credentials/cloud
      AZURE_CREDENTIALS_FILE:          /credentials/cloud
      ALIBABA_CLOUD_CREDENTIALS_FILE:  /credentials/cloud
    Mounts:
      /0c3999fa-896a-4fd5-99b2-707481c15b2a from 0c3999fa-896a-4fd5-99b2-707481c15b2a (rw)
      /credentials from cloud-credentials (rw)
      /host_plugins from host-plugins (rw)
      /host_pods from host-pods (rw)
      /scratch from scratch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9vqt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  0c3999fa-896a-4fd5-99b2-707481c15b2a:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  enc-mariadb-0-kfgnf
    ReadOnly:   false
  cloud-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  velero
    Optional:    false
  host-pods:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  
  host-plugins:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins
    HostPathType:  
  scratch:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-v9vqt:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    Optional:                 false
    DownwardAPI:              true
QoS Class:                    BestEffort
Node-Selectors:               kubernetes.io/os=linux
Tolerations:                  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector velero.io/exposer-pod-group=snapshot-exposer
Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               11m                 default-scheduler        Successfully assigned velero/enc-mariadb-0-kfgnf to n3
  Normal   SuccessfulAttachVolume  11m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-4346d8b1-7d4e-434e-94ae-4a4a443987a9"
  Warning  FailedMount             77s (x13 over 11m)  kubelet                  MountVolume.SetUp failed for volume "pvc-4346d8b1-7d4e-434e-94ae-4a4a443987a9" : rpc error: code = Internal desc = NodePublishVolume failed for pvc-4346d8b1-7d4e-434e-94ae-4a4a443987a9: failed to run fsck on device '/dev/drbd1040': failed to run fsck: output: "fsck from util-linux 2.41
fsck.ext2: Bad magic number in super-block while trying to open /dev/drbd1040
/dev/drbd1040: 
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

", exit status 8
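For reference, the failure can be inspected manually on the node that holds the volume. This is only a diagnostic sketch, assuming the device name `/dev/drbd1040` taken from the kubelet error above; run it as root on node n3.

```shell
# What does the kernel think is on the device? Empty output, or
# TYPE="crypto_LUKS", would both explain fsck's ext2 "bad magic number".
blkid /dev/drbd1040

# Dump the primary ext superblock header, if one exists.
dumpe2fs -h /dev/drbd1040

# List where backup superblocks would live for a filesystem of this size.
# -n is a dry run: nothing is written to the device.
mkfs.ext4 -n /dev/drbd1040

# Try a backup superblock, as the fsck output itself suggests.
e2fsck -b 32768 /dev/drbd1040
```

If `blkid` reports no filesystem type at all, the device was likely never formatted (or the format landed on a different layer of the stack) rather than corrupted.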

On the other hand, the PVC is created and shows nothing unusual in its events. This is a transient pod with a transient PVC, created on the fly to move some data to S3. I had used this same storage class to create workloads 10 minutes earlier without issue.

Also of note: I very recently moved the cluster to use a LUKS layer for encryption on all storage classes, including the one used here. Not sure if that is related.
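One hypothesis worth ruling out: if the volume was formatted on one layer of the stack but fsck is now being pointed at the other (the raw DRBD device instead of the opened LUKS mapping, or vice versa), it would see a LUKS header where it expects an ext4 superblock and report exactly this error. A quick hedged check on the node, again assuming the device name from the log:

```shell
# Check whether /dev/drbd1040 carries a LUKS header rather than an ext4
# superblock. Run as root on node n3.
if cryptsetup isLuks /dev/drbd1040; then
    echo "LUKS header present: fsck was run against the raw encrypted device"
    cryptsetup luksDump /dev/drbd1040   # show header details
    ls /dev/mapper/                     # an opened mapping, if any, appears here
else
    echo "no LUKS header: the filesystem layer itself may be damaged"
fi
```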

Any idea what could cause this on a freshly created PVC mount?

Thanks
