Issue Running K3s on Jetson Orin NX 16GB with JetPack 6.0

Hello NVIDIA Community,

I recently flashed my Jetson Orin NX 16GB module with JetPack 6.0. After installing K3s, all pods remain in the “Pending” state and fail to start. The container logs show the following error:

failed to create containerd task: failed to create shim task: OCI runtime create failed:
runc create failed: unable to start container process: error during container init:
error setting cgroup config for procHooks process:
openat2 /sys/fs/cgroup/kubepods/burstable/pod1678ec31-2edb-4f22-91f4-eef212583aae/fc1f81c9d18632ea18be29ec6365285b44459909c166274f245520529a538812/cpu.max

I’ve also installed the NVIDIA runtime, but that didn’t solve the issue.
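From the openat2 path in the error, the failure seems related to the cgroup v2 (unified) hierarchy. In case it helps, this is how I understand the controller state on the node can be inspected (my assumption, not verified):

```shell
# List controllers available in the cgroup v2 hierarchy; runc needs
# the 'cpu' controller to write cpu.max for the pod cgroup
$ cat /sys/fs/cgroup/cgroup.controllers

# List controllers delegated to child cgroups such as kubepods;
# 'cpu' must also appear here for the kubelet's cgroups to get cpu.max
$ cat /sys/fs/cgroup/cgroup.subtree_control
```

If `cpu` is missing from `cgroup.subtree_control`, enabling it (for example with `echo "+cpu" | sudo tee /sys/fs/cgroup/cgroup.subtree_control`) might be a workaround, though systemd normally manages this file.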

Has anyone encountered a similar situation or have any insights on how to resolve it? Any help or pointers would be greatly appreciated.

Thank you!

Hello,

Thanks for visiting the NVIDIA Developer forums! Your topic will be best served in the Jetson category.

I have moved this post for better visibility.

Cheers,
Tom

Hi,

Could you share the command you used for installation?

We can run it successfully with the following steps:

$ sudo apt install curl
$ curl -sfL https://get.k3s.io | sh -s - --docker
$ sudo k3s kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-6c86858495-9ntdn   1/1     Running     0          54s
kube-system   coredns-6799fbcd5-pm67n                   1/1     Running     0          54s
kube-system   helm-install-traefik-crd-kfcl8            0/1     Completed   0          54s
kube-system   helm-install-traefik-ps6cr                0/1     Completed   1          54s
kube-system   svclb-traefik-22f80e0d-x2d78              2/2     Running     0          20s
kube-system   traefik-f4564c4f4-tdsdt                   1/1     Running     0          20s
kube-system   metrics-server-67c658944b-dbhlc           1/1     Running     0          54s
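Note that the --docker flag makes K3s use Docker as the container runtime instead of the embedded containerd. If the pods still fail with that setup, Docker's cgroup driver and cgroup version are worth confirming (our assumption, since the path in your error sits under the cgroup v2 hierarchy):

```shell
# Print Docker's cgroup driver (cgroupfs or systemd) and cgroup version;
# a cgroup driver mismatch between the runtime and the kubelet can
# produce "OCI runtime create failed" errors
$ docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'
```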

Thanks.

Hello,

As shown below, the pods remain in the ContainerCreating phase.

NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   coredns-ccb96694c-nqv74                   0/1     ContainerCreating   0          66s
kube-system   helm-install-traefik-crd-ncx7m            0/1     ContainerCreating   0          67s
kube-system   helm-install-traefik-lhd6p                0/1     ContainerCreating   0          67s
kube-system   local-path-provisioner-5b5f758bcf-qslcx   0/1     ContainerCreating   0          66s
kube-system   metrics-server-7bf7d58749-sq7v2           0/1     ContainerCreating   0          66s

Below is an example of the detailed output from describing one of the pods, coredns-ccb96694c-nqv74 in the kube-system namespace:


$ sudo kubectl describe pod coredns-ccb96694c-nqv74 -n kube-system

Name:                 coredns-ccb96694c-nqv74
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 node
Start Time:           Mon, 10 Mar 2025 08:20:51 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=ccb96694c
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-ccb96694c
Containers:
  coredns:
    Container ID:  
    Image:         rancher/mirrored-coredns-coredns:1.12.0
    Image ID:      
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /etc/coredns/custom from custom-config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2smzx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  custom-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns-custom
    Optional:  true
  kube-api-access-2smzx:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               kubernetes.io/os=linux
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                              node-role.kubernetes.io/master:NoSchedule op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  kubernetes.io/hostname:DoNotSchedule when max skew 1 is exceeded for selector k8s-app=kube-dns
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m37s  default-scheduler  Successfully assigned kube-system/coredns-ccb96694c-nqv74 to node

Meanwhile, the API server occasionally appears unavailable:

$ sudo kubectl get pods --all-namespaces
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
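The refused connection suggests the k3s server process itself may be restarting. Its systemd service log can be checked with commands like these (assuming the default service installed by get.k3s.io):

```shell
# Check whether the k3s service is active or crash-looping
$ sudo systemctl status k3s

# Show the most recent k3s server log lines, where containerd/runc
# errors usually surface
$ sudo journalctl -u k3s --no-pager -n 100
```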

Hi,

Do you see any errors or warnings in the logs while the pods stay in the ContainerCreating stage?

$ kubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]
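For example, with the coredns pod name from your output:

```shell
# Fetch the logs of the coredns pod (pod name taken from your output)
$ sudo k3s kubectl logs coredns-ccb96694c-nqv74 -n kube-system
```

Pods stuck in ContainerCreating often have no container log yet; in that case the Events section of `kubectl describe pod` and the k3s service journal are more likely to show the underlying runtime error.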

Thanks.