Completely purge and reinstall the NVIDIA GPU Operator

I believe this question belongs under NVIDIA GPU Operator because I think the TAO Toolkit is not directly causing the problem (it is just the victim).

The symptom is similar to this issue, but I get no logs and no activity in the pods:

g@gsrv:~$ kubectl get pods -n tao-gnet
NAME                                            READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-78d54fbd-t6lwh         1/1     Running   0          55m
tao-toolkit-api-app-pod-5ffc48cd57-xxlj2        1/1     Running   0          53m
tao-toolkit-api-workflow-pod-6dbc7c8f98-wp4sm   1/1     Running   0          53m

When investigating the issue, the only lead was the health.txt log:

Healthy at 2023-08-21T16:44:40.554640
Workflow has waken up
Healthy at 2023-08-21T16:44:40.558992
Found 2 pending jobs
Healthy at 2023-08-21T16:44:40.559038
c31a57f2-6e79-47e3-ab4b-a7e96a6a633a with action train: Checking dependencies
Healthy at 2023-08-21T16:44:40.559069
Total dependencies: 5
Healthy at 2023-08-21T16:44:40.737201
Unmet dependency: gpu
Healthy at 2023-08-21T16:44:40.737254
f7b5f7c3-4b6b-40e0-99a1-9e1fb86b8296 with action train: Checking dependencies
Healthy at 2023-08-21T16:44:40.737281
Total dependencies: 5
Healthy at 2023-08-21T16:44:40.892604
Unmet dependency: gpu
Healthy at 2023-08-21T16:44:40.892643
Workflow going to sleep

Then I tried running some separate GPU jobs; they failed, which led me towards the gpu-operator.
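As a quick sanity check (a hedged suggestion; the node name dgx is taken from the describe output later in this thread), it is worth confirming whether the node still advertises the nvidia.com/gpu resource that those jobs request:

kubectl get node dgx -o jsonpath='{.status.allocatable}'   # healthy device plugin: allocatable contains nvidia.com/gpu
kubectl describe node dgx | grep -i 'nvidia.com/gpu'       # missing or 0 means the device plugin never registered

If the resource is missing, GPU pods sit in Pending, which matches the “Unmet dependency: gpu” lines above.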

Then, without thinking too much, I tried deleting and reinstalling the GPU Operator Helm chart.

When I try to delete and reinstall the GPU Operator I get the following error (I did use the command kubectl delete crd clusterpolicies.nvidia.com to delete the CRD before reinstalling the gpu-operator).

uninstall command

helm uninstall gpu-operator-1691603607 -n gpu-operator

install command
(My usual cluster setup is very similar to this guide.)

helm install --wait --generate-name \
     -n gpu-operator --create-namespace \
      nvidia/gpu-operator \
      --set driver.enabled=false \
      --set toolkit.enabled=false
g@gsrv:~$  kubectl get pods -n gpu-operator
NAME                                                              READY   STATUS                  RESTARTS       AGE
gpu-feature-discovery-mctvh                                       0/1     Init:0/1                0              9m9s
gpu-operator-1692635184-node-feature-discovery-master-c5cbd2d4j   1/1     Running                 0              9m32s
gpu-operator-1692635184-node-feature-discovery-worker-4mfbk       1/1     Running                 0              9m32s
gpu-operator-1692635184-node-feature-discovery-worker-zrndp       1/1     Running                 0              9m32s
gpu-operator-865c55b5b-hbjzm                                      1/1     Running                 0              9m32s
nvidia-dcgm-exporter-5vj9n                                        0/1     Init:0/1                0              9m9s
nvidia-device-plugin-daemonset-tdrnb                              0/1     Init:0/1                0              9m9s
nvidia-mig-manager-jf9gl                                          0/1     Init:0/1                0              9m9s
nvidia-operator-validator-z92cw                                   0/1     Init:CrashLoopBackOff   4 (7m1s ago)   9m9s
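A hedged way to see why the validator keeps crash-looping is to read the logs of its first init container directly (the pod name is from the listing above; the init container name driver-validation comes from the describe output further down):

kubectl logs -n gpu-operator nvidia-operator-validator-z92cw -c driver-validation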

I have had similar issues before, and usually they go away if I reinstall the k8s cluster (this is not a big deal, as we have a local k8s cluster), but it is getting too annoying because it takes time and I have to reinstall the other charts.

Is there a way to properly purge the gpu-operator and reinstall it without having to reset the k8s cluster?

Note: the GPU node in question is an NVIDIA DGX Station A100.

Best,
Ganindu.

Please take a look at ESPCommunity to check if it can help you.

Thanks for the reply, Morgan! I tried that but it failed.

output of helm ls -n gpu-operator

NAME                   	 NAMESPACE   	REVISION	UPDATED                                	STATUS  	CHART               	APP VERSION
gpu-operator-1692637233	 gpu-operator	1       	2023-08-21 17:00:34.153530319 +0000 UTC	deployed	gpu-operator-v23.6.0	v23.6.0    

uninstalling with helm uninstall --wait -n gpu-operator gpu-operator-1692637233

release "gpu-operator-1692637233" uninstalled

I tried to find whether any clusterroles were left behind with:

kubectl get clusterroles | grep gpu

I got

gpu-operator-1692635184-node-feature-discovery                         2023-08-21T16:26:30Z

I tried the same for clusterrolebindings:

kubectl get clusterrolebinding | grep gpu
gpu-operator-1692635184-node-feature-discovery                    ClusterRole/gpu-operator-1692635184-node-feature-discovery                         16h
gpu-operator-1692635184-node-feature-discovery-topology-updater   ClusterRole/gpu-operator-1692635184-node-feature-discovery-topology-updater        16h

Then I deleted the clusterrolebindings I came across:

kubectl delete  ClusterRolebinding gpu-operator-1692635184-node-feature-discovery gpu-operator-1692635184-node-feature-discovery-topology-updater
clusterrolebinding.rbac.authorization.k8s.io "gpu-operator-1692635184-node-feature-discovery" deleted
clusterrolebinding.rbac.authorization.k8s.io "gpu-operator-1692635184-node-feature-discovery-topology-updater" deleted
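A slightly broader sweep (a hedged suggestion, not from the official uninstall docs) to catch any other cluster-scoped leftovers from old releases in one go:

kubectl get clusterroles,clusterrolebindings --no-headers | grep -Ei 'gpu|nvidia'   # any remaining RBAC objects
kubectl get crd | grep -i nvidia                                                    # leftover clusterpolicies.nvidia.com CRD
kubectl get ns | grep -i gpu                                                        # leftover namespaces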

Then I deleted the gpu-operator namespace.

However, the instructions below were unsuccessful:

"
Delete Custom deployments for GPU Operator and Node Feature Discovery
Kubectl delete deploy gpu-operator

kubectl delete deploy gpu-operator-node-feature-discovery-master

"

there was no resource called “deploy” (Note: kubectl get deployments -A did not return anything associated with gpu-operator)
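For what it is worth, deploy is just kubectl's short name for deployments, so the quoted commands were fine; the deployments simply no longer existed. A hedged way to confirm the short name:

kubectl api-resources | grep -w deployments   # the SHORTNAMES column lists "deploy"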

Then I reinstalled the Helm chart, but I keep getting the same error!

Then I tried describing the failing pod:

g@gsrv:~$ kubectl describe pod  -n gpu-operator nvidia-operator-validator-5r4vr
Name:                 nvidia-operator-validator-5r4vr
Namespace:            gpu-operator
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 dgx/172.16.3.2
Start Time:           Tue, 22 Aug 2023 09:50:57 +0000
Labels:               app=nvidia-operator-validator
                     app.kubernetes.io/managed-by=gpu-operator
                     app.kubernetes.io/part-of=gpu-operator
                     controller-revision-hash=5fc6f598dc
                     helm.sh/chart=gpu-operator-v23.6.0
                     pod-template-generation=1
Annotations:          cni.projectcalico.org/containerID: b0e60190519cdf4bbad42d7da484f97767c976a6bdb6ba8bd1824ef2c346fe38
                     cni.projectcalico.org/podIP: 192.168.251.153/32
                     cni.projectcalico.org/podIPs: 192.168.251.153/32
Status:               Pending
IP:                   192.168.251.153
IPs:
 IP:           192.168.251.153
Controlled By:  DaemonSet/nvidia-operator-validator
Init Containers:
 driver-validation:
   Container ID:  containerd://f15cb4b7ad11a4fb48013a549ff833665b0f8c45b14bf03eaa5c0fabaea85a94
   Image:         nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
   Image ID:      nvcr.io/nvidia/cloud-native/gpu-operator-validator@sha256:b65eb649188193f39e169af5650acfc7aa3cc32d2328630118702f04fdc4afc1
   Port:          <none>
   Host Port:     <none>
   Command:
     sh
     -c
   Args:
     nvidia-validator
   State:          Terminated
     Reason:       Error
     Exit Code:    1
     Started:      Tue, 22 Aug 2023 10:01:50 +0000
     Finished:     Tue, 22 Aug 2023 10:01:52 +0000
   Last State:     Terminated
     Reason:       Error
     Exit Code:    1
     Started:      Tue, 22 Aug 2023 09:56:42 +0000
     Finished:     Tue, 22 Aug 2023 09:56:44 +0000
   Ready:          False
   Restart Count:  7
   Environment:
     WITH_WAIT:  true
     COMPONENT:  driver
   Mounts:
     /host from host-root (ro)
     /host-dev-char from host-dev-char (rw)
     /run/nvidia/driver from driver-install-path (rw)
     /run/nvidia/validations from run-nvidia-validations (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vw8v2 (ro)
 toolkit-validation:
   Container ID:  
   Image:         nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
   Image ID:      
   Port:          <none>
   Host Port:     <none>
   Command:
     sh
     -c
   Args:
     nvidia-validator
   State:          Waiting
     Reason:       PodInitializing
   Ready:          False
   Restart Count:  0
   Environment:
     NVIDIA_VISIBLE_DEVICES:  all
     WITH_WAIT:               false
     COMPONENT:               toolkit
   Mounts:
     /run/nvidia/validations from run-nvidia-validations (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vw8v2 (ro)
 cuda-validation:
   Container ID:  
   Image:         nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
   Image ID:      
   Port:          <none>
   Host Port:     <none>
   Command:
     sh
     -c
   Args:
     nvidia-validator
   State:          Waiting
     Reason:       PodInitializing
   Ready:          False
   Restart Count:  0
   Environment:
     WITH_WAIT:                    false
     COMPONENT:                    cuda
     NODE_NAME:                     (v1:spec.nodeName)
     OPERATOR_NAMESPACE:           gpu-operator (v1:metadata.namespace)
     VALIDATOR_IMAGE:              nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
     VALIDATOR_IMAGE_PULL_POLICY:  IfNotPresent
     VALIDATOR_RUNTIME_CLASS:      nvidia
   Mounts:
     /run/nvidia/validations from run-nvidia-validations (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vw8v2 (ro)
 plugin-validation:
   Container ID:  
   Image:         nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
   Image ID:      
   Port:          <none>
   Host Port:     <none>
   Command:
     sh
     -c
   Args:
     nvidia-validator
   State:          Waiting
     Reason:       PodInitializing
   Ready:          False
   Restart Count:  0
   Environment:
     COMPONENT:                    plugin
     WITH_WAIT:                    false
     WITH_WORKLOAD:                false
     MIG_STRATEGY:                 single
     NODE_NAME:                     (v1:spec.nodeName)
     OPERATOR_NAMESPACE:           gpu-operator (v1:metadata.namespace)
     VALIDATOR_IMAGE:              nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
     VALIDATOR_IMAGE_PULL_POLICY:  IfNotPresent
     VALIDATOR_RUNTIME_CLASS:      nvidia
   Mounts:
     /run/nvidia/validations from run-nvidia-validations (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vw8v2 (ro)
Containers:
 nvidia-operator-validator:
   Container ID:  
   Image:         nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0
   Image ID:      
   Port:          <none>
   Host Port:     <none>
   Command:
     sh
     -c
   Args:
     echo all validations are successful; sleep infinity
   State:          Waiting
     Reason:       PodInitializing
   Ready:          False
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /run/nvidia/validations from run-nvidia-validations (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vw8v2 (ro)
Conditions:
 Type              Status
 Initialized       False 
 Ready             False 
 ContainersReady   False 
 PodScheduled      True 
Volumes:
 run-nvidia-validations:
   Type:          HostPath (bare host directory volume)
   Path:          /run/nvidia/validations
   HostPathType:  DirectoryOrCreate
 driver-install-path:
   Type:          HostPath (bare host directory volume)
   Path:          /run/nvidia/driver
   HostPathType:  
 host-root:
   Type:          HostPath (bare host directory volume)
   Path:          /
   HostPathType:  
 host-dev-char:
   Type:          HostPath (bare host directory volume)
   Path:          /dev/char
   HostPathType:  
 kube-api-access-vw8v2:
   Type:                    Projected (a volume that contains injected data from multiple sources)
   TokenExpirationSeconds:  3607
   ConfigMapName:           kube-root-ca.crt
   ConfigMapOptional:       <nil>
   DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              nvidia.com/gpu.deploy.operator-validator=true
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                            node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                            node.kubernetes.io/not-ready:NoExecute op=Exists
                            node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                            node.kubernetes.io/unreachable:NoExecute op=Exists
                            node.kubernetes.io/unschedulable:NoSchedule op=Exists
                            nvidia.com/gpu:NoSchedule op=Exists
Events:
 Type     Reason     Age                  From               Message
 ----     ------     ----                 ----               -------
 Normal   Scheduled  11m                  default-scheduler  Successfully assigned gpu-operator/nvidia-operator-validator-5r4vr to dgx
 Normal   Pulled     9m28s (x5 over 11m)  kubelet            Container image "nvcr.io/nvidia/cloud-native/gpu-operator-validator:v23.6.0" already present on machine
 Normal   Created    9m28s (x5 over 11m)  kubelet            Created container driver-validation
 Normal   Started    9m28s (x5 over 11m)  kubelet            Started container driver-validation
 Warning  BackOff    54s (x48 over 10m)   kubelet            Back-off restarting failed container
g@gsrv:~$ 

Then I checked whether the problem is related to the daemonsets:

g@gsrv:~$ kubectl get daemonsets -n gpu-operator
NAME                                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                      AGE
gpu-feature-discovery                                   1         1         0       1            0           nvidia.com/gpu.deploy.gpu-feature-discovery=true   6m12s
gpu-operator-1692697849-node-feature-discovery-worker   2         2         2       2            2           <none>                                             6m14s
nvidia-dcgm-exporter                                    1         1         0       1            0           nvidia.com/gpu.deploy.dcgm-exporter=true           6m12s
nvidia-device-plugin-daemonset                          1         1         0       1            0           nvidia.com/gpu.deploy.device-plugin=true           6m12s
nvidia-mig-manager                                      1         1         0       1            0           nvidia.com/gpu.deploy.mig-manager=true             6m11s
nvidia-operator-validator                               1         1         0       1            0           nvidia.com/gpu.deploy.operator-validator=true      6m12s

I wonder if the node selector being “<none>” is part of the problem?
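As a side note (hedged): the daemonset with the <none> selector is the NFD worker, which is normally meant to run on every node, so that part looks expected. The other daemonsets select on nvidia.com/gpu.deploy.* labels, which can be listed on the GPU node with:

kubectl get node dgx --show-labels | tr ',' '\n' | grep nvidia.com/gpu.deploy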

Do you think it is worth opening an enterprise support case? (I can reset the cluster, but I feel like there must be an easier way.)

Cheers,
Ganindu.

P.S.

I also followed the instructions given by NVIDIA to uninstall the chart. It all went smoothly except for unloading the driver at the end.

When trying to unload the modules I encountered:

sudo rmmod nvidia_modeset nvidia_uvm nvidia
rmmod: ERROR: Module nvidia_modeset is in use by: nvidia_drm
rmmod: ERROR: Module nvidia is in use by: nvidia_modeset
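A hedged alternative to rebooting is to unload the modules in dependency order, since the error shows nvidia_drm still holds nvidia_modeset (this may still fail if a desktop session or another process is using the GPU):

sudo systemctl stop display-manager          # only needed if a graphical session is holding nvidia_drm
sudo rmmod nvidia_drm nvidia_modeset nvidia_uvm nvidia
lsmod | grep nvidia                          # should print nothing once the unload succeeds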

So I chose to reboot the node, and then reinstalled the chart again as I’ve mentioned above, but I keep getting the error.

I am afraid it is expected, since at the bottom the note mentions: “After un-install of GPU Operator, the NVIDIA driver modules might still be loaded. Either reboot the node or unload them using the following command: $ sudo rmmod nvidia_modeset nvidia_uvm nvidia”

Could you share the full status after you “reinstall the chart”?
For example, $ kubectl get pods -n gpu-operator

After reinstalling the chart, is there an error by default? Or does the error only appear after you try to purge the GPU Operator with some commands? It would be better to share a step-by-step guide so I can try to reproduce it. BTW, at the beginning, when you said “try to delete and reinstall gpu operator I get the following error”, could you share the error log?

Yeah, I’ve done that (that’s what I’m actually trying to say), but still no joy! :(

EDIT: Apologies!! I didn’t see your full message in the email.

I only saw the following bit:

“I am afraid it is expected since in the bottom the note mentions that After un-install of GPU Operator, the NVIDIA driver modules might still be loaded. Either reboot the node or unload them using the following command: $ sudo rmmod nvidia_modeset nvidia_uvm nvidia”

Hi,
I just reinstalled TAO 5.0 and then purged the NVIDIA GPU Operator successfully. My steps are attached.
20230823_purge_nvidia_gpu_operator.txt (16.5 KB)

Thanks a lot @Morganh!! I highly appreciate the level of support!!

I followed the steps in the file you uploaded; please check below!

Pre checks

listing the chart

g@gsrv:~$ helm ls -n gpu-operator
NAME                    NAMESPACE     REVISION  UPDATED                                 STATUS    CHART                 APP VERSION
gpu-operator-1692720059 gpu-operator  1         2023-08-22 16:01:04.446737327 +0000 UTC deployed  gpu-operator-v23.6.0  v23.6.0 

checking pods

g@gsrv:~$  kubectl get pods -n gpu-operator
NAME                                                              READY   STATUS                  RESTARTS        AGE
gpu-feature-discovery-hg6vw                                       0/1     Init:0/1                0               16h
gpu-operator-1692720059-node-feature-discovery-master-74b78zmhw   1/1     Running                 0               16h
gpu-operator-1692720059-node-feature-discovery-worker-7lqbt       1/1     Running                 0               16h
gpu-operator-1692720059-node-feature-discovery-worker-rxqvv       1/1     Running                 0               16h
gpu-operator-7b8668c994-kccdk                                     1/1     Running                 0               16h
nvidia-dcgm-exporter-w58h4                                        0/1     Init:0/1                0               16h
nvidia-device-plugin-daemonset-mc9fp                              0/1     Init:0/1                0               16h
nvidia-mig-manager-rvf28                                          0/1     Init:0/1                0               16h
nvidia-operator-validator-vt9r2                                   0/1     Init:CrashLoopBackOff   193 (69s ago)   16h

checking clusterroles

g@gsrv:~$  kubectl get clusterroles | grep gpu
gpu-operator                                                           2023-08-22T16:01:07Z
gpu-operator-1692635184-node-feature-discovery                         2023-08-21T16:26:30Z   (Note: could this be the cause, since we have two of these?)
gpu-operator-1692720059-node-feature-discovery                         2023-08-22T16:01:07Z
nvidia-gpu-feature-discovery                                           2023-08-22T16:01:26Z

checking clusterrolebindings

g@gsrv:~$ kubectl get clusterrolebinding | grep gpu
gpu-operator                                                      ClusterRole/gpu-operator                                                           16h
gpu-operator-1692720059-node-feature-discovery                    ClusterRole/gpu-operator-1692720059-node-feature-discovery                         16h
gpu-operator-1692720059-node-feature-discovery-topology-updater   ClusterRole/gpu-operator-1692720059-node-feature-discovery-topology-updater        16h
nvidia-gpu-feature-discovery                                      ClusterRole/nvidia-gpu-feature-discovery                                           16h

checking deployments, daemonsets and crds

g@gsrv:~$ kubectl get deployments -A
NAMESPACE          NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
calico-apiserver   calico-apiserver                                        2/2     2            2           13d
calico-system      calico-kube-controllers                                 1/1     1            1           13d
calico-system      calico-typha                                            1/1     1            1           13d
clearml            clearml-apiserver                                       1/1     1            1           12d
clearml            clearml-fileserver                                      1/1     1            1           12d
clearml            clearml-mongodb                                         1/1     1            1           12d
clearml            clearml-webserver                                       1/1     1            1           12d
gpu-operator       gpu-operator                                            1/1     1            1           16h
gpu-operator       gpu-operator-1692720059-node-feature-discovery-master   1/1     1            1           16h
k8-storage         nfs-subdir-external-provisioner                         1/1     1            1           12d
kube-system        coredns                                                 2/2     2            2           13d
nuclio             nuclio-controller                                       1/1     1            1           11d
nuclio             nuclio-dashboard                                        1/1     1            1           11d
nuclio             nuclio-test-nuctl-function-1                            1/1     1            1           5d18h   (Note: This is still working because it only needs CPU)
nuclio             nuclio-test-nuctl-function-2-retinanet                  0/1     1            0           4d23h   (Note: This is not working because it needs a GPU)
tao-gnet           ingress-nginx-controller                                1/1     1            1           41h
tao-gnet           tao-toolkit-api-app-pod                                 1/1     1            1           41h
tao-gnet           tao-toolkit-api-workflow-pod                            1/1     1            1           41h
tigera-operator    tigera-operator                                         1/1     1            1           13d




g@gsrv:~$ kubectl get daemonsets -A
NAMESPACE       NAME                                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                      AGE
calico-system   calico-node                                             2         2         2       2            2           kubernetes.io/os=linux                             13d
calico-system   csi-node-driver                                         2         2         2       2            2           kubernetes.io/os=linux                             13d
gpu-operator    gpu-feature-discovery                                   1         1         0       1            0           nvidia.com/gpu.deploy.gpu-feature-discovery=true   16h
gpu-operator    gpu-operator-1692720059-node-feature-discovery-worker   2         2         2       2            2           <none>                                             16h
gpu-operator    nvidia-dcgm-exporter                                    1         1         0       1            0           nvidia.com/gpu.deploy.dcgm-exporter=true           16h
gpu-operator    nvidia-device-plugin-daemonset                          1         1         0       1            0           nvidia.com/gpu.deploy.device-plugin=true           16h
gpu-operator    nvidia-mig-manager                                      1         1         0       1            0           nvidia.com/gpu.deploy.mig-manager=true             16h
gpu-operator    nvidia-operator-validator                               1         1         0       1            0           nvidia.com/gpu.deploy.operator-validator=true      16h
kube-system     kube-proxy                                              2         2         2       2            2           kubernetes.io/os=linux                             13d


g@gsrv:~$ kubectl get crd 
NAME                                                  CREATED AT
apiservers.operator.tigera.io                         2023-08-09T17:40:28Z
bgpconfigurations.crd.projectcalico.org               2023-08-09T17:40:27Z
bgpfilters.crd.projectcalico.org                      2023-08-09T17:40:28Z
bgppeers.crd.projectcalico.org                        2023-08-09T17:40:28Z
blockaffinities.crd.projectcalico.org                 2023-08-09T17:40:28Z
caliconodestatuses.crd.projectcalico.org              2023-08-09T17:40:28Z
clusterinformations.crd.projectcalico.org             2023-08-09T17:40:28Z
clusterpolicies.nvidia.com                            2023-08-22T16:01:02Z
felixconfigurations.crd.projectcalico.org             2023-08-09T17:40:28Z
globalnetworkpolicies.crd.projectcalico.org           2023-08-09T17:40:28Z
globalnetworksets.crd.projectcalico.org               2023-08-09T17:40:28Z
hostendpoints.crd.projectcalico.org                   2023-08-09T17:40:28Z
imagesets.operator.tigera.io                          2023-08-09T17:40:28Z
installations.operator.tigera.io                      2023-08-09T17:40:28Z
ipamblocks.crd.projectcalico.org                      2023-08-09T17:40:28Z
ipamconfigs.crd.projectcalico.org                     2023-08-09T17:40:28Z
ipamhandles.crd.projectcalico.org                     2023-08-09T17:40:28Z
ippools.crd.projectcalico.org                         2023-08-09T17:40:28Z
ipreservations.crd.projectcalico.org                  2023-08-09T17:40:28Z
kubecontrollersconfigurations.crd.projectcalico.org   2023-08-09T17:40:28Z
networkpolicies.crd.projectcalico.org                 2023-08-09T17:40:28Z
networksets.crd.projectcalico.org                     2023-08-09T17:40:28Z
nodefeaturerules.nfd.k8s-sigs.io                      2023-08-22T09:50:50Z
nodefeatures.nfd.k8s-sigs.io                          2023-08-09T17:53:28Z
nuclioapigateways.nuclio.io                           2023-08-11T11:08:55Z
nucliofunctionevents.nuclio.io                        2023-08-11T11:08:55Z
nucliofunctions.nuclio.io                             2023-08-11T11:08:55Z
nuclioprojects.nuclio.io                              2023-08-11T11:08:55Z
tigerastatuses.operator.tigera.io                     2023-08-09T17:40:29Z

Deleting

finding the generated chart name

g@gsrv:~$ helm list -n gpu-operator | grep gpu | awk '{print $1}'
gpu-operator-1692720059

deleting the chart

g@gsrv:~$ helm delete gpu-operator-1692720059
Error: uninstall: Release not loaded: gpu-operator-1692720059: release: not found
g@gsrv:~$ helm delete gpu-operator-1692720059

Because I saw artifacts from gpu-operator-1692635184, I tried deleting it again:

helm delete  gpu-operator-1692635184 -n gpu-operator                                   
Error: uninstall: Release not loaded: gpu-operator-1692635184: release: not found

deleting the stray clusterrole I found earlier to make sure it is all gone.

g@gsrv:~$ kubectl delete clusterroles gpu-operator-1692635184-node-feature-discovery
clusterrole.rbac.authorization.k8s.io "gpu-operator-1692635184-node-feature-discovery" deleted

uninstalling (but using the delete command as you’ve done)

helm delete gpu-operator-1692720059 -n gpu-operator
release "gpu-operator-1692720059" uninstalled

deleting the crd

g@gsrv:~$ kubectl delete crd clusterpolicies.nvidia.com
customresourcedefinition.apiextensions.k8s.io "clusterpolicies.nvidia.com" deleted

a check

g@gsrv:~$ helm uninstall --wait  gpu-operator-1692635184 -n gpu-operator
Error: uninstall: Release not loaded: gpu-operator-1692635184: release: not found
g@gsrv:~$ helm uninstall --wait gpu-operator-1692720059 -n gpu-operator
Error: uninstall: Release not loaded: gpu-operator-1692720059: release: not found

Then I even deleted the namespace just to be sure it’s all gone

g@gsrv:~$ kubectl delete namespace gpu-operator
namespace "gpu-operator" deleted

Checking it is all gone.

g@gsrv:~$  helm ls -n gpu-operator
NAME  NAMESPACE REVISION  UPDATED STATUS  CHART APP VERSION


g@gsrv:~$ kubectl get clusterroles | grep gpu
g@gsrv:~$ kubectl get clusterrolebinding | grep gpu
g@gsrv:~$ kubectl get clusterrolebinding | grep nv
g@gsrv:~$ kubectl get clusterroles | grep nv
g@gsrv:~$ kubectl get crd 
NAME                                                  CREATED AT
apiservers.operator.tigera.io                         2023-08-09T17:40:28Z
bgpconfigurations.crd.projectcalico.org               2023-08-09T17:40:27Z
bgpfilters.crd.projectcalico.org                      2023-08-09T17:40:28Z
bgppeers.crd.projectcalico.org                        2023-08-09T17:40:28Z
blockaffinities.crd.projectcalico.org                 2023-08-09T17:40:28Z
caliconodestatuses.crd.projectcalico.org              2023-08-09T17:40:28Z
clusterinformations.crd.projectcalico.org             2023-08-09T17:40:28Z
felixconfigurations.crd.projectcalico.org             2023-08-09T17:40:28Z
globalnetworkpolicies.crd.projectcalico.org           2023-08-09T17:40:28Z
globalnetworksets.crd.projectcalico.org               2023-08-09T17:40:28Z
hostendpoints.crd.projectcalico.org                   2023-08-09T17:40:28Z
imagesets.operator.tigera.io                          2023-08-09T17:40:28Z
installations.operator.tigera.io                      2023-08-09T17:40:28Z
ipamblocks.crd.projectcalico.org                      2023-08-09T17:40:28Z
ipamconfigs.crd.projectcalico.org                     2023-08-09T17:40:28Z
ipamhandles.crd.projectcalico.org                     2023-08-09T17:40:28Z
ippools.crd.projectcalico.org                         2023-08-09T17:40:28Z
ipreservations.crd.projectcalico.org                  2023-08-09T17:40:28Z
kubecontrollersconfigurations.crd.projectcalico.org   2023-08-09T17:40:28Z
networkpolicies.crd.projectcalico.org                 2023-08-09T17:40:28Z
networksets.crd.projectcalico.org                     2023-08-09T17:40:28Z
nodefeaturerules.nfd.k8s-sigs.io                      2023-08-22T09:50:50Z
nodefeatures.nfd.k8s-sigs.io                          2023-08-09T17:53:28Z
nuclioapigateways.nuclio.io                           2023-08-11T11:08:55Z
nucliofunctionevents.nuclio.io                        2023-08-11T11:08:55Z
nucliofunctions.nuclio.io                             2023-08-11T11:08:55Z
nuclioprojects.nuclio.io                              2023-08-11T11:08:55Z
tigerastatuses.operator.tigera.io                     2023-08-09T17:40:29Z

Then I shut down the DGX (to unload the drivers, in case they were lingering about).

Reinstalling the chart

  1. Turned the DGX back on
  2. Waited for the cluster (approx. 15 minutes) to settle (in case it needed some time for that)

make sure everything is up and running

g@dgx:~$  kubectl get pods -A 
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS       AGE
calico-apiserver   calico-apiserver-db54b987d-m66zz                          1/1     Running   0              13d
calico-apiserver   calico-apiserver-db54b987d-ncspt                          1/1     Running   0              13d
calico-system      calico-kube-controllers-666f5dcd4d-kj7fs                  1/1     Running   0              13d
calico-system      calico-node-j2ljx                                         1/1     Running   6 (27m ago)    13d
calico-system      calico-node-trx99                                         1/1     Running   0              13d
calico-system      calico-typha-585d9c9df4-x9c6k                             1/1     Running   0              13d
calico-system      csi-node-driver-slh5f                                     2/2     Running   0              13d
calico-system      csi-node-driver-wf8n9                                     2/2     Running   12 (27m ago)   13d
clearml            clearml-apiserver-76ff97d7f7-wcn6v                        1/1     Running   0              21m
clearml            clearml-elastic-master-0                                  1/1     Running   0              15m
clearml            clearml-fileserver-ff756c4b8-fk59x                        1/1     Running   0              21m
clearml            clearml-mongodb-5f9468969b-bmc6s                          1/1     Running   0              21m
clearml            clearml-redis-master-0                                    1/1     Running   0              15m
clearml            clearml-webserver-7f5fb5df5d-qpkbl                        1/1     Running   0              21m
default            cuda-vectoradd                                            0/1     Pending   0              21h
default            gpu-test-job-v2zxg                                        0/1     Pending   0              42h
k8-storage         nfs-subdir-external-provisioner-5669cc5b6-77gz5           1/1     Running   1 (15m ago)    21m
kube-system        coredns-57575c5f89-9flb2                                  1/1     Running   0              13d
kube-system        coredns-57575c5f89-nrd5f                                  1/1     Running   0              13d
kube-system        etcd-gsrv                                                 1/1     Running   0              13d
kube-system        kube-apiserver-gsrv                                       1/1     Running   0              13d
kube-system        kube-controller-manager-gsrv                              1/1     Running   0              13d
kube-system        kube-proxy-tzhrp                                          1/1     Running   6 (27m ago)    13d
kube-system        kube-proxy-z4hxr                                          1/1     Running   0              13d
kube-system        kube-scheduler-gsrv                                       1/1     Running   0              13d
nuclio             nuclio-controller-679c44dcdc-nsmtm                        1/1     Running   0              21m
nuclio             nuclio-dashboard-6496cdfd66-ktnkb                         1/1     Running   1 (15m ago)    21m
nuclio             nuclio-test-nuctl-function-1-84b6bd65bd-wz9hm             1/1     Running   0              21m
nuclio             nuclio-test-nuctl-function-2-retinanet-7d8545d7db-fv87n   0/1     Pending   0              41h
tao-gnet           ingress-nginx-controller-78d54fbd-g9nrm                   1/1     Running   0              21m
tao-gnet           tao-toolkit-api-app-pod-5ffc48cd57-7d4mx                  1/1     Running   0              21m
tao-gnet           tao-toolkit-api-workflow-pod-6dbc7c8f98-5dl8n             1/1     Running   0              21m
tigera-operator    tigera-operator-959786749-ctprw                           1/1     Running   0              13d

Then I reinstalled the chart! (This time I used a new namespace, gpu-operator-nvidia, instead of the previous value gpu-operator, to be extra safe.)

helm install --wait --generate-name \
     -n gpu-operator-nvidia --create-namespace \
      nvidia/gpu-operator \
      --set driver.enabled=false \
      --set toolkit.enabled=false

Still no joy!! :(
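Since the chart is installed with driver.enabled=false and toolkit.enabled=false, the operator expects the driver and the NVIDIA container toolkit to already be set up on the host. A couple of hedged host-side checks on the DGX (the paths assume a containerd-based setup, which matches the containerd:// container IDs in the describe output above):

nvidia-ctk --version                                 # is the NVIDIA container toolkit installed on the host?
sudo grep -n nvidia /etc/containerd/config.toml      # is containerd configured with the nvidia runtime?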

Then I looked for logs.

  1. find all pods in the namespace
g@gsrv:~$ kubectl get pods -n gpu-operator-nvidia
NAME                                                              READY   STATUS                  RESTARTS        AGE
gpu-feature-discovery-c24v4                                       0/1     Init:0/1                0               18m
gpu-operator-1692786693-node-feature-discovery-master-f947hcgkb   1/1     Running                 0               18m
gpu-operator-1692786693-node-feature-discovery-worker-flhl8       1/1     Running                 0               18m
gpu-operator-1692786693-node-feature-discovery-worker-thvxj       1/1     Running                 0               18m
gpu-operator-5747c5f6db-2cvht                                     1/1     Running                 0               18m
nvidia-dcgm-exporter-zcxbb                                        0/1     Init:0/1                0               18m
nvidia-device-plugin-daemonset-mqdxh                              0/1     Init:0/1                0               18m
nvidia-mig-manager-5n7sl                                          0/1     Init:0/1                0               18m
nvidia-operator-validator-ztgbb                                   0/1     Init:CrashLoopBackOff   8 (2m51s ago)   18m

Then I got logs for all pods, starting from the running ones.

logs from gpu-operator-1692786693-node-feature-discovery-master-f947hcgkb

g@gsrv:~$ kubectl logs  -n gpu-operator-nvidia gpu-operator-1692786693-node-feature-discovery-master-f947hcgkb
W0823 10:31:41.346829       1 main.go:56] -featurerules-controller is deprecated, use '-crd-controller' flag instead
I0823 10:31:41.347026       1 nfd-master.go:181] Node Feature Discovery Master v0.13.1
I0823 10:31:41.347038       1 nfd-master.go:185] NodeName: "gsrv"
I0823 10:31:41.347048       1 nfd-master.go:186] Kubernetes namespace: "gpu-operator-nvidia"
I0823 10:31:41.347112       1 nfd-master.go:1091] config file "/etc/kubernetes/node-feature-discovery/nfd-master.conf" not found, using defaults
I0823 10:31:41.347303       1 nfd-master.go:1145] master (re-)configuration successfully completed
I0823 10:31:41.347319       1 nfd-master.go:202] starting nfd api controller
I0823 10:31:41.376543       1 component.go:36] [core][Server #1] Server created
I0823 10:31:41.376576       1 nfd-master.go:292] gRPC server serving on port: 8080
I0823 10:31:41.376641       1 component.go:36] [core][Server #1 ListenSocket #2] ListenSocket created
I0823 10:31:42.375973       1 nfd-master.go:601] will process all nodes in the cluster

I am not sure whether “config file "/etc/kubernetes/node-feature-discovery/nfd-master.conf" not found, using defaults” is a problem, as mentioned in the quote below from topic 226781?

I checked the /etc/kubernetes/node-feature-discovery directories on the two nodes.

gsrv (master node)

g@gsrv:~$ tree  /etc/kubernetes/node-feature-discovery
/etc/kubernetes/node-feature-discovery
├── features.d
└── source.d

2 directories, 0 files

dgx (gpu node)

g@dgx:~$ tree  /etc/kubernetes/node-feature-discovery
/etc/kubernetes/node-feature-discovery
├── features.d
└── source.d

2 directories, 0 files

Maybe the nfd-master.conf got deleted, or due to some config error it is looking for a file that never existed?

However, when I check for configmaps, there seems to be one created with the generated name:

g@gsrv:~$ kubectl get configmap -n gpu-operator-nvidia
NAME                                                                   DATA   AGE
default-gpu-clients                                                    1      70m
default-mig-parted-config                                              1      70m
gpu-operator-1692786693-node-feature-discovery-master-conf             1      70m
gpu-operator-1692786693-node-feature-discovery-topology-updater-conf   1      70m
gpu-operator-1692786693-node-feature-discovery-worker-conf             1      70m
kube-root-ca.crt                                                       1      70m
nvidia-device-plugin-entrypoint                                        1      70m
nvidia-mig-manager-entrypoint                                          1      70m

When I run kubectl edit configmap -n gpu-operator-nvidia gpu-operator-1692786693-node-feature-discovery-master-conf it opens up for editing, so maybe this is not an issue?

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  nfd-master.conf: |-
    extraLabelNs:
    - nvidia.com
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: gpu-operator-1692786693
    meta.helm.sh/release-namespace: gpu-operator-nvidia
  creationTimestamp: "2023-08-23T10:31:39Z"
  labels:
    app.kubernetes.io/instance: gpu-operator-1692786693
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: node-feature-discovery
    app.kubernetes.io/version: v0.13.1
    helm.sh/chart: node-feature-discovery-0.13.1
  name: gpu-operator-1692786693-node-feature-discovery-master-conf
  namespace: gpu-operator-nvidia
  resourceVersion: "3422257"
  uid: 37c2b25f-3b04-4028-a8be-f95a353544fc                                     
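On the nfd-master.conf question above: as far as I understand, the NFD master reads its config from this ConfigMap, which the chart mounts into the pod at /etc/kubernetes/node-feature-discovery rather than from that path on the host, so the empty host directories are probably a red herring. A hedged way to confirm the mount (deployment name inferred from the pod name):

kubectl get deployment -n gpu-operator-nvidia gpu-operator-1692786693-node-feature-discovery-master -o yaml | grep -B2 -A6 configMap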

logs from gpu-operator-1692786693-node-feature-discovery-worker-flhl8

g@gsrv:~$ kubectl logs  -n gpu-operator-nvidia gpu-operator-1692786693-node-feature-discovery-worker-flhl8 
I0823 10:31:41.387093       1 nfd-worker.go:222] Node Feature Discovery Worker v0.13.1
I0823 10:31:41.387150       1 nfd-worker.go:223] NodeName: 'gsrv'
I0823 10:31:41.387158       1 nfd-worker.go:224] Kubernetes namespace: 'gpu-operator-nvidia'
I0823 10:31:41.387779       1 nfd-worker.go:518] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0823 10:31:41.387911       1 nfd-worker.go:550] worker (re-)configuration successfully completed
I0823 10:31:41.411725       1 local.go:115] starting hooks...
I0823 10:31:41.445933       1 nfd-worker.go:561] starting feature discovery...
I0823 10:31:41.446421       1 nfd-worker.go:573] feature discovery completed
I0823 10:31:41.466469       1 nfd-worker.go:694] creating NodeFeature object "gsrv"
I0823 10:32:41.423353       1 local.go:115] starting hooks...
I0823 10:32:41.484458       1 nfd-worker.go:561] starting feature discovery...
I0823 10:32:41.485156       1 nfd-worker.go:573] feature discovery completed
...
...
I0823 10:49:41.430477       1 local.go:115] starting hooks...
I0823 10:49:41.479448       1 nfd-worker.go:561] starting feature discovery...
I0823 10:49:41.480000       1 nfd-worker.go:573] feature discovery completed
...
...

logs from gpu-operator-1692786693-node-feature-discovery-worker-thvxj

g@gsrv:~$ kubectl logs  -n gpu-operator-nvidia gpu-operator-1692786693-node-feature-discovery-worker-thvxj
I0823 10:31:40.964623       1 nfd-worker.go:222] Node Feature Discovery Worker v0.13.1
I0823 10:31:40.964651       1 nfd-worker.go:223] NodeName: 'dgx'
I0823 10:31:40.964655       1 nfd-worker.go:224] Kubernetes namespace: 'gpu-operator-nvidia'
I0823 10:31:40.966036       1 nfd-worker.go:518] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0823 10:31:40.966088       1 nfd-worker.go:550] worker (re-)configuration successfully completed
I0823 10:31:40.975638       1 local.go:115] starting hooks...
I0823 10:31:40.993071       1 nfd-worker.go:561] starting feature discovery...
I0823 10:31:40.993439       1 nfd-worker.go:573] feature discovery completed
I0823 10:31:41.010433       1 nfd-worker.go:694] creating NodeFeature object "dgx"
I0823 10:32:41.000681       1 local.go:115] starting hooks...
I0823 10:32:41.017481       1 nfd-worker.go:561] starting feature discovery...
I0823 10:32:41.017801       1 nfd-worker.go:573] feature discovery completed
...
...
I0823 10:40:41.002458       1 local.go:115] starting hooks...
I0823 10:40:41.016863       1 nfd-worker.go:561] starting feature discovery...
I0823 10:40:41.017193       1 nfd-worker.go:573] feature discovery completed
...
...

logs from gpu-operator-5747c5f6db-2cvht

gpu-operator-5747c5f6db-2cvht.txt (4.7 MB)

logs from nvidia-dcgm-exporter-zcxbb

g@gsrv:~$ kubectl logs -n gpu-operator-nvidia nvidia-dcgm-exporter-zcxbb
Defaulted container "nvidia-dcgm-exporter" out of: nvidia-dcgm-exporter, toolkit-validation (init)
Error from server (BadRequest): container "nvidia-dcgm-exporter" in pod "nvidia-dcgm-exporter-zcxbb" is waiting to start: PodInitializing

logs from nvidia-device-plugin-daemonset-mqdxh

g@gsrv:~$ kubectl logs -n gpu-operator-nvidia nvidia-device-plugin-daemonset-mqdxh
Defaulted container "nvidia-device-plugin" out of: nvidia-device-plugin, toolkit-validation (init)
Error from server (BadRequest): container "nvidia-device-plugin" in pod "nvidia-device-plugin-daemonset-mqdxh" is waiting to start: PodInitializing

logs from nvidia-mig-manager-5n7sl

g@gsrv:~$ kubectl logs -n gpu-operator-nvidia nvidia-mig-manager-5n7sl
Defaulted container "nvidia-mig-manager" out of: nvidia-mig-manager, toolkit-validation (init)
Error from server (BadRequest): container "nvidia-mig-manager" in pod "nvidia-mig-manager-5n7sl" is waiting to start: PodInitializing

logs from nvidia-operator-validator-ztgbb

g@gsrv:~$ kubectl logs -n gpu-operator-nvidia nvidia-operator-validator-ztgbb
Defaulted container "nvidia-operator-validator" out of: nvidia-operator-validator, driver-validation (init), toolkit-validation (init), cuda-validation (init), plugin-validation (init)
Error from server (BadRequest): container "nvidia-operator-validator" in pod "nvidia-operator-validator-ztgbb" is waiting to start: PodInitializing

Could we be having different experiences because the topologies are different? Or could the “missing” nfd-master.conf be the cause of the problem?

I can refresh the cluster and see if such a file gets created?

Or is there a way to drill into the nvidia-operator-validator-ztgbb pod, or the nvidia-operator-validator semantics, to find out which stage of operator validation is failing?
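One hedged way to drill in: each validation stage runs as a separate init container (the names are in the describe output earlier in this thread), and, as far as I can tell, each stage drops a status file under /run/nvidia/validations (the hostPath the pod mounts) once it passes. So dumping the init-container logs one by one shows where it stops:

for c in driver-validation toolkit-validation cuda-validation plugin-validation; do
  echo "== $c =="
  kubectl logs -n gpu-operator-nvidia nvidia-operator-validator-ztgbb -c "$c"
done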

On the dgx (GPU node), the /run/nvidia directory is empty:

g@dgx:~$ tree /run/nvidia/
/run/nvidia/
├── driver
└── validations

2 directories, 0 files
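Given driver.enabled=false, I would expect /run/nvidia/driver to stay empty (the operator only populates it when it manages the driver itself), but an empty validations directory suggests even the first stage never finished. A couple of hedged checks on the host itself:

nvidia-smi                        # run on dgx: is the preinstalled driver loaded and healthy?
ls -l /dev/char | grep nvidia     # device-node symlinks behind the /host-dev-char mount used by driver-validation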

I tried deleting the /etc/kubernetes/node-feature-discovery folder and reinstalling the gpu-operator, but it still failed to fix the issue (I attached the log below). Maybe the operator validator issue is not connected to that!
deleting-node-feature-discovery-folder.txt (17.7 KB)

Logs for the state of the cluster after the last attempt (23/08/2023):
logs_from_final_attempt_23_08_23.txt (44.0 KB)

Thanks for the detailed info. Appreciate it. I will check further. It seems that the purge is fine, but it gets stuck at reinstalling on your DGX machine.

Could you please run "kubectl describe" against the above pods (especially the 5 failed pods)? Thanks a lot.

Also, may I know whether you have run the above command successfully a long time ago, or is this the first triage recently? Is the command available in the DGX user guide?

It (the source for the install command) is available. I linked the source somewhere near the top (that links to my notes, which link the source).

I will reply to the rest in the morning. It is night time in the U.K. now, sorry 🙏🙏

No worries. I will take a look at the source. Thanks for the info.

I think the output of kubectl describe pod -n gpu-operator is all included in the section above.

  1. This command and my notes are referenced in

  2. The specific section in my notes regarding the GPU Operator install is this.

  3. The specific location in the NVIDIA guide docs is this

Yes, this command has always worked for me (from a fresh install). I don’t have recent memories of having to reinstall the GPU Operator, but I think this is the right command for my kind of setup.

Thanks a lot for getting back to me! Please let me know if there is anything else needed from my end! I highly appreciate your time helping us!!

Best,
Ganindu.

Could you please share the output of the following as well? Thanks.
$ helm show values nvidia/gpu-operator

g@gsrv:~$ helm show values nvidia/gpu-operator
# Default values for gpu-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

platform:
  openshift: false

nfd:
  enabled: true
  nodefeaturerules: false

# deprecated: use PodSecurityAdmission (PSA) controls instead
psp:
  enabled: false

psa:
  enabled: false

cdi:
  enabled: false
  default: false

sandboxWorkloads:
  enabled: false
  defaultWorkload: "container"

daemonsets:
  labels: {}
  annotations: {}
  priorityClassName: system-node-critical
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
  # configuration for controlling update strategy("OnDelete" or "RollingUpdate") of GPU Operands
  # note that driver Daemonset is always set with OnDelete to avoid unintended disruptions
  updateStrategy: "RollingUpdate"
  # configuration for controlling rolling update of GPU Operands
  rollingUpdate:
    # maximum number of nodes to simultaneously apply pod updates on.
    # can be specified either as number or percentage of nodes. Default 1.
    maxUnavailable: "1"

validator:
  repository: nvcr.io/nvidia/cloud-native
  image: gpu-operator-validator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  args: []
  resources: {}
  plugin:
    env:
      - name: WITH_WORKLOAD
        value: "false"

operator:
  repository: nvcr.io/nvidia
  image: gpu-operator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  priorityClassName: system-node-critical
  defaultRuntime: docker
  runtimeClass: nvidia
  use_ocp_driver_toolkit: false
  # cleanup CRD on chart un-install
  cleanupCRD: false
  # upgrade CRD on chart upgrade, requires --disable-openapi-validation flag
  # to be passed during helm upgrade.
  upgradeCRD: false
  initContainer:
    image: cuda
    repository: nvcr.io/nvidia
    version: 12.2.0-base-ubi8
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  annotations:
    openshift.io/scc: restricted-readonly
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/master"
                operator: In
                values: [""]
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/control-plane"
                operator: In
                values: [""]
  logging:
    # Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano')
    timeEncoding: epoch
    # Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
    level: info
    # Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn)
    # Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error)
    develMode: false
  resources:
    limits:
      cpu: 500m
      memory: 350Mi
    requests:
      cpu: 200m
      memory: 100Mi

mig:
  strategy: single

driver:
  enabled: true
  # use pre-compiled packages for NVIDIA driver installation.
  # only supported for as a tech-preview feature on ubuntu22.04 kernels.
  usePrecompiled: false
  repository: nvcr.io/nvidia
  image: driver
  version: "535.86.10"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  startupProbe:
    initialDelaySeconds: 60
    periodSeconds: 10
    # nvidia-smi can take longer than 30s in some cases
    # ensure enough timeout is set
    timeoutSeconds: 60
    failureThreshold: 120
  rdma:
    enabled: false
    useHostMofed: false
  upgradePolicy:
    # global switch for automatic upgrade feature
    # if set to false all other options are ignored
    autoUpgrade: true
    # how many nodes can be upgraded in parallel
    # 0 means no limit, all nodes will be upgraded in parallel
    maxParallelUpgrades: 1
    # maximum number of nodes with the driver installed, that can be unavailable during
    # the upgrade. Value can be an absolute number (ex: 5) or
    # a percentage of total nodes at the start of upgrade (ex:
    # 10%). Absolute number is calculated from percentage by rounding
    # up. By default, a fixed value of 25% is used.'
    maxUnavailable: 25%
    # options for waiting on pod(job) completions
    waitForCompletion:
      timeoutSeconds: 0
      podSelector: ""
    # options for gpu pod deletion
    gpuPodDeletion:
      force: false
      timeoutSeconds: 300
      deleteEmptyDir: false
    # options for node drain (`kubectl drain`) before the driver reload
    # this is required only if default GPU pod deletions done by the operator
    # are not sufficient to re-install the driver
    drain:
      enable: false
      force: false
      podSelector: ""
      # It's recommended to set a timeout to avoid infinite drain in case non-fatal error keeps happening on retries
      timeoutSeconds: 300
      deleteEmptyDir: false
  manager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    version: v0.6.2
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "true"
      - name: ENABLE_AUTO_DRAIN
        value: "false"
      - name: DRAIN_USE_FORCE
        value: "false"
      - name: DRAIN_POD_SELECTOR_LABEL
        value: ""
      - name: DRAIN_TIMEOUT_SECONDS
        value: "0s"
      - name: DRAIN_DELETE_EMPTYDIR_DATA
        value: "false"
  env: []
  resources: {}
  # Private mirror repository configuration
  repoConfig:
    configMapName: ""
  # custom ssl key/certificate configuration
  certConfig:
    name: ""
  # vGPU licensing configuration
  licensingConfig:
    configMapName: ""
    nlsEnabled: false
  # vGPU topology daemon configuration
  virtualTopology:
    config: ""
  # kernel module configuration for NVIDIA driver
  kernelModuleConfig:
    name: ""

toolkit:
  enabled: true
  repository: nvcr.io/nvidia/k8s
  image: container-toolkit
  version: v1.13.4-ubuntu20.04
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  installDir: "/usr/local/nvidia"

devicePlugin:
  enabled: true
  repository: nvcr.io/nvidia
  image: k8s-device-plugin
  version: v0.14.1-ubi8
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  args: []
  env:
    - name: PASS_DEVICE_SPECS
      value: "true"
    - name: FAIL_ON_INIT_ERROR
      value: "true"
    - name: DEVICE_LIST_STRATEGY
      value: envvar
    - name: DEVICE_ID_STRATEGY
      value: uuid
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: all
  resources: {}
  # Plugin configuration
  # Use "name" to either point to an existing ConfigMap or to create a new one with a list of configurations(i.e with create=true).
  # Use "data" to build an integrated ConfigMap from a set of configurations as
  # part of this helm chart. An example of setting "data" might be:
  # config:
  #   name: device-plugin-config
  #   create: true
  #   data:
  #     default: |-
  #       version: v1
  #       flags:
  #         migStrategy: none
  #     mig-single: |-
  #       version: v1
  #       flags:
  #         migStrategy: single
  #     mig-mixed: |-
  #       version: v1
  #       flags:
  #         migStrategy: mixed
  config:
    # Create a ConfigMap (default: false)
    create: false
    # ConfigMap name (either exiting or to create a new one with create=true above)
    name: ""
    # Default config name within the ConfigMap
    default: ""
    # Data section for the ConfigMap to create (i.e only applies when create=true)
    data: {}

# standalone dcgm hostengine
dcgm:
  # disabled by default to use embedded nv-hostengine by exporter
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: dcgm
  version: 3.1.8-1-ubuntu20.04
  imagePullPolicy: IfNotPresent
  hostPort: 5555
  args: []
  env: []
  resources: {}

dcgmExporter:
  enabled: true
  repository: nvcr.io/nvidia/k8s
  image: dcgm-exporter
  version: 3.1.8-3.1.5-ubuntu20.04
  imagePullPolicy: IfNotPresent
  env:
    - name: DCGM_EXPORTER_LISTEN
      value: ":9400"
    - name: DCGM_EXPORTER_KUBERNETES
      value: "true"
    - name: DCGM_EXPORTER_COLLECTORS
      value: "/etc/dcgm-exporter/dcp-metrics-included.csv"
  resources: {}
  serviceMonitor:
    enabled: false
    interval: 15s
    honorLabels: false
    additionalLabels: {}
    relabelings: []
    # - source_labels:
    #     - __meta_kubernetes_pod_node_name
    #   regex: (.*)
    #   target_label: instance
    #   replacement: $1
    #   action: replace

gfd:
  enabled: true
  repository: nvcr.io/nvidia
  image: gpu-feature-discovery
  version: v0.8.1-ubi8
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: GFD_SLEEP_INTERVAL
      value: 60s
    - name: GFD_FAIL_ON_INIT_ERROR
      value: "true"
  resources: {}

migManager:
  enabled: true
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-mig-manager
  version: v0.5.3-ubuntu20.04
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: WITH_REBOOT
      value: "false"
  resources: {}
  config:
    name: "default-mig-parted-config"
    default: "all-disabled"
  gpuClientsConfig:
    name: ""

nodeStatusExporter:
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: gpu-operator-validator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  resources: {}

gds:
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: nvidia-fs
  version: "2.16.1"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  args: []

vgpuManager:
  enabled: false
  repository: ""
  image: vgpu-manager
  version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  driverManager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    version: v0.6.2
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "false"
      - name: ENABLE_AUTO_DRAIN
        value: "false"

vgpuDeviceManager:
  enabled: true
  repository: nvcr.io/nvidia/cloud-native
  image: vgpu-device-manager
  version: "v0.2.3"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  config:
    name: ""
    default: "default"

vfioManager:
  enabled: true
  repository: nvcr.io/nvidia
  image: cuda
  version: 12.2.0-base-ubi8
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  driverManager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    version: v0.6.2
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "false"
      - name: ENABLE_AUTO_DRAIN
        value: "false"

kataManager:
  enabled: false
  config:
    artifactsDir: "/opt/nvidia-gpu-operator/artifacts/runtimeclasses"
    runtimeClasses:
      - name: kata-qemu-nvidia-gpu
        nodeSelector: {}
        artifacts:
          url: nvcr.io/nvidia/cloud-native/kata-gpu-artifacts:ubuntu22.04-535.54.03
          pullSecret: ""
      - name: kata-qemu-nvidia-gpu-snp
        nodeSelector:
          "nvidia.com/cc.capable": "true"
        artifacts:
          url: nvcr.io/nvidia/cloud-native/kata-gpu-artifacts:ubuntu22.04-535.86.10-snp
          pullSecret: ""
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-kata-manager
  version: v0.1.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}

sandboxDevicePlugin:
  enabled: true
  repository: nvcr.io/nvidia
  image: kubevirt-gpu-device-plugin
  version: v1.2.2
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  args: []
  env: []
  resources: {}

ccManager:
  enabled: false
  defaultMode: "off"
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-cc-manager
  version: v0.1.0
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: CC_CAPABLE_DEVICE_IDS
      value: "0x2339,0x2331,0x2330,0x2324,0x2322,0x233d"
  resources: {}

node-feature-discovery:
  enableNodeFeatureApi: true
  worker:
    serviceAccount:
      name: node-feature-discovery
      # disable creation to avoid duplicate serviceaccount creation by master spec below
      create: false
    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
    config:
      sources:
        pci:
          deviceClassWhitelist:
          - "02"
          - "0200"
          - "0207"
          - "0300"
          - "0302"
          deviceLabelFields:
          - vendor
  master:
    serviceAccount:
      name: node-feature-discovery
      create: true
    config:
      extraLabelNs: ["nvidia.com"]
      # noPublish: false
      # resourceLabels: ["nvidia.com/feature-1","nvidia.com/feature-2"]
      # enableTaints: false
      # labelWhiteList: "nvidia.com/gpu"

g@gsrv:~$ 
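Side note: in case it helps anyone compare, the chart’s bundled defaults can be dumped with a standard helm command and diffed against the values above (generic helm usage, nothing specific to my setup):

helm show values nvidia/gpu-operator > gpu-operator-defaults.yaml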


UPDATE:

This is how I re-init the DGX when I’m reinstalling the cluster (just putting it here in case there is something useful).

Here are all the node labels for the DGX node. I wonder if a label lingering after chart deletion is causing the problem (a filtering sketch follows the output below).

g@gsrv:~$ kubectl get nodes dgx   --show-labels
NAME   STATUS   ROLES    AGE   VERSION    LABELS
dgx    Ready    <none>   14d   v1.24.14   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,feature.node.kubernetes.io/cpu-cpuid.ADX=true,feature.node.kubernetes.io/cpu-cpuid.AESNI=true,feature.node.kubernetes.io/cpu-cpuid.AVX2=true,feature.node.kubernetes.io/cpu-cpuid.AVX=true,feature.node.kubernetes.io/cpu-cpuid.CLZERO=true,feature.node.kubernetes.io/cpu-cpuid.CMPXCHG8=true,feature.node.kubernetes.io/cpu-cpuid.CPBOOST=true,feature.node.kubernetes.io/cpu-cpuid.FMA3=true,feature.node.kubernetes.io/cpu-cpuid.FP256=true,feature.node.kubernetes.io/cpu-cpuid.FXSR=true,feature.node.kubernetes.io/cpu-cpuid.FXSROPT=true,feature.node.kubernetes.io/cpu-cpuid.IBPB=true,feature.node.kubernetes.io/cpu-cpuid.IBRS=true,feature.node.kubernetes.io/cpu-cpuid.IBRS_PREFERRED=true,feature.node.kubernetes.io/cpu-cpuid.IBRS_PROVIDES_SMP=true,feature.node.kubernetes.io/cpu-cpuid.IBS=true,feature.node.kubernetes.io/cpu-cpuid.IBSBRNTRGT=true,feature.node.kubernetes.io/cpu-cpuid.IBSFETCHSAM=true,feature.node.kubernetes.io/cpu-cpuid.IBSFFV=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPCNT=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPCNTEXT=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPSAM=true,feature.node.kubernetes.io/cpu-cpuid.IBSRDWROPCNT=true,feature.node.kubernetes.io/cpu-cpuid.IBSRIPINVALIDCHK=true,feature.node.kubernetes.io/cpu-cpuid.IBS_FETCH_CTLX=true,feature.node.kubernetes.io/cpu-cpuid.IBS_OPFUSE=true,feature.node.kubernetes.io/cpu-cpuid.INT_WBINVD=true,feature.node.kubernetes.io/cpu-cpuid.LAHF=true,feature.node.kubernetes.io/cpu-cpuid.LBRVIRT=true,feature.node.kubernetes.io/cpu-cpuid.MCAOVERFLOW=true,feature.node.kubernetes.io/cpu-cpuid.MCOMMIT=true,feature.node.kubernetes.io/cpu-cpuid.MOVBE=true,feature.node.kubernetes.io/cpu-cpuid.MOVU=true,feature.node.kubernetes.io/cpu-cpuid.MSRIRC=true,feature.node.kubernetes.io/cpu-cpuid.MSR_PAGEFLUSH=true,feature.node.kubernetes.io/cpu-cpuid.NRIPS=true,feature.node.kubernetes.io/cpu-cpuid.OSXSAVE=true,feature.node.kubernetes.io/cpu-cpuid.PPIN=true,feature.node.kubernetes.io/cpu-cpuid.RDPRU=true,feature.node.kubernetes.io/cpu-cpuid.SEV=true,feature.node.kubernetes.io/cpu-cpuid.SEV_ES=true,feature.node.kubernetes.io/cpu-cpuid.SHA=true,feature.node.kubernetes.io/cpu-cpuid.SME=true,feature.node.kubernetes.io/cpu-cpuid.SPEC_CTRL_SSBD=true,feature.node.kubernetes.io/cpu-cpuid.SSE4A=true,feature.node.kubernetes.io/cpu-cpuid.STIBP=true,feature.node.kubernetes.io/cpu-cpuid.SUCCOR=true,feature.node.kubernetes.io/cpu-cpuid.SVM=true,feature.node.kubernetes.io/cpu-cpuid.SVMDA=true,feature.node.kubernetes.io/cpu-cpuid.SVMFBASID=true,feature.node.kubernetes.io/cpu-cpuid.SVML=true,feature.node.kubernetes.io/cpu-cpuid.SVMNP=true,feature.node.kubernetes.io/cpu-cpuid.SVMPF=true,feature.node.kubernetes.io/cpu-cpuid.SVMPFT=true,feature.node.kubernetes.io/cpu-cpuid.SYSCALL=true,feature.node.kubernetes.io/cpu-cpuid.SYSEE=true,feature.node.kubernetes.io/cpu-cpuid.TOPEXT=true,feature.node.kubernetes.io/cpu-cpuid.TSCRATEMSR=true,feature.node.kubernetes.io/cpu-cpuid.VMCBCLEAN=true,feature.node.kubernetes.io/cpu-cpuid.VTE=true,feature.node.kubernetes.io/cpu-cpuid.WBNOINVD=true,feature.node.kubernetes.io/cpu-cpuid.X87=true,feature.node.kubernetes.io/cpu-cpuid.XGETBV1=true,feature.node.kubernetes.io/cpu-cpuid.XSAVE=true,feature.node.kubernetes.io/cpu-cpuid.XSAVEC=true,feature.node.kubernetes.io/cpu-cpuid.XSAVEOPT=true,feature.node.kubernetes.io/cpu-cpuid.XSAVES=true,feature.node.kubernetes.io/cpu-hardware_multithreading=true,feature.node.kubernetes.io/cpu-model.family=23,feature.no
de.kubernetes.io/cpu-model.id=49,feature.node.kubernetes.io/cpu-model.vendor_id=AMD,feature.node.kubernetes.io/cpu-rdt.RDTCMT=true,feature.node.kubernetes.io/cpu-rdt.RDTL3CA=true,feature.node.kubernetes.io/cpu-rdt.RDTMBM=true,feature.node.kubernetes.io/cpu-rdt.RDTMON=true,feature.node.kubernetes.io/cpu-security.sev.enabled=true,feature.node.kubernetes.io/kernel-config.NO_HZ=true,feature.node.kubernetes.io/kernel-config.NO_HZ_IDLE=true,feature.node.kubernetes.io/kernel-version.full=5.15.0-1030-nvidia,feature.node.kubernetes.io/kernel-version.major=5,feature.node.kubernetes.io/kernel-version.minor=15,feature.node.kubernetes.io/kernel-version.revision=0,feature.node.kubernetes.io/network-sriov.capable=true,feature.node.kubernetes.io/pci-10de.present=true,feature.node.kubernetes.io/pci-10de.sriov.capable=true,feature.node.kubernetes.io/pci-1a03.present=true,feature.node.kubernetes.io/pci-8086.present=true,feature.node.kubernetes.io/pci-8086.sriov.capable=true,feature.node.kubernetes.io/storage-nonrotationaldisk=true,feature.node.kubernetes.io/system-os_release.ID=ubuntu,feature.node.kubernetes.io/system-os_release.VERSION_ID.major=22,feature.node.kubernetes.io/system-os_release.VERSION_ID.minor=04,feature.node.kubernetes.io/system-os_release.VERSION_ID=22.04,kubernetes.io/arch=amd64,kubernetes.io/hostname=dgx,kubernetes.io/os=linux,nvidia.com/gpu.deploy.container-toolkit=true,nvidia.com/gpu.deploy.dcgm-exporter=true,nvidia.com/gpu.deploy.dcgm=true,nvidia.com/gpu.deploy.device-plugin=true,nvidia.com/gpu.deploy.driver=true,nvidia.com/gpu.deploy.gpu-feature-discovery=true,nvidia.com/gpu.deploy.mig-manager=true,nvidia.com/gpu.deploy.node-status-exporter=true,nvidia.com/gpu.deploy.operator-validator=true,nvidia.com/gpu.present=true,nvidia.com/mig.config.state=success,nvidia.com/mig.config=all-disabled
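To make it easier to spot the nvidia.com labels in that wall of text, they can be filtered; a minimal sketch (assuming the node is named dgx):

kubectl get node dgx --show-labels --no-headers | tr ',' '\n' | grep 'nvidia.com/'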

Checking whether the node labels persist after purging the chart.

I have deleted the gpu-operator chart using the steps I listed in the replies above.

Now checking to confirm the chart is deleted:

g@gsrv:~$ helm ls -n gpu-operator
NAME  NAMESPACE REVISION  UPDATED STATUS  CHART APP VERSION
g@gsrv:~$ kubectl get clusterroles | grep gpu
g@gsrv:~$ kubectl get clusterrolebinding | grep gpu
g@gsrv:~$ kubectl get clusterrolebinding | grep nv
g@gsrv:~$ kubectl get clusterroles | grep nv
g@gsrv:~$ kubectl get crd 
NAME                                                  CREATED AT
apiservers.operator.tigera.io                         2023-08-09T17:40:28Z
bgpconfigurations.crd.projectcalico.org               2023-08-09T17:40:27Z
bgpfilters.crd.projectcalico.org                      2023-08-09T17:40:28Z
bgppeers.crd.projectcalico.org                        2023-08-09T17:40:28Z
blockaffinities.crd.projectcalico.org                 2023-08-09T17:40:28Z
caliconodestatuses.crd.projectcalico.org              2023-08-09T17:40:28Z
clusterinformations.crd.projectcalico.org             2023-08-09T17:40:28Z
felixconfigurations.crd.projectcalico.org             2023-08-09T17:40:28Z
globalnetworkpolicies.crd.projectcalico.org           2023-08-09T17:40:28Z
globalnetworksets.crd.projectcalico.org               2023-08-09T17:40:28Z
hostendpoints.crd.projectcalico.org                   2023-08-09T17:40:28Z
imagesets.operator.tigera.io                          2023-08-09T17:40:28Z
installations.operator.tigera.io                      2023-08-09T17:40:28Z
ipamblocks.crd.projectcalico.org                      2023-08-09T17:40:28Z
ipamconfigs.crd.projectcalico.org                     2023-08-09T17:40:28Z
ipamhandles.crd.projectcalico.org                     2023-08-09T17:40:28Z
ippools.crd.projectcalico.org                         2023-08-09T17:40:28Z
ipreservations.crd.projectcalico.org                  2023-08-09T17:40:28Z
kubecontrollersconfigurations.crd.projectcalico.org   2023-08-09T17:40:28Z
networkpolicies.crd.projectcalico.org                 2023-08-09T17:40:28Z
networksets.crd.projectcalico.org                     2023-08-09T17:40:28Z
nodefeaturerules.nfd.k8s-sigs.io                      2023-08-22T09:50:50Z
nodefeatures.nfd.k8s-sigs.io                          2023-08-09T17:53:28Z
nuclioapigateways.nuclio.io                           2023-08-11T11:08:55Z
nucliofunctionevents.nuclio.io                        2023-08-11T11:08:55Z
nucliofunctions.nuclio.io                             2023-08-11T11:08:55Z
nuclioprojects.nuclio.io                              2023-08-11T11:08:55Z
tigerastatuses.operator.tigera.io                     2023-08-09T17:40:29Z
g@gsrv:~$ kubectl get pods -A
NAMESPACE          NAME                                                      READY   STATUS    RESTARTS       AGE
calico-apiserver   calico-apiserver-db54b987d-m66zz                          1/1     Running   0              14d
calico-apiserver   calico-apiserver-db54b987d-ncspt                          1/1     Running   0              14d
calico-system      calico-kube-controllers-666f5dcd4d-kj7fs                  1/1     Running   0              14d
calico-system      calico-node-j2ljx                                         1/1     Running   7 (20h ago)    14d
calico-system      calico-node-trx99                                         1/1     Running   0              14d
calico-system      calico-typha-585d9c9df4-x9c6k                             1/1     Running   0              14d
calico-system      csi-node-driver-slh5f                                     2/2     Running   0              14d
calico-system      csi-node-driver-wf8n9                                     2/2     Running   14 (20h ago)   14d
clearml            clearml-apiserver-76ff97d7f7-cl4b6                        1/1     Running   0              20h
clearml            clearml-elastic-master-0                                  1/1     Running   0              20h
clearml            clearml-fileserver-ff756c4b8-fmmq4                        1/1     Running   0              20h
clearml            clearml-mongodb-5f9468969b-pgwf8                          1/1     Running   0              20h
clearml            clearml-redis-master-0                                    1/1     Running   0              20h
clearml            clearml-webserver-7f5fb5df5d-9pd26                        1/1     Running   0              20h
default            cuda-vectoradd                                            0/1     Pending   0              45h
default            gpu-test-job-v2zxg                                        0/1     Pending   0              2d18h
k8-storage         nfs-subdir-external-provisioner-5669cc5b6-kcktq           1/1     Running   0              20h
kube-system        coredns-57575c5f89-9flb2                                  1/1     Running   0              14d
kube-system        coredns-57575c5f89-nrd5f                                  1/1     Running   0              14d
kube-system        etcd-gsrv                                                 1/1     Running   0              14d
kube-system        kube-apiserver-gsrv                                       1/1     Running   0              14d
kube-system        kube-controller-manager-gsrv                              1/1     Running   0              14d
kube-system        kube-proxy-tzhrp                                          1/1     Running   7 (20h ago)    14d
kube-system        kube-proxy-z4hxr                                          1/1     Running   0              14d
kube-system        kube-scheduler-gsrv                                       1/1     Running   0              14d
nuclio             nuclio-controller-679c44dcdc-hk8b7                        1/1     Running   0              20h
nuclio             nuclio-dashboard-6496cdfd66-54s4l                         1/1     Running   0              20h
nuclio             nuclio-test-nuctl-function-1-84b6bd65bd-s27g5             1/1     Running   0              20h
nuclio             nuclio-test-nuctl-function-2-retinanet-7d8545d7db-fv87n   0/1     Pending   0              2d17h
tao-gnet           ingress-nginx-controller-78d54fbd-ngzzx                   1/1     Running   0              20h
tao-gnet           tao-toolkit-api-app-pod-5ffc48cd57-m8x64                  1/1     Running   0              20h
tao-gnet           tao-toolkit-api-workflow-pod-6dbc7c8f98-vhl97             1/1     Running   0              20h
tigera-operator    tigera-operator-959786749-ctprw                           1/1     Running   0              14d
g@gsrv:~$ kubectl get nodes dgx   --show-labels
NAME   STATUS   ROLES    AGE   VERSION    LABELS
dgx    Ready    <none>   14d   v1.24.14   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,feature.node.kubernetes.io/cpu-cpuid.ADX=true,feature.node.kubernetes.io/cpu-cpuid.AESNI=true,feature.node.kubernetes.io/cpu-cpuid.AVX2=true,feature.node.kubernetes.io/cpu-cpuid.AVX=true,feature.node.kubernetes.io/cpu-cpuid.CLZERO=true,feature.node.kubernetes.io/cpu-cpuid.CMPXCHG8=true,feature.node.kubernetes.io/cpu-cpuid.CPBOOST=true,feature.node.kubernetes.io/cpu-cpuid.FMA3=true,feature.node.kubernetes.io/cpu-cpuid.FP256=true,feature.node.kubernetes.io/cpu-cpuid.FXSR=true,feature.node.kubernetes.io/cpu-cpuid.FXSROPT=true,feature.node.kubernetes.io/cpu-cpuid.IBPB=true,feature.node.kubernetes.io/cpu-cpuid.IBRS=true,feature.node.kubernetes.io/cpu-cpuid.IBRS_PREFERRED=true,feature.node.kubernetes.io/cpu-cpuid.IBRS_PROVIDES_SMP=true,feature.node.kubernetes.io/cpu-cpuid.IBS=true,feature.node.kubernetes.io/cpu-cpuid.IBSBRNTRGT=true,feature.node.kubernetes.io/cpu-cpuid.IBSFETCHSAM=true,feature.node.kubernetes.io/cpu-cpuid.IBSFFV=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPCNT=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPCNTEXT=true,feature.node.kubernetes.io/cpu-cpuid.IBSOPSAM=true,feature.node.kubernetes.io/cpu-cpuid.IBSRDWROPCNT=true,feature.node.kubernetes.io/cpu-cpuid.IBSRIPINVALIDCHK=true,feature.node.kubernetes.io/cpu-cpuid.IBS_FETCH_CTLX=true,feature.node.kubernetes.io/cpu-cpuid.IBS_OPFUSE=true,feature.node.kubernetes.io/cpu-cpuid.INT_WBINVD=true,feature.node.kubernetes.io/cpu-cpuid.LAHF=true,feature.node.kubernetes.io/cpu-cpuid.LBRVIRT=true,feature.node.kubernetes.io/cpu-cpuid.MCAOVERFLOW=true,feature.node.kubernetes.io/cpu-cpuid.MCOMMIT=true,feature.node.kubernetes.io/cpu-cpuid.MOVBE=true,feature.node.kubernetes.io/cpu-cpuid.MOVU=true,feature.node.kubernetes.io/cpu-cpuid.MSRIRC=true,feature.node.kubernetes.io/cpu-cpuid.MSR_PAGEFLUSH=true,feature.node.kubernetes.io/cpu-cpuid.NRIPS=true,feature.node.kubernetes.io/cpu-cpuid.OSXSAVE=true,feature.node.kubernetes.io/cpu-cpuid.PPIN=true,feature.node.kubernetes.io/cpu-cpuid.RDPRU=true,feature.node.kubernetes.io/cpu-cpuid.SEV=true,feature.node.kubernetes.io/cpu-cpuid.SEV_ES=true,feature.node.kubernetes.io/cpu-cpuid.SHA=true,feature.node.kubernetes.io/cpu-cpuid.SME=true,feature.node.kubernetes.io/cpu-cpuid.SPEC_CTRL_SSBD=true,feature.node.kubernetes.io/cpu-cpuid.SSE4A=true,feature.node.kubernetes.io/cpu-cpuid.STIBP=true,feature.node.kubernetes.io/cpu-cpuid.SUCCOR=true,feature.node.kubernetes.io/cpu-cpuid.SVM=true,feature.node.kubernetes.io/cpu-cpuid.SVMDA=true,feature.node.kubernetes.io/cpu-cpuid.SVMFBASID=true,feature.node.kubernetes.io/cpu-cpuid.SVML=true,feature.node.kubernetes.io/cpu-cpuid.SVMNP=true,feature.node.kubernetes.io/cpu-cpuid.SVMPF=true,feature.node.kubernetes.io/cpu-cpuid.SVMPFT=true,feature.node.kubernetes.io/cpu-cpuid.SYSCALL=true,feature.node.kubernetes.io/cpu-cpuid.SYSEE=true,feature.node.kubernetes.io/cpu-cpuid.TOPEXT=true,feature.node.kubernetes.io/cpu-cpuid.TSCRATEMSR=true,feature.node.kubernetes.io/cpu-cpuid.VMCBCLEAN=true,feature.node.kubernetes.io/cpu-cpuid.VTE=true,feature.node.kubernetes.io/cpu-cpuid.WBNOINVD=true,feature.node.kubernetes.io/cpu-cpuid.X87=true,feature.node.kubernetes.io/cpu-cpuid.XGETBV1=true,feature.node.kubernetes.io/cpu-cpuid.XSAVE=true,feature.node.kubernetes.io/cpu-cpuid.XSAVEC=true,feature.node.kubernetes.io/cpu-cpuid.XSAVEOPT=true,feature.node.kubernetes.io/cpu-cpuid.XSAVES=true,feature.node.kubernetes.io/cpu-hardware_multithreading=true,feature.node.kubernetes.io/cpu-model.family=23,feature.no
de.kubernetes.io/cpu-model.id=49,feature.node.kubernetes.io/cpu-model.vendor_id=AMD,feature.node.kubernetes.io/cpu-rdt.RDTCMT=true,feature.node.kubernetes.io/cpu-rdt.RDTL3CA=true,feature.node.kubernetes.io/cpu-rdt.RDTMBM=true,feature.node.kubernetes.io/cpu-rdt.RDTMON=true,feature.node.kubernetes.io/cpu-security.sev.enabled=true,feature.node.kubernetes.io/kernel-config.NO_HZ=true,feature.node.kubernetes.io/kernel-config.NO_HZ_IDLE=true,feature.node.kubernetes.io/kernel-version.full=5.15.0-1030-nvidia,feature.node.kubernetes.io/kernel-version.major=5,feature.node.kubernetes.io/kernel-version.minor=15,feature.node.kubernetes.io/kernel-version.revision=0,feature.node.kubernetes.io/network-sriov.capable=true,feature.node.kubernetes.io/pci-10de.present=true,feature.node.kubernetes.io/pci-10de.sriov.capable=true,feature.node.kubernetes.io/pci-1a03.present=true,feature.node.kubernetes.io/pci-8086.present=true,feature.node.kubernetes.io/pci-8086.sriov.capable=true,feature.node.kubernetes.io/storage-nonrotationaldisk=true,feature.node.kubernetes.io/system-os_release.ID=ubuntu,feature.node.kubernetes.io/system-os_release.VERSION_ID.major=22,feature.node.kubernetes.io/system-os_release.VERSION_ID.minor=04,feature.node.kubernetes.io/system-os_release.VERSION_ID=22.04,kubernetes.io/arch=amd64,kubernetes.io/hostname=dgx,kubernetes.io/os=linux,nvidia.com/gpu.deploy.container-toolkit=true,nvidia.com/gpu.deploy.dcgm-exporter=true,nvidia.com/gpu.deploy.dcgm=true,nvidia.com/gpu.deploy.device-plugin=true,nvidia.com/gpu.deploy.driver=true,nvidia.com/gpu.deploy.gpu-feature-discovery=true,nvidia.com/gpu.deploy.mig-manager=true,nvidia.com/gpu.deploy.node-status-exporter=true,nvidia.com/gpu.deploy.operator-validator=true,nvidia.com/gpu.present=true,nvidia.com/mig.config.state=success,nvidia.com/mig.config=all-disabled

Looks like some labels are still there,

e.g.

gpu.deploy.driver=true,nvidia.com/gpu.deploy.gpu-feature-discovery=true,nvidia.com/gpu.deploy.mig-manager=true,nvidia.com/gpu.deploy.node-status-exporter=true,nvidia.com/gpu.deploy.operator-validator=true,nvidia.com/gpu.present=true,nvidia.com/mig.config.state=success,nvidia.com/mig.config=all-disabled

Do you think it’s worth trying to delete the labels? I don’t know which ones to delete unless I refresh the node and see which labels remain (and were therefore added by the gpu-operator). If you reckon it is worth doing, could you please help me with that, e.g. a command suggestion and arguments to get rid of the labels that may be causing the gpu-operator install to stall?

Command log for the actions described above:
tag_check_commands_log.txt (30.4 KB)

UPDATE:
Deleting the labels didn’t work either; the command log is attached below, and a sketch of the kind of command I ran follows it.
trying_to_delete_tags_commad_logs.txt (34.5 KB)
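For the record, a node label is removed by appending a dash to the label key. This is roughly the kind of command I ran to strip the leftover nvidia.com labels (a sketch built from the label keys shown above; the exact invocations are in the attached log):

# remove the nvidia.com labels from the dgx node (a trailing '-' deletes the label)
kubectl label node dgx \
  nvidia.com/gpu.deploy.container-toolkit- \
  nvidia.com/gpu.deploy.dcgm- \
  nvidia.com/gpu.deploy.dcgm-exporter- \
  nvidia.com/gpu.deploy.device-plugin- \
  nvidia.com/gpu.deploy.driver- \
  nvidia.com/gpu.deploy.gpu-feature-discovery- \
  nvidia.com/gpu.deploy.mig-manager- \
  nvidia.com/gpu.deploy.node-status-exporter- \
  nvidia.com/gpu.deploy.operator-validator- \
  nvidia.com/gpu.present- \
  nvidia.com/mig.config- \
  nvidia.com/mig.config.state-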

I work on an A40 (not a DGX) and can reproduce the bad pods, though not exactly the same error as yours.

local-morganh@4u4g-0033:~/getting_started_v5.0.0/setup/quickstart_api_bare_metal$ kubectl get pods -A        
NAMESPACE             NAME                                                              READY   STATUS     RESTARTS   AGE
default               ingress-nginx-controller-5ff6555d5d-z47lw                         1/1     Running    0          9m19s
default               nfs-subdir-external-provisioner-798ffff8db-9k2dw                  1/1     Running    0          9m13s
default               nvidia-smi-4u4g-0033                                              1/1     Running    0          9m10s
default               tao-toolkit-api-app-pod-55c5d88d86-2x797                          1/1     Running    0          9m8s
default               tao-toolkit-api-jupyterlab-pod-5db94dd6cc-mm5gs                   1/1     Running    0          9m8s
default               tao-toolkit-api-workflow-pod-55db5b9bf9-wqs2q                     1/1     Running    0          9m8s
kube-system           calico-kube-controllers-7f76d48f74-7frcw                          1/1     Running    0          12m
kube-system           calico-node-pwf4l                                                 1/1     Running    0          12m
kube-system           coredns-64897985d-4cpcq                                           1/1     Running    0          11m
kube-system           coredns-64897985d-hjxf6                                           1/1     Running    0          11m
kube-system           etcd-4u4g-0033                                                    1/1     Running    1          12m
kube-system           kube-apiserver-4u4g-0033                                          1/1     Running    1          12m
kube-system           kube-controller-manager-4u4g-0033                                 1/1     Running    1          12m
kube-system           kube-proxy-2vwgz                                                  1/1     Running    0          12m
kube-system           kube-scheduler-4u4g-0033                                          1/1     Running    1          12m
nvidia-gpu-operator   gpu-feature-discovery-nn825                                       0/1     Init:0/1   0          15s
nvidia-gpu-operator   gpu-operator-1692892330-node-feature-discovery-master-57f4bk7jv   1/1     Running    0          16s
nvidia-gpu-operator   gpu-operator-1692892330-node-feature-discovery-worker-rtxtl       1/1     Running    0          16s
nvidia-gpu-operator   gpu-operator-79598cbdd4-8vhvz                                     1/1     Running    0          16s
nvidia-gpu-operator   nvidia-dcgm-exporter-w2t58                                        0/1     Init:0/1   0          15s
nvidia-gpu-operator   nvidia-device-plugin-daemonset-frnxx                              0/1     Init:0/1   0          15s
nvidia-gpu-operator   nvidia-operator-validator-7wgwv                                   0/1     Init:0/4   0          15s

Then I leverage the “setup.sh” of TAO API 5.0, after which the pods come back healthy.

Command:
$ kubectl delete crd clusterpolicies.nvidia.com
$ bash setup.sh install

Logs:
20230824_purge_nvidia_gpu_operator_reinstall_with_tao_api.txt (219.0 KB)

Please take a look at the above log for the detailed steps. You can leverage it to bring the pods back to a good state.
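As a generic sanity check after the reinstall (plain kubectl commands, nothing TAO-specific), you can confirm the operator pods are Running, the ClusterPolicy exists again, and the node advertises GPU resources:

kubectl get pods -n nvidia-gpu-operator
kubectl get clusterpolicies.nvidia.com
kubectl describe node 4u4g-0033 | grep nvidia.com/gpu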

Make sure to set install_driver: false in gpu-operator-values.yml.
Some extra files for reference:

$ cat hosts
  # List all hosts below.
  # For single node deployment, listing the master is enough.
[master]
  # Example of host accessible using ssh private key
  # 1.1.1.1 ansible_ssh_user='ubuntu' ansible_ssh_private_key_file='/path/to/key.pem'
10.34.xxx.xxx ansible_ssh_user='local-morganh' ansible_ssh_pass='xxx' ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
[nodes]
  # Example of host accessible using ssh password
  # 1.1.1.2 ansible_ssh_user='ubuntu' ansible_ssh_pass='some-password' ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
$ cat tao-toolkit-api-ansible-values.yml
ngc_api_key: xxx
ngc_email: xxxxx@nvidia.com
api_chart: https://helm.ngc.nvidia.com/nvidia/tao/charts/tao-toolkit-api-5.0.0.tgz
api_values: ./tao-toolkit-api-helm-values.yml
cluster_name: demo
$ cat gpu-operator-values.yml  (the settings below are only for TAO 5.0)
enable_mig: no
mig_profile: all-disabled
mig_strategy: single
nvidia_driver_version: "525.85.12"
install_driver: false  (please set this to false)

Thanks for the update. I’m back at home and will get back to you. Just to be clear, I’m using a local multi-node cluster (kubeadm, kubectl, etc.) and not using Ansible.

If you share the contents of the scripts, maybe I can adapt them to my use case?

Cheers,
Ganindu.