Please provide the following information when creating a topic:
- Hardware Platform (GPU model and numbers) - 8 x L40S (48 GB)
- System Memory - 1536 GiB
- Ubuntu Version - 22.04
- NVIDIA GPU Driver Version (valid for GPU only) - 535.230.02
- Issue Type (questions, new requirements, bugs) - Bugs
- How to reproduce the issue? (This is for bugs. Include the command line used and other details for reproducing.) -
Fetch the VSS Blueprint Helm Chart
sudo microk8s helm fetch https://helm.ngc.nvidia.com/nvidia/blueprint/charts/nvidia-blueprint-vss-2.1.0.tgz --username='$oauthtoken' --password=$NGC_API_KEY
Install the Helm Chart
sudo microk8s helm install vss-blueprint nvidia-blueprint-vss-2.1.0.tgz --set global.ngcImagePullSecretName=ngc-docker-reg-secret --set vss.applicationSpecs.vss-deployment.containers.vss.startupProbe.failureThreshold=360
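For reference, the startupProbe failureThreshold=360 override, combined with the probe's 10 s period (visible in the describe output below), gives the vss container up to roughly 360 × 10 s = 60 minutes to pass its startup probe while models are pulled. Pod progress can be followed with:
sudo microk8s kubectl get pods -A --watch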
sudo microk8s kubectl get pod -A
- Requirement details (This is for new requirements. Include the logs and descriptions for the pods.)
The vss-vss-deployment pod is failing to initialize.
Events:
Type Reason Age From Message
Warning FailedScheduling 32s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
sudo microk8s kubectl logs -n default vss-vss-deployment-6954d97ff-8tdpv -c vss
Error from server (BadRequest): container "vss" in pod "vss-vss-deployment-6954d97ff-8tdpv" is waiting to start: PodInitializing
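While a pod is stuck in PodInitializing, its STATUS column shows Init:<completed>/<total>, which identifies the init container that is still blocking:
sudo microk8s kubectl get pod -n default vss-vss-deployment-6954d97ff-8tdpv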
ubuntu@ip-172-16-3-110:~$ sudo microk8s kubectl describe pod -n default vss
Name: vss-blueprint-0
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: app.kubernetes.io/instance=vss-blueprint
app.kubernetes.io/name=nim-llm
apps.kubernetes.io/pod-index=0
controller-revision-hash=vss-blueprint-746df69bf
statefulset.kubernetes.io/pod-name=vss-blueprint-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/vss-blueprint
Containers:
nim-llm:
Image: nvcr.io/nim/meta/llama-3.1-70b-instruct:1.3.0
Port: 8000/TCP
Host Port: 0/TCP
Limits:
nvidia.com/gpu: 4
Requests:
nvidia.com/gpu: 4
Liveness: http-get http://:http-openai/v1/health/live delay=15s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http-openai/v1/health/ready delay=15s timeout=1s period=10s #success=1 #failure=3
Startup: http-get http://:http-openai/v1/health/ready delay=40s timeout=1s period=10s #success=1 #failure=180
Environment:
NIM_CACHE_PATH: /model-store
NGC_API_KEY: <set to the key 'NGC_API_KEY' in secret 'ngc-api-key-secret'> Optional: false
OUTLINES_CACHE_DIR: /tmp/outlines
NIM_SERVER_PORT: 8000
NIM_JSONL_LOGGING: 1
NIM_LOG_LEVEL: INFO
Mounts:
/dev/shm from dshm (rw)
/model-store from model-store (rw)
/scripts from scripts-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6t9b8 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
model-store:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: model-store-vss-blueprint-0
ReadOnly: false
dshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
scripts-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vss-blueprint-scripts-configmap
Optional: false
kube-api-access-6t9b8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
nvidia.com/gpu:NoSchedule op=Exists
Events:
Type Reason Age From Message
Warning FailedScheduling 32s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
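This FailedScheduling event is the root of the problem: the kubelet taints a node with node.kubernetes.io/disk-pressure when ephemeral storage drops below its eviction thresholds (by default, less than 10% free on nodefs or 15% free on imagefs), and clears the taint once space is reclaimed. The llama-3.1-70b NIM image and its model cache are large, so filling the disk during the initial download is plausible. The taint and current disk usage can be checked as follows (the /var/snap/microk8s/common path is an assumption based on the default MicroK8s snap layout):
sudo microk8s kubectl describe node | grep -i -A 3 taint
df -h / /var/snap/microk8s/common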
ubuntu@ip-1XXXX:~$ sudo microk8s kubectl describe pod -n default vss-vss-deployment-6954d97ff-8tdpv
Name: vss-vss-deployment-6954d97ff-8tdpv
Namespace: default
Priority: 0
Service Account: default
Node: ip-172-16-3-110/172.16.3.110
Start Time: Thu, 13 Feb 2025 11:17:57 +0000
Labels: app=vss-vss-deployment
app.kubernetes.io/instance=vss-blueprint
app.kubernetes.io/name=vss
generated_with=helm_builder
hb_version=1.0.0
microservice_version=2.1.0
msb_version=2.5.0
pod-template-hash=6954d97ff
Annotations: checksum/vss-configs-cm: bd47e7de83477a400983e1002fcd5792cdc2a07649afc80eb3f9b59ad844775b
checksum/vss-external-files-cm: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
checksum/vss-scripts-cm: 28c7bf12f8b3f914a8ec84917601fe5577421de9ee7e842fc0581d1319031df0
checksum/vss-workload-cm: 377e654e33de70bde677eaf1e45466288744b3ec1210c3f584be0c858be89fcf
cni.projectcalico.org/containerID: 4ba93b9842f405f4894be5c0fbbfbfd1c94430848acffe1f4cf696fd2e8f9d64
cni.projectcalico.org/podIP: 10.1.253.86/32
cni.projectcalico.org/podIPs: 10.1.253.86/32
Status: Pending
IP: 10.1.253.86
IPs:
IP: 10.1.253.86
Controlled By: ReplicaSet/vss-vss-deployment-6954d97ff
Init Containers:
check-milvus-up:
Container ID: containerd://cddafbd3ef945ddcdadcb8932f37a22cf7d58b52774e44448202b9272f529f8a
Image: busybox:1.28
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z -w 2 milvus-milvus-deployment-milvus-service 19530; do echo waiting for milvus; sleep 2; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 13 Feb 2025 11:17:58 +0000
Finished: Thu, 13 Feb 2025 11:17:58 +0000
Ready: True
Restart Count: 0
Limits:
nvidia.com/gpu: 2
Requests:
nvidia.com/gpu: 2
Environment: <none>
Mounts:
/opt/configs from configs-volume (rw)
/opt/scripts from scripts-cm-volume (rw)
/opt/workload-config from workload-cm-volume (rw)
/secrets/graph-db-password from secret-graph-db-password-volume (ro,path="graph-db-password")
/secrets/graph-db-username from secret-graph-db-username-volume (ro,path="graph-db-username")
/secrets/ngc-api-key from secret-ngc-api-key-volume (ro,path="ngc-api-key")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86blt (ro)
check-neo4j-up:
Container ID: containerd://dda74bd1353b68acb786d207867af7f25751958471127f19a8c2e5ec877988f4
Image: busybox:1.28
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z -w 2 neo-4-j-service 7687; do echo waiting for neo4j; sleep 2; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 13 Feb 2025 11:18:29 +0000
Finished: Thu, 13 Feb 2025 11:18:29 +0000
Ready: True
Restart Count: 0
Limits:
nvidia.com/gpu: 2
Requests:
nvidia.com/gpu: 2
Environment: <none>
Mounts:
/opt/configs from configs-volume (rw)
/opt/scripts from scripts-cm-volume (rw)
/opt/workload-config from workload-cm-volume (rw)
/secrets/graph-db-password from secret-graph-db-password-volume (ro,path="graph-db-password")
/secrets/graph-db-username from secret-graph-db-username-volume (ro,path="graph-db-username")
/secrets/ngc-api-key from secret-ngc-api-key-volume (ro,path="ngc-api-key")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86blt (ro)
check-llm-up:
Container ID: containerd://a1bc8fd80d1058f6f3b9082d3032a3970cfa60dfe812ae4bc86ac3a37eb908eb
Image: curlimages/curl:latest
Image ID: docker.io/curlimages/curl@sha256:94e9e444bcba979c2ea12e27ae39bee4cd10bc7041a472c4727a558e213744e6
Port: <none>
Host Port: <none>
Command:
sh
-c
Args:
while ! curl -s -f -o /dev/null http://llm-nim-svc:8000/v1/health/live; do
echo "Waiting for LLM…"
sleep 2
done
State: Running
Started: Thu, 13 Feb 2025 11:18:48 +0000
Ready: False
Restart Count: 0
Limits:
nvidia.com/gpu: 2
Requests:
nvidia.com/gpu: 2
Environment: <none>
Mounts:
/opt/configs from configs-volume (rw)
/opt/scripts from scripts-cm-volume (rw)
/opt/workload-config from workload-cm-volume (rw)
/secrets/graph-db-password from secret-graph-db-password-volume (ro,path="graph-db-password")
/secrets/graph-db-username from secret-graph-db-username-volume (ro,path="graph-db-username")
/secrets/ngc-api-key from secret-ngc-api-key-volume (ro,path="ngc-api-key")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86blt (ro)
Containers:
vss:
Container ID:
Image: nvcr.io/nvidia/blueprint/vss-engine:2.1.0
Image ID:
Port: 8000/TCP
Host Port: 0/TCP
Command:
bash
/opt/scripts/start.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
nvidia.com/gpu: 2
Requests:
nvidia.com/gpu: 2
Liveness: http-get http://:http-api/health/live delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http-api/health/ready delay=5s timeout=1s period=5s #success=1 #failure=3
Startup: http-get http://:http-api/health/ready delay=0s timeout=1s period=10s #success=1 #failure=360
Environment:
FRONTEND_PORT: 9000
BACKEND_PORT: 8000
GRAPH_DB_URI: bolt://neo-4-j-service:7687
GRAPH_DB_USERNAME: neo4j
GRAPH_DB_PASSWORD: password
MILVUS_DB_HOST: milvus-milvus-deployment-milvus-service
MILVUS_DB_PORT: 19530
VLM_MODEL_TO_USE: vila-1.5
MODEL_PATH: ngc:nim/nvidia/vila-1.5-40b:vila-yi-34b-siglip-stage3_1003_video_v8
DISABLE_GUARDRAILS: false
OPENAI_API_KEY_NAME: VSS_OPENAI_API_KEY
NVIDIA_API_KEY_NAME: VSS_NVIDIA_API_KEY
NGC_API_KEY_NAME: VSS_NGC_API_KEY
TRT_LLM_MODE: int4_awq
VLM_BATCH_SIZE:
VIA_VLM_OPENAI_MODEL_DEPLOYMENT_NAME:
VIA_VLM_ENDPOINT:
VIA_VLM_API_KEY:
OPENAI_API_VERSION:
AZURE_OPENAI_API_VERSION:
Mounts:
/opt/configs from configs-volume (rw)
/opt/scripts from scripts-cm-volume (rw)
/opt/workload-config from workload-cm-volume (rw)
/secrets/graph-db-password from secret-graph-db-password-volume (ro,path="graph-db-password")
/secrets/graph-db-username from secret-graph-db-username-volume (ro,path="graph-db-username")
/secrets/ngc-api-key from secret-ngc-api-key-volume (ro,path="ngc-api-key")
/tmp/via-ngc-model-cache from ngc-model-cache-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86blt (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
ngc-model-cache-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: vss-ngc-model-cache-pvc
ReadOnly: false
workload-cm-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vss-workload-cm
Optional: false
configs-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vss-configs-cm
Optional: false
scripts-cm-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vss-scripts-cm
Optional: false
secret-ngc-api-key-volume:
Type: Secret (a volume populated by a Secret)
SecretName: ngc-api-key-secret
Optional: false
secret-graph-db-username-volume:
Type: Secret (a volume populated by a Secret)
SecretName: graph-db-creds-secret
Optional: false
secret-graph-db-password-volume:
Type: Secret (a volume populated by a Secret)
SecretName: graph-db-creds-secret
Optional: false
kube-api-access-86blt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Pulling 5m9s kubelet Pulling image "curlimages/curl:latest"
Normal Pulled 5m6s kubelet Successfully pulled image "curlimages/curl:latest" in 2.835s (2.835s including waiting). Image size: 12434447 bytes.
Normal Created 5m6s kubelet Created container check-llm-up
Normal Started 5m6s kubelet Started container check-llm-up
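These events line up with the scheduling failure above: check-llm-up keeps polling http://llm-nim-svc:8000/v1/health/live, and that service is presumably backed by the vss-blueprint-0 NIM pod, which never schedules while the disk-pressure taint is on the node, so the init container can never finish. Assuming llm-nim-svc is the service name used by the chart (it appears in the init container's curl loop), the missing backends can be confirmed with:
sudo microk8s kubectl get svc,endpoints -n default llm-nim-svc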
sudo microk8s kubectl logs -n default vss-vss-deployment-6954d97ff-8tdpv -c check-llm-up
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
Waiting for LLM…
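The repeating log line confirms the vss pod is blocked only on the LLM service. Once enough disk space is freed, the kubelet should remove the disk-pressure taint on its own and vss-blueprint-0 should schedule. A sketch of the cleanup and re-check, assuming the MicroK8s-bundled containerd (microk8s ctr) and the k8s.io image namespace:
# list images held by MicroK8s' containerd to find candidates for removal
sudo microk8s ctr -n k8s.io images ls
# after freeing space, verify the taint is gone and watch the pods recover
sudo microk8s kubectl describe node | grep -i taint
sudo microk8s kubectl get pods -A --watch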