Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU Tesla T4
• DeepStream Version: Docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
• JetPack Version (valid for Jetson only)
• TensorRT Version: TensorRT 7.0.0-1, with CUDA 10.2
• NVIDIA GPU Driver Version (valid for GPU only): 440.64.00 (via nvidia-docker; see the nvidia-smi output below)
• Issue Type (questions, new requirements, bugs): Bug. Not able to launch the Docker container through Kubernetes.
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.) → kubectl create deployment nvidia-deepstream --image=nvcr.io/nvidia/deepstream:5.0.1-20.09-triton (a rough manifest equivalent is sketched after this list)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
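For completeness, here is a rough sketch of the same deployment written as a manifest. The file name deepstream-deployment.yaml, the sleep command to keep the container alive (in case the image's default command exits immediately, which would match the CrashLoopBackOff below), and the nvidia.com/gpu resource limit (which assumes the NVIDIA device plugin is installed on the cluster) are my assumptions; the one-line kubectl create deployment above sets none of them.

$ cat > deepstream-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nvidia-deepstream
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nvidia-deepstream
  template:
    metadata:
      labels:
        app: nvidia-deepstream
    spec:
      containers:
      - name: deepstream
        image: nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
        # assumption: keep the container running, since the image's default
        # command may exit immediately and trigger CrashLoopBackOff
        command: ["sleep", "infinity"]
        resources:
          limits:
            nvidia.com/gpu: 1   # assumes the NVIDIA device plugin is deployed
EOF
$ kubectl apply -f deepstream-deployment.yaml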
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00    Driver Version: 440.64.00    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000097:00:00.0 Off |                    0 |
| N/A   61C    P0    28W /  70W |    213MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            On   | 0000FE83:00:00.0 Off |                    0 |
| N/A   56C    P0    29W /  70W |  14142MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    114854      C   python3                                      101MiB |
|    0    114855      C   python3                                      101MiB |
|    1    114856      C   python3                                    14131MiB |
+-----------------------------------------------------------------------------+
$ kubectl create deployment nvidia-deepstream --image=nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
deployment.apps/nvidia-deepstream created
$ kubectl get deployments -A
NAMESPACE   NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
default     kubernetes-bootcamp   1/1     1            1           2d22h
default     nvidia-deepstream     0/1     1            0           7m
$ kubectl get pod -n default
NAME                                   READY   STATUS             RESTARTS   AGE
kubernetes-bootcamp-69fbc6f4cf-r7l4q   1/1     Running            0          2d22h
nvidia-deepstream-6c8bdff646-4kml4     0/1     CrashLoopBackOff   2          39s
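If it helps, these are the standard commands to pull more detail on the crashing pod (pod name taken from the listing above); I can attach their output if needed:

$ kubectl describe pod nvidia-deepstream-6c8bdff646-4kml4
$ kubectl logs nvidia-deepstream-6c8bdff646-4kml4 --previous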