Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Both Jetson Nano Developer Kit and Jetson Xavier NX Developer Kit
• DeepStream Version
Docker image used: nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
• JetPack Version (valid for Jetson only)
jetson-nx-jp451-sd-card-image.zip and jetson-nano-jp451-sd-card-image.zip
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
Bug / Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Flash the Jetson (both Nano and Xavier NX).
Run the following commands:
```shell
xhost +
sudo docker run -it --rm \
    --net=host \
    --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -w /opt/nvidia/deepstream/deepstream-5.1 \
    -v $(pwd):/code \
    -v $(pwd)/files/faster_rcnn_inception_v2:/opt/nvidia/deepstream/deepstream-5.1/samples/trtis_model_repo/faster_rcnn_inception_v2 \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
```
Inside the Docker container:
```shell
cd /opt/nvidia/deepstream/deepstream-5.1/samples
./prepare_ds_trtis_model_repo.sh
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app-trtis
apt-get update && apt-get install ffmpeg
deepstream-app -c source1_primary_classifier.txt
```
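For anyone reproducing this: since the symptoms (clock and mouse freezing) look like the system is starving for resources, a rough memory check run in a second terminal on the host while `deepstream-app` starts may help narrow things down. This is just a sketch using standard Linux tools; the 4 GB figure applies to the Nano, whose RAM is shared between CPU and GPU:

```shell
# Sketch: sample available system memory every few seconds while
# deepstream-app is starting. A steadily shrinking value would suggest
# the Triton model is exhausting RAM, which on Jetson is shared
# between CPU and GPU (the Nano has only 4 GB total).
for i in $(seq 1 5); do
    # $7 of the "Mem:" row of `free -m` is the "available" column (MiB)
    free -m | awk '/^Mem:/ {print "available_mb=" $7}'
    sleep 2
done
```

If the available figure collapses toward zero right before the freeze, that would point at the model being too large for the board rather than a configuration mistake.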
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
N/A
Basically, I am trying to run the DeepStream Triton Inference Server examples on both a Jetson Nano Developer Kit and a Jetson Xavier NX Developer Kit.
When I run the example on either platform, the unit slows to a complete halt: even the clock freezes, the mouse freezes, and so on. After a long while (1+ hour), the Xavier eventually showed a static image of a bus, and the terminal printed some text indicating 0 fps, but then it froze again.
What am I doing wrong here?