TensorRT and DeepStream 7.1 running problems on USB Camera

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
NVIDIA Jetson Orin Nano Developer Kit
• DeepStream Version
DeepStream 7.1
• JetPack Version (valid for Jetson only)
JetPack 6.1
• TensorRT Version
10.3.0.30
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
Questions and problems running CV inference
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Latest containers indicated in official docs and a USB camera.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
The ‘deepstream-test1-usbcam’ app.

Hi,

I am working on a project that needs live-stream inference. I have tried two approaches for live inference (using YOLOv11) many times and neither works, even though I follow the docs. I would like to have both working so I can compare them and choose which one to proceed with.

I have installed JetPack 6.1 via microSD card (fresh install) on a Jetson Orin Nano Developer Kit (not using the Debian installer). I am using a USB camera for live streaming (48MP - Model: ELP-USB48MP02-SL170).

=======FIRST PROBLEM=======
I want to run inference with tracking on a live stream using only TensorRT and an Ultralytics container.

Commands used:

sudo nvpmodel -m 0
sudo jetson_clocks

sudo docker pull ultralytics/ultralytics:latest-jetson-jetpack6

t=ultralytics/ultralytics:latest-jetson-jetpack6
sudo docker run -it --name TEST --ipc=host --runtime=nvidia $t

yolo export model=my_model.pt format=engine
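
For reference, the same export can also be done from the Ultralytics Python API; this is a minimal sketch, with the model file name taken from the command above:

```python
# export_engine.py - TensorRT export via the Ultralytics Python API
# (equivalent to the `yolo export` command above).
from ultralytics import YOLO

model = YOLO("my_model.pt")                   # model file from the command above
engine_path = model.export(format="engine")   # builds a TensorRT .engine on the Jetson
print("Exported to:", engine_path)
```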

→ Tried the .engine on a .mp4 clip with tracking. Everything works.
yolo predict task=detect mode=track model=my_model.engine source="/ultralytics/test_clip.mp4"

→ When trying the live stream, it doesn't work.
yolo predict task=detect mode=track model=my_model.engine source=0

I get the error: “ConnectionError: 1/1: 0… Failed to open 0”

I cannot get the live stream and USB camera to work, whatever I do.

Interestingly, when I run (outside of the container?) 'gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink', the camera starts and shows the live stream (no inference involved). I have installed v4l-utils. The Guvcview and Cheese apps also do not work.
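
One thing worth checking is whether /dev/video0 is visible inside the container at all; the docker run command above does not pass `--device /dev/video0`. A minimal OpenCV sketch to test this from inside the container (OpenCV ships as an Ultralytics dependency):

```python
# check_cam.py - run inside the Ultralytics container to verify camera access.
# If this fails, the container probably needs to be started with
# `--device /dev/video0` (or the device mapped in some other way).
import cv2  # installed as an Ultralytics dependency

cap = cv2.VideoCapture(0)  # same index as `source=0` in the yolo command
if not cap.isOpened():
    raise SystemExit("Could not open camera index 0 inside the container")

ok, frame = cap.read()
print("Frame grabbed:", ok, "shape:", frame.shape if ok else None)
cap.release()
```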

=======SECOND PROBLEM=======
I want to run inference on a live stream using DeepStream 7.1 in an NVIDIA container, on a sample app (deepstream-test1-usbcam), using YOLOv11 (NVIDIA docs).

Commands used:

sudo nvpmodel -m 0
sudo jetson_clocks

sudo apt-get install --reinstall libflac8 libmp3lame0 libxvidcore4 ffmpeg

docker pull nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch

xhost +

→ Check the camera address, which will be ‘/dev/video0’
v4l2-ctl --list-devices

→ Create the container (I have cloned the repository on Ubuntu, outside of the container, so I can mount the bindings folder when creating the container; I have also mounted the camera device)
docker run -it --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home/jetson/deepstream_python_apps/bindings:/opt/nvidia/deepstream/deepstream/sources/python --device /dev/video0 --name SAMPLES nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch

→ Install codecs inside the created container
/opt/nvidia/deepstream/deepstream/user_additional_install.sh

→ Clone the repository into the 'sources' directory (I have cd'd into the sources directory)
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git

apt install python3-gi python3-dev python3-gst-1.0 -y

→ Download the precompiled bindings .whl file
curl -L -o pyds-1.2.0-cp310-cp310-linux_aarch64.whl https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.2.0/pyds-1.2.0-cp310-cp310-linux_aarch64.whl

→ Install the precompiled .whl file for the bindings
pip install pyds-1.2.0-cp310-cp310-linux_aarch64.whl
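
A quick way to confirm the wheel landed in the container's Python environment is a one-line import check; a minimal sketch:

```python
# Sanity check: the pyds bindings should import without errors.
import pyds
print("pyds loaded from:", pyds.__file__)
```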

cd to sources/deepstream_python_apps/apps/deepstream-test1-usbcam/

python3 deepstream_test_1_usb.py /dev/video0

Error: gst-stream-error-quark: Internal data stream error (…)
streaming stopped, reason not-negotiated (-4).
nvstreammux: Succesfully handled EOS for source_id=0
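
The not-negotiated error usually means the caps requested from v4l2src (pixel format, resolution, or framerate) do not match a mode the camera actually provides; high-resolution USB cameras like this one often expose only MJPEG at their larger modes. As a minimal diagnostic sketch (assuming python3-gi and python3-gst-1.0 from the steps above), the modes the camera really advertises can be listed with a GStreamer device monitor, so the capsfilter in the sample can be pointed at one of them:

```python
#!/usr/bin/env python3
# list_cam_caps.py - print the caps the USB camera advertises, so the
# capsfilter in deepstream_test_1_usb.py can be set to a mode that exists.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

monitor = Gst.DeviceMonitor.new()
monitor.add_filter("Video/Source", None)  # only video capture devices
monitor.start()

for dev in monitor.get_devices():
    print(dev.get_display_name())
    caps = dev.get_caps()
    for i in range(caps.get_size()):
        print("   ", caps.get_structure(i).to_string())

monitor.stop()
```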

=======Feedback and Additional Questions=======

  1. Is there a centralized, step-by-step guide from A to Z in a single document? It is confusing to have several different sources with complementary or separate information (READMEs, HOWTOs, various web pages, etc.) involved in the same process; the information is fragmented and hard to follow. It would be very helpful to indicate precisely which document to follow at each step and in which order to do things. More specific details would also be great, such as:

a) Which commands to run outside the Docker container vs. inside it (and in which container, if there are differences), and in what precise order (the docs specify this only occasionally, not consistently).
b) What to do if JetPack 6.1 was installed via the Debian installer vs. a fresh microSD card install (are additional steps needed in one case or the other?). Some information seemed to concern only the Debian installer, which made me wonder whether I should switch to it (having a fresh microSD install); this was not clear enough.
c) Which commands apply to which versions of JetPack and DeepStream. Are they for JetPack 6.1 and DeepStream 7.1, or is the information older (not yet updated on the page)? DeepStream 7.1 is quite new, so probably not every doc associated with this process has been updated yet.
d) “2. Run the docker with Python Bindings mapped using the following option:
-v :/opt/nvidia/deepstream/deepstream/sources/python”

How can I set a path to a bindings directory when creating the container, if the bindings directory is only cloned inside the container after it has been created? The only possible way is to map it in from outside the container, correct?

  2. What are the install.sh and update_rtpmanager.sh files in /opt/nvidia/deepstream/deepstream/?
  3. What do you recommend for one USB camera (one live stream) with 2-3 YOLOv11 models in the process: TensorRT or DeepStream? Which would be faster and the better option?
  4. What NVIDIA hardware would be most appropriate? Is the Jetson Orin Nano enough for edge computing?
  5. Does TensorRT (from the Ultralytics container) accept a live stream? I have seen that the Ultralytics page recommends DeepStream for live streams. Why? Can't I use only TensorRT (problem 1) to keep things simpler and clearer?
  6. Why do you recommend using the '--rm' flag when creating a container? Why should I create the same container every time and remove it afterwards? Wouldn't it be simpler to create a container once, configure it properly, and afterwards just 'docker start -ai CONTAINER' and use it? It's unclear why the '--rm' flag is used.
  7. What are some of the best cameras for computer vision/edge computing on Jetson devices?

Thank you very much for your support and assistance! Keep up the good work!

Best regards,
Y.

And here is the error for TensorRT.

If you make it work, please let me know the precise steps you have taken!

Please refer to this FAQ for how to connect a USB camera in DeepStream. You can debug with gst-launch first, then port the pipeline to Python code.
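
As a rough illustration of that porting step (a minimal sketch, not the sample app itself): once a gst-launch-1.0 description shows the camera correctly, the same string can be run from Python with Gst.parse_launch before being rebuilt element by element the way the DeepStream samples do. The pipeline string below is just the plain display pipeline from earlier in this thread; replace it with whatever works for your camera.

```python
#!/usr/bin/env python3
# run_pipeline.py - run a working gst-launch style description from Python.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Use the description that already works with gst-launch-1.0 on your setup.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error", lambda *_: loop.quit())  # stop on errors
bus.connect("message::eos", lambda *_: loop.quit())     # stop at end of stream

try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```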

Thank you very much! I will check it and get back to you with a conclusion.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Hi, fanzh! I have created a pipeline (from the FAQ docs), but the sample app 'deepstream-test1-usbcam' still doesn't work (even though the pipeline itself works and shows the live stream). Can you please be more specific about the steps needed to run the sample app?

Concerning TensorRT, I have managed to make it work, so for now only the sample app is not working properly.

Also, I have asked a few more questions at the end of my first message. It would help if you could take a little time to answer them.

Thank you very much for your assistance!

Please refer to this link for how to dump the pipeline graph. You can compare the non-working pipeline with the working pipeline. If it still doesn't work, please share the running log.
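
One way to trigger that dump from Python is a minimal sketch like the one below (assuming it is called on the sample's pipeline once it reaches the PLAYING state, with GST_DEBUG_DUMP_DOT_DIR set in the environment):

```python
# Dump the current pipeline topology to a Graphviz .dot file for comparison.
# Run the app with GST_DEBUG_DUMP_DOT_DIR set, e.g.:
#   GST_DEBUG_DUMP_DOT_DIR=/tmp python3 deepstream_test_1_usb.py /dev/video0
# then render the file with:  dot -Tpng /tmp/<name>.dot -o pipeline.png
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def dump_pipeline(pipeline, name="usbcam-pipeline"):
    # Writes <name>.dot into $GST_DEBUG_DUMP_DOT_DIR (no-op if the variable is unset).
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, name)
```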
a. The start command line is the same.
b. As the doc shows, if using the Orin Nano, please use the "SD Card Image" method.
c. Please refer to this DeepStream compatibility table.
d. The Docker "-v" question is outside the scope of DeepStream.
2. You can find them and their explanations in this link. install.sh is used for installation; update_rtpmanager.sh fixes a GStreamer bug.
3. DeepStream uses TensorRT to do inference. DeepStream has other features besides inference; please refer to the doc and the samples in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/.
4. Orin/Orin Nano; note that the Orin Nano in particular does not support hardware encoding. Please refer to this link for an Orin performance comparison.
5. Please refer to point 3.
6. The Docker question is outside the scope of DeepStream.
7. There is no single best camera. DeepStream only needs NV12, I420, or RGBA raw data to do inference; you can use videoconvert/nvvideoconvert for format conversion (a sketch follows below).
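
To illustrate point 7, a minimal sketch of such a conversion path (the resolution, framerate, and sink element are assumptions; pick a mode the camera actually lists and verify the elements exist in your container with gst-inspect-1.0):

```python
# Convert the camera output to NV12 in NVMM memory, the format DeepStream expects.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1280,height=720,framerate=30/1 ! "  # assumed camera mode
    "videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! "
    "nv3dsink"                                              # or fakesink for a headless test
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()  # Ctrl+C to stop
```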

Hi, fanzh! Thank you very much for your prompt reply!

I will try everything you suggested in the next few days and afterwards i will get back to you with conclusions!

Thank you very much for your assistance!

Hi, fanzh! I have managed to make the sample app work. Everything is OK for now. I am going to apply the same principles to a custom app and capture the inference data afterwards. Thank you very much for your assistance! If you have any advice, I would be glad to hear it!

Glad to know you fixed it, thanks for the update! If you need further support, please open a new topic. Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.