Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
NVIDIA Jetson Orin Nano Developer Kit
• DeepStream Version
DeepStream 7.1
• JetPack Version (valid for Jetson only)
JetPack 6.1
• TensorRT Version
10.3.0.30
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
Questions and problems running CV inference
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
The latest containers indicated in the official docs, and a USB camera.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
The ‘deepstream-test1-usbcam’ app.
Hi,
I am working on a project which needs live stream inference. I have tried for many times 2 ways for live inference (using YOLOv11) and nothing works, even though i follow the docs. I am interested to have both working in order to make comparisons and choose the final way to proceed with.
I have installed JetPack 6.1 via microSD card (fresh install) on a Jetson Orin Nano Developer Kit (not using debian installer). I am using a USB camera for live streaming (48MP - Model: ELP-USB48MP02-SL170).
=======FIRST PROBLEM=======
I want to run inference with tracking on live stream using TensorRT (only) and an Ultralytics container.
Commands used:
sudo nvpmodel -m 0
sudo jetson_clocks
sudo docker pull ultralytics/ultralytics:latest-jetson-jetpack6
t=ultralytics/ultralytics:latest-jetson-jetpack6
sudo docker run -it --name TEST --ipc=host --runtime=nvidia $t
yolo export model=my_model.pt format=engine
→ Tried the .engine on a .mp4 clip with tracking. Everything works.
yolo predict task=detect mode=track model=my_model.engine source="/ultralytics/test_clip.mp4"
→ When trying live stream it doesn’t work.
yolo predict task=detect mode=track model=my_model.engine source=0
I get the error: "ConnectionError: 1/1: 0… Failed to open 0"
I cannot get the live stream and the USB camera to work, whatever I do.
Interestingly, when I run 'gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink' (on the host, outside the container) the camera starts and shows the live stream (no inference involved). I have installed v4l-utils. The Guvcview and Cheese apps also do not work.
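My current suspicion for this first problem: the Ultralytics container was started without the camera device, so /dev/video0 does not exist inside it even though it works on the host. A sketch of how I would re-create the container (assuming the camera enumerates as /dev/video0 on the host — please correct me if this is not the right fix):

```shell
CAM=/dev/video0
t=ultralytics/ultralytics:latest-jetson-jetpack6

# Pass the V4L2 device node through to the container; without --device,
# source=0 has nothing to open inside the container's /dev.
sudo docker run -it --name TEST --ipc=host --runtime=nvidia \
    --device "$CAM" "$t"

# Then, inside the container:
# yolo predict task=detect mode=track model=my_model.engine source=0
```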
=======SECOND PROBLEM=======
I want to run inference on a live stream using DeepStream 7.1 in an NVIDIA container, on a sample app (deepstream-test1-usbcam), using YOLOv11 (following the NVIDIA docs).
Commands used:
sudo nvpmodel -m 0
sudo jetson_clocks
sudo apt-get install --reinstall libflac8 libmp3lame0 libxvidcore4 ffmpeg
docker pull nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch
xhost +
→ Check the camera address, which will be ‘/dev/video0’
v4l2-ctl --list-devices
→ Create the container (I have cloned the repository on Ubuntu, outside of the container, so that I can mount the bindings folder when creating the container; I have also mounted the camera device)
docker run -it --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home/jetson/deepstream_python_apps/bindings:/opt/nvidia/deepstream/deepstream/sources/python --device /dev/video0 --name SAMPLES nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch
→ Install codecs inside the created container
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
→ Clone the repository in the ‘sources’ directory (I have ‘cd’ to /sources)
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
apt install python3-gi python3-dev python3-gst-1.0 -y
→ prepare the .whl file for precompiled bindings
curl -L -o pyds-1.2.0-cp310-cp310-linux_aarch64.whl https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.2.0/pyds-1.2.0-cp310-cp310-linux_aarch64.whl
→ install precompiled .whl file for bindings
pip install pyds-1.2.0-cp310-cp310-linux_aarch64.whl
cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/
python3 deepstream_test_1_usb.py /dev/video0
Error: ()gst-stream-error-quark: Internal data stream error (…)
streaming stopped, reason non-negotiated (-4).
nvstreammux: Succesfully handled EOS for source_id=0
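From what I have read, the "non-negotiated (-4)" error usually means the caps requested by the pipeline do not match any format the camera actually offers. Before re-running the app, this is how I would probe the camera on the host (a sketch, assuming the camera is /dev/video0; the 1280x720 YUY2 caps are my own guess, not from the docs):

```shell
CAM=/dev/video0

# List every pixel format / resolution / framerate the camera exposes.
# The deepstream-test1-usbcam sample expects raw frames from v4l2src;
# many high-megapixel USB cameras only offer MJPEG at most modes.
v4l2-ctl -d "$CAM" --list-formats-ext

# Standalone check that a raw-YUY2 720p capture actually negotiates:
gst-launch-1.0 v4l2src device="$CAM" ! \
    'video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1' ! \
    videoconvert ! fakesink
```

If only MJPEG modes are listed, that would explain the negotiation failure with this sample.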
=======Feedback and Additional Questions=======
- Is there a centralized, A-to-Z step guide in a single document? It is confusing to have several different sources with complementary or separate information (READMEs, HOWTOs, different web pages, etc.) involved in the same process; the information is fragmented and hard to follow. It would be very helpful to indicate precisely which document to follow at each step, and the specific order in which to do things. More specific details would also be great, like:
a) Which commands to use outside the Docker container vs. inside the container (and in which container, if there are differences), and in what precise order (the docs specify this occasionally, but rarely).
b) What should you do if you install JetPack 6.1 via the Debian installer vs. a microSD card fresh install? Do I need additional steps in one case or the other? Some information seemed to concern the Debian installer, which made me wonder whether it applied to my microSD fresh install; it was not clear enough.
c) Which commands are for which versions of JetPack and DeepStream? Are they for JetPack 6.1 and DeepStream 7.1, or is it older information not yet updated on the page? DeepStream 7.1 is quite new, so probably not every doc associated with this process has been updated.
d) “2. Run the docker with Python Bindings mapped using the following option:
-v :/opt/nvidia/deepstream/deepstream/sources/python”
How can I set a path to a bindings directory when creating the container, if the bindings directory will only be cloned inside the container after it is created? The only possible way is to map it from outside the container, correct?
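For reference, my current understanding of point d) as a sketch (the host path is my own example, not from the docs):

```shell
# On the HOST, before creating the container: clone the repo so the
# bindings directory exists outside the container.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git \
    /home/jetson/deepstream_python_apps

# Then map that host directory into the container at creation time,
# filling in the blank before the ':' in the docs' -v option.
BINDINGS=/home/jetson/deepstream_python_apps/bindings
docker run -it --runtime nvidia \
    -v "$BINDINGS":/opt/nvidia/deepstream/deepstream/sources/python \
    nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch
```

Is this the intended workflow, or is there a way to do it entirely inside the container?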
- What are the install.sh and update_rtpmanager.sh files in /opt/nvidia/deepstream/deepstream/ for?
- What do you recommend for one USB camera (one live stream) with 2-3 YOLOv11 models used in the process: TensorRT or DeepStream? Which would be faster and the better option?
- What NVIDIA hardware would be more appropriate? Is the Jetson Orin Nano enough for edge computing?
- Does TensorRT (from the Ultralytics container) accept live streams? I have seen that the Ultralytics page recommends DeepStream for live streaming. Why? Can't I use only TensorRT (problem 1) to keep things simpler and clearer?
- Why do you recommend using the '--rm' argument when creating a container? Why should I create the same container every time and remove it afterwards? Wouldn't it be simpler to create a container once, configure it properly, and afterwards just 'docker start -ai CONTAINER' to use it? It's unclear why to use the '--rm' flag.
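For context, this is the persistent workflow I have in mind instead of '--rm' (a sketch, assuming the samples image from the docs):

```shell
IMG=nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch

# Ephemeral (as in the docs): with --rm the container is deleted as soon
# as it exits, so anything installed inside it is lost.
# docker run --rm -it "$IMG"

# Persistent: create the container once under a fixed name...
docker run -it --name SAMPLES "$IMG"
# ...then on later sessions re-attach to the same container, keeping
# the codecs, bindings, and pip packages installed earlier:
docker start -ai SAMPLES
```

Is there a downside to the persistent approach that makes '--rm' the recommended default?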
- What are some of the best cameras for computer vision/edge computing on Jetson devices?
Thank you very much for your support and assistance! Keep up the good work!
Best regards,
Y.