Issue with adding VideoSource in Agent Studio on Jetson AGX Orin

Hi everyone,

I’m building an agent in Agent Studio on an NVIDIA Jetson AGX Orin, and I’m running into issues when adding a video source with the VideoSource plugin.

Here’s what happens:

  • When I try to use a camera device, the application freezes completely. Logs:
    [gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
    Opening in BLOCKING MODE
    Fatal Python error: Segmentation fault

Thread 0x0000fffeacadf120 (most recent call first):
  File "/usr/lib/python3.10/ssl.py", line 1161 in read
  File "/usr/lib/python3.10/ssl.py", line 1288 in recv
  File "/usr/local/lib/python3.10/dist-packages/websockets/sync/connection.py", line 561 in recv_events
  File "/usr/local/lib/python3.10/dist-packages/websockets/sync/server.py", line 196 in recv_events
  File "/usr/lib/python3.10/threading.py", line 953 in run
  File "/usr/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/usr/lib/python3.10/threading.py", line 973 in _bootstrap

Current thread 0x0000fffead2ef120 (most recent call first):
  File "/opt/NanoLLM/nano_llm/plugins/video/video_source.py", line 67 in __init__
  File "/opt/NanoLLM/nano_llm/plugins/dynamic_plugin.py", line 35 in __new__
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 65 in add_plugin
  File "/usr/lib/python3.10/threading.py", line 953 in run
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 58 in add_plugin
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 414 in invoke_handler
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 432 in on_message
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 442 in on_websocket
  File "/opt/NanoLLM/nano_llm/web/server.py", line 193 in on_message
  File "/opt/NanoLLM/nano_llm/web/server.py", line 393 in websocket_listener
  File "/opt/NanoLLM/nano_llm/web/server.py", line 314 in on_websocket
  File "/usr/local/lib/python3.10/dist-packages/websockets/sync/server.py", line 575 in conn_handler
  File "/usr/lib/python3.10/threading.py", line 953 in run
  File "/usr/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/usr/lib/python3.10/threading.py", line 973 in _bootstrap

Thread 0x0000fffeae30f120 (most recent call first):
  File "/usr/lib/python3.10/selectors.py", line 416 in select
  File "/usr/lib/python3.10/socketserver.py", line 232 in serve_forever
  File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 817 in serve_forever
  File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 1123 in run_simple
  File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 625 in run
  File "/opt/NanoLLM/nano_llm/web/server.py", line 120 in
  File "/usr/lib/python3.10/threading.py", line 953 in run
  File "/usr/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/usr/lib/python3.10/threading.py", line 973 in _bootstrap

Thread 0x0000fffeaeb1f120 (most recent call first):
  File "/usr/lib/python3.10/selectors.py", line 469 in select
  File "/usr/local/lib/python3.10/dist-packages/websockets/sync/server.py", line 260 in serve_forever
  File "/opt/NanoLLM/nano_llm/web/server.py", line 119 in
  File "/usr/lib/python3.10/threading.py", line 953 in run
  File "/usr/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/usr/lib/python3.10/threading.py", line 973 in _bootstrap

Thread 0x0000fffeaf32f120 (most recent call first):
  File "/usr/local/lib/python3.10/dist-packages/psutil/__init__.py", line 1805 in cpu_percent
  File "/opt/NanoLLM/nano_llm/plugins/tegrastats.py", line 58 in read
  File "/opt/NanoLLM/nano_llm/plugins/tegrastats.py", line 96 in run
  File "/usr/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  File "/usr/lib/python3.10/threading.py", line 973 in _bootstrap

Thread 0x0000ffff9855c6c0 (most recent call first):
  File "/usr/lib/python3.10/threading.py", line 1116 in _wait_for_tstate_lock
  File "/usr/lib/python3.10/threading.py", line 1096 in join
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 495 in run
  File "/opt/NanoLLM/nano_llm/studio.py", line 17 in <module>
  File "/usr/lib/python3.10/runpy.py", line 86 in _run_code
  File "/usr/lib/python3.10/runpy.py", line 196 in _run_module_as_main
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, zstandard.backend_c, charset_normalizer.md, yaml._yaml, sentencepiece._sentencepiece, psutil._psutil_linux, psutil._psutil_posix, PIL._imaging, PIL._imagingft, google.protobuf.pyext._message, jetson_utils_python, cuda._lib.utils, cuda._cuda.ccuda, cuda.ccuda, cuda.cuda, cuda._cuda.cnvrtc, cuda.cnvrtc, cuda.nvrtc, cuda._lib.ccudart.utils, cuda._lib.ccudart.ccudart, cuda.ccudart, cuda.cudart, _cffi_backend, pyaudio._portaudio, markupsafe._speedups, websockets.speedups, regex._regex, scipy._lib._ccallback_c, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._proxy, h5py._conv, h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5o, h5py.h5l, h5py._selector, grpc._cython.cygrpc (total: 79)
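
As a standalone sanity check of the camera (assuming v4l-utils and the jetson-utils video-viewer tool are available inside the container), I would try:

$ v4l2-ctl --device=/dev/video0 --list-formats-ext
$ video-viewer v4l2:///dev/video0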

  • If I use an RTSP stream or an MP4 file, adding the source fails, but the program doesn’t crash. Logs:
    (gst-plugin-scanner:71): GStreamer-CRITICAL **: 11:56:54.868: gst_element_register: assertion 'g_type_is_a (type, GST_TYPE_ELEMENT)' failed
    sh: 1: lsmod: not found
    sh: 1: modprobe: not found
    [gstreamer] initialized gstreamer, version 1.20.3.0
    [gstreamer] gstDecoder -- creating decoder for /data/JETSON AI LAB _ Agent Studio - Multimodal VLM Function-calling LLM.mp4
    sh: 1: lsmod: not found
    sh: 1: modprobe: not found
    Opening in BLOCKING MODE
    NvMMLiteOpen : Block : BlockType = 261
    NvMMLiteBlockCreate : Block : BlockType = 261

(python3:1): GStreamer-CRITICAL **: 11:56:56.149: gst_debug_log_valist: assertion 'category != NULL' failed

(python3:1): GStreamer-CRITICAL **: 11:56:56.149: gst_debug_log_valist: assertion 'category != NULL' failed

(python3:1): GStreamer-CRITICAL **: 11:56:56.149: gst_debug_log_valist: assertion 'category != NULL' failed

(python3:1): GStreamer-CRITICAL **: 11:56:56.149: gst_debug_log_valist: assertion 'category != NULL' failed
[gstreamer] gstDecoder -- discovered video resolution: 640x360 (framerate 30.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3, profile=(string)main, width=(int)640, height=(int)360, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, colorimetry=(string)bt709, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=/data/JETSON AI LAB _ Agent Studio - Multimodal VLM Function-calling LLM.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder name=decoder enable-max-performance=1 ! nvvidconv name=vidconv ! video/x-raw, width=(int)640, height=(int)480, format=(string)NV12 ! appsink name=mysink
[gstreamer] gstDecoder -- failed to create pipeline
[gstreamer] (no element "AI")
[gstreamer] failed to create decoder pipeline
[gstreamer] gstDecoder -- failed to create decoder for file:///data/JETSON AI LAB _ Agent Studio - Multimodal VLM Function-calling LLM.mp4
11:56:56 | ERROR | Exception occurred handling websocket message:

{ 'add_plugin': { 'layout_node': {'x': 10, 'y': 10},
                  'name': 'VideoSource',
                  'return_copy': 'true',
                  'return_tensors': 'cuda',
                  'type': 'VideoSource',
                  'video_input': '/data/JETSON AI LAB _ Agent Studio - '
                                 'Multimodal VLM Function-calling LLM.mp4',
                  'video_input_height': 480,
                  'video_input_width': 640}}
Traceback (most recent call last):
  File "/opt/NanoLLM/nano_llm/web/server.py", line 193, in on_message
    callback(payload, payload_size=payload_size, msg_type=msg_type, msg_id=msg_id,
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 442, in on_websocket
    on_message(self, message)
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 432, in on_message
    if invoke_handler(obj, key, msg):
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 414, in invoke_handler
    response = func(msg)
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 58, in add_plugin
    threading.Thread(target=self.add_plugin, kwargs={'type': type, 'wait': True, 'state_dict': state_dict, 'layout_node': layout_node, **kwargs}).run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/NanoLLM/nano_llm/agents/dynamic_agent.py", line 65, in add_plugin
    plugin = DynamicPlugin(type, **init_kwargs)
  File "/opt/NanoLLM/nano_llm/plugins/dynamic_plugin.py", line 35, in __new__
    instance = plugin(*args, **kwargs)
  File "/opt/NanoLLM/nano_llm/plugins/video/video_source.py", line 67, in __init__
    self.stream = videoSource(video_input, options=options)
Exception: jetson.utils -- failed to create videoSource device
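
Looking at the pipeline string above, I suspect the spaces in the file name are the actual problem: the path is placed into the pipeline unquoted, so GStreamer appears to parse "AI" as an element name, which would explain the no element "AI" message. A quick check would be to retry with a space-free path, e.g. (the new file name below is just an arbitrary example):

$ mv "/data/JETSON AI LAB _ Agent Studio - Multimodal VLM Function-calling LLM.mp4" /data/agent_studio_demo.mp4

and then point VideoSource at /data/agent_studio_demo.mp4 instead.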

Has anyone faced similar issues? If so, do you have any insights or suggestions on resolving this?

Thank you in advance for your help!

Hi,
Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

2. Installation

Installation guide for deep learning frameworks on Jetson:

3. Tutorial

Startup deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the commands/steps, and the customized app (if any) so we can reproduce the problem locally.

Thanks!

Hi,

Thank you for the suggestions!

Unfortunately, I can’t run the commands or install anything directly while using Agent Studio because it locks the terminal once launched. My current workflow to start Agent Studio is:

jetson-containers run --env HUGGINGFACE_TOKEN=hf_TOKEN --device /dev/video0 --device /dev/video1 --device /dev/media0 --device /dev/media1 $(autotag nano_llm) python3 -m nano_llm.studio

As you can see, the container is launched directly, and the application takes control. This prevents me from accessing the terminal for running commands or making adjustments.
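
My guess is that I could attach a second shell to the running container from another host terminal, e.g. (the container name would come from docker ps; I haven’t confirmed this is the intended workflow):

$ docker ps
$ docker exec -it <container_name> /bin/bash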

Do you have any suggestions on how to work around this constraint? Is there a way to prepare the container or configure the system beforehand to address the issue with VideoSource?

Thanks again!