Hi all,
I followed the tutorial in the Jetson AI Lab and this post: https://forums.developer.nvidia.com/t/cant-start-the-live-llava-on-jetson-orin-nano-developer-kit/290887/1
I ran this command:
jetson-containers run $(autotag nano_llm) \
python3 -m nano_llm.agents.video_query --api=mlc \
--model Efficient-Large-Model/VILA1.5-3b \
--max-context-len 256 \
--max-new-tokens 32 \
--video-input /dev/video0 \
--video-output webrtc://@:8554/output \
--vision-api=hf
This is the output I get:
Namespace(packages=['nano_llm'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.3 JETPACK_VERSION=6.2 CUDA_VERSION=12.6
-- Finding compatible container image for ['nano_llm']
dustynv/nano_llm:r36.4.0
V4L2_DEVICES: --device /dev/video0
### DISPLAY environmental variable is already set: ":0"
localuser:root being added to access control list
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/lcmo/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --device /dev/video0 --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-7 --device /dev/i2c-9 --name jetson_container_20250204_111816 dustynv/nano_llm:r36.4.0 python3 -m nano_llm.agents.video_query --api=mlc --model Efficient-Large-Model/VILA1.5-3b --max-context-len 256 --max-new-tokens 32 --video-input /dev/video0 --video-output webrtc://@:8554/output --vision-api=hf
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Fetching 13 files: 100%|█████████████████████| 13/13 [00:00<00:00, 42366.71it/s]
Fetching 17 files: 100%|██████████████████████| 17/17 [00:00<00:00, 5149.73it/s]
11:18:30 | INFO | loading /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026 with MLC
11:18:35 | INFO | NumExpr defaulting to 6 threads.
11:18:35 | WARNING | AWQ not installed (requires JetPack 6 / L4T R36) - AWQ models will fail to initialize
11:18:37 | INFO | patching model config with {'model_type': 'llama'}
11:18:38 | INFO | device=cuda(0), name=Orin, compute=8.7, max_clocks=1020000, multiprocessors=8, max_thread_dims=[1024, 1024, 64], api_version=12060, driver_version=None
11:18:38 | INFO | loading VILA1.5-3b from /data/models/mlc/dist/VILA1.5-3b/ctx256/VILA1.5-3b-q4f16_ft/VILA1.5-3b-q4f16_ft-cuda.so
11:18:38 | WARNING | model library /data/models/mlc/dist/VILA1.5-3b/ctx256/VILA1.5-3b-q4f16_ft/VILA1.5-3b-q4f16_ft-cuda.so was missing metadata
11:18:54 | INFO | loading siglip vision model /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/vision_tower
11:19:06 | INFO | loaded siglip vision model /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/vision_tower
11:19:07 | INFO | mm_projector (mlp_downsample) Sequential(
(0): DownSampleBlock()
(1): LayerNorm((4608,), eps=1e-05, elementwise_affine=True)
(2): Linear(in_features=4608, out_features=2560, bias=True)
(3): GELU(approximate='none')
(4): Linear(in_features=2560, out_features=2560, bias=True)
)
11:19:07 | INFO | mm_projector weights: dict_keys(['1.bias', '1.weight', '2.bias', '2.weight', '4.bias', '4.weight'])
┌────────────────────────────┬─────────────────────────────────────────────────────────────────────────────┐
│ _name_or_path │ ./llm │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ architectures │ ['LlamaForCausalLM'] │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ drop_path_rate │ 0.0 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ hidden_size │ 2560 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ image_aspect_ratio │ resize │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ interpolate_mode │ linear │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_hidden_size │ 1152 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_projector_lr │ │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_use_im_patch_token │ False │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_use_im_start_end │ False │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_vision_select_feature │ cls_patch │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_vision_select_layer │ -2 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ model_dtype │ torch.bfloat16 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ model_type │ llama │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ num_video_frames │ 8 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ resume_path │ ./vlm │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ s2 │ False │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ s2_max_split_size │ 336 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ s2_scales │ 336,672,1008 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ transformers_version │ 4.36.2 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tune_language_model │ True │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tune_mm_projector │ True │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tune_vision_tower │ True │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ vision_resolution │ -1 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ name │ VILA1.5-3b │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ api │ mlc │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ max_position_embeddings │ 4096 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_vision_tower │ /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshot │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_projector_path │ /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshot │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ mm_projector_type │ mlp_downsample │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ attention_bias │ False │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ attention_dropout │ 0.0 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ bos_token_id │ 1 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ eos_token_id │ 2 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ hidden_act │ silu │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ initializer_range │ 0.02 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ intermediate_size │ 6912 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ model_max_length │ 4096 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ num_attention_heads │ 20 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ num_hidden_layers │ 32 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ num_key_value_heads │ 20 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ pad_token_id │ 0 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ pretraining_tp │ 1 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ rms_norm_eps │ 1e-05 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ rope_scaling │ │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ rope_theta │ 10000.0 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tie_word_embeddings │ False │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tokenizer_model_max_length │ 4096 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ tokenizer_padding_side │ right │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ torch_dtype │ bfloat16 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ use_cache │ True │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ vocab_size │ 32000 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ quant │ q4f16_ft │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ type │ llama │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ max_length │ 256 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ prefill_chunk_size │ -1 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ load_time │ 36.75651084200035 │
├────────────────────────────┼─────────────────────────────────────────────────────────────────────────────┤
│ params_size │ 1300.8330078125 │
└────────────────────────────┴─────────────────────────────────────────────────────────────────────────────┘
11:19:07 | INFO | using chat template 'vicuna-v1' for model VILA1.5-3b
11:19:07 | INFO | model 'VILA1.5-3b', chat template 'vicuna-v1' stop tokens: ['</s>'] -> [2]
11:19:07 | INFO | Warming up LLM with query 'What is 2+2?'
11:19:08 | INFO | Warmup response: '4</s>'
11:19:08 | INFO | plugin | connected PrintStream to on_text on channel 0
11:19:08 | INFO | plugin | connected ChatQuery to PrintStream on channel 0
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
(gst-plugin-scanner:75): GLib-GObject-WARNING **: 11:19:09.431: cannot register existing type 'GstRtpSrc'
(gst-plugin-scanner:75): GLib-GObject-CRITICAL **: 11:19:09.431: g_type_add_interface_static: assertion 'G_TYPE_IS_INSTANTIATABLE (instance_type)' failed
(gst-plugin-scanner:75): GLib-CRITICAL **: 11:19:09.431: g_once_init_leave: assertion 'result != 0' failed
(gst-plugin-scanner:75): GStreamer-CRITICAL **: 11:19:09.431: gst_element_register: assertion 'g_type_is_a (type, GST_TYPE_ELEMENT)' failed
(gst-plugin-scanner:75): GLib-GObject-WARNING **: 11:19:09.431: cannot register existing type 'GstRtpSink'
(gst-plugin-scanner:75): GLib-GObject-CRITICAL **: 11:19:09.432: g_type_add_interface_static: assertion 'G_TYPE_IS_INSTANTIATABLE (instance_type)' failed
(gst-plugin-scanner:75): GLib-CRITICAL **: 11:19:09.432: g_once_init_leave: assertion 'result != 0' failed
(gst-plugin-scanner:75): GStreamer-CRITICAL **: 11:19:09.432: gst_element_register: assertion 'g_type_is_a (type, GST_TYPE_ELEMENT)' failed
sh: 1: lsmod: not found
sh: 1: modprobe: not found
[gstreamer] initialized gstreamer, version 1.20.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- didn't discover any v4l2 devices
[gstreamer] gstCamera -- device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink sync=false
sh: 1: lsmod: not found
sh: 1: modprobe: not found
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: unknown
-- codecType: v4l2
-- width: 1280
-- height: 720
-- frameRate: 30
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- sslCert /etc/ssl/private/localhost.cert.pem
-- sslKey /etc/ssl/private/localhost.key.pem
------------------------------------------------
[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
failed to find/open file /proc/device-tree/model
[gstreamer] gstEncoder -- detected board 'NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super'
[gstreamer] gstEncoder -- hardware encoder not detected, reverting to CPU encoder
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! x264enc name=encoder bitrate=4000 speed-preset=ultrafast tune=zerolatency key-int-max=30 insert-vui=1 ! video/x-h264 ! rtph264pay config-interval=1 ! application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96 ! tee name=videotee ! queue ! fakesink
[webrtc] WebRTC server started @ https://lcmo-desktop:8554
[webrtc] WebRTC server thread running...
[webrtc] websocket route added /output
[video] created gstEncoder from webrtc://@:8554/output
------------------------------------------------
gstEncoder video options:
------------------------------------------------
-- URI: webrtc://@:8554/output
- protocol: webrtc
- location: 0.0.0.0
- port: 8554
-- deviceType: ip
-- ioType: output
-- codec: H264
-- codecType: cpu
-- frameRate: 30
-- bitRate: 4000000
-- numBuffers: 4
-- zeroCopy: true
-- latency 10
-- sslCert /etc/ssl/private/localhost.cert.pem
-- sslKey /etc/ssl/private/localhost.key.pem
------------------------------------------------
11:19:10 | INFO | plugin | connected VideoSource to on_video on channel 0
11:19:11 | INFO | mounting webserver path /data/datasets/uploads to /images/uploads
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
11:19:11 | INFO | starting webserver @ https://0.0.0.0:8050
11:19:11 | SUCCESS | VideoQuery - system ready
* Serving Flask app 'nano_llm.web.server'
* Debug mode: on
Opening in BLOCKING MODE
11:19:11 | INFO | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on https://127.0.0.1:8050
* Running on https://10.0.50.252:8050
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> decoder
11:19:11 | INFO | Press CTRL+C to quit
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> decoder
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> decoder
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message latency ==> mysink
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:13 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:16 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:18 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:21 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:23 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:26 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:28 | WARNING | video source /dev/video0 timed out during capture, re-trying...
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
11:19:31 | WARNING | video source /dev/video0 timed out during capture, re-trying...
11:19:31 | ERROR | Re-initializing video source "/dev/video0"
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Available Sensor modes :
Resolution: 3280 x 2464 ; Framerate = 21.000000; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 3280 x 1848 ; Framerate = 28.000001; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1920 x 1080 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1640 x 1232 ; Framerate = 29.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
Resolution: 1280 x 720 ; Framerate = 59.999999; Analog Gain Range Min 1.000000, Max 10.625000, Exposure Range Min 13000, Max 683709000
DEFAULT no IOCTL called
DEFAULT no IOCTL called
DEFAULT no IOCTL called
DEFAULT no IOCTL called
DEFAULT no IOCTL called
DEFAULT no IOCTL called
[gstreamer] gstCamera -- found v4l2 device: NvV4L2 Argus PLugin
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)"0.99.3.3\ \(multi-NvV4L2\ Argus\ PLugin", v4l2.device.card=(string)"NvV4L2\ Argus\ PLugin", v4l2.device.bus_info=(string)platform:NV-ARGUS:1.000000, v4l2.device.version=(uint)0, v4l2.device.capabilities=(uint)2216693760, v4l2.device.device_caps=(uint)69210112;
[gstreamer] gstCamera -- found 2 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)NV12, width=(int)[ 48, 3280 ], height=(int)[ 48, 2464 ], framerate=(fraction)[ 0/1, 2147483647/1 ];
[gstreamer] [1] video/x-raw, format=(string)NV12, width=(int)[ 48, 3280 ], height=(int)[ 48, 2464 ], framerate=(fraction)[ 0/1, 2147483647/1 ], interlace-mode=(string)alternate;
[gstreamer] gstCamera -- couldn't find a compatible codec/format for v4l2 device /dev/video0
[gstreamer] gstCamera -- device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink sync=false
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
From this point on, these messages just repeat.
I would like to know how to solve this problem.
For reference, this is my nvidia-jetpack info:
$ sudo apt-cache show nvidia-jetpack
Package: nvidia-jetpack
Source: nvidia-jetpack (6.2)
Version: 6.2+b77
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.2+b77), nvidia-jetpack-dev (= 6.2+b77)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.2+b77_arm64.deb
Size: 29298
SHA256: 70553d4b5a802057f9436677ef8ce255db386fd3b5d24ff2c0a8ec0e485c59cd
SHA1: 9deab64d12eef0e788471e05856c84bf2a0cf6e6
MD5sum: 4db65dc36434fe1f84176843384aee23
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8
Package: nvidia-jetpack
Source: nvidia-jetpack (6.1)
Version: 6.1+b123
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.1+b123_arm64.deb
Size: 29312
SHA256: b6475a6108aeabc5b16af7c102162b7c46c36361239fef6293535d05ee2c2929
SHA1: f0984a6272c8f3a70ae14cb2ca6716b8c1a09543
MD5sum: a167745e1d88a8d7597454c8003fa9a4
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8
I don’t have a background in computer science or any related engineering field. Please let me know what further information I should provide so you can look into the issue. Thanks a lot!