- Hardware Platform: Jetson AGX Orin
- DeepStream Version: 7.0
- JetPack Version: 6.0
- TensorRT Version: 8.6
- Issue Type: deepstream_3d_sensor_fusion.cpp:395, ERROR: creating parse bin failed with gst error: no element "nvv4l2decoder"
- How to reproduce the issue?
I installed DeepStream 7.0 following the steps in the DeepStream documentation, then downloaded the data and the V2XFusion model according to the instructions in DeepStream-3D Multi-Modal V2XFusion Setup.
To start the DS3D V2XFusion pipeline, I run:
$ cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion
$ deepstream-3d-lidar-sensor-fusion -c ds3d_lidar_video_sensor_v2x_fusion.yml
I got the following error:
/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion$ deepstream-3d-lidar-sensor-fusion -c ds3d_lidar_video_sensor_v2x_fusion.yml
deepstream_3d_sensor_fusion.cpp:395, ERROR: creating parse bin failed with gst error: no element "nvv4l2decoder", with config:
name: video_source
type: ds3d::gstparsebin
link_to: videobridge_2d_to_3d
config_body:
parse_bin: multifilesrc location=v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/0/camera/camera_%05d.jpg ! image/jpeg, framerate=10/1 ! jpegparse ! nvv4l2decoder mjpeg=true ! queue ! nvvideoconvert nvbuf-memory-type=2 compute-hw=1 ! video/x-raw(memory:NVMM), format=RGBA ! m.sink_0 multifilesrc location=v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/1/camera/camera_%05d.jpg ! image/jpeg, framerate=10/1 ! jpegparse ! nvv4l2decoder mjpeg=true ! queue ! nvvideoconvert nvbuf-memory-type=2 compute-hw=1 ! video/x-raw(memory:NVMM), format=RGBA ! m.sink_1 multifilesrc location=v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/2/camera/camera_%05d.jpg ! image/jpeg, framerate=10/1 ! jpegparse ! nvv4l2decoder mjpeg=true ! queue ! nvvideoconvert nvbuf-memory-type=2 compute-hw=1 ! video/x-raw(memory:NVMM), format=RGBA ! m.sink_2 multifilesrc location=v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/3/camera/camera_%05d.jpg ! image/jpeg, framerate=10/1 ! jpegparse ! nvv4l2decoder mjpeg=true ! queue ! nvvideoconvert nvbuf-memory-type=2 compute-hw=1 ! video/x-raw(memory:NVMM), format=RGBA ! m.sink_3 nvstreammux name=m width=1920 height=1080 align-inputs=true batch-size=4 nvbuf-memory-type=2 compute-hw=1 ! nvdspreprocess config-file=v2xfusion/config/config_preprocess.txt
config_path: ds3d_lidar_video_sensor_v2x_fusion.yml, check failure
deepstream_3d_sensor_fusion.cpp:346, ERROR: create component failed, check failure
deepstream_3d_sensor_fusion.cpp:216, ERROR: build componets with config: ds3d_lidar_video_sensor_v2x_fusion.yml failed, check failure
deepstream_3d_lidar_sensor_fusion_main.cpp:116, ERROR: Failed to setup sensor fusion application., check failure
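For completeness, here is the version query I can use to confirm the DeepStream installation itself (a diagnostic sketch; it assumes deepstream-app from the standard SDK install is on the PATH):
$ # Print the DeepStream SDK version together with the CUDA/cuDNN/TensorRT versions it detects
$ deepstream-app --version-all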
When I run:
$ gst-inspect-1.0 nvv4l2decoder
it returns: No such element or plugin 'nvv4l2decoder'
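In case it helps with diagnosis, below is a rough check I can run to see whether the NVIDIA GStreamer plugin library that provides nvv4l2decoder is present at all, and whether a stale registry cache is hiding it (paths and package name assumed for a standard JetPack 6.0 aarch64 install):
$ # Check that the NVIDIA V4L2 GStreamer plugin library shipped with JetPack is installed
$ ls /usr/lib/aarch64-linux-gnu/gstreamer-1.0/ | grep -i nvvideo4linux2
$ # Check that the JetPack GStreamer package is installed
$ dpkg -l | grep nvidia-l4t-gstreamer
$ # Clear the GStreamer registry cache, then query the element again to force a re-scan
$ rm -rf ~/.cache/gstreamer-1.0
$ gst-inspect-1.0 nvv4l2decoder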
Is this a problem with GStreamer?
Do I need to reinstall GStreamer? If so, do I have to reinstall JetPack 6.0 and re-flash the Orin?
Any suggestions would be appreciated.