• Hardware Platform (Jetson / GPU) - Jetson Orin Nano Devkit
• DeepStream Version - 7.0
• JetPack Version (valid for Jetson only) - JP6 - L4T36.3
• TensorRT Version - 8.6.2.3
• Issue Type (questions, new requirements, bugs) - New requirement
I am running the deepstream-3d-lidar-sensor-fusion example from the DeepStream-3D Multi-Modal V2XFusion Setup (see the "DeepStream-3D Multi-Modal V2XFusion Setup" page in the DeepStream documentation) included in the DeepStream 7.0 sample apps. The example runs correctly, but I am trying to build a gst-launch pipeline that reproduces it. So far, this is the working pipeline I have:
gst-launch-1.0 \
  multifilesrc location=v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/0/camera/camera_%05d.jpg ! \
  image/jpeg, framerate=10/1 ! jpegparse ! nvv4l2decoder mjpeg=true ! queue ! \
  nvvideoconvert nvbuf-memory-type=2 compute-hw=1 ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! m.sink_0 \
  nvstreammux name=m width=1920 height=1080 batch-size=1 nvbuf-memory-type=2 compute-hw=1 ! \
  nvdspreprocess config-file=v2xfusion/config/config_preprocess.txt ! \
  nvds3dbridge config-file=databridge.yml ! mix.sink_0 \
  nvds3dmixer name=mix config-file=mixer.yml ! \
  nvds3dfilter config-file=alignment.yml ! \
  fakesink
To continue with the inference stage, I need to feed the preprocessed LiDAR data (the lidarpreprocess output) into the mixer. The preprocessing itself can be done with an nvds3dfilter, but I still need the dataloader stage (see the attached image), and I haven't found a GStreamer element that lets me load the LiDAR files located at /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion/v2xfusion/example-data/v2x-seq.4scenes.10Hz.200frame/0/lidar.
Is there a way to load this LiDAR data with a GStreamer element in the current pipeline so that it is emitted in the ds3d/datamap format for downstream processing?
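For context, inside the sample app the LiDAR source is a ds3d::dataloader component described by a YAML config. The sketch below is adapted from the lidar file-reader dataloader configs shipped with the DeepStream samples; the field values are illustrative (the exact keys and numbers for the V2XFusion data set may differ), but it shows the kind of component I am trying to attach to the gst-launch pipeline:

```yaml
# Illustrative ds3d dataloader config (values adapted from the sample
# lidar file-reader configs; not the exact V2XFusion settings).
name: lidarfileloader
type: ds3d::dataloader            # produces buffers in ds3d/datamap format
out_caps: ds3d/datamap, framerate=10/1
custom_lib_path: libnvds_lidarfileread.so
custom_create_function: createLidarFileLoader
config_body:
  data_config_file: lidar_file_list.yaml   # hypothetical list of .bin frames
  points_num: 204800
  lidar_datatype: FP32
  mem_pool_size: 4
  file_loop: False
```

In the C++ sample this component is wrapped into the pipeline via an appsrc created from the dataloader, which is the part I have not found a gst-launch equivalent for.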
Any guidance or suggestions would be greatly appreciated.
Thank you!