I was using the DeepStream-3D Multi-Modal V2XFusion sample and realized that it only seems to support offline data fusion. Is that correct?

We can see the two cars in both the video and the lidar point cloud. Looks normal.

These parameters are determined by the camera and lidar that captured the dataset, so they should stay fixed.

These are used to limit the range of the point positions rendered in the view.
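For illustration, here is a minimal sketch of how such x/y/z range limits could clip a point cloud. The range values mirror the example config below; the filter itself is a hypothetical re-implementation for clarity, not the renderer's actual code:

```python
import numpy as np

def clip_points(points, x_range, y_range, z_range):
    """Keep only points whose x/y/z fall inside the configured ranges."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (
        (x >= x_range[0]) & (x <= x_range[1])
        & (y >= y_range[0]) & (y <= y_range[1])
        & (z >= z_range[0]) & (z <= z_range[1])
    )
    return points[mask]

pts = np.array([[5.0, 3.0, 1.0],      # inside all ranges
                [2000.0, 0.0, 0.0]])  # x outside [-1000, 1600], dropped
kept = clip_points(pts, x_range=(-1000, 1600),
                   y_range=(-1000, 1600), z_range=(-300, 300))
```

Any point outside the configured box simply never reaches the screen, which can look like the cloud was "cropped".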

The definition is the same as in glm; see LearnOpenGL - Camera.
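For reference, a minimal NumPy sketch of the glm-style lookAt matrix that `view_position`, `view_target`, and `view_up` feed into (my own re-derivation of the standard right-handed construction, not DeepStream code):

```python
import numpy as np

def look_at(position, target, up):
    """Build a right-handed glm::lookAt-style 4x4 view matrix."""
    position = np.asarray(position, dtype=float)
    target = np.asarray(target, dtype=float)
    up = np.asarray(up, dtype=float)
    f = target - position
    f /= np.linalg.norm(f)        # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)        # right axis
    u = np.cross(s, f)            # recomputed up axis
    view = np.eye(4)
    view[0, :3] = s
    view[1, :3] = u
    view[2, :3] = -f
    view[:3, 3] = -view[:3, :3] @ position
    return view
```

With the config's `view_position: [20, 70, 10]`, `view_target: [30, 10, 10]`, `view_up: [1, 0, 0]`, this would be the view transform applied to the points before the perspective projection.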

When you look at the video I recorded, the point cloud data looks neatly cropped. Why? Are there other parameters I can set that affect this?

Is the lidar a 360-degree lidar?

Like this.

It looks like a 180-degree lidar.

What are your GL render settings?

Where are the GL render settings?

Take /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion/ds3d_lidar_video_sensor_v2x_fusion_single_batch.yml as an example.

The settings under:

name: ds3d_sensor_fusion_render

It’s set up like this.

---
name: v2xfusion_inference
type: ds3d::datafilter
link_to: ds3d_sensor_fusion_render
in_caps: ds3d/datamap
out_caps: ds3d/datamap
custom_lib_path: libnvds_tritoninferfilter.so
custom_create_function: createLidarInferenceFilter
config_body:
  in_streams: [lidar]
  mem_pool_size: 20
  #datatype: FP32, FP16, INT8, INT32, INT16, UINT8, UINT16, UINT32, FP64, INT64, UINT64, BYTES, BOOL
  model_inputs:
    - name: images
      datatype: FP16
      shape: [4, 3, 864, 1536]
    - name: feats
      datatype: FP16
      shape: [4, 8000, 10, 9]
    - name: coords
      datatype: INT32
      shape: [4, 8000, 4]
    - name: N
      datatype: INT32
      shape: [4, 1]
    - name: intervals
      datatype: INT32
      shape: [4, 10499, 3]
    - name: geometry
      datatype: INT32
      shape: [4, 1086935]
    - name: num_intervals
      datatype: INT32
      shape: [4, 1]
  gpu_id: 0
  #input tensor memory type after preprocess: GpuCuda, CpuCuda
  input_tensor_mem_type: GpuCuda
  custom_preprocess_lib_path: /opt/nvidia/deepstream/deepstream/lib/libnvds_3d_v2x_infer_custom_preprocess.so
  custom_preprocess_func_name: CreateInferServerCustomPreprocess
  preprocess:
    intervalsFrom: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion/v2xfusion/example-data/intervals.tensor
    geometrysFrom: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion/v2xfusion/example-data/geometrys.tensor
    imagesFrom: DS3D::VideoPreprocessTensor+0
    featsFrom: DS3D::LidarFeatureTensor+1
    coordsFrom: DS3D::LidarCoordTensor+1
    NFrom: DS3D::LidarPointNumTensor+1
  postprocess:
    score_threshold: 0.5
    batchSize: 1
  labels:
    - car:
        color: [255, 158, 0]
    - truck:
        color: [255, 99, 71]
    - construction_vehicle:
        color: [233, 150, 70]
    - bus:
        color: [255, 69, 0]
    - trailer:
        color: [255, 140, 0]
    - barrier:
        color: [112, 128, 144]
    - motorcycle:
        color: [255, 61, 99]
    - bicycle:
        color: [220, 20, 60]
    - pedestrian:
        color: [0, 0, 230]
    - traffic_cone:
        color: [47, 79, 79]
  config_file: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion/v2xfusion/config/triton_mode_CAPI.txt

---
name: ds3d_sensor_fusion_render
type: ds3d::datarender
in_caps: ds3d/datamap
with_queue: sink
custom_lib_path: libnvds_3d_gles_ensemble_render.so
custom_create_function: NvDs3D_CreateGlesEnsembleRender
gst_properties:
  sync: True
  async: False
  drop: False
config_body:
  # window size
  window_width: 1920
  window_height: 540
  color_clear: true
  window_title: DS3D-Lidar-V2X-Fusion-dGPU
  render_graph:
    # cam_0:
    - texture3d_render:
        layout: [0, 0, 960, 540]
        max_vertex_num: 6
        color_clear: false
        texture_frame_key: DS3D::ColorFrame_0+0
    - lidar3d_render:
        layout: [0, 0, 960, 540]
        color_clear: false
        lidar_color: [0, 0, 255]
        # original lidar data key
        lidar_data_key: DS3D::LidarXYZI_0+1
        lidar_bbox_key: DS3D::Lidar3DBboxRawData_0
        enable_label: True
        element_size: 4
        # project lidar data into image require image size settings
        project_lidar_to_image: true
        image_width: 1920
        image_height: 1080
        intrinsics_mat_key: DS3D::Cam0_IntrinsicMatrix
        extrinsics_mat_key: DS3D::LidarToCam0_ExtrinsicMatrix
        z_range: [-300, 300]
        x_range: [-1000, 1600]
        y_range: [-1000, 1600]
        line_width: 2.0
        font_size: 20
        #y_range: [-100, 100]
    # lidar top view
    - lidar3d_render:
        # layout [x0, y0, x1, y1]
        layout: [960, 0, 1920, 540]
        view_position: [20, 70, 10]
        view_target: [30, 10, 10]
        view_up: [1, 0, 0]
        perspective_near: 0.3
        perspective_far: 100
        # angle degree
        perspective_fov: 60
        # 0 stands for (layout.x1 - layout.x0) / (layout.y1 - layout.y0))
        perspective_ratio: 0.0
        lidar_color: [0, 255, 0]
        # lidar transformed to camera coordinates data key
        #lidar_data_key: DS3D::LidarAlignedXYZIKey
        # original lidar data key
        lidar_data_key: DS3D::LidarXYZI_0+1
        element_size: 4
        color_clear: false
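As a rough illustration of what `project_lidar_to_image` does with the intrinsic and extrinsic matrices referenced above: a hypothetical pinhole-projection sketch with made-up matrix values, not the renderer's implementation. Note that points behind the camera or outside the image bounds are dropped, which is one reason a projected cloud can look clipped:

```python
import numpy as np

# Hypothetical intrinsic matrix and lidar-to-camera extrinsic (illustrative values only).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T_lidar_to_cam = np.eye(4)

def project(points_xyz, K, T, image_width=1920, image_height=1080):
    """Project lidar points into pixel coordinates, dropping points behind
    the camera or outside the image bounds."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])
    cam = (T @ homo.T).T[:, :3]          # lidar -> camera coordinates
    in_front = cam[:, 2] > 0             # keep only points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide
    in_image = (
        in_front
        & (uv[:, 0] >= 0) & (uv[:, 0] < image_width)
        & (uv[:, 1] >= 0) & (uv[:, 1] < image_height)
    )
    return uv[in_image]
```

If the matrices stored under `DS3D::Cam0_IntrinsicMatrix` and `DS3D::LidarToCam0_ExtrinsicMatrix` are wrong, most points land outside the image and the overlay looks empty or cropped.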

You configured two views. In the first view, you configured the lidar point cloud and bboxes on the camera frame, but there is no point cloud or bbox in the output. So your question is about the first view, right?

I’m not sure which configuration keeps the point cloud in my video from being clipped; I don’t really understand it. Can you point it out for me? You can see that the point cloud projected onto the video from the lidar is clipped, which prevents me from recognizing the objects and outputting the fusion results.

The view may be correct if the lidar is a 180-degree lidar.

You may need to check whether the camera intrinsic and lidar-to-camera extrinsic matrices are correct.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
