How to convert a NvDs3DBuffer to a normal Gst RGB buffer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
x86-64 Ubuntu 22.04 LTS machine with NVIDIA GeForce RTX 2060
• DeepStream Version
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
How can I extract the information coming from an NvDs3DBuffer (ds3d/datamap) and restructure it to meet the requirements of a normal Gst RGB buffer? I want to take data coming from DS3D and pass it to 2D plugins, e.g., nvinfer.

I read in this thread that Gst-nvds3dfilter could help with the conversion; however, the documentation requires the plugin’s input and output to be of type NvDs3DBuffer or NvDsBatchMeta (with the user meta NVDS_3D_DATAMAP_META_TYPE), neither of which, as far as I understand, can be consumed by ordinary GStreamer plugins in a pipeline. Any tips on how to work around this would be greatly appreciated. Thanks in advance for any help provided.

NvDs3DBuffer is used with the DS3D interfaces (see the DeepStream 3D Custom Manual in the DeepStream 6.4 documentation). It is normally used for data that is not described by existing GStreamer caps. If you just need a plugin to generate tensor data for gst-nvinfer, please use Gst-nvdspreprocess instead of DS3D.

Thanks for the fast response. My use case is retrieving data from a stereo camera. As a reference, I’m using DeepStream’s depth-camera app to understand 3D data processing using DS3D. That said, I would like to take an NvDs3DBuffer coming from a ds3d::dataloader, extract any 2D-encoded data that I’m interested in, and convert it to a normal GStreamer image buffer, so I can use it with the remaining DeepStream 2D plugins (e.g., nvinfer).

You may need to develop the DS3D filter to convert ds3d/datamap to NvBufSurface by yourself if you want to use gst-nvinfer.
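Whatever the filter looks like, its core job is copying the 2D frame bytes out of the datamap into a pitch-linear surface. The NvBufSurface allocation details are SDK-specific, but the essential step is a per-row copy from a tightly packed RGB frame into a pitch-aligned destination. A minimal, SDK-free sketch of that copy (in real code the destination pitch comes from the allocated surface, e.g. `NvBufSurface::surfaceList[i].pitch`, not from a parameter you choose):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a tightly packed RGB frame (width * 3 bytes per row) into a
// pitch-aligned destination buffer, row by row. This mirrors the copy a
// ds3d/datamap -> NvBufSurface conversion filter has to perform; with
// the real SDK the pitch is reported by the allocated NvBufSurface.
std::vector<uint8_t> packRgbToPitched(const std::vector<uint8_t>& src,
                                      int width, int height, int pitch) {
    const int rowBytes = width * 3;  // packed RGB row size
    assert(pitch >= rowBytes);
    assert(src.size() == static_cast<size_t>(rowBytes) * height);
    std::vector<uint8_t> dst(static_cast<size_t>(pitch) * height, 0);
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst.data() + static_cast<size_t>(y) * pitch,
                    src.data() + static_cast<size_t>(y) * rowBytes,
                    rowBytes);
    }
    return dst;
}
```

For device memory the same row-wise layout change is what `cudaMemcpy2D` does in one call; the loop above is the host-side equivalent.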

A better choice may be to follow the approach used in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-lidar-inference-app; there the nvinferserver library has already been integrated with the DS3D filter.

I’m trying to follow the nvinferserver approach; however, I don’t understand how the server is connected to the filter in the lidar app.

I see that you have a preprocessor and a postprocessor. The latter I hooked up directly to the Triton server through its config YAML, so I understand that, internally, the SDK will take care of loading the postprocess shared library and attaching it to the end of the inference. However, I don’t understand how the preprocessor is handled, or even which class I should override to create my custom preprocessor.

Can you clarify how one can implement and connect a custom preprocessor? Also, is there an app I can look at to see how to push/pull data to nvinferserver programmatically, i.e., without creating GStreamer elements?

The preprocessor is an implementation of the interface IInferCustomPreprocessor, which is defined in /opt/nvidia/deepstream/deepstream/sources/includes/ds3d/common/hpp/lidar_custom_process.hpp
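For reference, the override pattern looks roughly like the sketch below. The types here (`DataMap`, `BatchArray`, `ErrCode`, the factory name) are simplified stand-ins so the example is self-contained; the real hook in lidar_custom_process.hpp takes the DS3D guard datamap, the batch input array, and a CUDA stream, so check that header in your SDK for the exact signature:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-ins for the DS3D / nvinferserver types. The real
// interface is IInferCustomPreprocessor in
// ds3d/common/hpp/lidar_custom_process.hpp.
struct DataMap { std::vector<float> points; };     // stand-in for the guard datamap
struct BatchArray { std::vector<float> tensor; };  // stand-in for the batch input

enum class ErrCode { kGood, kBadParam };

// Stand-in for IInferCustomPreprocessor: one virtual hook the inference
// filter invokes before each inference to fill the network input.
class IInferCustomPreprocessor {
public:
    virtual ~IInferCustomPreprocessor() = default;
    virtual ErrCode preproc(DataMap& datamap, BatchArray& batch) = 0;
};

// A user preprocessor: copy (and here, trivially normalize) datamap
// points into the network's input tensor.
class MyPreprocessor : public IInferCustomPreprocessor {
public:
    ErrCode preproc(DataMap& datamap, BatchArray& batch) override {
        if (datamap.points.empty()) return ErrCode::kBadParam;
        batch.tensor.resize(datamap.points.size());
        for (size_t i = 0; i < datamap.points.size(); ++i)
            batch.tensor[i] = datamap.points[i] * 0.5f;  // example scaling
        return ErrCode::kGood;
    }
};

// The real library is loaded via the filter's config file through a C
// factory symbol; the name below is illustrative, not the SDK's.
extern "C" IInferCustomPreprocessor* CreateCustomPreprocessor() {
    return new MyPreprocessor();
}
```

The point is that you implement only this one class, build it as a shared library, and let the DS3D Triton filter load it; you never talk to the server yourself.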

To understand how nvinferserver works, please refer to the Gst-nvinferserver section of the DeepStream 6.4 documentation. It is a Triton client. For the Triton server/client itself, please refer to the Triton Inference Server documentation.

nvinferserver is also open source: /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinferserver

The /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-lidar-inference-app is already a complete sample for how to use nvinferserver with DS3D.

I took another look at the lidar app but I’m still confused. The preprocessor created for the app doesn’t override IInferCustomPreprocessor; it’s its own class.

Moreover, the preprocessor has a peculiar signature: it takes a Guard Frame as one of its arguments, which doesn’t conform to the pre/postprocessor signatures expected by nvinferserver. I’m therefore guessing that the shared library isn’t passed through nvinferserver’s configuration parameters. At this point I don’t understand how the preprocessor is connected.

One of my conclusions is that an InferContext object is created inside the filter to manage the tensors passed back and forth to the server. Then the server’s communication struct SharedIBatch is created so that the data can be copied from the ds3d::filter input buffer into the network input array. Finally, somehow, this information is forwarded to the server. If that’s the case, which method or trick can I use to send the data to the server?

If you could shed some light on this matter, I would greatly appreciate it.

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

For DS3D, we have implemented a DS3D Triton inferencing filter, which is not open source. The interface IInferCustomPreprocessor is provided so that users can customize their own preprocessing for different kinds of data.

The nvinferserver is a Triton client. You need to configure both the Triton server side and the Triton client side to support a model.
There is a sample of configuring the model information for the Triton server in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-lidar-inference-app/tritonserver, and a sample of configuring nvinferserver in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-lidar-inference-app/configs/config_lidar_triton_infer.yaml. Please investigate the “createLidarInferenceFilter” parts.
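To make the wiring concrete: the filter and its custom preprocessor are both declared in the YAML, not in code. The fragment below is an illustrative sketch only — the key names and paths are assumptions from memory, so compare against the shipped config_lidar_triton_infer.yaml for the exact schema:

```yaml
# Illustrative sketch — verify key names against the shipped
# configs/config_lidar_triton_infer.yaml in your DeepStream install.
name: lidarfilter
in_caps: ds3d/datamap            # the filter consumes and produces datamaps
out_caps: ds3d/datamap
custom_lib_path: libnvds_tritoninferfilter.so      # closed-source DS3D Triton filter
custom_create_function: createLidarInferenceFilter # entry point mentioned above
config_body:
  config_file: triton_mode_CAPI.txt                # nvinferserver (Triton client) config
  custom_preprocess_lib_path: /path/to/libmy_custom_preprocess.so  # your IInferCustomPreprocessor
  custom_preprocess_func_name: CreateInferServerCustomPreprocess   # assumed factory symbol
```

Under this reading, the preprocessor shared library is handed to the DS3D Triton filter through the filter's own YAML body, not through nvinferserver's protobuf config, which is why it doesn't appear among nvinferserver's documented parameters.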

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.