DeepStream with other sensors

• Hardware Platform: Jetson AGX Xavier
• DeepStream Version: 5.0 GA
• JetPack Version: 4.4
• TensorRT Version: 7.1.3

Hi Nvidia support,

Would it be possible to create a DeepStream pipeline for object tracking if I were to use a sensor other than a camera, but preprocess the data into matrix form for both training input and DeepStream pipeline input? An example would be using lidar, but I’m more interested in whether DeepStream could be generally adapted to any sensor with proper preprocessing.
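To illustrate the kind of preprocessing I mean, here is a rough sketch of turning lidar points into a fixed-size 2D matrix that a detector could consume. The grid size, cell resolution, and sample points are made-up example values, not from any real sensor:

```python
# Hypothetical sketch: flatten lidar (x, y, z) points into a 2D occupancy
# matrix (a bird's-eye-view grid) that could feed a detection model.
# Grid extents and resolution are made-up example values.

GRID_SIZE = 8          # 8x8 cells
CELL_METERS = 1.0      # each cell covers 1 m x 1 m
X_MIN, Y_MIN = 0.0, 0.0

def points_to_grid(points):
    """Map (x, y, z) points to an occupancy-count matrix."""
    grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    for x, y, _z in points:
        col = int((x - X_MIN) / CELL_METERS)
        row = int((y - Y_MIN) / CELL_METERS)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row][col] += 1
    return grid

points = [(0.5, 0.5, 1.2), (0.7, 0.4, 0.9), (3.2, 6.8, 2.0)]
grid = points_to_grid(points)
print(grid[0][0])  # two points fall in cell (0, 0)
```

The idea is that once every sensor's output is reduced to a matrix like this, the same training and inference machinery could in principle be reused.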

It seems like it would be possible if I trained a custom object detection model, but I wasn’t sure, since all the examples I have found are for camera sensors only.


Currently, DeepStream supports only image/video input, since most of its GStreamer plugins, including nvinfer, are designed for images.
For example, the TensorRT-based nvinfer includes a pre-processor that converts YUV to planar RGB, which is not usable for lidar data.
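To make the point concrete, here is a sketch of the kind of per-pixel color conversion such an image pre-processor performs; the coefficients below are the standard full-range BT.601 values, not DeepStream's exact implementation, and the step clearly has no meaning for a lidar point cloud:

```python
# Sketch of a BT.601 YUV -> RGB conversion for a single 8-bit pixel, the kind
# of image-specific step the nvinfer pre-processor performs. Coefficients are
# the standard full-range BT.601 values, not DeepStream's actual code.

def yuv_to_rgb(y, u, v):
    """Convert one 8-bit YUV pixel to an 8-bit (R, G, B) tuple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)

    def clamp(c):
        return max(0, min(255, int(round(c))))

    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # mid-gray -> (128, 128, 128)
```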


Thank you for your response! It’s unfortunate that I can’t customize the DeepStream pipeline the way I wanted. I was hoping I could build a working pipeline by avoiding camera-specific plugins and adding my own preprocessing and sensor-specific object detection models.

Do you have any advice on which NVIDIA tools would be best for building an object tracking pipeline with a variety of sensors? I have seen that NVIDIA DRIVE Labs has sample code that handles cameras and lidar, but I’m not sure it can handle sensors beyond those.

My initial impression is that I would have to code it from scratch in CUDA, but I am relatively new to parallel programming, so any head start would be very helpful.


Can you clarify which sensor you mean?
There are many kinds of sensors.

Thanks for getting back to me. I am implementing this as R&D for an early-stage company project. Since the project is so new, the main issue is that we don’t yet know the full sensor suite. Examples of sensors likely to be used are UV/IR image sensors, lidar, and cameras, but there will almost certainly be more.

Our goal is to create one app where object tracking could be performed on some data that we get as the result of sensor fusion. For now, since the sensor suite is unknown, I was hoping to build the fundamentals of a pipeline that could feasibly be expanded to include any new sensors with proper sensor fusion and model tuning. I’m not sure how realistic that goal is; any advice is extremely appreciated!

Thanks again


If you use GStreamer, the non-video/image sensors may need a special pipeline, while the video/image sensors go through DeepStream, and you can fuse the two streams at some point.