Fusion of Sensor Data and DeepStream 3.0 SDK

Just finished the seminar on the DeepStream SDK 3.0. Thank you for the examples, which were helpful.

My question is: this is great for RGB camera streams, but RGB streams have serious limitations as well.

For my particular product, I am combining an RGB, depth, and thermal camera for full night-and-day surveillance. My original plan is to put the DL model on the Jetson, have it serve as the primary first pass (identifying a pedestrian, for example), and then pass the frame data and bounding boxes back to the server for further processing and state management.
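To make the plan concrete, here is a minimal sketch of the edge-to-server handoff I have in mind. The detector stub and payload format are purely illustrative assumptions; in practice the detections would come from a TensorRT-accelerated model on the Jetson, and the payload would go over the network to the server.

```python
import json

def detect_pedestrians(frame_id):
    """Hypothetical first pass on the Jetson.

    Stub only: a real implementation would run inference on the RGB
    frame and return the detected objects with their bounding boxes.
    """
    return [{"label": "pedestrian",
             "bbox": [100, 150, 60, 120],   # x, y, width, height (illustrative)
             "score": 0.91}]

def build_server_payload(frame_id, detections):
    """Package the frame reference and bounding boxes as JSON for the
    server-side pass that does further processing and state management."""
    return json.dumps({"frame_id": frame_id, "detections": detections})

payload = build_server_payload(42, detect_pedestrians(42))
print(payload)
```

The idea is that only lightweight metadata (frame ID plus boxes) crosses the network on every frame, with full frames fetched or streamed only when the server needs them.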

However, DeepStream may be able to do something here to make this sensor fusion and model problem more efficient (and easier).

I’m not yet familiar enough with TensorRT or DeepStream to understand how they would work together.

Does the team here have any suggestions on how to integrate more NVIDIA tech into this product?

And, as a first step, could someone familiar with DeepStream and TensorRT explain how the two fit together?