Fusion of Data

Just finished the seminar on the DeepStream SDK 3.0. Thank you for the examples, which were helpful.

My question is: this is great for RGB camera streams, but RGB streams have serious limitations as well.

For my particular product, I am combining RGB, depth, and thermal cameras for full day-and-night surveillance. My original plan is to put the DL model on the Jetson, fuse all the sensor data, have it serve as the primary first pass (e.g., identifying a pedestrian), and then pass the frame values and bounding boxes back to a server via Kafka/ZeroMQ or something similar for further processing and state management.
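As a sketch of the "pass bounding boxes back to a server" step, here is one possible message format for the per-frame metadata. This is only an illustration under my own assumptions (the function name, field names, and values are hypothetical, not from any SDK); the payload could be published over Kafka or ZeroMQ with any client library.

```python
import json

def make_detection_message(stream_id, frame_num, detections):
    """Package per-frame detections into a JSON payload suitable for
    publishing to a broker (Kafka, ZeroMQ, etc.).

    detections: list of (label, confidence, (left, top, width, height)).
    All names here are illustrative, not part of any NVIDIA API.
    """
    return json.dumps({
        "stream_id": stream_id,   # e.g. "rgb", "thermal", "depth"
        "frame": frame_num,
        "objects": [
            {
                "label": label,
                "confidence": round(conf, 3),
                "bbox": {"left": l, "top": t, "width": w, "height": h},
            }
            for (label, conf, (l, t, w, h)) in detections
        ],
    })

# Example: one pedestrian detected on the thermal stream.
msg = make_detection_message(
    "thermal", 1042, [("pedestrian", 0.91, (120, 64, 40, 96))]
)
```

Keeping the on-device message this small (labels and boxes, not pixels) is what makes the Jetson-as-first-pass design cheap on bandwidth; the server can request full frames only when it needs them.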

However, DeepStream may be able to do something here to make this sensor fusion and model problem more efficient (and easier).

I’m not familiar enough with TensorRT or DeepStream to understand how they would work together.

Are there any suggestions from the team here on how to integrate more NVIDIA tech into this product?

DeepStream supports multiple channels and multiple models. You can refer to the deepstream-test2 and deepstream-test3 sample applications.

Could you expand on what test2 and test3 are, please?
I am investigating the multiple-channels / multiple-models use case.
Thanks