DeepStream Workflow Explained

Hello everyone!
Hope you are all doing well during this quarantine.
I am an Italian university student in Verona and I started using DeepStream to deploy a TLT model on my TX2.
Although I managed to deploy the model and everything works fine, my colleagues and I are now working on pruning and quantization, and we can’t really understand what truly happens in DeepStream’s workflow.
For example, I have yet to understand whether, after ingesting all the inputs (say we set our config file to operate on 4 videos at the same time), the model operates on each single video or on the global image made of the 4 videos tiled together. And if the model operates on each video separately, how is that done? Is a copy of the model allocated for each video?
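For concreteness, here is roughly the relevant part of our deepstream-app config (just a sketch: URIs and sizes are placeholders, and only one of the four sources is shown):

[tiled-display]
enable=1
rows=2
columns=2

[source0]
enable=1
# type=3 is a multi-URI file source
type=3
uri=file:///path/to/video0.mp4
num-sources=1

[streammux]
# frames from all enabled sources are gathered into batches of this size
batch-size=4
width=1920
height=1080

[primary-gie]
enable=1
config-file=config_infer_primary.txt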

We have read the documentation but, unfortunately, have not found the information we need.
In a few words: is there any good soul willing to explain in simple terms what happens behind the scenes, from feeding in the inputs to getting the live feedback shown by the OSD?
He or she would greatly help out part of Verona’s community.

I’m interested in an answer as well. The concept of DeepStream is awesome, but not the latest 3.0(?)/4.0 implementation built on top of GStreamer: it hides what actually happens.

There is no explicit flow of execution like in the old API, shown below, from an NVIDIA presentation I came across.

Hopefully, they’ll fix this somehow. Good luck and stay safe.

// Create a device worker to process 1 stream on GPU 0
IDeviceWorker* pDW = createDeviceWorker(1, 0);

// Add a decode task, specifying the CODEC type
pDW->addDecodeTask(cudaVideoCodec_H264);

// Add a module for color conversion.
// Color conversion module outputs:
// 0: BGR_PLANAR
// 1: NV12 (YCbCr)
IModule* pCC = pDW->addColorSpaceConvertorTask(BGR_PLANAR);
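For what it’s worth, my rough guess at where those stages went in the GStreamer-based releases (my own mapping, not from NVIDIA material, so take it with a grain of salt):

// Old DeepStream 1.x task            ->  DeepStream 4.x GStreamer element
// addDecodeTask(cudaVideoCodec_H264) ->  nvv4l2decoder (usually inside uridecodebin)
// addColorSpaceConvertorTask(...)    ->  nvvideoconvert
// inference on the device worker     ->  nvinfer
The “flow” is now simply the order in which these elements are linked in the pipeline.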

Hi,
Please refer to this FAQ for the data flow from streammux input to nvinfer output: DeepStream SDK FAQ, nvinfer / streamMux / DeMux
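To add a bit of detail on the batching question: nvstreammux collects one frame from each source and assembles them into a single batched buffer; nvinfer then runs a single TensorRT engine on that whole batch (its batch-size should match), so there is no per-video copy of the model. Tiling happens only after inference, in nvmultistreamtiler, purely for display, and nvdsosd draws the results on top. A rough, untested gst-launch sketch of that flow on Jetson (paths and sizes are placeholders; only two of the four sources are shown):

gst-launch-1.0 \
  nvstreammux name=mux batch-size=4 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvmultistreamtiler rows=2 columns=2 ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink \
  uridecodebin uri=file:///path/to/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/to/video1.mp4 ! mux.sink_1

So for your pruning and quantization experiments, what matters is that the engine sees batches of size 4, not four separate engine instances.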
