Hope you are all doing well during this quarantine.
I am an Italian university student in Verona and I started using DeepStream to deploy a TLT model on my TX2.
I managed to deploy the model and everything works fine. Now I am working on pruning and quantization with colleagues, but we can't really understand what actually happens inside DeepStream's workflow.
For example, I have yet to understand whether, after receiving all the inputs (say we set our config file to operate on 4 videos at the same time), the model runs on each single video or on the global image made by tiling the 4 videos together. And if the model operates on each video separately, how is that done? Is a copy of the model allocated for each video?
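To make the question concrete, here is a minimal sketch of the kind of deepstream-app config we have in mind. The file paths are placeholders, and the group/key names follow the deepstream-app sample configs as we understand them, so please correct us if we misread them:

```ini
[source0]
enable=1
type=2                 # URI file source (as in the sample configs)
uri=file:///path/to/video0.mp4
# [source1]..[source3] are defined the same way

[streammux]
batch-size=4           # frames from the 4 sources get collected here

[primary-gie]
enable=1
batch-size=4           # is this one inference call per 4-frame batch?

[tiler]
enable=1
rows=2
columns=2              # 2x2 composite of the 4 streams for display
```

In particular, we are unsure whether inference happens on the batch formed by the streammux, or on the tiled 2x2 image shown on screen.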
We read the documentation, but unfortunately we have not found the information we need.
In a few words: is there any good soul willing to explain in simple terms what happens behind the scenes, from feeding in the inputs to seeing the live output rendered by the OSD?
They would greatly help out part of the Verona community.