I get this from any network source, including wired IP cameras and YouTube URIs, in any language, so it's not a Python issue. I think the core of the problem is GStreamer's network elements. I see the same problems with similarly configured pipelines on Intel/AMD.
If I figure out a good solution, I will post it in this forum (hopefully Nvidia is working on this as well). So far, uridecodebin has given me the best results for any sort of network (or local) source, since it selects decode elements automatically, but it struggles with many network sources at once.
I'm going to experiment with adding queue elements next to see if that helps, since a queue splits the pipeline into separate threads. I'm not sure whether uridecodebin already contains one internally. So far the limit on my Xavier is about four YouTube streams before frames start to drop.
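For reference, the kind of thing I mean is dropping a queue in after the decode branch, along these lines (a sketch only; the URI and properties are placeholders, not my actual pipeline):

```shell
# Hypothetical test pipeline: the queue decouples decoding from downstream
# processing into its own thread. The uri is a placeholder.
gst-launch-1.0 uridecodebin uri=rtsp://example.local/stream \
    ! queue max-size-buffers=30 \
    ! fakesink sync=false
```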
Re: C vs. Python, DeepStream performance seems to be very similar as long as you don't try to do anything heavy in your Python callbacks. The main loop, as well as most of what it calls, is implemented in C.
If it's taking two minutes to start the app, it's likely that the .engine file is not being loaded, and/or the path it would be written to is not writable. In your case, you can find the generated .engine file in "/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/", and specify that path in the config file for the primary inference engine. On my Xavier, the line in question is:
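Something like the following (the exact .engine filename is an example only; it varies by device, batch size, and precision, so use whatever you actually find in that folder):

```ini
# example only -- the actual .engine filename differs per platform/precision
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```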
The file generated for the Nano is named differently, but you will find it in that same folder if you look. If you copy and paste that path into "model-engine-file", the app should no longer rebuild the engine on startup. If it does anyway (it did on my Xavier), you can comment out some lines so the config looks like this:
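Roughly like this, as an illustration (the key names and relative paths shown are examples; match them to your own config):

```ini
# with the source model lines commented out, nvinfer has nothing to
# rebuild from and must load the prebuilt engine
#model-file=../../models/Primary_Detector/resnet10.caffemodel
#proto-file=../../models/Primary_Detector/resnet10.prototxt
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```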
That is the only way I was able to stop it from rebuilding the same .engine on every startup. Unfortunately, this whole setup requires the running user to have write access to certain global paths. If the paths aren't already world-writable (I think they are by default), you can change the containing folder's ownership to root:video and make the folder mode 775 (group-writable), or you can copy the required files from Primary_Detector to somewhere under your home directory and modify the config file accordingly, so the app is able to write its .engine file.
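As a sketch of the second option (the paths here are assumptions; adjust SRC to match your DeepStream install):

```shell
#!/bin/sh
# Stage the Primary_Detector files somewhere the current user can write,
# so nvinfer can cache its serialized .engine alongside them.
SRC=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector
DST="$HOME/deepstream_models/Primary_Detector"
mkdir -p "$DST"
# copy the model files if the DeepStream install is present at SRC
cp "$SRC"/* "$DST"/ 2>/dev/null || echo "adjust SRC for your install"
# then point model-file / proto-file / model-engine-file in the nvinfer
# config at "$DST" instead of the read-only system path.
# Alternatively, keep the files in place and make the folder group-writable:
#   sudo chown root:video "$SRC" && sudo chmod 775 "$SRC"
```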
If the app isn't able to write its .engine file to wherever the model-file is found, it fails silently and regenerates the model every single time the app launches, which can take minutes.
@Nvidia, it would be nice if a model cache folder were created under something like ~/.deepstream/ and searched automatically, to avoid all this.