I need to write the app in Python, so I used the config_infer_primary.txt file to run the model with the DeepStream sample Python apps.
In deepstream_test_1.py the performance is 30 FPS.
In deepstream-imagedata-multistream-redaction the performance is also 30 FPS.
I need a higher FPS for the RTSP stream.
How can I achieve higher FPS using my custom yolov4-tiny model on a Jetson Orin?
deepstream_test_1.py and deepstream-imagedata-multistream-redaction use different pipelines and configurations than deepstream-app, so they cannot reach the same FPS as deepstream-app; comparing the performance of different pipelines is not meaningful. Handling higher-resolution video takes more time than handling lower-resolution video, so no one can predict the speed without knowing how much work the pipeline has to do.
Take deepstream_test_1.py as an example: its pipeline contains nvdsosd and nveglglessink. If you change the pipeline to match deepstream-app (remove nvdsosd, and replace nveglglessink and nvegltransform with a fakesink with sync=0) and use the same source (same resolution and format), the performance should be similar.
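As a sketch of that change, the benchmarking pipeline could be built as a description string for Gst.parse_launch. The element settings below are illustrative and assume an H.264 elementary-stream source as in deepstream_test_1.py:

```python
# Sketch: a deepstream_test_1-style pipeline description with the display
# stages removed. nvdsosd / nvegltransform / nveglglessink are dropped and
# a non-rendering fakesink with sync=false is used instead, so measured
# FPS reflects decode + inference throughput only.
def benchmark_pipeline_desc(stream_path: str, pgie_config: str) -> str:
    return (
        f"filesrc location={stream_path} ! h264parse ! nvv4l2decoder ! "
        "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
        f"nvinfer config-file-path={pgie_config} ! "
        "fakesink sync=false"
    )

desc = benchmark_pipeline_desc("sample_720p.h264", "config_infer_primary.txt")
# pipeline = Gst.parse_launch(desc)   # then set it to PLAYING as usual
```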
Okay, got it. So how could I modify deepstream-imagedata-multistream-redaction to achieve similar performance?
It uses udpsink to transmit the RTP packets over IP network.
By setting the sink's sync property to 0 with sink.set_property('sync', 0), I can achieve better performance, but the FPS fluctuates, going up and down.
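To quantify the fluctuation, I measure throughput with a small counter called once per buffer, e.g. from a sink-pad probe (a sketch; the interval and the call site are up to you). With sync=0 the sink no longer paces output to the clock, so the rate reported is raw pipeline throughput and will naturally vary with scene complexity:

```python
import time
from typing import Optional

class FPSMeter:
    """Frame-rate meter: call tick() once per buffer (e.g. from a pad
    probe). Reports the average FPS over each measurement interval."""

    def __init__(self, interval_sec: float = 5.0, now=time.monotonic):
        self.interval = interval_sec
        self.now = now          # injectable clock, handy for testing
        self.start = now()
        self.frames = 0

    def tick(self) -> Optional[float]:
        """Count one frame; return the average FPS each time the
        measurement interval elapses, otherwise None."""
        self.frames += 1
        elapsed = self.now() - self.start
        if elapsed >= self.interval:
            fps = self.frames / elapsed
            self.frames = 0
            self.start = self.now()
            return fps
        return None
```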
Got it, thanks! I added two more secondary detectors to the pipeline.
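For reference, I link the chain with a small helper (a sketch; the element variables are from my app and assume everything has already been created and added to the pipeline):

```python
def link_chain(elements):
    """Link already-created-and-added GStreamer elements in order, e.g.
    [streammux, pgie, sgie1, sgie2, sgie3, nvvidconv, sink].
    Raising on a failed link surfaces wiring mistakes at setup time
    instead of as a silent stall once the pipeline is PLAYING."""
    for src, dst in zip(elements, elements[1:]):
        if not src.link(dst):
            raise RuntimeError(f"failed to link {src} -> {dst}")
```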
pgie and sgie1 work just fine, but when I add the third one I get the following error:
WARNING: Num classes mismatch. Configured: 37, detected by network: 1
I have tested the third detector on its own and it works just fine: I don't get that error, and it detects all the objects. The error comes from nvdsparsebbox_Yolo.cpp; it seems like it is reading the number of classes of the first model. How do I fix this?
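For reference, the third detector's nvinfer config currently looks roughly like this (file names, IDs, and values are illustrative; all three detectors share the same custom-lib-path):

```ini
[property]
gie-unique-id=4                 # unique per nvinfer instance
process-mode=2                  # secondary: operate on detected objects
operate-on-gie-id=1             # run on the primary detector's output
num-detected-classes=37         # this model's class count
model-engine-file=model_b1_gpu0_fp16.engine
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```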