Hi,
I am trying to run detection on a 360° fisheye video using DeepStream (Python), using the dewarper plugin to generate 4 surfaces, running them through the inference engine, and following that with tracking and some functionality from the nvanalytics plugin.
My pipeline is as follows:
source → h264parser → decoder → nvvidcon_dewarp → caps_dewarp → dewarper → streammux → queue1 → pgie → queue2 → tracker → nvanalytics → tiler → queue3 → nvvidconv → queue4 → nvosd → queue5 → nvvidconv_postosd → encoder → hparse → filesink
It works: I get an output of 4 surfaces, and detection runs fine on all of them. However, I have some questions I would appreciate help with:
- Even after setting the batch size of both the streammux and the inference engine to 4, and generating an engine for batch size 4, the 4 dewarped surfaces are passed through the inference plugin one after another rather than as a single batch, which is not what I want. Is there a way to treat the 4 dewarped surfaces as 4 separate sources?
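For reference, here is a sketch of how I am configuring the relevant elements (property names follow the deepstream-dewarper-test C sample as far as I understand it; `num-batch-buffers` on the dewarper and `num-surfaces-per-frame` on the streammux are my best guess at how to describe the 4 surfaces, so please correct me if they are wrong):

```python
# Sketch only -- element variables are the ones from my pipeline above.
dewarper.set_property("num-batch-buffers", 4)        # 4 dewarped surfaces per input frame
streammux.set_property("batch-size", 4)
streammux.set_property("num-surfaces-per-frame", 4)  # tell the muxer one frame carries 4 surfaces
pgie.set_property("batch-size", 4)                   # engine was built for batch size 4
```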
- I used the tiler to try to view the four surfaces together, but changing the rows and columns properties of the tiler does not change the layout: the 4 dewarped surfaces are always shown stacked on top of each other. Is this because all the dewarped surfaces are treated as a single source, and how can I change that?
- If I want to save the dewarped frames separately, how can I access the dewarped surfaces and convert them to NumPy arrays? In other words, how can I access the dewarper metadata structure?
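This is the kind of probe I imagine using, based on the pattern in the deepstream_python_apps samples (a sketch only: I have not verified it on the dewarper output, the `pyds`/`Gst` imports are commented out because they need a DeepStream install, and `pyds.get_nvds_buf_surface` requires the buffer to already be RGBA, so an nvvideoconvert + capsfilter would be needed upstream of the probe point):

```python
import numpy as np

# Assumed available on the Jetson with DeepStream 5.1 installed:
# import pyds
# from gi.repository import Gst

def frame_to_bgr(rgba):
    """Drop the alpha channel and reverse channel order so the
    array can be written with cv2.imwrite (which expects BGR)."""
    return np.ascontiguousarray(rgba[:, :, 2::-1])

def save_surfaces_probe(pad, info, u_data):
    """Pad probe (attached downstream of streammux) that saves each
    surface in the batch as a separate .npy file."""
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Returns a numpy view of the surface; copy before the buffer is reused.
        rgba = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        np.save("surface_%d_%d.npy" % (frame_meta.frame_num, frame_meta.batch_id),
                frame_to_bgr(rgba))
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

I am assuming each dewarped surface shows up as its own frame meta in the batch (indexed by `batch_id`), but I am not sure whether that holds when the surfaces come from a single source via the dewarper.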
I appreciate the help in advance.
• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) Jetpack 4.5.1
• TensorRT Version 7.1.3