Hi,
@NvCJR thanks for your reply.
But I realised that on the Nano the FPS drops drastically when multiple models are loaded, and even for a single model the load times are too long.
Nonetheless, the throughput offered by the Coral USB Accelerator could complement the Jetson Nano's GPU, especially since lower-cost PCIe accelerators can be added.
So I wanted to get the community's opinion on the following method for integrating the Coral USB Accelerator with DeepStream:
Step 1. Use appsrc and appsink, similar to https://github.com/google-coral/examples-camera/blob/master/gstreamer/gstreamer.py, to run inference on the image pipeline via the Coral USB Accelerator. Questions for this step:
- What do you think about this approach? Are there any pitfalls, or is the idea not feasible at all?
- How should situations with batch size > 1 be handled?
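To make the batch > 1 question concrete, here is a minimal sketch of what I have in mind. It assumes the appsink "new-sample" callback hands us one contiguous batched buffer, and that the Edge TPU interpreter processes a single image per invoke, so the batch has to be split and inferred frame by frame. The function names and the raw-bytes representation are purely illustrative, not the actual DeepStream/NvBufSurface API:

```python
# Hypothetical sketch for Step 1: splitting a batched raw-frame buffer
# so each frame can be fed to the Edge TPU one at a time.
# In the real pipeline the buffer would come from an appsink
# "new-sample" callback; names here are illustrative assumptions.

def split_batched_frames(buf: bytes, batch_size: int, frame_size: int):
    """Split a contiguous buffer of `batch_size` frames into per-frame chunks."""
    if len(buf) != batch_size * frame_size:
        raise ValueError("buffer length does not match batch_size * frame_size")
    return [buf[i * frame_size:(i + 1) * frame_size] for i in range(batch_size)]

def infer_batch(buf: bytes, batch_size: int, frame_size: int, run_tpu_inference):
    """Run the (assumed single-image) Edge TPU inference over each frame."""
    return [run_tpu_inference(frame)
            for frame in split_batched_frames(buf, batch_size, frame_size)]
```

The open question is whether serializing the batch like this costs too much latency compared to letting nvinfer batch on the GPU.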
Step 2. After Step 1 is complete and we have the bounding boxes, feed the detected boxes into nvtracker by injecting NvDsObjectMeta into NvDsFrameMeta. Questions for this step:
- I am not sure whether this step is feasible. Feedback, and some pointers/examples if possible, would be appreciated.
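My rough understanding of Step 2 is that each detection would be turned into an object meta (in the Python bindings, acquired via pyds.nvds_acquire_obj_meta_from_pool and attached with pyds.nvds_add_obj_meta_to_frame), with its rect_params filled in pixel coordinates. Since Coral detection models return normalized boxes, the conversion would look roughly like the sketch below. The dict is only a stand-in for the real NvDsObjectMeta struct; the field names mirror DeepStream's, but everything else is an assumption:

```python
# Hypothetical sketch for Step 2: converting a Coral detection
# (normalized [xmin, ymin, xmax, ymax] box) into the pixel-space
# fields that NvDsObjectMeta.rect_params expects before the meta
# is attached to NvDsFrameMeta for nvtracker.

def to_obj_meta_fields(bbox_norm, score, class_id, frame_w, frame_h):
    """Map a normalized detection to NvDsObjectMeta-style fields (illustrative)."""
    xmin, ymin, xmax, ymax = bbox_norm
    return {
        "rect_params": {
            "left": xmin * frame_w,
            "top": ymin * frame_h,
            "width": (xmax - xmin) * frame_w,
            "height": (ymax - ymin) * frame_h,
        },
        "confidence": score,
        "class_id": class_id,
    }
```

Whether nvtracker will then track these externally injected objects the same way it tracks nvinfer output is exactly what I would like feedback on.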
Any feedback will really help.
Thanks