Using Coral USB accelerator with Jetson Nano

@NvCJR thanks for your reply.
However, I realised that on the Nano the FPS drops drastically when multiple models are loaded, and even for a single model the load times are too long.
Nonetheless, the FPS offered by the Coral USB accelerator could complement the Jetson Nano’s GPU, especially since lower-cost PCIe accelerators can be added.

So I wanted to know the community’s opinion on the following method for integrating the Coral USB accelerator with DeepStream:

Step 1. Use AppSrc and AppSink to do inferencing on an image pipeline via the Coral USB accelerator. Questions in this step:

  • What do you think about this approach? Any pitfalls, or is this idea not feasible at all?
  • How to handle situations when Batch>1
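To make the Batch>1 question concrete: nvstreammux emits one batched buffer, while a single Edge TPU interpreter takes one image at a time, so one option I'm considering is splitting the batch and running the frames sequentially (or round-robin across several Coral devices). A minimal sketch of just that splitting logic, with the batched buffer modelled as a flat sequence of frames laid out back to back (all names here are hypothetical, not DeepStream API):

```python
def split_batch(batched, batch_size, frame_size):
    """batched: flat sequence of batch_size * frame_size values,
    as a batched buffer lays frames out back to back."""
    assert len(batched) == batch_size * frame_size
    return [batched[i * frame_size:(i + 1) * frame_size]
            for i in range(batch_size)]

def run_batch(batched, batch_size, frame_size, infer_one):
    # infer_one stands in for a per-frame Edge TPU invoke() call;
    # results come back in frame order, ready to re-attach per frame.
    return [infer_one(frame)
            for frame in split_batch(batched, batch_size, frame_size)]
```

The per-frame results would then be matched back to the corresponding NvDsFrameMeta entries of the batch.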

Step 2. After step 1 is complete and we have the bounding boxes, we feed the detected bounding boxes into nvtracker by injecting NvDsObjectMeta into NvDsFrameMeta. Question in this step:

  • I’m not sure whether this step is feasible. I need feedback here, and some pointers/examples if possible.
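As I understand it, part of the bookkeeping in this step is converting the detector's normalized [ymin, xmin, ymax, xmax] boxes (the usual Edge TPU SSD output layout) into the pixel-space left/top/width/height that NvDsObjectMeta’s rect_params carries. A hedged sketch of that conversion, independent of the DeepStream API (the dict below only mirrors the rect_params field names for illustration):

```python
def to_rect_params(box, frame_w, frame_h):
    """box: normalized (ymin, xmin, ymax, xmax) as Edge TPU SSD
    models typically emit; returns pixel-space fields shaped like
    NvDsObjectMeta.rect_params (left, top, width, height)."""
    # Clamp to [0, 1] first: detectors can emit slightly out-of-range values.
    clamp = lambda v: min(max(v, 0.0), 1.0)
    ymin, xmin, ymax, xmax = (clamp(v) for v in box)
    left = xmin * frame_w
    top = ymin * frame_h
    return {
        "left": left,
        "top": top,
        "width": xmax * frame_w - left,
        "height": ymax * frame_h - top,
    }
```

The resulting rectangle would be written into a freshly acquired NvDsObjectMeta and attached to the frame's NvDsFrameMeta before nvtracker runs.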

Any feedback will really help.


You seem to be on the right path using AppSrc/AppSink, but we are not aware of any concerns/pitfalls since we haven’t tried this and currently don’t plan to support it. I’m not sure what you mean by “How to handle situations when Batch>1”.

Hi @NvCJR,
Thank you for the reply.
I understand you are not supporting the Coral USB or PCIe accelerators at the moment.
But is there any other reference code or discussion on the forum where someone tried to:

  1. Extract Image from NvDsFrameMeta
  2. Allocate, initialise and inject NvDsObjectMeta with bounding box coordinates

  1. NvDsFrameMeta is only metadata and doesn’t contain the image itself. I would suggest looking at the “get_converted_mat” function in the ds-example plugin from the SDK to understand how to fetch an image from the GstBuffer.

  2. Have a look at the “attach_metadata_full_frame” function in the ds-example plugin to understand how this is done.
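One detail worth noting when fetching the image along the lines of get_converted_mat: NvBufSurface planes are pitch-linear, i.e. each row occupies `pitch` bytes even though only `width * bytes_per_pixel` of them are image data, so a naive flat copy picks up padding. A small stdlib-only illustration of the row-by-row copy (the buffer here is synthetic; real code would map the NvBufSurface instead):

```python
def depitch(pitched: bytes, width_bytes: int, pitch: int, height: int) -> bytes:
    """Copy the valid width_bytes of each row out of a pitch-linear
    buffer, dropping the per-row padding that pitch-linear layouts
    (such as NvBufSurface planes) carry."""
    assert pitch >= width_bytes and len(pitched) >= pitch * height
    return b"".join(pitched[r * pitch:r * pitch + width_bytes]
                    for r in range(height))
```

The tightly packed result is what an inference library such as the Edge TPU runtime would expect as input.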