Remove primary inference from the GSTPipeline and perform classification only on cropped images (custom-cropped)

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU): NVIDIA RTX 2080 Ti
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version : 450.102.04
• Issue Type: question

I want to use only classification in the GSTPipeline and bypass primary inference. The use case is to crop n images from each source frame and send each crop through the pipeline for classification.

We found gst-dsexample as a way to add custom scaling or cropping logic before classification, but that is also one-to-one processing, i.e. a single input frame is processed to give a single output.

So, the question is:

  1. How can we produce multiple outputs from the gst-dsexample plugin for the next stage?
  2. How can we configure just a classifier in the pipeline after the muxer/gst-dsexample so that only classification runs?

Let me know if any other detail is needed.

  1. The DsExample plugin works on batches; it can handle multiple frames in a batch. Please refer to the gst_dsexample_transform_ip() function. In DeepStream, frames are organized in batches.
  2. The nvinfer network type is configured in the nvinfer configuration file. Please set “network-type” to 1, which means classifier.
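For reference, a minimal secondary-classifier nvinfer configuration along these lines might look like the following sketch (the engine/label paths, batch size, and unique id are placeholders, not values from this thread):

```ini
[property]
gpu-id=0
# 1 = classifier (0 = detector, 2 = segmentation, 3 = instance segmentation)
network-type=1
# 2 = secondary mode: operate on objects (NvDsObjectMeta) instead of full frames
process-mode=2
# placeholder paths -- replace with your own model and labels
model-engine-file=classifier_fp16.engine
labelfile-path=labels.txt
batch-size=16
gie-unique-id=2
```

With process-mode=2, nvinfer only runs on object metadata attached upstream, which is what makes the crop-via-metadata approach discussed below work.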

You need to understand that the whole of DeepStream is batch-based: nvstreammux organizes frames into batches, and nvinfer works on batches. So if you have only one input video, there is only one frame in each batch, and it can only process one frame at a time.

Thanks Fiona,

I am clear that DeepStream organizes frames in batches, but the use case I want to target is this: say a batch of 5 frames is given as input; I then have to crop and extract 1–4 images from each frame in the batch and pass that output on to the classifier.

So you will input 5 pictures to the pipeline, right? You are talking about a multiple-input pipeline, right?

Yes, I would be taking multiple RTSP streams as input to the pipeline.

OK. That’s correct. gst_dsexample_transform_ip() already shows how to get the multiple frames in a batch.

I have checked the gst_dsexample_transform_ip() function; it iterates over the multiple frames in a batch, but my question was how I can crop each frame in the batch into multiple images and provide them to the classifier.

Or, is there a way to add custom NvDsObjectMeta to each frame’s metadata, so that we can specify custom object dimensions in the FrameMeta which can then be processed by the classifier?

You don’t need to crop the frames. The classifier will crop the frames by itself according to the NvDsObjectMeta when nvinfer works in secondary GIE mode. What you have to do is put the correct bbox parameters into the NvDsObjectMeta inside the dsexample plugin. There is a sample showing how to generate a new NvDsObjectMeta in deepstream-infer-tensor-meta-test; please refer to the part from “nvds_acquire_obj_meta_from_pool” to “nvds_add_obj_meta_to_frame”.

Thanks Fiona,
The pointers you mentioned helped me, and now I am able to run classification on the custom-added objects.