I want to use only classification in the GStreamer pipeline and bypass the primary inference. The use case is to crop n images from each source frame and send each crop for classification in the pipeline.
We have found gst-dsexample as a way to add custom scaling or cropping logic before classification, but that is also one-to-one processing, i.e. a single frame is processed to give a single output.
So, the questions are:
1. How can we produce multiple outputs from the gst-dsexample plugin for the next stage?
2. How can we configure just a classifier in the pipeline after the muxer/gst-dsexample so that it runs only classification?
Let me know if any other detail is needed.
Thanks.
The DsExample plugin works on a batch basis; it can handle multiple frames in a batch. Please refer to the gst_dsexample_transform_ip() function. In DeepStream, frames are organized in batches.
You need to understand that the whole of DeepStream is batch-based: nvstreammux organizes frames into batches, and nvinfer works on batches. So if you have only one input video, there is only one frame in the batch, and it can only work frame by frame.
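To illustrate the batch layout, here is a minimal sketch of how a plugin such as gst-dsexample walks the frames muxed into one GstBuffer. It assumes the DeepStream SDK headers are available; the function name walk_batch is mine, not from the SDK, and this fragment is illustrative rather than a drop-in build.

```c
/* Sketch only: requires the DeepStream SDK header gstnvdsmeta.h.
 * Shows the standard pattern for iterating the frames that
 * nvstreammux has collected into one batched buffer. */
#include "gstnvdsmeta.h"

static GstFlowReturn
walk_batch (GstBuffer * inbuf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
  if (!batch_meta)
    return GST_FLOW_ERROR;

  /* One NvDsFrameMeta per source frame in the batch. */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list;
       l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    g_print ("frame %d from source %d\n",
        frame_meta->frame_num, frame_meta->source_id);
  }
  return GST_FLOW_OK;
}
```

gst_dsexample_transform_ip() follows this same loop, which is why a single call already sees every frame in the batch.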
I am clear on the fact that DeepStream organizes frames in batches, but the use case I want to target is this: let's say a batch of 5 frames is given as input; then I have to crop and extract 1-4 images from each frame in the batch and pass that output on to the classifier.
I have checked the gst_dsexample_transform_ip() function; it iterates over the multiple frames in a batch, but my question was how I can crop each frame in the batch into multiple images and provide them to the classifier.
Or is there a way to add custom NvDsObjectMeta to each frame's meta, so that we can specify custom object dimensions in the NvDsFrameMeta, which the classifier can then process?
You don’t need to crop the frames yourself. The classifier will crop the frames by itself according to the NvDsObjectMeta when nvinfer works in secondary GIE mode. What you have to do is put the correct bbox parameters into the NvDsObjectMeta inside the gst-dsexample plugin. There is a sample showing how to generate a new NvDsObjectMeta in deepstream-infer-tensor-meta-test; please refer to the part from “nvds_acquire_obj_meta_from_pool” to “nvds_add_obj_meta_to_frame”.
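A hedged sketch of that pattern, modeled on deepstream-infer-tensor-meta-test: acquire an object meta from the batch pool, fill in the crop rectangle, and attach it to the frame so a downstream secondary-mode nvinfer crops and classifies that region. The helper name add_crop_as_object and the bbox values are my placeholders; the DeepStream SDK headers are assumed.

```c
/* Sketch only: requires the DeepStream SDK header nvdsmeta.h.
 * Attaches a hand-made NvDsObjectMeta so a secondary-GIE
 * classifier will crop and classify this rectangle. */
#include "nvdsmeta.h"

static void
add_crop_as_object (NvDsBatchMeta * batch_meta,
    NvDsFrameMeta * frame_meta, float left, float top,
    float width, float height)
{
  NvDsObjectMeta *obj_meta =
      nvds_acquire_obj_meta_from_pool (batch_meta);

  /* Must match operate-on-gie-id in the classifier's config. */
  obj_meta->unique_component_id = 1;
  obj_meta->class_id = 0;
  obj_meta->object_id = UNTRACKED_OBJECT_ID;
  obj_meta->confidence = 1.0;

  /* The secondary GIE crops this rectangle from the full frame. */
  obj_meta->rect_params.left = left;
  obj_meta->rect_params.top = top;
  obj_meta->rect_params.width = width;
  obj_meta->rect_params.height = height;

  nvds_add_obj_meta_to_frame (frame_meta, obj_meta, NULL);
}
```

Call this up to four times per frame (inside the per-frame loop of gst_dsexample_transform_ip) to get your 1-4 crops. The classifier's nvinfer config then needs process-mode=2 (secondary) and an operate-on-gie-id that matches the unique_component_id you set above.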