Yolo detection sample for DS-4.0 - can I split frame before inference?

Hello, experts!

I would like to split the frame before feeding it to the YOLOv3 network, in order to improve the detection of small objects in HD frames.

Can you please guide me to where I can split each incoming frame of the video stream before applying YOLOv3 inference on it, and afterwards collect and merge the detection results back onto the full frame?

As I understand it, convertFcn (line 1126 in nvdsinfer_context_impl.cpp) resizes the input frame to the network width and height, so in my case I would have to split the frame somewhere earlier and feed the tiles in here. Is it even possible to work with anything other than full frames?

/* Input needs to be pre-processed. */
        convertFcn(outPtr, (unsigned char*) batchInput.inputFrames[i],
                m_NetworkInfo.width, m_NetworkInfo.height,
                batchInput.inputPitch, m_NetworkScaleFactor,
                m_MeanDataBuffer, m_PreProcessStream);
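To make the question concrete, here is a minimal sketch of the tiling I have in mind. Everything here (the names Tile, makeTiles, toFrameCoords) is purely illustrative and not part of the DeepStream API: the frame is cut into an overlapping grid so small objects on tile borders stay fully visible in at least one tile, and each tile-local detection box is mapped back to full-frame coordinates by adding the tile origin.

```cpp
#include <algorithm>
#include <vector>

// Illustrative types only; not DeepStream structures.
struct Rect { int x, y, w, h; };
struct Tile { int x, y, w, h; };  // tile origin and size in frame coordinates

// Split a frame into a rows x cols grid of tiles with a fixed pixel
// overlap, clamped to the frame borders.
std::vector<Tile> makeTiles(int frameW, int frameH,
                            int rows, int cols, int overlap) {
    std::vector<Tile> tiles;
    int baseW = frameW / cols, baseH = frameH / rows;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int x = std::max(0, c * baseW - overlap);
            int y = std::max(0, r * baseH - overlap);
            int w = std::min(frameW - x, baseW + 2 * overlap);
            int h = std::min(frameH - y, baseH + 2 * overlap);
            tiles.push_back({x, y, w, h});
        }
    }
    return tiles;
}

// Map a detection box from tile-local coordinates back to
// full-frame coordinates.
Rect toFrameCoords(const Rect& box, const Tile& t) {
    return {box.x + t.x, box.y + t.y, box.w, box.h};
}
```

For a 1920x1080 frame, makeTiles(1920, 1080, 2, 2, 32) would yield four roughly 992x572 tiles, each of which could then be resized to the network input size instead of the whole frame.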

Best regards!

Hi,

You can check this page for the DeepStream pipeline:
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_architecture.html

After the stream muxer, the input sources are combined into a batch.
So, if you want to split the buffer, try accessing the buffer there and forming a customized output.

Here is a related topic for accessing nvbuffer for your reference:
https://devtalk.nvidia.com/default/topic/1060956/deepstream-sdk/access-frame-pointer-in-deepstream-app/post/5375214/#5375214
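Once you run inference per tile, detections near tile overlaps will be reported twice, so the "merge on the full frame" step needs a deduplication pass. Below is a hedged, self-contained sketch (the names Det, iou, and mergeTileDetections are invented for illustration, not DeepStream API) of a simple greedy NMS that keeps the highest-scoring box and drops overlapping duplicates from neighbouring tiles:

```cpp
#include <algorithm>
#include <vector>

// Illustrative detection type: box in full-frame coordinates plus score.
struct Det { float x, y, w, h, score; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const Det& a, const Det& b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// Greedy NMS: sort by score, keep a box only if it does not overlap
// an already-kept box above the IoU threshold.
std::vector<Det> mergeTileDetections(std::vector<Det> dets, float iouThr) {
    std::sort(dets.begin(), dets.end(),
              [](const Det& a, const Det& b) { return a.score > b.score; });
    std::vector<Det> kept;
    for (const Det& d : dets) {
        bool dup = false;
        for (const Det& k : kept) {
            if (iou(d, k) > iouThr) { dup = true; break; }
        }
        if (!dup) kept.push_back(d);
    }
    return kept;
}
```

In a DeepStream pipeline this kind of merge would run after per-tile inference, before the merged boxes are attached back to the full-frame metadata.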

Thanks.