I would like to split each frame into tiles before feeding it to the YoloV3 network, in order to get better detection of small objects in HD frames.
Can you please point me to where I could split every incoming frame of the video stream before running YoloV3 inference on it, and then collect the per-tile detections and merge them back onto the full frame?
As I understand it, convertFcn (line 1126 in nvdsinfer_context_impl.cpp) resizes the input frame to the network width and height, so in my case I would have to split the frame somewhere earlier and feed the tiles in here. Is it even possible to work with anything other than full frames?
```cpp
/* Input needs to be pre-processed. */
convertFcn(outPtr, (unsigned char *) batchInput.inputFrames[i],
           m_NetworkInfo.width, m_NetworkInfo.height,
           batchInput.inputPitch, m_NetworkScaleFactor,
           m_MeanDataBuffer, m_PreProcessStream);
```
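To make the intent clearer, here is a minimal sketch of the tile-and-merge logic I have in mind. Everything here is hypothetical (the names `Box`, `detectTiled`, and the `detectTile` callback are my own, not DeepStream or YoloV3 API); the point is only that each tile's detections must be shifted by the tile's origin before merging:

```cpp
#include <vector>

// Hypothetical detection box in pixel coordinates (x, y = top-left corner).
struct Box { float x, y, w, h; };

// Split a frame of frameW x frameH into a tileCols x tileRows grid, run a
// per-tile detector on each tile, and remap every detection from tile-local
// coordinates back into full-frame coordinates by adding the tile's origin.
std::vector<Box> detectTiled(int frameW, int frameH,
                             int tileCols, int tileRows,
                             std::vector<Box> (*detectTile)(int ox, int oy,
                                                            int tw, int th))
{
    std::vector<Box> merged;
    const int tileW = frameW / tileCols;
    const int tileH = frameH / tileRows;
    for (int r = 0; r < tileRows; ++r) {
        for (int c = 0; c < tileCols; ++c) {
            const int ox = c * tileW;   // tile origin inside the full frame
            const int oy = r * tileH;
            for (Box b : detectTile(ox, oy, tileW, tileH)) {
                b.x += ox;              // shift tile-local box into
                b.y += oy;              // full-frame coordinates
                merged.push_back(b);
            }
        }
    }
    return merged;
}
```

A real version would also need overlapping tiles plus non-maximum suppression across tile borders, so that objects cut by a tile boundary are not lost or duplicated, but the coordinate remapping above is the core of the merge step I am asking about.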