How to customize the nvinfer part of DeepStream to process custom data

Hi all.
I’m running a pipeline with DeepStream 5.0 (on GPU) that runs inference on a video using a TLT object detection model.
I want to customize the inference part. In detail, I want to split each input frame into 4 parts and then give each part to the detector separately. Since I use nvinfer, I think I need to change the nvdsinfer_context_impl.cpp file.
In this C++ file I found a function named “NvDsInferStatus InferPreprocessor::transform()”, so I think this is the function to change.
My question is:
If I split each frame into 4 parts in this code, how should I send these parts to the detection model so that detection is performed on each one separately (inside this function)? Do they need to be written into a specific buffer?
(In other words, after splitting each frame in this function, what is the next step?)
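To illustrate what I mean by splitting, here is a minimal sketch of the tiling idea only (just the rectangle math; it is not taken from nvdsinfer_context_impl.cpp, and the actual crop/scale of the GPU buffer would presumably go through something like NvBufSurfTransform):

```cpp
// Hypothetical helper: compute the four quadrant crop rectangles for a frame.
// CropRect is my own stand-in type, not a DeepStream struct.
#include <array>
#include <cstdint>

struct CropRect {
  uint32_t left;
  uint32_t top;
  uint32_t width;
  uint32_t height;
};

std::array<CropRect, 4> quadrants(uint32_t frameW, uint32_t frameH) {
  const uint32_t halfW = frameW / 2;
  const uint32_t halfH = frameH / 2;
  return {{
      {0,     0,     halfW,          halfH},           // top-left
      {halfW, 0,     frameW - halfW, halfH},           // top-right
      {0,     halfH, halfW,          frameH - halfH},  // bottom-left
      {halfW, halfH, frameW - halfW, frameH - halfH},  // bottom-right
  }};
}
```

The open question is what to do with these 4 crops once they exist inside transform().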

Hi,
Sorry for a late reply.
transform() does normalization and mean-image subtraction, and copies the converted batch into the input binding buffer, so I do not think you can split the original frame here. Even if you could split the frame at this point, how would you handle the inference results? After splitting into 4 parts you have 3 additional frames, but you still have only one buffer to store the frame information and object metadata.
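Conceptually, the preprocessing done at this stage works roughly like the simplified sketch below (hypothetical CPU code, not the actual DeepStream implementation, which runs CUDA kernels on batched GPU buffers). It only illustrates the y = net-scale-factor * (x - mean) step and why there is a single input binding buffer with exactly one slot per batched frame:

```cpp
#include <cstddef>
#include <vector>

// frames: already converted frames, each C*H*W floats
// meanImage: optional mean image of size C*H*W (may be null)
// inputBindingBuffer: the bound network input, batch * C * H * W floats
void preprocessBatch(const std::vector<const float*>& frames,
                     const float* meanImage,
                     float netScaleFactor,
                     size_t frameSize,
                     float* inputBindingBuffer)
{
  for (size_t b = 0; b < frames.size(); ++b) {
    float* dst = inputBindingBuffer + b * frameSize;  // one slot per batched frame
    for (size_t i = 0; i < frameSize; ++i) {
      const float mean = meanImage ? meanImage[i] : 0.0f;
      dst[i] = netScaleFactor * (frames[b][i] - mean);
    }
  }
}
```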

@Amycao, thanks for the reply. OK, I will look for another solution.

OK, I found another solution and created a new topic for it. Could you help me there, @Amycao? I would appreciate your help.

The topic is (in that approach I removed the frame-cropping part):

How to set the order of tee and nvstreammux to duplicate a video file, before nvinfer in deepstream pipeline
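For context, the ordering asked about there looks roughly like this sketch (a hypothetical gst_parse_launch pipeline: the input file, resolutions, and config path are placeholders, and the pipeline has not been verified against a running setup):

```cpp
// Decode once, tee the stream, feed both branches into nvstreammux,
// and only then run nvinfer on the batched buffers.
#include <gst/gst.h>

int main(int argc, char* argv[]) {
  gst_init(&argc, &argv);

  GError* err = nullptr;
  GstElement* pipeline = gst_parse_launch(
      "filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
      "tee name=t "
      "t. ! queue ! m.sink_0 "
      "t. ! queue ! m.sink_1 "
      "nvstreammux name=m batch-size=2 width=1280 height=720 ! "
      "nvinfer config-file-path=config_infer.txt ! "
      "nvmultistreamtiler rows=1 columns=2 width=1280 height=720 ! "
      "nvvideoconvert ! nvdsosd ! nveglglessink",
      &err);
  if (!pipeline) {
    g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
    if (err) g_error_free(err);
    return -1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  GstBus* bus = gst_element_get_bus(pipeline);
  GstMessage* msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg) gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```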