Input frame to Nvinfer in a pipeline is wrong

• Hardware Platform (Jetson / GPU) Jetson Nano 4GB Dev Kit
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1

I was trying to run inference with a model that gives a tensor output, and I always got a very high RMSE when I compared the results against Keras and Python TensorRT inference.
To debug this issue I made a simple model like this:

Layer (type)                 Output Shape              Param #   
reshape (Reshape)            (None, 224, 224, 3)       0         
sequential_6 (Sequential)    (None, 224, 224, 3)       0         
Total params: 0
Trainable params: 0
Non-trainable params: 0

The simple model returns its input image unchanged as the output.
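This is roughly how I am comparing the outputs (a minimal sketch with numpy only; the frame here is synthetic): since the model is an identity, the tensor output should match the input frame and the RMSE should be essentially zero.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two arrays of the same shape."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# With an identity model the output should equal the input frame,
# so the RMSE against the original frame should be ~0.
frame = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)
print(rmse(frame, frame))  # 0.0
```

With Keras and Python TensorRT this comes out near zero; with DeepStream it does not, which is the problem.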

When I ran this model with Keras and with TensorRT in Python, it worked fine and returned the image as it is supposed to, but when I ran it through DeepStream I got an image like this:

This image is supposed to be the videotestsrc frame. It is supposed to be in colour, and a single frame, not 9 tiles.

To reproduce:
Drive folder with scripts: simplemodelpost - Google Drive
In the drive folder:
deepstream python file → the DeepStream pipeline script
dstest1_pgie_config.txt → config file for the DeepStream nvinfer element
DSTRTBS1.txt → text file with the output images as arrays from DeepStream
→ python script to view the images from the text file
onnx_model.onnx → ONNX file of the simple model
TRTBS1.trt → TRT engine file of the simple model

How can I fix this image issue with the simple model on DeepStream?
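For reference, these are the nvinfer preprocessing properties in dstest1_pgie_config.txt that affect how the frame reaches the model (the values here are illustrative, not a confirmed fix):

```
[property]
# scaling applied to input pixels before inference
net-scale-factor=1.0
# per-channel mean subtraction (uncomment if the model expects it)
#offsets=0;0;0
# 0=RGB, 1=BGR, 2=GRAY
model-color-format=0
# 0=NCHW, 1=NHWC
network-input-order=0
# attach raw output tensors as metadata for the probe
output-tensor-meta=1
```

If any of these disagree with what the model was trained/exported with, the frame handed to the network will look wrong even for an identity model.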

How did you get the image? Can you put the code here?

Please check the drive folder; it contains the code for the DeepStream pipeline.
The frames are retrieved by an inference-results probe; the script for it is in the drive as well.
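To clarify the probe part: inside the probe I turn the flat float buffer from the output tensor into a viewable image. A minimal sketch of that conversion (the buffer here is synthetic; in the real probe it is copied out of the attached tensor meta, and the 3x224x224 layer shape is taken from the simple model):

```python
import numpy as np

def tensor_to_image(flat, shape=(3, 224, 224), nchw=True):
    """Reshape a flat float buffer from the model output into an HWC uint8 image.

    `flat` is a 1-D sequence of floats (in the real probe, read from the
    tensor meta buffer). `shape` is the layer's dims as stored in memory;
    set nchw=False and shape=(224, 224, 3) if the engine emits NHWC.
    """
    arr = np.asarray(flat, dtype=np.float32).reshape(shape)
    if nchw:
        arr = arr.transpose(1, 2, 0)  # CHW -> HWC for viewing
    return np.clip(arr, 0, 255).astype(np.uint8)

# Synthetic check: an NCHW-laid-out buffer becomes a 224x224x3 image.
buf = np.arange(3 * 224 * 224, dtype=np.float32) % 256
img = tensor_to_image(buf)
print(img.shape)  # (224, 224, 3)
```

If the assumed layout (NCHW vs NHWC) does not match what the engine actually emits, the reshaped image comes out scrambled, which is why I am suspicious of the frame fed to nvinfer.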

@Fiona.Chen, any update?