• Hardware Platform (Jetson / GPU): Jetson Nano 4 GB Developer Kit
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1
I was trying to run inference on a model that produces a tensor output, and I always got a very high RMSE when I compared the DeepStream results with Keras and Python TensorRT inference.
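By "high RMSE" I mean a comparison along these lines (a minimal sketch; the array names are placeholders, not from my actual scripts):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equally sized output tensors."""
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.sqrt(np.mean((a - b) ** 2)))

# keras_out and deepstream_out stand in for the flattened outputs
# produced from the same input frame by the two runtimes.
# print(rmse(keras_out, deepstream_out))
```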
To debug this issue I made a simple model like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
reshape (Reshape)            (None, 224, 224, 3)       0
_________________________________________________________________
sequential_6 (Sequential)    (None, 224, 224, 3)       0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________
This simple model just returns the input image as its output.
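For completeness, the model was built along these lines (a rough sketch, not the exact code; the real layer stack is in onnx_model.onnx, and the input shape and pass-through Sequential are assumptions here):

```python
# Rough sketch of the identity model (actual layers are in onnx_model.onnx);
# the inner Sequential is just a zero-parameter pass-through.
import tensorflow as tf
import tf2onnx

inputs = tf.keras.Input(shape=(224 * 224 * 3,))
x = tf.keras.layers.Reshape((224, 224, 3), name="reshape")(inputs)
passthrough = tf.keras.Sequential([tf.keras.layers.Activation("linear")])
outputs = passthrough(x)
model = tf.keras.Model(inputs, outputs)
model.summary()

# Export to ONNX so it can be fed to nvinfer / trtexec.
tf2onnx.convert.from_keras(model, output_path="onnx_model.onnx")
```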
Observation:
When I run this model with Keras and with TensorRT in Python, it works fine and returns the image as it is supposed to, but when I run it through DeepStream I get an image like this:
This image is supposed to be the videotestsrc frame. It should be in colour and contain only one frame, not nine.
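For reference, this is roughly how the output tensor is pulled out of DeepStream and dumped (a simplified sketch of what my probe in deepstream_test_1_usb.py does; the assumed flat 224*224*3 float32 output and the file handling are simplifications):

```python
# Simplified sketch of the tensor-dumping probe
# (assumes output-tensor-meta=1 is set in the nvinfer config).
import ctypes
import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    arr = np.ctypeslib.as_array(ptr, shape=(224 * 224 * 3,))
                    # One flattened frame per line in the dump file.
                    with open("DSTRTBS1.txt", "a") as f:
                        np.savetxt(f, arr.reshape(1, -1))
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```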
To reproduce:
Drive folder with the scripts: simplemodelpost - Google Drive
In the Drive folder:
deepstream_test_1_usb.py → DeepStream Python app
dstest1_pgie_config.txt → config file for the DeepStream nvinfer element
DSTRTBS1.txt → text file with the output images dumped as arrays from DeepStream
frametest.py → Python script to view the images from the text file (rough sketch below)
onnx_model.onnx → ONNX file of the simple model
TRTBS1.trt → TensorRT engine file of the simple model
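frametest.py does roughly the following (a simplified sketch; the dump layout and value range are assumptions):

```python
# Sketch of frametest.py: load the flattened tensors dumped by the
# DeepStream probe and show the first one as an image.
import numpy as np
import matplotlib.pyplot as plt

rows = np.loadtxt("DSTRTBS1.txt")          # one flattened frame per line
frames = rows.reshape(-1, 224, 224, 3)     # assumes NHWC float output
plt.imshow(np.clip(frames[0], 0.0, 1.0))   # assumes values already in [0, 1]
plt.show()
```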
How can I fix this image issue with the simple model in DeepStream?