Different output from TRT engine in Python and C++

Description

Hi, I created a simple engine with dynamic shapes that takes 2 inputs and produces 1 output. The inputs are an image and a binary mask, and it returns the frame with blurred regions based on the mask. With the Python API everything works perfectly, but in C++ the output has no blur and instead shows strange artefacts in odd places. I checked the input data in both Python and C++ and it looks identical. Can you tell me what I'm doing wrong in C++? I attached the Python script, the C++ script, and the results from both versions. If you need the ONNX file I will send it in a PM. Both versions run without any errors.

Environment

TensorRT Version: 7.2.1.6
GPU Type: Tesla T4
Nvidia Driver Version: 460.73.01
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Input frame and mask

Output from python

Output from c++

Python file for inference
inference.py (3.4 KB)

C++ file for inference
cppinference.cpp (3.8 KB)

Hi @aleksandra.osztynowicz1,

Are you using the same engine file in both scripts? Also, we recommend trying the latest TensorRT version.

Thank you.

Hi, thanks for the reply.
Yes, I'm using the same engine file in both scripts, and I really need to use this version of TRT. I believe that in TRT 7.2 the same engine should work the same way in C++ and Python, just as it would in TRT 8.0. I'm also sending you my ONNX file in case you want to check it.

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import onnx

model = onnx.load("yourONNXmodel")
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
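For example, a verbose run might look like this (a sketch only: the ONNX filename and log filename are placeholders, and since your model uses dynamic shapes you would also pass --minShapes/--optShapes/--maxShapes for your actual input tensor names):

```shell
# Build and time the engine from the ONNX file, capturing the full
# verbose log to a file for debugging. "yourONNXmodel.onnx" is a
# placeholder for the real model path.
trtexec --onnx=yourONNXmodel.onnx --verbose > trtexec_verbose.log 2>&1
```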
Thanks!

I sent the ONNX file in a PM. You can find the scripts above. I checked the ONNX model: no errors. I also ran trtexec with --verbose and didn't see any errors, but since it works in Python I assume the engine itself is fine.

Hi,

Could you please try the latest TRT version and confirm, so we can help you better.

Thank you.

Hi,
I can't use the latest TRT version because I also use DeepStream in my solution, and as far as I know the latest version of TRT is not yet supported in DeepStream. I'm using this Docker image: nvcr.io/nvidia/deepstream:5.1-21.02-triton

Hello,
Any update? Did you try running my scripts?

Hi @aleksandra.osztynowicz1 ,
Can you please share your model with us again?

Thanks.

Hi @AakankshaS
I sent the ONNX file in a PM.

Hi @aleksandra.osztynowicz1 ,
Your model's input dims are [720, 1280, 3], but you are splitting the input into three 1280x720 planes and writing each 1280x720 plane to binding[0] + i * 1280 * 720.
That corresponds to a [3, 720, 1280] (CHW) layout.
This could be the reason for the difference.
Can you please check this?
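To illustrate the layout mismatch, here is a minimal sketch with tiny stand-in dimensions (the real model uses 720x1280x3): copying per-channel planes back to back produces a CHW byte layout, which is not the interleaved HWC layout the model expects.

```python
import numpy as np

# Tiny stand-ins for the real 720, 1280, 3 dimensions.
h, w, c = 4, 6, 3
hwc = np.arange(h * w * c, dtype=np.float32).reshape(h, w, c)

# Correct: the model expects the interleaved HWC buffer as-is.
correct = hwc.ravel()

# Bug: splitting into per-channel planes and writing channel i to
# binding[0] + i * w * h produces a CHW byte layout instead.
chw = hwc.transpose(2, 0, 1).ravel()

# The two byte layouts differ, so the engine reads scrambled pixels.
print(np.array_equal(correct, chw))  # False
```

Reading a CHW buffer as HWC (or vice versa) does not raise any error, which is why both programs ran cleanly while producing different outputs.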

Thanks!

Thank you @AakankshaS !
Finally my engine works :)
