Inference on PNG images

Description

I have a network that takes in an image and produces a mask. I converted this network to a .trt engine, and I can now successfully run inference on the .trt engine in C++. To fill the inference buffer, I write all of the blue pixels first, then all of the greens, and lastly all of the reds (i.e., planar BGR). Inference works correctly in this case.
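For reference, my fill loop looks roughly like the sketch below (a minimal sketch, assuming the source image is 8-bit interleaved BGR such as an OpenCV cv::Mat; the division by 255 is my network's scaling and may differ for yours):

```cpp
#include <cstdint>
#include <vector>

// Copy an interleaved BGR image (H x W x 3, 8-bit) into a planar float
// buffer: all blue values first, then all greens, then all reds.
std::vector<float> toPlanarBGR(const uint8_t* hwc, int height, int width)
{
    std::vector<float> chw(3 * height * width);
    for (int c = 0; c < 3; ++c)              // c = 0: blue, 1: green, 2: red
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                chw[c * height * width + y * width + x] =
                    hwc[(y * width + x) * 3 + c] / 255.0f;  // assumed [0,1] scaling
    return chw;
}
```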

The problem is with another segmentation network, which takes two RGB images as input (2x250x250x3) and produces a mask. After I read the two images and fill the inference buffer in C++, inference produces incorrect results.

I can’t figure out the right ordering of the pixels of the two images in the buffer. I tried putting all the reds, then the greens, then the blues. I tried keeping the two images separate and I tried combining them. I have tried every combination I can think of, but none gives the correct output… What is the right way to order the pixels of the two images in the buffer?
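One of the orderings I tried is sketched below (a sketch only; it assumes the 2x250x250x3 shape means batch-first with interleaved pixels, and the [0,1] scaling is also an assumption):

```cpp
#include <cstdint>
#include <vector>

// Candidate layout: image 0 with interleaved R,G,B pixels row by row,
// immediately followed by image 1 in the same interleaved order.
std::vector<float> fillBatchInterleaved(const uint8_t* rgb0, const uint8_t* rgb1)
{
    constexpr int H = 250, W = 250, C = 3;
    std::vector<float> buf(2 * H * W * C);
    const uint8_t* imgs[2] = {rgb0, rgb1};
    for (int n = 0; n < 2; ++n)
        for (int i = 0; i < H * W * C; ++i)        // pixels stay interleaved
            buf[n * H * W * C + i] = imgs[n][i] / 255.0f;  // assumed [0,1] scaling
    return buf;
}
```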

Any help is appreciated.

Environment

TensorRT Version: 8.4.1.5
GPU Type: Nvidia
Nvidia Driver Version:
CUDA Version: 11.5
CUDNN Version:
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.7
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.10.1
Baremetal or Container (if container which image + tag):

Hi,

We recommend that you refer to the following sample and the developer guide, and make sure your script is correct.
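As a first check, it can help to print each binding's shape and format reported by the engine and match your host buffer layout to it. A minimal sketch using the TensorRT 8.4 C++ API (assuming `engine` is your already-deserialized ICudaEngine):

```cpp
#include <NvInfer.h>
#include <iostream>

// Print every binding's name, direction, dimensions, and memory format.
void printBindings(const nvinfer1::ICudaEngine& engine)
{
    for (int i = 0; i < engine.getNbBindings(); ++i)
    {
        nvinfer1::Dims d = engine.getBindingDimensions(i);
        std::cout << engine.getBindingName(i)
                  << (engine.bindingIsInput(i) ? " (input): " : " (output): ");
        for (int j = 0; j < d.nbDims; ++j)
            std::cout << d.d[j] << (j + 1 < d.nbDims ? "x" : " ");
        // A linear format means a fully packed buffer in the order printed above.
        std::cout << " format: " << engine.getBindingFormatDesc(i) << "\n";
    }
}
```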

If you still face the issue, could you please share more details so we can reproduce it?
Please share the commands/steps you’re using to generate the TRT engine, along with the complete script above.

Thank you.
