TensorFlow Object Detection API OpenCV inference returns wrong box coordinate predictions

Description

Hi,
I am trying to use OpenCV instead of PIL to do inference with the generated TRT engine.
Following the guide TensorFlow Object Detection API Models in TensorRT, I correctly converted the TensorFlow 2 model, generated the TensorRT engine, and tested it with the infer script.
I am now trying to use the OpenCV library to read images from the webcam and run inference on them. However, I am getting wrong bounding box coordinate predictions.
Could you please take a look at the attached code? Any help is greatly appreciated.
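For reference, a minimal sketch of the webcam loop this is aiming at might look like the one below. The input size (300x300), the RGB conversion, and the uint8 NHWC layout are assumptions based on typical TF Object Detection API models; the attached test.py may differ.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                          # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV decodes to BGR; the model expects RGB
    resized = cv2.resize(rgb, (300, 300))          # assumed engine input size
    batch = np.expand_dims(resized, axis=0)        # NHWC batch: (1, 300, 300, 3)
    # ... run the TensorRT engine on `batch` and draw the returned boxes on `frame` ...
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()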

Environment

TensorRT Version: 8
NVIDIA GPU: Tegra Nvidia Jetson Nano & GeForce GTX 1070
CUDA Version: 11
CUDNN Version: 8
Operating System: Ubuntu 18.04.6 LTS with JetPack 4.6.1 & Windows 10
Python Version (if applicable): 3
TensorFlow Version (if applicable): 2.5
OpenCV Version (if applicable): 4.6

Relevant Files

Steps To Reproduce

Change the image path on line 162
Execute the code: python test.py

coco_classes.txt

Hi,

Which JetPack version do you use?
Also, do you get the correct output in a desktop environment?

Thanks.

Hi @AastaLLL
I am using JetPack 4.6.1, and I get the same result even in the desktop environment.
The first bounding box, for the person at the middle left, seems correct. However, the bounding box coordinates of the other objects are not.

Hi @AastaLLL
I think I have found it. The issue is in the image pre-processing: I have to convert the input from NCHW to NHWC format.
blob = blob.transpose((0, 2, 3, 1))
I will upload the full code to GitHub.
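For illustration, a minimal sketch of that fix is shown below; the image path and input size are placeholders, not taken from the attached test.py.

import cv2
import numpy as np

image = cv2.imread("test.jpg")                                      # placeholder path
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)   # blobFromImage returns NCHW: (1, 3, 300, 300)
blob = blob.transpose((0, 2, 3, 1))                                 # TF Object Detection API models expect NHWC: (1, 300, 300, 3)
print(blob.shape)                                                   # (1, 300, 300, 3)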
