SSD Lite Mobilenet V2 returns shifted bounding boxes after conversion with TensorRT 8.0


I’m using an SSD Lite Mobilenet V2 model that I retrained with TensorFlow, changing the network input size from square to rectangular. I then converted this model to UFF format, and from UFF to a TensorRT engine to run on a Jetson with DeepStream. To do so, I point the deepstream-app config file at my UFF file so the TensorRT engine is built, and then start the deepstream-app.
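For context, the TF-to-UFF step looks roughly like this (a sketch only: the paths, output file name, and the `config.py` preprocessing script are illustrative, not my exact setup; `convert_to_uff.py` ships with the TensorRT UFF converter):

```shell
# Sketch of the TF -> UFF conversion (illustrative paths).
# config.py is the graphsurgeon preprocessing script that maps the SSD ops,
# including the anchor generator, onto TensorRT plugin nodes.
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb \
    -o ssd_lite_mobilenet_v2.uff \
    -O NMS \
    -p config.py
```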

When I do this on a Jetson Xavier NX with TensorRT 8.0 and DeepStream 6.0, the TensorRT engine is created and the app runs but the bounding boxes I get are shifted.

When I do the same thing on a Jetson TX2 with TensorRT 7.1 and DeepStream 5.1, everything works fine and the bounding boxes are not shifted. For TensorRT 7.1, since GridAnchorRect_TRT was not included in the stock plugin library, I had to download TensorRT OSS 7.1, which provides the plugin, in order to build the TensorRT engine.

Is there a difference between TensorRT 7.1 and 8.0 that could create this issue? Is there a way to make it work with TensorRT 8.0?

What I get with TensorRT 7.1 (Jetson TX2):

What I get with TensorRT 8.0 (Jetson Xavier NX):


TensorRT Version: 8.0
GPU Type: Jetson Xavier NX with JetPack 4.6
CUDA Version: 10.2


This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.


Thank you. Unfortunately these links do not help.
I’m going to wait until my post is moved to ask for help.

I moved your post to the Jetson Xavier NX category; I am hoping the community here will be able to help.
Best of luck with your project!


Do you use the following sample to generate the output?


Also, would you mind testing it with JetPack 4.6.1 + DeepStream 6.0 as well?

Hi, yes I used the sample config_infer_primary_ssd.txt and modified the values so they correspond to my model.
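For reference, the fields I touched look roughly like this (the field names follow the DeepStream sample; the values here are illustrative, not my exact model's):

```ini
# Sketch of the nvinfer settings adjusted in config_infer_primary_ssd.txt
# (values are examples; my model has a rectangular input).
[property]
uff-file=ssd_lite_mobilenet_v2.uff
uff-input-blob-name=Input
uff-input-order=0
# rectangular network input, given as C;H;W
infer-dims=3;160;300
output-blob-names=NMS
num-detected-classes=4
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```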
To test it with JetPack 4.6.1, do I have to uninstall the current JetPack version and reinstall the new one, or can I simply upgrade it?


Yes. You can upgrade it with our OTA command:
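The flow is roughly the following (a sketch of the apt-based OTA upgrade; it assumes your apt sources currently point at the r32.6 / JetPack 4.6 repository, and JetPack 4.6.1 corresponds to L4T r32.7 — please check the linked page for the exact release string for your board):

```shell
# Sketch of the JetPack OTA upgrade (assumes the source list currently
# references the r32.6 repo; verify against the official instructions).
sudo sed -i 's/r32.6/r32.7/g' /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
sudo apt update
sudo apt dist-upgrade
```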


In the link you gave me it says that

So I can’t use OTA to upgrade the JetPack version? I’m not sure I understood.
Shouldn’t I do this instead?

This is from the JetPack documentation: How to Install JetPack :: NVIDIA JetPack Documentation

So I managed to update the JetPack version with the link you sent me but I still get the same issue (with TensorRT 8.2 now).

Hi, I just did another test using the TensorFlow pre-trained SSD Lite Inception V2 model, which has a square input shape (300x300). I tried it on the Jetson NX with TensorRT 8.2 and the bounding boxes are not shifted. So I really think this issue has to do with the fact that I used a rectangular input shape for my retrained model.


Thanks for testing.

There are some discussions on the nonsquare input for YOLO with Deepstream.
Not sure if you are facing the same issue, although we haven’t received a similar report for SSD.

Could you give it a check to see if it helps?


Hi, I looked it up and it doesn’t help; the files are really different for YOLO and SSD.
I found a GitHub issue from somebody with a problem similar to mine on an SSD, but it was not solved either: GridAnchorRect_TRT get shifted bbox · Issue #691 · NVIDIA/TensorRT · GitHub. The only difference is that this person seems to have been running TensorRT 7.1, but it was also on a Jetson Xavier, as they mentioned.


We want to look into this more deeply.

Would you mind sharing the square and non-square UFF models,
along with a reproducible DeepStream configuration, with us?


Hi, thank you for your interest. I sent you a private message to talk about the terms of sharing our model with you.


We can reproduce this issue internally.
Will share more information later.



We can get the correct output if we parse the bounding boxes as a square.
Could you also give it a try?

diff --git a/nvdsinfer_custom_impl_ssd/nvdsparsebbox_ssd.cpp b/nvdsinfer_custom_impl_ssd/nvdsparsebbox_ssd.cpp
index b5e471d..e7c3014 100644
--- a/nvdsinfer_custom_impl_ssd/nvdsparsebbox_ssd.cpp
+++ b/nvdsinfer_custom_impl_ssd/nvdsparsebbox_ssd.cpp
@@ -109,18 +109,19 @@ bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayer
     unsigned int rectx1, recty1, rectx2, recty2;
     NvDsInferObjectDetectionInfo object;

-    rectx1 = det[3] * networkInfo.width;
+    unsigned int width = networkInfo.height;
+    rectx1 = det[3] * width;
     recty1 = det[4] * networkInfo.height;
-    rectx2 = det[5] * networkInfo.width;
+    rectx2 = det[5] * width;
     recty2 = det[6] * networkInfo.height;

     object.classId = classId;
     object.detectionConfidence = det[2];

     /* Clip object box co-ordinates to network resolution */
-    object.left = CLIP(rectx1, 0, networkInfo.width - 1);
+    object.left = CLIP(rectx1, 0, width - 1);
     object.top = CLIP(recty1, 0, networkInfo.height - 1);
-    object.width = CLIP(rectx2, 0, networkInfo.width - 1) -
+    object.width = CLIP(rectx2, 0, width - 1) -
       object.left + 1;
     object.height = CLIP(recty2, 0, networkInfo.height - 1) -
       object.top + 1;


Hi, I don’t quite get the correct output. The bounding boxes inside the square part of the video (the height × height subframe of the height × width frame) are roughly okay; there is still a bit of shift, but it’s better than before. However, there are no bounding boxes outside of this square part, and you can see the boxes get thin and disappear beyond that subframe.


Thanks for pointing out this.

We are checking this internally.
Will share more information with you later.


Thanks for your patience.
We found the root cause of this problem. Please ignore the workaround for the DeepStream parser shared on Jun 30.

The root cause is that the gridAnchor kernel doesn’t handle the rectangular case.
It uses a single anchorStride to represent horizontal and vertical stride, and generates the incorrect bbox position when W≠H.

To fix this, please build the TensorRT plugin with the patch below, which adds rectangular support.
(You don’t need to apply the nvdsparsebbox_ssd workaround anymore.)

Update CMake to 3.13.0

$ wget
$ tar xpvf cmake-3.13.0.tar.gz cmake-3.13.0/
$ cd cmake-3.13.0/
$ ./bootstrap 
$ make -j8
$ echo 'export PATH='${PWD}'/bin/:$PATH' >> ~/.bashrc
$ source ~/.bashrc

Build TensorRT plugin with the fix

0001-Add-rectangular-support-in-gridAnchor-kernel.patch (1.6 KB)

$ git clone -b release/8.2
$ cd TensorRT/
$ git submodule update --init --recursive
$ git apply 0001-Add-rectangular-support-in-gridAnchor-kernel.patch
$ mkdir -p build && cd build
$ cmake .. -DGPU_ARCHS="53 62 72"  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc
$ make nvinfer_plugin -j$(nproc)

Update library

$ sudo cp /usr/lib/aarch64-linux-gnu/
$ sudo ln -sf /usr/lib/aarch64-linux-gnu/ /usr/lib/aarch64-linux-gnu/
$ sudo ln -sf /usr/lib/aarch64-linux-gnu/ /usr/lib/aarch64-linux-gnu/
$ sudo ldconfig


Hi, thank you for your answer.
I’m at the step of running the cmake line to build TensorRT. At first I had this issue:

CMake Error at CMakeLists.txt:47 (project):
No CMAKE_CUDA_COMPILER could be found.

Tell CMake where to find the compiler by setting either the environment
variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.

which I solved by adding -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc at the end of the cmake line.

But now I have this error:

fatal error: cub/cub.cuh: No such file or directory
#include "cub/cub.cuh"
compilation terminated.
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:1466: recipe for target ‘plugin/CMakeFiles/nvinfer_plugin.dir/efficientNMSPlugin/’ failed

Do you know how I can solve this? And was adding -DCMAKE_CUDA_COMPILER a good way to fix the first error?