Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version DeepStream 5.0
• JetPack Version (valid for Jetson only) Jetpack 4.4
• TensorRT Version TensorRT 7.0
• NVIDIA GPU Driver Version (valid for GPU only)
I have retrained the FasterRCNN .etlt model and wrote the config file for it.
The model runs in INT8 precision mode, but the problem also occurs in FP32 and FP16.
Originally, in MaxN mode, random bounding boxes appeared, the same as shown in the 30W-mode picture. After I compiled the TensorRT OSS plugins, the problem was resolved in MaxN mode. However, when I switch the power mode to 30W, the random boxes appear again. Do you know why this happens in 30W mode? I tried increasing the bounding-box display threshold, but it had no effect.
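For reference, these are the relevant parts of my nvinfer config (a sketch, not the full file; `network-mode` and `pre-cluster-threshold` are the standard Gst-nvinfer keys in DeepStream 5.0, and the values shown are the ones I tried):

```
[property]
# 0 = FP32, 1 = INT8, 2 = FP16 -- the random boxes appear in all three
network-mode=1

[class-attrs-all]
# raising this detection threshold did not remove the random boxes
pre-cluster-threshold=0.6
```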
For your reference, I am using the custom plugin provided in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps for the .etlt inference.
In 30W mode (screenshot):
In MaxN mode (screenshot):
Besides, the instructions at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson say that we should replace the installed plugin library with the OSS build.
The files produced by the OSS build are [libnvinfer_plugin.so, libnvinfer_plugin.so.7.0.0, libnvinfer_plugin.so.7], while the original plugins in /usr/lib/aarch64-linux-gnu/ are [libnvinfer_plugin.so, libnvinfer_plugin.so.7, libnvinfer_plugin.so.7.1.0]. Some of the files are not named in the format the README describes. Could you give a more detailed description of this step as well?
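My understanding of the naming mismatch (an assumption, not confirmed by the README): the OSS branch I built targets TensorRT 7.0, so it produces `libnvinfer_plugin.so.7.0.0`, while JetPack 4.4 ships TensorRT 7.1, whose installed library is `libnvinfer_plugin.so.7.1.0`. What I believe the README intends is to keep the installed file name when copying the OSS build in, so the existing `.so.7` and `.so` symlinks still resolve. The sketch below demonstrates that symlink chain in a temporary directory with dummy files, since the real replacement under /usr/lib/aarch64-linux-gnu/ needs sudo:

```shell
# Demonstration in a temp dir; on the Jetson the real path is
# /usr/lib/aarch64-linux-gnu/ and the copy requires sudo + a backup
# of the original libnvinfer_plugin.so.7.1.0.
tmp=$(mktemp -d)
cd "$tmp"

# Pretend this file is the freshly built OSS plugin (.so.7.0.0):
echo "oss-build" > libnvinfer_plugin.so.7.0.0

# Install it UNDER THE EXISTING NAME so nothing else has to change:
cp libnvinfer_plugin.so.7.0.0 libnvinfer_plugin.so.7.1.0

# Recreate the usual symlink chain: .so -> .so.7 -> .so.7.1.0
ln -sf libnvinfer_plugin.so.7.1.0 libnvinfer_plugin.so.7
ln -sf libnvinfer_plugin.so.7 libnvinfer_plugin.so

# The dynamic loader opens libnvinfer_plugin.so.7 and reaches the OSS build:
cat libnvinfer_plugin.so.7
```

On the real system you would follow the copy with `sudo ldconfig` so the loader cache is refreshed. Whether this renaming is actually the supported procedure is exactly what I am asking NVIDIA to confirm.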