[ERROR] Model has dynamic shape but no optimization profile specified. Aborted (core dumped)

TensorRT Version: 7.2.1.6
Quadro RTX 5000 dual GPU
Driver Version: 455.23.05
CUDA Version: 11.1
Ubuntu 18.04
Python 3.6
YOLOv4

nvidia/tao/tao-toolkit-tf:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3

I am trying to convert my etlt model to a TRT engine using the TAO converter:

./tao-converter /home/vaaan/Downloads/cuda11.1-trt7.2-20210820T231205Z-001/cuda11.1-trt7.2/yolov4_resnet18_epoch_080.etlt -k ****************mykey******** -c /home/vaaan/Downloads/cuda11.1-trt7.2-20210820T231205Z-001/cuda11.1-trt7.2/cal.bin -o BatchedNMS -d 3,384,1248 -m 16 -i nchw -t int8 -e home/vaaan/Downloads/cuda11.1-trt7.2-20210820T231205Z-001/cuda11.1-trt7.2/resnet18_detector.trt -b 8

I am getting this error:
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
(the warning above is repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3770: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[INFO] builtin_op_importers.cpp:3787: Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 608, 608)
[ERROR] Model has dynamic shape but no optimization profile specified.
Aborted (core dumped)

Please add “-p”.
Refer to [ERROR] Model has dynamic shape but no optimization profile specified
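For example, the same command with a profile added (a sketch: paths are abbreviated and the key is a placeholder; each shape uses the nxcxhxw form confirmed later in this thread):

./tao-converter yolov4_resnet18_epoch_080.etlt -k <your key> \
  -c cal.bin -o BatchedNMS -d 3,384,1248 -m 16 -i nchw -t int8 -b 8 \
  -e /path/to/resnet18_detector.trt \
  -p Input,1x3x608x608,8x3x608x608,16x3x608x608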


According to the link you provided, I added:

-p Input,1x3x608x608,8x3x608x608,16x3x608x608

but no trt file was produced at the specified location.

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
(the warning above is repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3770: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[INFO] builtin_op_importers.cpp:3787: Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 608, 608)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 608, 608) for input: Input
[INFO] Using optimization profile opt shape: (8, 3, 608, 608) for input: Input
[INFO] Using optimization profile max shape: (16, 3, 608, 608) for input: Input
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[WARNING] Missing dynamic range for tensor (Unnamed Layer* 210) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing dynamic range for tensor (Unnamed Layer* 318) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[INFO] Detected 1 inputs and 4 output network tensors.

The TensorRT engine is already generated.
Please double-check, especially the write access.

I gave all the permissions to the file using sudo chmod 777 and sudo chmod -R a+rwx, and changed the file save locations, but I still can't get the engine file. The process is working, because the CPU utilisation shoots up. What is the problem, and where can I find the file?

Please try via
$ chown -R xxx:xxx
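For instance (a sketch; substitute your own user, group, and output directory):

# take ownership of the output directory, then re-run the converter
sudo chown -R $USER:$USER /path/to/output/dir
# also worth checking: the -e argument in the first command ("home/vaaan/...")
# has no leading "/", so the engine may have been written relative to the
# directory tao-converter was launched from rather than under /home
ls ./home/vaaan/Downloads 2>/dev/null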

1. I am performing the same conversion on my Jetson Xavier with JetPack 4.5 and CUDA 10.2:

./tao-converter /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/yolov4_resnet18_epoch_080.etlt -k aGJuM2dxZGltbjgwaGNnb3Fxc2h0ZXBqZGk6MzlkYjAxY2EtZWE2OC00NGRiLWI5ZmUtZWRlNDZjMTI4MjA5 -c /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/cal.bin -o BatchedNMS -d 3,384,1248 -m 16 -i nchw -t int8 -e /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/trt.engine -b 8 -p Input,1*3*608*608,8*3*608*608,16*3*608*608

Please provide three optimization profiles via -p <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: nxcxhxw

Aborted (core dumped)

2. And if my CUDA version is different on my training device and my Jetson, will it work for inference?

Please use format nxcxhxw instead of n*c*h*w
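For example, a corrected profile flag for this 608x608 model would be:

-p Input,1x3x608x608,8x3x608x608,16x3x608x608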

On Jetson, please make sure you download the correct version of tao-converter.

The earlier problem was resolved, thank you.

The converter is of the right version; I am asking whether different versions of CUDA may cause problems during inference or not.

So, you already generated the trt engine, right?

May I know what is the problem?

Yes, the original question involved etlt to trt.engine conversion on my dGPU.

This is the error on my Jetson Xavier:

./tao-converter /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/yolov4_resnet18_epoch_080.etlt -k aGJuM2dxZGltbjgwaGNnb3Fxc2h0ZXBqZGk6MzlkYjAxY2EtZWE2OC00NGRiLWI5ZmUtZWRlNDZjMTI4MjA5 -c /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/cal.bin -o BatchedNMS -d 3,384,1248 -m 16 -i nchw -t int8 -e /home/vaaan/Desktop/cuda10.2_trt7.1_jp4.5-20210209T001136Z-001/cuda10.2_trt7.1_jp4.5/trt.engine -b 8 -p Input,1x3x608x608,8x3x608x608,16x3x608x608

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
(the warning above is repeated 21 times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace:
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 608, 608)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 608, 608) for input: Input
[INFO] Using optimization profile opt shape: (8, 3, 608, 608) for input: Input
[INFO] Using optimization profile max shape: (16, 3, 608, 608) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Please build a new libnvinfer_plugin.so. See YOLOv4 - NVIDIA Docs
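In outline, that means rebuilding the TensorRT OSS plugin library (which contains BatchedNMSDynamic_TRT) and swapping it in. A hedged sketch for a Xavier on JetPack 4.5, where the branch, GPU_ARCHS value, and library version (TensorRT 7.1.3, SM 7.2) are assumptions for that setup; follow the linked docs for the exact steps:

git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git && cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS="72" -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# back up the stock plugin library, then install the rebuilt one
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ${HOME}/libnvinfer_plugin.so.7.1.3.bak
sudo cp out/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig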

I will check that out.

Now, on my dGPU device, I was trying to run inference using the same Jupyter notebook.

I am getting
[TensorRT] ERROR: INVALID_CONFIG: The engine plan file is not compatible with this version of TensorRT, expecting library version 7.2.3 got 7.2.1, please rebuild.

As I looked up on other forums, they say to run inference on the same device, but this is the same device.

Is it because the version on my device is 7.2.1, and maybe, just maybe, the TAO docker has 7.2.3?

Where should I run inference with this engine file?

Please generate the TensorRT engine where you want to run inference.
For example,
if you want to run inference on a Nano, please use one of the ways below (a version-check sketch follows the list).

  1. Deploy the etlt model directly in DeepStream
  2. Use the correct version of tao-converter to generate the TensorRT engine
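As a quick sanity check (a minimal sketch; the dpkg query assumes a Debian-based install), compare the TensorRT version in the environment that built the engine against the one that runs inference:

python3 -c "import tensorrt; print(tensorrt.__version__)"
# or, for the system packages:
dpkg -l | grep nvinfer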

I found a script to run YOLOv4 inference on dGPU,
but when I compile it I get this error:

CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/home/vaaan/Downloads/DeepStream-Yolo/native/nvdsinfer_custom_impl_Yolo'
g++ -c -o nvdsinfer_yolo_engine.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes -I/usr/local/cuda-11.1/include nvdsinfer_yolo_engine.cpp
In file included from nvdsinfer_yolo_engine.cpp:26:0:
/opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
#include "NvCaffeParser.h"
^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:78: recipe for target 'nvdsinfer_yolo_engine.o' failed
make: *** [nvdsinfer_yolo_engine.o] Error 1
make: Leaving directory '/home/vaaan/Downloads/DeepStream-Yolo/native/nvdsinfer_custom_impl_Yolo'
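For reference, NvCaffeParser.h is one of the TensorRT development headers, so a quick hedged check is whether those headers are installed and where they live; if they sit elsewhere, adding that directory to the Makefile's -I flags should let the compile proceed:

find /usr -name NvCaffeParser.h 2>/dev/null
dpkg -l | grep -i -e tensorrt -e nvinfer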

May I know what the script is?

Can you share more detailed steps?

Please use the official app GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, mentioned in https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#integrating-the-model-with-deepstream.


I think it's for inference on the original YOLO model, and not etlt or trt.

In the official app GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, end users can download the etlt models.
Then deploy them directly.
Or use tao-converter to generate a trt engine, then deploy the trt engine with deepstream or in a standalone way.
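For the direct-deployment route, the repo's nvinfer configs point DeepStream at the etlt. An illustrative fragment (property names as used by DeepStream 5.x nvinfer, with placeholder values; the sample configs in the repo are the reference):

[property]
tlt-encoded-model=yolov4_resnet18_epoch_080.etlt
tlt-model-key=<your key>
int8-calib-file=cal.bin
# or point at an engine already generated with tao-converter:
# model-engine-file=trt.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1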
