[ERROR] Model has dynamic shape but no optimization profile specified. Aborted (core dumped)
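For context, this error means the engine must be built with an explicit min/opt/max shape profile for the dynamic input. A hedged sketch using the tao-converter CLI from the TAO docs; the key, file names, and the 1/8/16-batch profile below are placeholders rather than values from this thread (the 3x608x608 shape matches the engine info printed later):

```shell
# Sketch: build a TensorRT engine for a dynamic-shape TAO YOLOv4 model.
# $KEY, the file names, and the batch sizes are placeholders.
# -p gives the optimization profile as: tensor,min_shape,opt_shape,max_shape
./tao-converter -k $KEY \
    -p Input,1x3x608x608,8x3x608x608,16x3x608x608 \
    -e trt.engine -t fp16 \
    yolov4.etlt
```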

I am following the steps in the following file, but it wasn’t working:

TRT-OSS/x86/README.md                 // for x86 platform

I read the documentation and it asks for CUDA 11.4, but my model was trained on CUDA 11.1. Will changing the CUDA version affect my model?

Can you share the log?

You may need to git clone the compatible version of TRT when you run

git clone -b $TRT_OSS_CHECKOUT_TAG https://github.com/nvidia/TensorRT //check TRT_OSS_CHECKOUT_TAG in the above table

Or refer directly to:
https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#tensorrt-oss-on-x86

I guess this process uses C++. Could you please confirm whether we can do this in Python as well? Thanks.

I have already built TensorRT-OSS 7.2.1:

export CUDA_VER=11.1
make

make -C post_processor
make[1]: Entering directory '/home/vaaan/Downloads/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes -I/usr/local/cuda-11.1/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.1/lib64 -lcudart -lcublas -Wl,--end-group
In file included from nvdsinfer_custombboxparser_tao.cpp:25:0:
/opt/nvidia/deepstream/deepstream-5.1/sources/includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
#include "NvCaffeParser.h"
^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:49: recipe for target 'libnvds_infercustomparser_tao.so' failed
make[1]: *** [libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/home/vaaan/Downloads/deepstream_tao_apps/post_processor'
Makefile:24: recipe for target 'all' failed
make: *** [all] Error 2
I get this error.

For the above error, please refer to Error while building deepstream_tlt_apps - #7 by Morganh


After referring to that post, I made the changes and supplied the required locations.

I then ran into another error:

make -C post_processor
make[1]: Entering directory '/home/vaaan/Downloads/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-5.1/sources/includes -I/usr/local/cuda-11.1/include -I/home/vaaan/TensorRT-7.2.1.6/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.1/lib64 -lcudart -lcublas -L/home/vaaan/TensorRT-7.2.1.6/lib -Wl,--end-group
/bin/sh: 1: cannot open b: No such file
Makefile:51: recipe for target 'libnvds_infercustomparser_tao.so' failed
make[1]: *** [libnvds_infercustomparser_tao.so] Error 2
make[1]: Leaving directory '/home/vaaan/Downloads/deepstream_tao_apps/post_processor'
Makefile:24: recipe for target 'all' failed
make: *** [all] Error 2

I even tried this with the older deepstream_tlt_apps version and ran into the same error.

Please check why there is a stray "<b>" in the compile command.
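For reference, a stray HTML "<b>" pasted into a Makefile recipe is read by /bin/sh as input redirection from a file named "b", which produces exactly the "cannot open b" message above. A minimal sketch that reproduces the symptom and shows how to locate it (file names here are illustrative; run the same grep over the real Makefile):

```shell
# A pasted "<b>" in a recipe is read by /bin/sh as "< b", i.e.
# "redirect stdin from a file named b" -- hence "cannot open b".
printf 'g++ -o demo.so demo.cpp <b>\n' > pasted_cmd.txt

# Locate the offending token; run the same grep on the real Makefile.
grep -n '<b>' pasted_cmd.txt
# prints: 1:g++ -o demo.so demo.cpp <b>
```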

I solved the problem; it was a dependency issue.
I can run the python_tao_apps for inference on the default models.
However, when I run my custom model on an image there are no detections; I just get the same image back. I got detections on that image with the TAO toolkit, but not with DeepStream.
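When the engine loads but nothing is detected, the usual suspects are the preprocessing and threshold settings in the nvinfer config. A hypothetical excerpt of pgie_yolov4_tao_config.txt to double-check; the property names are standard nvinfer keys, and the values shown are the TAO YOLOv4 defaults as I understand them, not values taken from this thread:

```ini
[property]
# TAO YOLOv4 expects BGR input with per-channel mean subtraction
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1

[class-attrs-all]
# Lower this if valid boxes are being filtered out
pre-cluster-threshold=0.3
```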

./apps/tao_classifier/ds-tao-classifier -c /home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt -i /home/vaaan/Frame_2_473.jpg
Now playing: /home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt
WARNING: [TRT]: TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.1.0
WARNING: [TRT]: TensorRT was linked against cuDNN 8.2.1 but loaded cuDNN 8.1.0
0:00:01.206504402 34162 0x55ba302f02a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/vaaan/Downloads/cuda11.3-trt8.0-20210820T231234Z-001/cuda11.3-trt8.0/export_0.1_prune/trt2.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x608x608 min: 1x3x608x608 opt: 8x3x608x608 Max: 16x3x608x608
1 OUTPUT kINT32 BatchedNMS 1 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT BatchedNMS_1 200x4 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT BatchedNMS_2 200 min: 0 opt: 0 Max: 0
4 OUTPUT kFLOAT BatchedNMS_3 200 min: 0 opt: 0 Max: 0

0:00:01.206571377 34162 0x55ba302f02a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/vaaan/Downloads/cuda11.3-trt8.0-20210820T231234Z-001/cuda11.3-trt8.0/export_0.1_prune/trt2.engine
0:00:01.214861350 34162 0x55ba302f02a0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:/home/vaaan/Downloads/deepstream_tao_apps/configs/yolov4_tao/pgie_yolov4_tao_config.txt sucessfully
Running…
End of stream
Returned, stopping playback
Deleting pipeline

If the original issue is resolved, let us close this topic.
Please create a new topic if you have another issue. Thanks.

ok

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.