ERROR: [TRT]: 1: Unexpected exception _Map_base::at

Please provide complete information as applicable to your setup.

• Hardware Platform: T4 (GPU)
• DeepStream Version: 6.3
• TensorRT Version: 8.5.3.1

I am converting a yolov5.onnx file to an INT8 TensorRT engine and running it with DeepStream, but the build fails with the error below. I followed this repository: Yolo_int8_calibration

vanorhq@vanorhq:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/vanorhq/DeepStream-Yolo/model_b1_gpu0_int8.engine open error
0:00:04.508729296 20677 0x55e27dfba400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/vanorhq/DeepStream-Yolo/model_b1_gpu0_int8.engine failed
0:00:04.678274913 20677 0x55e27dfba400 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/vanorhq/DeepStream-Yolo/model_b1_gpu0_int8.engine failed, try rebuild
0:00:04.679096249 20677 0x55e27dfba400 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

Building the TensorRT Engine

ERROR: [TRT]: 1: Unexpected exception _Map_base::at
Building engine failed

Failed to build CUDA engine
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:12.599918971 20677 0x55e27dfba400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:12.771608107 20677 0x55e27dfba400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:12.772641672 20677 0x55e27dfba400 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:12.772682690 20677 0x55e27dfba400 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:12.772694983 20677 0x55e27dfba400 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/vanorhq/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/vanorhq/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
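For what it's worth, the initial "Deserialize engine failed … open error" warning is expected on a first run (the engine file does not exist yet); the actual failure is the `_Map_base::at` exception during the INT8 build, which is often reported when the calibration table is missing, unreadable, or was generated for a different model. Before rebuilding, a quick stdlib-only sanity check of the nvinfer config can rule out path problems. The `[property]` keys used here (`network-mode`, `int8-calib-file`, `model-engine-file`) are standard nvinfer keys, but the sample config content and the helper name `check_int8_config` are hypothetical:

```python
import configparser
import os

# Hypothetical sample config, for illustration only; point the checker at
# your real config_infer_primary_yoloV5.txt instead.
SAMPLE = """\
[property]
onnx-file=yolov5.onnx
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
network-mode=1
"""

def check_int8_config(path):
    """Return a list of likely INT8 misconfigurations in an nvinfer config."""
    cp = configparser.ConfigParser(strict=False)
    cp.read(path)
    prop = cp["property"]
    problems = []
    # network-mode=1 means INT8; it needs a calibration table unless the
    # engine is already built.
    if prop.get("network-mode") == "1" and "int8-calib-file" not in prop:
        problems.append("network-mode=1 (INT8) but no int8-calib-file set")
    # Files referenced by the config must actually exist on disk.
    for key in ("model-engine-file", "int8-calib-file"):
        f = prop.get(key)
        if f and not os.path.isfile(f):
            problems.append(f"{key} points to a missing file: {f}")
    return problems

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        cfg = os.path.join(d, "cfg.txt")
        with open(cfg, "w") as fh:
            fh.write(SAMPLE)
        for msg in check_int8_config(cfg):
            print(msg)
```

If the calibration table is missing or stale, deleting `model_b1_gpu0_int8.engine` and regenerating the table with the repository's calibration step, then rerunning `deepstream-app`, is a reasonable next step.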

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

That repository is not provided by NVIDIA. Please contact the author of the repo.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.