Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) L4
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 510
• Issue Type (questions, new requirements, bugs) Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Today I tried the L4 GPU with DeepStream 6.1, running YOLOv5 trained on a custom dataset. I used [DEEPSTREAM_YOLO] to convert the Torch model into .wts and .cfg files on the same device.
After running the application, I get the following error message when building the engine file fails.
ERROR: [TRT]: 1: [caskUtils.cpp::trtSmToCask::147] Error Code 1: Internal Error (Unsupported SM: 0x809)
Building engine failed
Failed to build CUDA engine on /yolov5_model.cfg
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:12.919100066 130 0x7588aa0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:12.920571594 130 0x7588aa0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:12.920641691 130 0x7588aa0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:12.921757609 130 0x7588aa0 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:12.921784164 130 0x7588aa0 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: yolov5-6.1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
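For reference, the SM code in the TensorRT error appears to pack the GPU compute capability as `(major << 8) | minor`; this packing is my inference from the hex value, not a documented API, but decoding it matches the L4's compute capability:

```python
# Decode the "Unsupported SM" code from the TensorRT error message.
# Assumption: the value packs compute capability as (major << 8) | minor,
# so 0x809 -> major 8, minor 9 -> SM 8.9 (the L4's compute capability).
def decode_sm(code: int) -> str:
    major, minor = code >> 8, code & 0xFF
    return f"{major}.{minor}"

print(decode_sm(0x809))  # -> 8.9
```

So the installed TensorRT build seems to have no kernels for SM 8.9 and rejects the L4 at engine-build time.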
I have tried building the .so file using the same method on T4 GPUs with the same CUDA and TensorRT versions, and I was able to run the pipeline successfully. The T4 is also able to convert the model. The same process completed successfully on a Quadro P1000 and a Tesla T1000, and the application runs on both. Only on the L4 GPUs am I unable to run the application, and I hit the error above.
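My understanding of why only the L4 fails, as a sketch. The compute capabilities below are from NVIDIA's published specs (verify locally, e.g. with `nvidia-smi --query-gpu=compute_cap --format=csv`); the TensorRT ceiling is an illustrative assumption, chosen only to show the comparison, not a documented limit of TensorRT 8.5.2.1:

```python
# Compute capabilities of the GPUs tested (values from NVIDIA's public specs).
compute_caps = {
    "Quadro P1000": (6, 1),  # Pascal
    "Tesla T4":     (7, 5),  # Turing
    "T1000":        (7, 5),  # Turing
    "L4":           (8, 9),  # Ada Lovelace -- matches the 0x809 in the error
}

# Assumption for illustration: the installed TensorRT build ships kernels only
# up to some SM below 8.9, so any newer SM triggers "Unsupported SM".
max_supported = (8, 7)

for gpu, cc in compute_caps.items():
    status = "OK" if cc <= max_supported else "unsupported"
    print(f"{gpu}: SM {cc[0]}.{cc[1]} -> {status}")
```

Under that assumption, every GPU I tested except the L4 falls inside the supported range, which is consistent with what I observed.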
Can someone help me resolve this issue?