Getting ERROR on L4 GPU: [TRT]: 1: [caskUtils.cpp::trtSmToCask::147] Error Code 1: Internal Error (Unsupported SM: 0x809) while converting model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): L4
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 510
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Today,
I tried the L4 GPU with DeepStream 6.1, using YOLOv5 (trained on a custom dataset). I used [DEEPSTREAM_YOLO] to convert the torch model into .wts and .cfg files on the same device.
After running the application, I get the following error message; building the engine file fails.


ERROR: [TRT]: 1: [caskUtils.cpp::trtSmToCask::147] Error Code 1: Internal Error (Unsupported SM: 0x809)
Building engine failed

Failed to build CUDA engine on /yolov5_model.cfg
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:12.919100066   130      0x7588aa0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:12.920571594   130      0x7588aa0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:12.920641691   130      0x7588aa0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:12.921757609   130      0x7588aa0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:12.921784164   130      0x7588aa0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: yolov5-6.1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
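As a side note, the hex code in the error message appears to encode the GPU's compute capability, with the major version in the high byte and the minor version in the low byte (an assumption based on the reported values, e.g. 0x807 for Orin's SM 8.7). Decoding it in shell:

```shell
# Decode the SM code from the TensorRT error (assumed encoding:
# high byte = CC major, low byte = CC minor).
sm=0x809
major=$(( sm >> 8 ))   # 8
minor=$(( sm & 0xff )) # 9
echo "Unsupported SM 0x809 = compute capability ${major}.${minor} (Ada, e.g. L4)"
```

Compute capability 8.9 is the Ada Lovelace architecture, which is exactly what the L4 is, so the error says the TensorRT build in use does not recognize this architecture.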

I tried building the .so file using the same method on T4 GPUs with the same CUDA and TensorRT versions, and I am able to run the pipeline successfully there. The T4 is also able to convert the model.

The same process also succeeded on a Quadro P1000 and a Tesla T1000; the application runs on both. Only on the L4 GPUs am I unable to run the application, and I face the above issue.

Can someone help me resolve this issue?

What’s the CUDA version? It seems the TensorRT/CUDA version on your host is not compatible with the L4, which has the Ada architecture. You may want to raise this specific question in the TensorRT forum.

The TensorRT and driver versions are not compatible with the DeepStream 6.1 dependencies; please check the requirements here:
Quickstart Guide — DeepStream 6.1.1 Release documentation (nvidia.com)

CUDA Version 11.6

I tried with the exact same configuration as requested, on the same DeepStream version, but I am still facing this issue.

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

DeepStream 6.1 is based on driver 510.47.03 and CUDA 11.6. I don’t think the L4 can run on this old stack. Can you upgrade to DeepStream 6.4?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
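The mismatch above can be sketched in shell. The `min_driver_for_cc` helper and its branch numbers are illustrative approximations (Ada support arrived with the CUDA 11.8-era R520 drivers), not an official table:

```shell
# Hypothetical helper: approximate minimum driver branch that first
# supported a given compute capability.
min_driver_for_cc() {
  case "$1" in
    9.0)     echo 525 ;;  # Hopper (H100)
    8.9)     echo 520 ;;  # Ada (L4, RTX 40xx), introduced with CUDA 11.8
    8.0|8.6) echo 450 ;;  # Ampere (A100, A10)
    7.5)     echo 410 ;;  # Turing (T4)
    *)       echo 0   ;;  # unknown
  esac
}

installed=510                       # driver branch pinned by DeepStream 6.1
needed=$(min_driver_for_cc 8.9)     # L4 is compute capability 8.9
if [ "$installed" -lt "$needed" ]; then
  echo "driver $installed is too old for SM 8.9 (needs >= $needed)"
fi
```

This also explains why the T4 (compute capability 7.5) works fine with the same DeepStream 6.1 stack: its architecture predates the pinned driver, while the L4's postdates it.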

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.