ERROR: [TRT]: 10: Could not find any implementation for node /0/model.24/Range

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 6.4.0
• JetPack Version (valid for Jetson only): 6.0-b52
• TensorRT Version: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): Error while running DeepStream
• I need to deploy the yolov5 model on an edge device using DeepStream. I am following the steps here: NVIDIA Jetson Nano Deployment - Ultralytics YOLOv8 Docs

I followed all the steps given in this document: DeepStream-Yolo/docs/YOLOv5.md at master · marcoslucianops/DeepStream-Yolo · GitHub

I am also following this GitHub repo for DeepStream: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

When I run the sample yolov5 model using this command: deepstream-app -c deepstream_app_config.txt, I am facing some errors.

I am getting errors even with the sample yolov5 model.

I am sharing the error log with you. Please check it and let me know how you can help us with this.

Error:
nvidia@tegra-ubuntu:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
WARNING: Deserialize engine failed because file path: /home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:06.805750905 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:07.183321650 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:07.183381140 6260 0xaaaade0e1260 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped

Building the TensorRT Engine

ERROR: [TRT]: 10: Could not find any implementation for node /model.24/Split_1_29.
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /model.24/Split_1_29.)
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:04:37.705883175 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:04:38.120240987 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:04:38.120299099 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:04:38.120363227 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:04:38.120377147 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

Thanks & Regards
Jagruti Bagul

The app failed to create the TRT engine.

  1. Please try the yolov5 ONNX model from this link: yolov5-usage.
  2. Please refer to these yolov5 samples.
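The export step behind those samples can be sketched as follows. This is a hedged sketch, not the definitive procedure: the script name `export_yoloV5.py`, its flags, and the directory layout are assumptions taken from the DeepStream-Yolo repo's YOLOv5 doc, so verify them against your checkout.

```shell
# Hedged sketch of the ONNX export flow from DeepStream-Yolo's YOLOv5 doc.
# Assumptions: a local yolov5 checkout with the exporter copied in from
# DeepStream-Yolo/utils; adjust names and flags to match your versions.
YOLO_DIR=yolov5
EXPORTER=export_yoloV5.py
WEIGHTS=yolov5s.pt

if [ -f "$YOLO_DIR/$EXPORTER" ]; then
  # --dynamic enables dynamic batch; DeepStream then rebuilds the engine from the ONNX
  (cd "$YOLO_DIR" && python3 "$EXPORTER" -w "$WEIGHTS" --dynamic)
else
  echo "Copy $EXPORTER from DeepStream-Yolo/utils into $YOLO_DIR first"
fi
```

After a successful export, point `onnx-file` in the nvinfer config at the generated ONNX and delete any stale `.engine` file so DeepStream rebuilds it.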

@fanzh

Thanks for your response, but we are already following the same steps that you provided.

I think it is an issue with TensorRT, because we tried the following command to convert the ONNX model manually:

nvidia@tegra-ubuntu:~$ /usr/src/tensorrt/bin/trtexec --onnx=/home/nvidia/DeepStream-Yolo/yolov5s.onnx --saveEngine=best_yolov5_model.engine

but it failed to create the engine file and gave the error attached in the following file.
tensorRTEngine_error.txt (7.5 KB)

Please kindly help us. Thanks.
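One way to narrow this down is to rerun the same trtexec build with `--verbose`, so TensorRT logs each layer as it is optimized and the failing node is easier to spot. A sketch using the paths from the command above (the existence check is only so the snippet degrades gracefully off-device):

```shell
# Rebuild with verbose logging to pinpoint the layer TensorRT cannot implement.
# Paths taken from the trtexec command quoted in this thread.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
ONNX=/home/nvidia/DeepStream-Yolo/yolov5s.onnx

if [ -x "$TRTEXEC" ]; then
  "$TRTEXEC" --onnx="$ONNX" \
             --saveEngine=best_yolov5_model.engine \
             --verbose 2>&1 | tee trtexec_verbose.log
else
  echo "trtexec not found at $TRTEXEC; run this on the Jetson"
fi
```

Searching `trtexec_verbose.log` for the node name from the error (e.g. `/model.24/Split_1_29`) shows the surrounding layers and tactic choices when the build fails.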

Thanks for sharing! Please refer to the last comment in this topic. I tested on DS 7.0 (JetPack 6.0 GA); this issue is fixed.
log-0524.txt (17.6 KB)

@fanzh

Thanks for your response.

I understood from the last comment on the topic you mentioned that this was a CUDA driver bug:

“Issue will be solved with the next release.”

Which next release is that? Is it the next release of DeepStream, of TensorRT, or of JetPack?

Can you please clarify?

JetPack includes Linux, CUDA, TensorRT, and other components. DS 6.4 uses JetPack 6.0 DP, while DS 7.0 uses JetPack 6.0 GA. I validated that this issue has been fixed on JetPack 6.0 GA.
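To see which L4T (and therefore JetPack) release is actually installed, the board's release file can be checked. The DP/GA-to-L4T mapping below is an assumption based on NVIDIA's release notes: JetPack 6.0 DP ships L4T r36.2, JetPack 6.0 GA ships L4T r36.3.

```shell
# Check which L4T (and therefore JetPack) release is installed.
# Assumption: JetPack 6.0 DP = L4T r36.2, JetPack 6.0 GA = L4T r36.3.
REL=/etc/nv_tegra_release
if [ -r "$REL" ]; then
  head -n1 "$REL"   # e.g. "# R36 (release), REVISION: 3.0, ..."
else
  echo "$REL not found; this is not a Jetson (or the BSP is incomplete)"
fi
```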

@fanzh

Once again thanks for the help.

We have JetPack 6.0-b52. Do we need to install JetPack 6.0 GA?

For this bug, you need to install JetPack 6.0 GA, which is used by DeepStream 7.0. Please refer to the doc.

@fanzh Sorry for the late reply.

We are using the following carrier board: Carrier Board - D131 | AVerMedia

and we downloaded and installed the BSP from here: Carrier Board - D131 | AVerMedia

I think JetPack 6.0 GA is not available for it yet.

Can we use a Docker container for DeepStream 7.0?

nvcr.io/nvidia/deepstream:7.0-samples-multiarch supports x86 and Jetson. Please refer to the link.
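A sketch of launching that container on Jetson. The flags follow the general DeepStream container documentation; the display mount and the `DRY_RUN` guard are illustrative assumptions, not part of any official invocation.

```shell
# Hedged sketch: launch the DeepStream 7.0 samples container on Jetson.
# The display mount and DRY_RUN guard are illustrative assumptions.
IMAGE=nvcr.io/nvidia/deepstream:7.0-samples-multiarch
DRY_RUN=1   # set to 0 on the Jetson to actually pull and run

if [ "$DRY_RUN" -eq 1 ]; then
  echo "Would run: docker run -it --rm --net=host --runtime nvidia $IMAGE"
else
  docker run -it --rm --net=host --runtime nvidia \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    "$IMAGE"
fi
```

Note that on Jetson the NVIDIA container runtime (`--runtime nvidia`) must be installed on the host for GPU access inside the container.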

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Hi,

The issue is resolved. AVerMedia has updated its BSP, which now supports DeepStream 7.0.

nvidia@tegra-ubuntu:~$ sudo cat /etc/nv_tegra_release
[sudo] password for nvidia:
 # R36 (release), REVISION: 3.0, GCID: 36106755, BOARD: generic, EABI: aarch64, 
 DATE: Thu Apr 25 03:14:05 UTC 2024
 # KERNEL_VARIANT: oot
 TARGET_USERSPACE_LIB_DIR=nvidia
 TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.