" Could not find any implementation for node /0/model.24/Range"

• Hardware Platform: (Jetson)
• DeepStream Version: 6.4
• JetPack Version: 6.0
• TensorRT Version: 8.6.2.3-1+cuda12.2
• Issue Type: bugs

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I am running this command:

deepstream-app -c deepstream_app_config.txt

My deepstream_app_config.txt file is attached:
deepstream_app_config.txt (871 Bytes)

It uses this config file:
config_infer_primary_yoloV5.txt (681 Bytes)

After running this command, I get this error:

WARNING: Deserialize engine failed because file path: /home/orin-metropolis/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:05.098371089 12894 0xaaaae5b81c70 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/home/orin-metropolis/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:05.414903196 12894 0xaaaae5b81c70 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/home/orin-metropolis/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:05.419404916 12894 0xaaaae5b81c70 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

ERROR: [TRT]: 10: Could not find any implementation for node /0/model.24/Range.
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /0/model.24/Range.)
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:03:32.231168110 12894 0xaaaae5b81c70 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:03:32.574531250 12894 0xaaaae5b81c70 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:03:32.574595666 12894 0xaaaae5b81c70 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:03:32.574664531 12894 0xaaaae5b81c70 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:03:32.574675731 12894 0xaaaae5b81c70 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/orin-metropolis/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/orin-metropolis/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

• Requirement details (This is for a new requirement. Include the module name, for which plugin or sample application, and the function description.)
Basically, I am creating an ONNX file from the YOLOv5 .pt file with the command `python3 export_yoloV5.py -w yolov5l.pt --dynamic`. Using this model, I run the deepstream-app command above to generate the engine file, but it fails to build the engine.

I am trying to use YOLO in place of PeopleNet for the Metropolis/VST application.

Please refer to NVIDIA-AI-IOT/yolov5_gpu_optimization: this repository provides a YOLOv5 GPU optimization sample (github.com).

Hi @Fiona.Chen,

Thanks. I followed these steps for the GPU version and it worked.
