Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 6.4.0
• JetPack Version (valid for Jetson only): 6.0-b52
• TensorRT Version: 8.6.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): Error while running the DeepStream app
• I need to deploy a YOLOv5 model on an edge device using DeepStream. I am following the steps in: NVIDIA Jetson Nano Deployment - Ultralytics YOLOv8 Docs
I followed all the steps given in this document: DeepStream-Yolo/docs/YOLOv5.md at master · marcoslucianops/DeepStream-Yolo · GitHub
and this GitHub repo for DeepStream: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models (the export and build steps I ran are summarized below).
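For reference, the export and plugin-build steps from the YOLOv5.md guide are roughly what I ran. This is a sketch based on the guide's defaults (yolov5s.pt weights, CUDA 12.2 for DeepStream 6.4 on JetPack 6.0); my exact paths and versions may differ slightly:

# clone the upstream YOLOv5 repo and install its requirements plus ONNX tooling
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip3 install -r requirements.txt
pip3 install onnx onnxsim onnxruntime
# copy the exporter from DeepStream-Yolo and convert the .pt weights to ONNX
cp ~/DeepStream-Yolo/utils/export_yoloV5.py .
python3 export_yoloV5.py -w yolov5s.pt --dynamic
# place the exported model next to the DeepStream configs
cp yolov5s.onnx ~/DeepStream-Yolo/
# build the custom parser/engine library (CUDA 12.2 on DeepStream 6.4 / JetPack 6.0)
cd ~/DeepStream-Yolo
CUDA_VER=12.2 make -C nvdsinfer_custom_impl_Yolo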
When I run the sample YOLOv5 model using this command: deepstream-app -c deepstream_app_config.txt, I am facing the errors below.
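For context, config_infer_primary_yoloV5.txt follows the sample shipped in the repo; the key inference settings there look roughly like this (values taken from the repo's default sample, so my local file may differ slightly):

[property]
onnx-file=yolov5s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
# network-mode=0 selects FP32, which matches the fp32 engine file name above
network-mode=0
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet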
I am getting errors even with the sample YOLOv5 model.
I am sharing the error log below; please check it and let me know how you can help us with this.
Error:
nvidia@tegra-ubuntu:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
WARNING: Deserialize engine failed because file path: /home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:06.805750905 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:07.183321650 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:07.183381140 6260 0xaaaade0e1260 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
Building the TensorRT Engine
ERROR: [TRT]: 10: Could not find any implementation for node /model.24/Split_1_29.
ERROR: [TRT]: 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /model.24/Split_1_29.)
Building engine failed
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:04:37.705883175 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:04:38.120240987 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:04:38.120299099 6260 0xaaaade0e1260 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:04:38.120363227 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:04:38.120377147 6260 0xaaaade0e1260 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Thanks & Regards
Jagruti Bagul