Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Both
• DeepStream Version 5
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.0 on dGPU, 7.1 on JetPack
• NVIDIA GPU Driver Version (valid for GPU only) 450
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) I have followed the steps in yolov4_deepstream/README.md at master · NVIDIA-AI-IOT/yolov4_deepstream · GitHub and created a TensorRT engine that builds and runs correctly on a dGPU. When I use the exact same model and config on a Jetson Xavier NX, I get:
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:934 failed to build network since there is no model file matched.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.
0:00:20.557493110 12988 0x5608854d5e40 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:20.557510437 12988 0x5608854d5e40 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:20.557520575 12988 0x5608854d5e40 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
I have also tried a TensorRT engine built with a static batch size and still get the same errors. Do I have to build the engine on the Jetson in order to use it on the Jetson? Is this caused by the different TensorRT versions (7.0 in the dGPU DeepStream setup vs. 7.1 in JetPack 4.4)?
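For context, the relevant portion of my nvinfer config follows the standard Gst-nvinfer [property] group; the file names below are placeholders rather than my exact paths, and the parser function name is the one used by the yolov4_deepstream repo:

```ini
[property]
gpu-id=0
# Serialized engine copied over from the dGPU machine. nvdsinfer tries to
# deserialize this first; if deserialization fails it falls back to
# building a new engine from a model file.
model-engine-file=yolov4_fp16.engine
# No onnx-file / custom-network-config is set here, so when the engine
# cannot be deserialized on the Jetson there is no model source left to
# build from -- which would match the "no model file matched" error.
batch-size=1
network-mode=2
num-detected-classes=80
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV4
```

If the fix is simply that the engine must be rebuilt per platform, I can regenerate it on the Xavier NX, but I would like to confirm that is the expected behaviour.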