Hardware Platform - T4 GPU (AWS Deep Learning AMI)
DeepStream Version - 5.1 (with Triton)
CUDA Version - 11.1
Issue Type - Error
I have been trying to run a model that I converted from PyTorch to ONNX on Triton in the DeepStream SDK (the export call I used is sketched after the log below), but I hit the following error:
I0317 18:31:58.856597 84 model_repository_manager.cc:810] loading: faster_rcnn_inception_v2:1
I0317 18:31:58.888504 84 onnxruntime.cc:1712] TRITONBACKEND_Initialize: onnxruntime
I0317 18:31:58.888532 84 onnxruntime.cc:1725] Triton TRITONBACKEND API version: 1.0
I0317 18:31:58.888547 84 onnxruntime.cc:1731] 'onnxruntime' TRITONBACKEND API version: 1.0
I0317 18:31:58.896717 84 onnxruntime.cc:1773] TRITONBACKEND_ModelInitialize: faster_rcnn_inception_v2 (version 1)
WARNING: Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
I0317 18:31:58.897804 84 onnxruntime.cc:372] skipping model configuration auto-complete for 'faster_rcnn_inception_v2': max_batch_size, inupts or outputs already specified
I0317 18:31:58.898338 84 onnxruntime.cc:1817] TRITONBACKEND_ModelInstanceInitialize: faster_rcnn_inception_v2_0 (GPU device 0)
I0317 18:32:00.833758 84 onnxruntime.cc:1848] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0317 18:32:00.833808 84 onnxruntime.cc:1793] TRITONBACKEND_ModelFinalize: delete model state
E0317 18:32:00.834200 84 model_repository_manager.cc:986] failed to load 'faster_rcnn_inception_v2' version 1: Invalid argument: unexpected inference input 'image_tensor', allowed inputs are: image
ERROR: infer_trtis_server.cpp:1044 Triton: failed to load model faster_rcnn_inception_v2, triton_err_str:Invalid argument, err_msg:load failed for model 'faster_rcnn_inception_v2': version 1: Invalid argument: unexpected inference input 'image_tensor', allowed inputs are: image;
ERROR: infer_trtis_backend.cpp:45 failed to load model: faster_rcnn_inception_v2, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:faster_rcnn_inception_v2 ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:05.932771919 84 0x55d893c13ea0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:246> [UID = 1]: failed to initialize trtis backend for model:faster_rcnn_inception_v2, nvinfer error:NVDSINFER_TRTIS_ERROR
I0317 18:32:00.834392 84 server.cc:280] Waiting for in-flight requests to complete.
I0317 18:32:00.834410 84 server.cc:295] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
0:00:05.932895100 84 0x55d893c13ea0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:81> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:05.932909055 84 0x55d893c13ea0 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Failed to initialize InferTrtIsContext
0:00:05.932919355 84 0x55d893c13ea0 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app-trtis/config_infer_primary_detector_faster_rcnn.txt
0:00:05.937387478 84 0x55d893c13ea0 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to initialize InferTrtIsContext
Debug info: gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app-trtis/config_infer_primary_detector_faster_rcnn.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed
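For context, here is roughly how I exported the model. This is reconstructed from memory, so the paths, output names, and opset version are placeholders; the parts I am sure about are the input name image and the dynamic height/width axes:

```python
import torch

# Reconstructed export call (paths and output names are placeholders).
# The input is named "image", matching what Triton reports as the
# allowed input, and H/W are marked dynamic so the graph accepts
# variable-sized frames ([1, 3, -1, -1]).
model = torch.load("faster_rcnn_inception_v2.pth")  # assumes a pickled full model
model.eval()

dummy = torch.randn(1, 3, 720, 1280)  # NCHW, same shape as my 720x1280 video
torch.onnx.export(
    model,
    dummy,
    "faster_rcnn_inception_v2.onnx",
    input_names=["image"],
    output_names=["detections"],  # placeholder; my real output names differ
    dynamic_axes={"image": {2: "height", 3: "width"}},
    opset_version=11,
)
```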
I get the same error regardless of whether I set strict_model_config to true or false. What could this error mean?
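For reference, the input section of my Triton config.pbtxt originally looked roughly like this (reconstructed from memory, so the data type and dims may not be exact). The name image_tensor is what triggers the "unexpected inference input" error above, since the ONNX graph only exposes image:

```
input [
  {
    name: "image_tensor"     # Triton: allowed inputs are: image
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, -1, -1 ]
  }
]
```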
The input I am using is a 720x1280 video. The model apparently supports variable-sized input: if I give it any other dimension, it throws an error saying the model expects [1, 3, -1, -1] as input. My config file was written to match these requirements (excerpt below), yet I get:
gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in specifyBackendDims() <infer_trtis_context.cpp:143> [UID = 1]: failed to create trtis backend on model:faster_rcnn_inception_v2 when specify input:image with wrong dims:1x3x-1x-1
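This is the input block after I renamed the tensor and made the dims fully dynamic (again a reconstruction; I set max_batch_size to 0 since the batch dimension is included in dims):

```
max_batch_size: 0
input [
  {
    name: "image"
    data_type: TYPE_FP32
    dims: [ 1, 3, -1, -1 ]   # full shape the model reports, batch included
  }
]
```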