ERROR: TRTIS: failed to load model inception_graphdef

I want to try the TensorRT Inference Server samples in the DeepStream 5.0 SDK, and I followed the instructions in the README of the SDK. I run the sample with the deepstream-app command shown below; the config file source1_primary_classifier.txt is located in /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis.
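For reference, these are the exact commands I run (paths as given in the SDK README):

    cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis
    deepstream-app -c source1_primary_classifier.txt

However, I get this error: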
    2020-06-03 10:12:35.796329: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
    Using winsys: x11
    0:00:03.486556065 16412     0x2e087760 WARN           nvinferserver gstnvinferserver_impl.cpp:248:validatePluginConfig:<primary_gie> warning: Configuration file unique-id reset to: 1
    I0603 02:12:37.069223 16412 server.cc:120] Initializing Triton Inference Server
    E0603 02:12:37.250638 16412 model_repository_manager.cc:1519] instance group inception_graphdef_0 of model inception_graphdef specifies invalid or unsupported gpu id of 0. The minimum required CUDA compute compatibility is 6.000000
    ERROR: TRTIS: failed to load model inception_graphdef, trtis_err_str:INTERNAL, err_msg:failed to load 'inception_graphdef', no version is available
    ERROR: failed to load model: inception_graphdef, nvinfer error:NVDSINFER_TRTIS_ERROR
    ERROR: failed to initialize backend while ensuring model:inception_graphdef ready, nvinfer error:NVDSINFER_TRTIS_ERROR
    0:00:03.669802890 16412     0x2e087760 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:199> [UID = 1]: failed to initialize trtis backend for model:inception_graphdef, nvinfer error:NVDSINFER_TRTIS_ERROR
    I0603 02:12:37.251076 16412 server.cc:179] Waiting for in-flight inferences to complete.
    I0603 02:12:37.251111 16412 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
    0:00:03.669925653 16412     0x2e087760 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:78> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
    0:00:03.669955862 16412     0x2e087760 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Failed to initialize InferTrtIsContext
    0:00:03.669975185 16412     0x2e087760 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis/config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt
    0:00:03.670062426 16412     0x2e087760 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
    ** ERROR: <main:651>: Failed to set pipeline to PAUSED
    Quitting
    ERROR from primary_gie: Failed to initialize InferTrtIsContext
    Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
    Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis/config_infer_primary_classifier_inception_graphdef_postprocessInTrtis.txt
    ERROR from primary_gie: gstnvinferserver_impl start failed
    Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
    App run failed

To add some information: I am testing the TensorRT Inference Server sample on a Jetson Nano.

Hi,

This example requires a GPU with CUDA compute capability of at least 6.0:

    E0603 02:12:37.250638 16412 model_repository_manager.cc:1519] instance group inception_graphdef_0 of model inception_graphdef specifies invalid or unsupported gpu id of 0. The minimum required CUDA compute compatibility is 6.000000

However, the Nano's GPU compute capability is only 5.3.
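If you want to double-check the value reported on your board, one option (a sketch, assuming the CUDA samples that ship with JetPack are installed at the default /usr/local/cuda/samples path) is to build and run the deviceQuery utility:

    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make        # sudo because the samples directory is usually root-owned
    ./deviceQuery | grep 'CUDA Capability'

On a Nano this prints a major/minor version of 5.3, below the 6.0 minimum that Triton requires.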
Thanks.

Does this mean I cannot run the TensorRT Inference Server samples on the Jetson Nano?

Hi,

That's correct. The Nano doesn't meet the minimum requirement.
Thanks.

Hi,

What Jetson devices meet the requirements?

Thank you

Svetlana

Hi,
Jetson Xavier NX and Jetson AGX Xavier support it; their GPUs have compute capability 7.2, which is above the 6.0 minimum.

Thanks

Hi,

Thank you for your reply. Is it part of DeepStream on the AGX?

Regards

Svetlana