Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU RTX 3060
• DeepStream Version 6.1
• TensorRT Version 126.96.36.199
• NVIDIA GPU Driver Version (valid for GPU only) 510.73.05
• Issue Type( questions, new requirements, bugs) Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Thanks to the DeepStream Triton team. I have successfully deployed several models to DeepStream Triton. However, I can't run a model exported from customvision.ai.
deepstream_triton.txt (12.8 KB)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
For details, please take a look at this repo.
After running your case, I got the following failure: "ERROR: infer_trtis_server.cpp:1051 Triton: failed to load model det, triton_err_str:Invalid argument, err_msg:load failed for model 'det': version 1: Internal: onnx runtime error 6: Exception during initialization: /workspace/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: selected_indices has no shape specified. Please run shape inference on the onnx model first. Details can be found in Redirecting…"
Please refer to Redirecting…
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.