How do I use the ONNX file with the DeepStream Action Recognition Net

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 6.4
**• TensorRT Version** 8.6
**• NVIDIA GPU Driver Version (valid for GPU only)** RTX 4060 Ti
**• Issue Type (questions, new requirements, bugs)** Question
I followed the official tutorial (https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Action.html#custom-sequence-preprocess-lib-user-settings-user-configs-for-gst-nvdspreprocess) and downloaded the resnet18_2d_rgb_hmdb5_32.etlt file from Action Recognition Net | NVIDIA NGC; that worked fine.

However, I'm not sure how to run the ONNX model from that page. I also trained my own model with the TAO Toolkit on custom data, starting from nvidia/tao/actionrecognitionnet:trainable_v1.0, and I'm unsure how to use the resulting ONNX file.

There are three relevant configuration items in the /deepstream-6.4/sources/apps/sample_apps/deepstream-3d-action-recognition/config_infer_primary_2d_action.txt file:

```
tlt-encoded-model=./resnet18_2d_rgb_hmdb5_32.etlt
tlt-model-key=nvidia_tao
model-engine-file=./resnet18_2d_rgb_hmdb5_32.etlt_b4_gpu0_fp16.engine
```

I then generated an engine file with `trtexec --onnx=resnet18_2d_rgb_hmdb5_32.onnx --saveEngine=model.engine`. From the thread "ONNX Output in TAO 5.0 - how to get an .etlt-model in TAO 5.0.0" I learned that I should "then config it in model-engine-file. Comment out tlt-encoded-model and tlt-model-key." However, this did not work.
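For reference, the conversion step can be sketched as below. Everything beyond `--onnx` and `--saveEngine` is an assumption on my part: `--fp16` only matters if you want a half-precision engine matching the original `_fp16` filename, and the shape flags are needed only if the exported ONNX has a dynamic batch dimension. The input tensor name `input_rgb` and the dimensions are hypothetical; inspect the actual ONNX (e.g. with Netron) to find the real name and shape:

```
# Sketch only: flags beyond --onnx/--saveEngine are assumptions.
# "input_rgb" and the 1/4 x 96 x 224 x 224 shapes are hypothetical; check your model.
trtexec --onnx=resnet18_2d_rgb_hmdb5_32.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --minShapes=input_rgb:1x96x224x224 \
        --optShapes=input_rgb:4x96x224x224 \
        --maxShapes=input_rgb:4x96x224x224
```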

When I ran it, DeepStream threw an error:

```
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:865 failed to build network since there is no model file matched.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:804 failed to build network.
0:00:09.946351543 2758 0x5625679a6720 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:10.130772052 2758 0x5625679a6720 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2212> [UID = 1]: build backend context failed
0:00:10.130811977 2758 0x5625679a6720 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
```

So, my question is, how do I use the ONNX file with the DeepStream Action Recognition Net (including both the official ONNX file and my own trained ONNX file)?

There is an ONNX model configuration sample at deepstream_tao_apps/configs/nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt in the NVIDIA-AI-IOT/deepstream_tao_apps repository on GitHub.


I added a line under [property] in config_infer_primary_2d_action.txt:

```
onnx-file=./model.onnx
```

Now it works. Greatly appreciated!
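For anyone hitting the same error, the fix amounts to swapping the etlt-specific keys for the ONNX path. A minimal sketch of the relevant [property] entries follows; the engine filename is an assumption (use whatever `trtexec` produced, or omit `model-engine-file` and let DeepStream build the engine itself), and all other keys in the shipped config stay unchanged:

```
[property]
# Comment out the etlt-specific keys:
# tlt-encoded-model=./resnet18_2d_rgb_hmdb5_32.etlt
# tlt-model-key=nvidia_tao

# Point nvinfer at the ONNX model instead:
onnx-file=./model.onnx
# Optional; assumed filename from the trtexec step. If the engine is missing
# or does not match, DeepStream rebuilds it from the ONNX file.
model-engine-file=./model.engine
```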
