Jetson-inference TensorRT action recognition ONNX file trained with TAO 5.0

Hello, I recently trained and exported an action recognition model using the TAO Toolkit 5.0 (rgb_resnet18_3.onnx). I am using the jetson-inference container so I can run actionNet, but I get an error message.

Here is my command:

actionnet --model=python/training/detection/ssd/models/action-recognition/rgb_resnet18_3.onnx --labels=python/training/detection/ssd/models/action-recognition/labels.txt csi://0

Here is the output (I'll only paste the error message at the end):

[TRT]    Loaded 59 bytes of code generator cache.
[TRT]    device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT]    4: [network.cpp::validate::3062] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT]    device GPU, failed to build CUDA engine
[TRT]    device GPU, failed to load python/training/detection/ssd/models/action-recognition/rgb_resnet18_3.onnx
[TRT]    failed to load python/training/detection/ssd/models/action-recognition/rgb_resnet18_3.onnx
[TRT]    actionNet -- failed to initialize.
actionnet:  failed to initialize actionNet

Is this an issue with TensorRT not being compatible with the given ONNX file, or am I missing an optimization profile that I have to build?
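For reference, my understanding is that when a network has dynamic inputs, an optimization profile can be supplied by building the engine offline with trtexec. The sketch below is just that, a sketch: the input name input_rgb and the 1x3x3x224x224 shape are placeholders I have not verified against my ONNX (I would check the real input name/shape in Netron first).

# Sketch only: build the engine with an explicit optimization profile.
# "input_rgb" and 1x3x3x224x224 are unverified placeholders -- inspect the
# actual input name and shape of the ONNX before running this.
trtexec --onnx=rgb_resnet18_3.onnx \
        --minShapes=input_rgb:1x3x3x224x224 \
        --optShapes=input_rgb:1x3x3x224x224 \
        --maxShapes=input_rgb:1x3x3x224x224 \
        --saveEngine=rgb_resnet18_3.engine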

Thank you.

Hi @castej10, jetson-inference actionNet is set up for the models from trt_pose, not for TAO-exported ONNX models.

DeepStream has samples that run the TAO models:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_3D_Action.html
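The 3D action recognition sample there can be pointed at a TAO-exported model through its config files. As a rough sketch, assuming a standard DeepStream install (paths and version may differ on your system):

# Sketch, assuming the usual DeepStream sample layout; adjust paths as needed.
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition
# Point the model settings in the sample's config files at your TAO export,
# build the sample if the binary isn't present, then run it:
./deepstream-3d-action-recognition -c deepstream_action_recognition_config.txt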
