• Hardware Platform (Jetson / GPU): Ubuntu 22.04 x86 workstation with an Ada Lovelace RTX 2000
• DeepStream Version: 7.1
• TensorRT Version: 10.3.0.26-1+cuda12.5
• NVIDIA GPU Driver Version (valid for GPU only): 580.65.06, CUDA Version: 12.6
• Issue Type: bug
I am having an issue swapping out the model in deepstream-3d-action-recognition for a custom one we have trained (not the TAO ResNet model). It was built with PyTorch and has exactly the same input dimensions as the previous model (only the output has more action classes). Here is the code we used to export the model:
import torch

# Dummy input: (batch, channels, frames, height, width)
input_tensor = torch.randn(1, 3, 32, 256, 256).to(device)
torch.onnx.export(
    model,
    input_tensor,
    'recognition_net.onnx',
    input_names=['input_tensor'],
    output_names=['cls_scores'],
    export_params=True,
    do_constant_folding=True,
    verbose=False,
    opset_version=11,
    dynamic_axes={
        'input_tensor': {
            0: 'batch_size',
            3: 'height',
            4: 'width'
        }
    }
)
And this snippet successfully runs inference with the model outside of DeepStream:

import onnxruntime

session = onnxruntime.InferenceSession('recognition_net.onnx')
input_feed = {'input_tensor': input_tensor.cpu().data.numpy()}
outputs = session.run(['cls_scores'], input_feed=input_feed)
But when we plug it into the action recognition app, with most of the settings unchanged, it fails with a TensorRT error while building the engine.
I have attached the preprocessing and inference configuration files too.
config_preprocess_3d_custom.txt (2.6 KB)
config_infer_primary_recognition_net.txt (2.5 KB)
Would anyone know how to help me with this? The only difference I can see between the ResNet ONNX model and ours is that it has a height and width of 224 while ours is 256, which I have fixed in the config, but the dimensions are still off and it fails to build an engine. I assume this is an issue with our ONNX model, or is there something else I am missing?
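As a sanity check outside DeepStream, a quick way to isolate the problem is to try building the engine directly with trtexec (assuming TensorRT's trtexec is on your PATH). Because batch, height and width were exported as dynamic, an optimization profile must be supplied via --minShapes/--optShapes/--maxShapes; the shape values below mirror the export above, and the max batch of 4 is just an illustrative choice:

```shell
# Attempt an engine build with explicit shapes matching the export above.
# If this fails too, the problem is in the ONNX model / shapes, not DeepStream.
trtexec --onnx=recognition_net.onnx \
        --minShapes=input_tensor:1x3x32x256x256 \
        --optShapes=input_tensor:1x3x32x256x256 \
        --maxShapes=input_tensor:4x3x32x256x256
```

The error trtexec prints is usually more specific than the one surfaced through nvinfer, which helps narrow down whether the failure is the dynamic axes, the opset, or a dimension mismatch with the preprocess config.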
Thanks and Best Regards,
Portia