Tao-converter [ERROR] input_rgb: number of dimensions is 5 but profile 0 has 4

Please provide the following information when requesting support.

• Hardware: Jetson Nano
• JetPack: 4.5.1
• Network Type: ActionRecognitionNet (3D)
• TAO Version: 3.21.11
• Training spec file: action_recognition_net/specs/train_rgb_3d_finetune.yaml
• How to reproduce the issue?

I downloaded cv_samples_v1.3.0.zip with:
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.3.0/zip -O cv_samples_v1.3.0.zip

and downloaded the pretrained model with:
ngc registry model download-version nvidia/tao/actionrecognitionnet:trainable_v1.0 --dest $HOST_RESULTS_DIR/pretrained/

I then followed the steps in action_recognition_net/actionrecognitionnet.ipynb to train the 3D ActionRecognitionNet model on the HMDB51 dataset with train_rgb_3d_finetune.yaml, exported the model to an .etlt file with export_rgb.yaml, and finally got the file rgb_resnet18_3.etlt.
But when I ran tao-converter for JetPack 4.5.1 on the Nano to convert rgb_resnet18_3.etlt to a TensorRT engine file:

tao-converter -k nvidia_tao \
  -d 3,3,224,224 \
  -o fc_pred \
  -t fp16 \
  -p input_rgb,3x3x224x224,3x3x224x224,3x3x224x224 \
  rgb_resnet18_3.etlt

the error occurred. The full log is:

tao-converter -k nvidia_tao \
  -d 1,3,3,224,224 \
  -o fc_pred \
  -t fp16 \
  -p input_rgb,3x3x224x224,3x3x224x224,3x3x224x224 \
  rgb_resnet18_3.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 3, 224, 224)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (3, 3, 224, 224) for input: input_rgb
[INFO] Using optimization profile opt shape: (3, 3, 224, 224) for input: input_rgb
[INFO] Using optimization profile max shape: (3, 3, 224, 224) for input: input_rgb
[WARNING] Setting layouts of network and plugin input/output tensors to linear, as 3D operators are found and 3D non-linear IO formats are not supported, yet.
[ERROR] input_rgb: number of dimensions is 5 but profile 0 has 4.
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
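The failing check can be reproduced in a few lines. The model's input is rank-5 (N, C, D, H, W), while each shape passed via -p has only four values. This is a sketch of TensorRT's rank validation with hypothetical helper names (parse_shape, check_profile), not tao-converter's actual code:

```python
def parse_shape(spec: str) -> tuple[int, ...]:
    """Parse an 'x'-delimited shape string such as '3x3x224x224'."""
    return tuple(int(d) for d in spec.split("x"))

def check_profile(model_dims: tuple[int, ...], profile_shape: tuple[int, ...]) -> None:
    """Mimic the engine-build validation: a profile shape must supply one
    value per model input dimension, the leading -1 batch dim included."""
    if len(profile_shape) != len(model_dims):
        raise ValueError(
            f"input_rgb: number of dimensions is {len(model_dims)} "
            f"but profile 0 has {len(profile_shape)}"
        )

model_dims = (-1, 3, 3, 224, 224)   # detected from the .etlt model

try:
    check_profile(model_dims, parse_shape("3x3x224x224"))  # the 4-D shape passed above
except ValueError as e:
    print(e)  # input_rgb: number of dimensions is 5 but profile 0 has 4

# A 5-D shape of the same rank as the model input passes the check:
check_profile(model_dims, parse_shape("1x3x3x224x224"))
```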

Please note ‘input_rgb’ is the input node name.

It looks like tao-converter doesn’t support 3D models?

The 3D model indeed has 5 dimensions (-1, 3, 3, 224, 224), but -p only accepts 4-dimensional shapes in the format NxCxHxW.
If I forcibly set -p input_rgb,1x3x3x224x224,1x3x3x224x224,1x3x3x224x224, the following error is seen instead:

tao-converter -k nvidia_tao \
  -d 3,3,224,224 \
  -o fc_pred \
  -t fp16 \
  -p input_rgb,1x3x3x224x224,1x3x3x224x224,1x3x3x224x224 \
  rgb_resnet18_3.etlt
Please provide three optimization profiles via -p <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: nxcxhxw
Aborted (core dumped)
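This abort happens at argument parsing, before the model is even read: the error text asks for "nxcxhxw", i.e. exactly four 'x'-separated integers per shape. A hypothetical re-creation of that strict check (this is my guess at the JP4.5 binary's behavior, not its actual code):

```python
import re

# Exactly four 'x'-separated integers, matching the "nxcxhxw" error text.
FOUR_D = re.compile(r"^\d+x\d+x\d+x\d+$")

def accepts_profile_arg(arg: str) -> bool:
    """Return True if the -p argument has a name plus three 4-D shapes."""
    name, *shapes = arg.split(",")
    return len(shapes) == 3 and all(FOUR_D.fullmatch(s) for s in shapes)

print(accepts_profile_arg("input_rgb,3x3x224x224,3x3x224x224,3x3x224x224"))        # True
print(accepts_profile_arg("input_rgb,1x3x3x224x224,1x3x3x224x224,1x3x3x224x224"))  # False -> abort
```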

Even when I imitate this guide:

I still get an error:

tao-converter -k nvidia_tao \
  -t fp16 \
  -p input_rgb,1x9x224x224,1x9x224x224,1x9x224x224 \
  rgb_resnet18_3.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 3, 224, 224)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 9, 224, 224) for input: input_rgb
[INFO] Using optimization profile opt shape: (1, 9, 224, 224) for input: input_rgb
[INFO] Using optimization profile max shape: (1, 9, 224, 224) for input: input_rgb
[WARNING] Setting layouts of network and plugin input/output tensors to linear, as 3D operators are found and 3D non-linear IO formats are not supported, yet.
[ERROR] input_rgb: number of dimensions is 5 but profile 0 has 4.
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Can you follow ActionRecognitionNet — TAO Toolkit 3.21.11 documentation and retry?

Hi Morganh,

Following the guide, I executed the following command:
tao-converter -k nvidia_tao \
  -t fp16 \
  -p input_rgb,1x3x3x224x224,4x3x3x224x224,16x3x3x224x224 \
  -e rgb_resnet18_3.engine \
  rgb_resnet18_3.etlt
still got the same error:
Please provide three optimization profiles via -p <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: nxcxhxw
Aborted (core dumped)

Please note I downloaded tao-converter from the TensorRT — TAO Toolkit 3.21.11 documentation:
on the Nano, https://developer.nvidia.com/tao-converter-jp4.5 is used;
on the x86 PC, https://developer.nvidia.com/cuda111-cudnn80-trt72-0 is used.
The same error happened on both the Nano and the PC.
Thanks.

It seems that JP4.5’s tao-converter has not been updated to support NxCxDxHxW shapes.
I will sync with internal team for this.

You should see the info below when running “$ tao-converter -h”:

-p comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has x as delimiter, e.g., NxC, NxCxHxW, NxCxDxHxW, etc. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
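The syntax described above can be modeled with a small parser that accepts any rank (NxC, NxCxHxW, NxCxDxHxW, …) and enforces TensorRT's min ≤ opt ≤ max rule per dimension. This is a sketch with a hypothetical name (parse_profile), not the converter's actual implementation:

```python
def parse_profile(arg: str):
    """Parse '<input_name>,<min_shape>,<opt_shape>,<max_shape>' where each
    shape is 'x'-delimited and all three shapes share the same rank."""
    name, *shapes = arg.split(",")
    if len(shapes) != 3:
        raise ValueError("expected <input_name>,<min_shape>,<opt_shape>,<max_shape>")
    mins, opts, maxs = (tuple(int(d) for d in s.split("x")) for s in shapes)
    if not (len(mins) == len(opts) == len(maxs)):
        raise ValueError("min/opt/max shapes must have the same rank")
    if not all(a <= b <= c for a, b, c in zip(mins, opts, maxs)):
        raise ValueError("each dimension must satisfy min <= opt <= max")
    return name, mins, opts, maxs

# The 5-D profile from the ActionRecognitionNet guide parses cleanly:
name, mn, op, mx = parse_profile("input_rgb,1x3x3x224x224,4x3x3x224x224,16x3x3x224x224")
print(name, mn, op, mx)
```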

If possible, could you please update Nano to JP4.6?

OK, I’ll give it a try with JP4.6 after I’m back in the office next week.

Hi Morganh,

I tried JetPack 4.6 on the Nano; the same command:

tao-converter -k nvidia_tao \
  -t fp16 \
  -p input_rgb,1x3x3x224x224,4x3x3x224x224,16x3x3x224x224 \
  -e rgb_resnet18_3.engine \
  rgb_resnet18_3.etlt

can convert rgb_resnet18_3.etlt to rgb_resnet18_3.engine successfully.
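With a dynamic-shape engine like this, whatever consumes it (e.g. the TensorRT sample) must size its input buffers for the largest profile shape. A quick sketch of that arithmetic for the fp16 profiles used above (buffer_bytes is a hypothetical helper, not part of any NVIDIA API):

```python
import math

def buffer_bytes(shape: tuple[int, ...], dtype_bytes: int = 2) -> int:
    """Bytes needed for one input buffer; fp16 = 2 bytes per element."""
    return math.prod(shape) * dtype_bytes

# Profiles from the command above: min 1x3x3x224x224, opt 4x..., max 16x...
for label, shape in [
    ("min", (1, 3, 3, 224, 224)),
    ("opt", (4, 3, 3, 224, 224)),
    ("max", (16, 3, 3, 224, 224)),
]:
    print(label, buffer_bytes(shape), "bytes")
# min   903168 bytes
# opt  3612672 bytes
# max 14450688 bytes  <- allocate at least this much for input_rgb
```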

So, as you said, JP4.5’s tao-converter has not been updated to support NxCxDxHxW.
Can this issue be fixed in tao-converter for JP4.5 or JP4.4? We are currently using JP4.4 in our project: a large number of Jetson Nano boards have already been manufactured with JP4.4, and because of the high migration cost we have to keep using JP4.4. We want to use the ActionRecognitionNet model with JP4.4.

Yes, we will fix it as soon as possible.

Thanks, please update me once the fixed version is ready.

One more note: if you use DeepStream to run the action recognition model, only JP4.6 will work, because ActionRecognitionNet uses a new gst-plugin introduced in DS6.0.

You are using the TensorRT sample, right?

Yes. I haven’t run action recognition in DeepStream yet. So it can’t be run in DeepStream 5.x?

You can run it with the TensorRT sample.

Officially, there are two ways. See ActionRecognitionNet — TAO Toolkit 3.21.11 documentation

OK, thanks. Then I’ll write my own plugin for DeepStream 5.x that calls action recognition following the TensorRT sample.

Hi,
As a quick workaround, I just built a tao-converter for JP4.4. It also works on JP4.5 boards.
See attachment.

Please use the commands below to generate the TRT engines. I verified them on my side.
tao-converter resnet18_2d_rgb_hmdb5_32.etlt -k nvidia_tao -p input_rgb,1x96x224x224,4x96x224x224,16x96x224x224 -e trt2d.engine -t fp16

tao-converter resnet18_3d_rgb_hmdb5_32.etlt -k nvidia_tao -p input_rgb,1x3x32x224x224,4x3x32x224x224,16x3x32x224x224 -e trt3d.engine -t fp16
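Note how the two commands differ: the 2D model folds the 32-frame RGB clip into the channel axis (3 × 32 = 96, hence 1x96x224x224), while the 3D model keeps an explicit depth axis (1x3x32x224x224). A small check of that relationship, inferred from the shapes in the two commands above:

```python
# Opt-profile shapes taken from the two commands (batch 4):
shape_2d = (4, 96, 224, 224)      # N, C*D, H, W  - frames folded into channels
shape_3d = (4, 3, 32, 224, 224)   # N, C, D, H, W - explicit depth axis

channels, depth = 3, 32
assert shape_2d[1] == channels * depth  # 96 = 3 * 32

# Either layout carries the same number of elements per batch item:
elems_2d = shape_2d[0] * shape_2d[1] * shape_2d[2] * shape_2d[3]
elems_3d = shape_3d[0] * shape_3d[1] * shape_3d[2] * shape_3d[3] * shape_3d[4]
assert elems_2d == elems_3d
print("2D and 3D inputs carry the same data volume:", elems_2d, "elements")
```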

Hi Morganh, thanks a lot. I tested your tao-converter for JP4.4; it works well on my Nano.
