Please provide the following information when requesting support.
• Hardware (Jetson Nano)
• JetPack Version (4.5.1)
• Network Type (action recognition 3D)
• TAO Version (3.21.11)
• Training spec file (action_recognition_net/specs/train_rgb_3d_finetune.yaml)
• How to reproduce the issue?
I downloaded cv_samples_v1.3.0.zip with:
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.3.0/zip -O cv_samples_v1.3.0.zip
and the pretrained model with:
ngc registry model download-version nvidia/tao/actionrecognitionnet:trainable_v1.0 --dest $HOST_RESULTS_DIR/pretrained/
I then followed the steps in action_recognition_net/actionrecognitionnet.ipynb to train the 3D ActionRecognitionNet model on the HMDB51 dataset with train_rgb_3d_finetune.yaml, and exported it to an .etlt file with export_rgb.yaml, which produced rgb_resnet18_3.etlt.
However, when I ran tao-converter for JetPack 4.5.1 on the Nano to convert rgb_resnet18_3.etlt to a TensorRT engine file:
tao-converter -k nvidia_tao \
    -d 3,3,224,224 \
    -o fc_pred \
    -t fp16 \
    -p input_rgb,3x3x224x224,3x3x224x224,3x3x224x224 \
    rgb_resnet18_3.etlt
an error occurred. The full log is:
tao-converter -k nvidia_tao
-d 1,3,3,224,224
-o fc_pred
-t fp16
-p input_rgb,3x3x224x224,3x3x224x224,3x3x224x224
rgb_resnet18_3.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 3, 224, 224)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (3, 3, 224, 224) for input: input_rgb
[INFO] Using optimization profile opt shape: (3, 3, 224, 224) for input: input_rgb
[INFO] Using optimization profile max shape: (3, 3, 224, 224) for input: input_rgb
[WARNING] Setting layouts of network and plugin input/output tensors to linear, as 3D operators are found and 3D non-linear IO formats are not supported, yet.
[ERROR] input_rgb: number of dimensions is 5 but profile 0 has 4.
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
Please note that 'input_rgb' is the input node name.
It looks like tao-converter does not support 3D models?
A 3D model indeed has 5 input dimensions, (-1, 3, 3, 224, 224), but -p only accepts shapes in the 4-dimensional format nxcxhxw.
If I forcibly set -p input_rgb,1x3x3x224x224,1x3x3x224x224,1x3x3x224x224 instead, the following error was seen:
tao-converter -k nvidia_tao
-d 3,3,224,224
-o fc_pred
-t fp16
-p input_rgb,1x3x3x224x224,1x3x3x224x224,1x3x3x224x224
rgb_resnet18_3.etlt
Please provide three optimization profiles via -p <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: nxcxhxw
Aborted (core dumped)
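Both failures are consistent with the converter validating the profile in two separate steps: the -p parser first requires exactly four x-separated fields (nxcxhxw), and only afterwards is the profile checked against the network's dimension count, which is five for this 3D model. A minimal sketch of that two-step check (parse_shape, check_profile, and the constants are my own illustration of the observed behavior, not tao-converter internals):

```python
def parse_shape(text):
    """Split a tao-converter-style shape string such as '1x3x3x224x224' into ints."""
    return tuple(int(d) for d in text.split("x"))

MODEL_DIMS = 5     # detected input: (-1, 3, 3, 224, 224)
PARSER_FIELDS = 4  # the -p parser expects nxcxhxw

def check_profile(spec):
    shape = parse_shape(spec)
    if len(shape) != PARSER_FIELDS:
        # matches: "each shape has the format: nxcxhxw" followed by abort
        return "rejected by -p parser (expects nxcxhxw)"
    if len(shape) != MODEL_DIMS:
        # matches: "number of dimensions is 5 but profile 0 has 4"
        return "rejected by network validation (profile has %d dims, model has %d)" % (
            len(shape), MODEL_DIMS)
    return "accepted"

print(check_profile("3x3x224x224"))    # passes the parser, fails network validation
print(check_profile("1x3x3x224x224"))  # never reaches validation: the parser aborts first
```

Since the parser insists on 4 fields and the network needs 5, no shape string can satisfy both checks, which would explain why every variant above fails.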
Even when I imitated this guide, I also got an error:
tao-converter -k nvidia_tao
-t fp16
-p input_rgb,1x9x224x224,1x9x224x224,1x9x224x224
rgb_resnet18_3.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 3, 3, 224, 224)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 9, 224, 224) for input: input_rgb
[INFO] Using optimization profile opt shape: (1, 9, 224, 224) for input: input_rgb
[INFO] Using optimization profile max shape: (1, 9, 224, 224) for input: input_rgb
[WARNING] Setting layouts of network and plugin input/output tensors to linear, as 3D operators are found and 3D non-linear IO formats are not supported, yet.
[ERROR] input_rgb: number of dimensions is 5 but profile 0 has 4.
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
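For reference, this is how a 5-D optimization profile would be set with the TensorRT Python API if the network were available as plain ONNX. The .etlt file is encrypted, so this is only a sketch of what a converter with 5-D support would need to do, not a drop-in replacement; the build_engine helper and file handling are my own, and the calls target the TensorRT 7.x API shipped with JetPack 4.5.1:

```python
def profile_shapes(batch=1, channels=3, seq_len=3, height=224, width=224):
    """Min/opt/max shapes for the 5-D input (N, C, T, H, W) of the 3D RGB model."""
    shape = (batch, channels, seq_len, height, width)
    return {"min": shape, "opt": shape, "max": shape}

def build_engine(onnx_path, input_name="input_rgb"):
    # Imported lazily so the shape helper above works without TensorRT installed.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30    # TensorRT 7.x style
    config.set_flag(trt.BuilderFlag.FP16)  # same precision as -t fp16

    shapes = profile_shapes()
    profile = builder.create_optimization_profile()
    # A 5-tuple here is exactly what the 4-field nxcxhxw syntax of -p cannot express.
    profile.set_shape(input_name, shapes["min"], shapes["opt"], shapes["max"])
    config.add_optimization_profile(profile)

    return builder.build_engine(network, config)  # TRT 7.x API; deprecated in TRT 8
```

The key difference from the failing commands above is that set_shape accepts tuples of any rank matching the network input, so the 5-D profile (1, 3, 3, 224, 224) can be registered directly.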