Please provide the following information when requesting support.
summary:
I followed the TAO guide and exported a model that I was supposed to be able to deploy on my Jetson and run, but it doesn’t work!
• Hardware (T4/V100/Xavier/Nano/etc)
TAO (the versions I’ve tried are listed below) on DGX Station A100 | Jetson AGX Xavier (JetPack 4.6.1)
dockers: ['nvidia/tao/tao-toolkit-tf', 'nvidia/tao/tao-toolkit-pyt', 'nvidia/tao/tao-toolkit-lm']
format_version: 2.0
toolkit_version: 3.22.05 # not working
published_date: 05/25/2022
dockers: ['nvidia/tao/tao-toolkit-tf', 'nvidia/tao/tao-toolkit-pyt', 'nvidia/tao/tao-toolkit-lm']
format_version: 2.0
toolkit_version: 3.22.02 # not working
published_date: 02/28/2022
dockers: ['nvidia/tao/tao-toolkit-tf', 'nvidia/tao/tao-toolkit-pyt', 'nvidia/tao/tao-toolkit-lm']
format_version: 2.0
toolkit_version: 3.21.11 # not working
published_date: 11/08/2021
When I try to run the model in DeepStream I get:
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_4: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
I think it has to do with the BatchTilePlugin:
[TensorRT/plugin/batchTilePlugin at master · NVIDIA/TensorRT · GitHub]
which I believe has been there for quite a while (clearly not something introduced in TensorRT 8.2.5), so I’m confused why this is not working. Is there some other trick I’m missing here?
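For context, my understanding is that BatchTilePlugin_TRT tiles a batch-1 tensor across the batch dimension (which is presumably what the FirstDimTile nodes in the exported UFF are for). A minimal numpy sketch of the equivalent operation — my own illustration, not TAO or TensorRT code:

```python
import numpy as np

def batch_tile(x, batch_size):
    """Tile a [1, ...] tensor to [batch_size, ...], mirroring what
    the BatchTilePlugin_TRT op does at inference time."""
    assert x.shape[0] == 1, "input must have batch dimension 1"
    return np.tile(x, (batch_size,) + (1,) * (x.ndim - 1))

anchors = np.arange(6.0).reshape(1, 2, 3)  # shape [1, 2, 3]
tiled = batch_tile(anchors, 4)             # shape [4, 2, 3]
print(tiled.shape)  # (4, 2, 3)
```

So the op itself is trivial; my guess is the parser error means the plugin isn't being found/registered by the TensorRT build that DeepStream uses on the Jetson, rather than the op being unsupported as such.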
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
Resnet-18 SSD (pretrained_object_detection_vresnet18-2)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
ssd_train_resnet18_kitti.txt (1.6 KB)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
generated_model.zip (51.2 MB)