How to set the -p option in tao-converter?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : GeForce 3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TAO Toolkit 3.0

This is a follow-up question to the answer in this topic: Tao-converter doesn't work for Deepstream 6.1

The solution was to change -p Input,1x3x544x960,8x3x544x960,16x3x544x960 to -p Input,1x3x544x960,1x3x544x960,16x3x544x960. I noticed that the only change was the <opt_shape> entry of the optimization profile (format: <n>x<c>x<h>x<w>). Why did the n value need to change from 8 to 1? And how do I find the optimization profiles of models trained with the TAO Toolkit?
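For context, here is a minimal sketch of the full tao-converter command that -p argument belongs to. The $KEY variable, the file names, and the -t fp16 choice are placeholder assumptions, not values from the original thread; only the -p value is the working one discussed above:

```shell
# Sketch of a tao-converter invocation with an explicit optimization profile.
# -p takes <input_name>,<min_shape>,<opt_shape>,<max_shape>, with each shape
# written as <n>x<c>x<h>x<w>. $KEY and the file names below are placeholders.
tao-converter -k $KEY \
              -t fp16 \
              -p Input,1x3x544x960,1x3x544x960,16x3x544x960 \
              -e yolov4_resnet18.engine \
              yolov4_resnet18.etlt
```

In general, TensorRT tunes its kernels for the <opt_shape> and accepts any input whose dimensions fall between <min_shape> and <max_shape> at runtime, so the opt batch size should match the batch size you actually expect to run with.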

That’s a workaround for YOLOv4 when you generate a TensorRT engine via tao-converter and deploy it directly in DeepStream. For other inference paths, for example standalone inference or GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton, you can find some information about setting “-p” in tao-toolkit-triton-apps/download_and_convert.sh at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub
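On the second question (finding the optimization profiles): the exported .etlt file itself does not carry profiles; they are fixed only when tao-converter builds the TensorRT engine. So the place to look is the built engine. A hedged sketch, assuming a reasonably recent version of TensorRT's polygraphy tool is installed (this is not something the reply above mentions):

```shell
# Deserialize a built engine and print its bindings, including the
# min/opt/max shapes of each optimization profile baked in at build time.
polygraphy inspect model yolov4_resnet18.engine
```

The engine file name is the hypothetical one from the earlier sketch; substitute the path of the engine you generated.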
