Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) : GeForce 3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): TAO Toolkit 3.0
This is a follow-up question to the answer in this thread: Tao-converter doesn't work for Deepstream 6.1.
The solution was to change -p Input,1x3x544x960,8x3x544x960,16x3x544x960 to -p Input,1x3x544x960,1x3x544x960,16x3x544x960. I noticed that the only change was the <opt_shape> in the optimization profile (format: <n>x<c>x<h>x<w>). Why did the n value need to change from 8 to 1? And how do I find the optimization profiles of models trained with the TAO Toolkit?
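For context on what the -p argument encodes: it describes a TensorRT optimization profile as <input_name>,<min_shape>,<opt_shape>,<max_shape>, where TensorRT requires min <= opt <= max in every dimension, and the n value is the batch size. A minimal sketch of a parser/validator for this argument format (the function name parse_profile is my own, not part of tao-converter), which can be used to sanity-check a profile before converting:

```python
def parse_profile(arg: str):
    """Parse a tao-converter -p argument of the form
    <input_name>,<min_shape>,<opt_shape>,<max_shape>,
    where each shape is written as NxCxHxW."""
    name, *shape_strs = arg.split(",")
    if len(shape_strs) != 3:
        raise ValueError("expected exactly min, opt and max shapes")
    shapes = [tuple(int(d) for d in s.split("x")) for s in shape_strs]
    mn, opt, mx = shapes
    # TensorRT constraint: min <= opt <= max in every dimension
    valid = all(a <= b <= c for a, b, c in zip(mn, opt, mx))
    return name, mn, opt, mx, valid

# The two profiles from the question; note only the opt batch (n) differs.
print(parse_profile("Input,1x3x544x960,8x3x544x960,16x3x544x960"))
print(parse_profile("Input,1x3x544x960,1x3x544x960,16x3x544x960"))
```

Both profiles satisfy the min <= opt <= max rule, so the change from 8 to 1 is not about profile validity; my assumption is that it makes the opt batch size match the batch size DeepStream actually runs with, so the engine is tuned for the shape it receives at runtime.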