Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): GeForce 3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): 3.22.05
I have a few questions about using `tao-converter` to build a TensorRT engine from an exported model file:
In model export, I see the following options:
(source)
In `tao-converter`, I see the following options:
(source)
- What is the role of `-m`/`--max_batch_size`? Is it only relevant in `int8` mode?
- The `-b` value in `tao-converter` should be equal to the `--batch_size` value used during calibration file export, right?
- When setting `-p` for optimization profiles, if my app always has X input sources, should I set the `<n>` value (in `<n>x<c>x<h>x<w>`) of `<opt_shape>` to X, or should I use identical shapes for the min/opt/max profile entries?
- What is the role of the `-s` option? I know it sets the `strict_type_constraints` flag for `int8` mode, but I'm not sure what that means.
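For context, the kind of command I'm experimenting with looks roughly like the sketch below. The paths, encryption key, shapes, and the input tensor name `Input` are placeholders from my own setup, not authoritative values; my questions above are about how `-b`, `-m`, and `-p` should relate to each other here.

```shell
# Hypothetical tao-converter invocation for a YOLOv4 .etlt model.
# Placeholders: model/engine paths, $ENCRYPTION_KEY, and the tensor name "Input".
# -b : batch size, which I believe should match --batch_size used at calibration export
# -m : maximum batch size the built engine will accept
# -p : optimization profile as <tensor_name>,<min_shape>,<opt_shape>,<max_shape>
tao-converter yolov4_model.etlt \
  -k "$ENCRYPTION_KEY" \
  -t int8 \
  -c calibration.bin \
  -b 8 \
  -m 16 \
  -p Input,1x3x384x1248,4x3x384x1248,16x3x384x1248 \
  -e yolov4_int8.engine
```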