Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): GeForce 3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): 3.22.05
I have a few questions about using tao-converter to build a TensorRT engine from an exported model file:
Both model export and tao-converter expose several options, and I'm unsure about a few of them:
- What is the role of --max_batch_size? Is it only relevant in tao-converter? It should be equal to the --batch_size value used during calibration file export, right?
- When setting -p for optimization profiles, if my app always has X input sources, should I set the <opt_shape> to X, or should I use identical shapes for min/opt/max?
- What is the role of the -s option? I know it's related to int8 mode, but I'm not sure what that means.
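For context, this is the kind of invocation I'm asking about. A sketch only: the file names, encoding key variable, input tensor name (Input), and shapes below are placeholders I made up, not values from my actual setup; the -p profile pins min/opt/max batch to the same value X=4 as in my second question.

```shell
# Hypothetical example: build an INT8 engine for an exported YOLOv4
# .etlt model, with one optimization profile whose min/opt/max batch
# sizes are all pinned to 4 (i.e., a fixed number of input sources).
tao-converter yolov4.etlt \
  -k $ENCODING_KEY \
  -t int8 \
  -c calibration.bin \
  -p Input,4x3x384x1248,4x3x384x1248,4x3x384x1248 \
  -e yolov4_int8.engine
```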