BPNet opt_shape is invalid

Please provide the following information when requesting support.

• Hardware: RTX 3090
• Network Type: BPNet
• TLT Version: 3.22.02
• Training spec file: default
• How to reproduce the issue? See below.

After 28 hours of training with the default BPNet notebook, during which the notebook crashed multiple times, I am looking to use the deployable model deployable_v1.0.1 instead, which I downloaded with:

!ngc registry model download-version nvidia/tao/bodyposenet:deployable_v1.0.1 \
    --dest $LOCAL_EXPERIMENT_DIR/deployable_model

This downloads the following files:

int8_calibration_224_320.txt
int8_calibration_320_448.txt
int8_calibration_288_384.txt
labels.txt
model.etlt
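
To confirm where the NGC CLI placed the files (it typically creates a versioned subdirectory such as bodyposenet_vdeployable_v1.0.1; the exact name here is my assumption):

# List the download location recursively to see the versioned subdirectory.
!ls -R $LOCAL_EXPERIMENT_DIR/deployable_model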

When I use tao converter to export to a TensorRT engine with:

# Convert to TensorRT engine (FP32).
!tao converter $USER_EXPERIMENT_DIR/deployable_model/model.etlt \
                -k $KEY \
                -t fp32 \
                -e $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.fp32.engine \
                -p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE

I get the error opt_shape is invalid:

2022-07-13 06:52:38,103 [INFO] root: Registry: ['nvcr.io']
2022-07-13 06:52:38,284 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
opt_shape is invalid.
2022-07-13 06:52:40,099 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
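
As I understand it, tao-converter parses -p as <input_name>,<min_shape>,<opt_shape>,<max_shape>, so the middle (opt) shape must expand to something like 4x288x384x3. My best guess is that an unset or malformed notebook variable produces an invalid opt shape; that can be checked by printing the expanded argument before running the converter:

# Print the expanded -p argument; an unset OPT_BATCH_SIZE or INPUT_SHAPE
# leaves a malformed opt shape such as "x288x384x3", which would explain
# the "opt_shape is invalid" error.
!echo "-p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE"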

Also, the notebook's TensorRT INT8 conversion command expects a .bin calibration file, but the downloaded deployable model only contains .txt calibration files:

# Convert to TensorRT engine (INT8).
!tao converter $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.etlt \
               -k $KEY \
               -t int8 \
               -c $USER_EXPERIMENT_DIR/models/exp_m1_final/calibration.$IN_HEIGHT.$IN_WIDTH.bin \
               -e $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.engine \
               -p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE
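
If the int8_calibration_*.txt files shipped with the deployable model are TensorRT calibration caches (an assumption on my part, based on their names), I would expect to be able to pass one directly to -c in place of the missing .bin, with the input resolution matched to the cache (here 288x384):

# Sketch, assuming the downloaded .txt files are TensorRT calibration caches
# that -c accepts directly; the 288x384 cache must match the input resolution.
!tao converter $USER_EXPERIMENT_DIR/deployable_model/model.etlt \
               -k $KEY \
               -t int8 \
               -c $USER_EXPERIMENT_DIR/deployable_model/int8_calibration_288_384.txt \
               -e $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.288.384.int8.engine \
               -p ${INPUT_NAME},1x$INPUT_SHAPE,${OPT_BATCH_SIZE}x$INPUT_SHAPE,${MAX_BATCH_SIZE}x$INPUT_SHAPE

Is that the intended usage?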

In summary:

  1. Is there documentation on using the BPNet deployable model?
  2. Why am I getting the error opt_shape is invalid?
  3. How do I convert the downloaded deployable model (deployable_v1.0.1) for use with TensorRT?

Many thanks in advance for your support!

Could you please share the explicit command?

Could you try

tao-converter model.etlt \
              -k nvidia_tlt \
              -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 \
              -t fp16 \
              -m 16 \
              -e xxx.engine
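
Here -p sets the min/opt/max shapes for the optimization profile and -m caps the maximum batch size at 16. Once the engine is generated, a quick smoke test with trtexec can confirm it loads and runs (assuming trtexec is available in your environment; the escaped quotes around input_1:0 follow trtexec's convention for tensor names containing a colon):

# Load the engine and run inference at the opt batch size of 4.
trtexec --loadEngine=xxx.engine \
        --shapes=\'input_1:0\':4x288x384x3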

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks