Deepstream-bodypose2d-app: using int8_calibration_320_448.txt, the log prints "invalid input pafmap dimension."

Please provide the following information when requesting support.

• Hardware (Jetson TX2 / Xavier NX, JetPack 4.6)
• Network Type (BodyPoseNet)
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
step1:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
step2:
cd deepstream_tao_apps/configs/bodypose2d_tao
vim bodypose2d_pgie_config.txt

int8-calib-file=../../models/bodypose2d/int8_calibration_320_448.txt
infer-dims=3;320;384
network-mode=1

step3:
cd deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app
./deepstream-bodypose2d-app 1 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt file:///media/nvidia/SD/src/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app/dance.mp4 ./body2dout
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
terminate called after throwing an instance of 'std::runtime_error'
what(): invalid input pafmap dimension.
Aborted (core dumped)

Hi,
This looks more related to Deepstream. We are moving this post to the Deepstream forum to get better help.
Thank you.

Hi @yezhouyin
Is this still an issue? I saw you submitted another topic (When BodyPoseNet goes from fp16 to int8, the results are significantly worse), and it seems the program is already running.

If you run with the default official model, please keep the original setting.
Please check if you can run with the default setting.

Thank you for your attention. These are two separate problems: 1. The default 3x288x384 configuration does run with int8, but the results are too poor. I want to use the larger 3x320x448 size with int8, but it reports an error. 2. Going from fp16 to int8, the detection quality drops badly; I don't know if there is a problem with the int8 calibration file.

Thank you for your attention. The default 3x288x384 configuration can run with int8, but the results are too poor. I want to use the larger 3x320x448 size with int8. NVIDIA also provides 3 calibration files for reference: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/bodyposenet/files

Here is the BodyPoseNet model documentation: BodyPoseNet | NVIDIA NGC

int8-calib-file=../../models/bodypose2d/int8_calibration_320_448.txt
infer-dims=3;320;384
network-mode=1
Here, did you try infer-dims=3;320;448?

Sorry, I wrote infer-dims=3;320;384 above, but what I actually tried was
infer-dims=3;320;448

The log still prints "invalid input pafmap dimension".

Can you download tao-converter onto your machine and then convert the model into a new TensorRT engine?

$ tao-converter model.etlt -k nvidia_tlt -p input_1:0,1x320x448x3,4x320x448x3,16x320x448x3 -t fp16 -m 16 -e fp16.engine

Then set it as "model-engine-file" in the config and retry.

Thank you for your attention
command:
tao-converter -k nvidia_tlt -p input_1:0,1x320x448x3,1x320x448x3,1x320x448x3 model.etlt -t int8 -c int8_calibration_320_448.txt -e tao-converter.model_int8_b1_320_448.engine -b 1
The tao-converter conversion to an engine does not report any error, but the same error is printed at runtime.

Please share the latest full log again. Thanks.

bodynet.log (2.5 KB)
bodypose2d_pgie_config.txt (3.1 KB)

Please modify deepstream_bodypose2d_app.cpp (apps/tao_others/deepstream-bodypose2d-app/deepstream_bodypose2d_app.cpp in the NVIDIA-AI-IOT/deepstream_tao_apps GitHub repo at master), then run make clean and make.
$ vim deepstream_bodypose2d_app.cpp

//cvcore::ModelInputParams ModelInputParams = {8, 384, 288, cvcore::RGB_U8};
cvcore::ModelInputParams ModelInputParams = {8, 448, 320, cvcore::RGB_U8};

Then, you can run with infer-dims=3;320;448

Thank you for your answer
Problem solved, running with network-mode=2 (fp16).

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.