FPEnet model inference with TensorRT

Hello,
I'm testing the FPEnet model on the following system:
Jetson Nano 2GB
JetPack 4.5.1
TensorRT 7.1.3
CUDA 10.2

I'm following the testing method from the forum topic below, but I get the following error during context.execute_async:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1227, condition: allInputDimensionsSpecified(routine)

I use model.etlt from NGC (deployable v1.0), together with the Python script and PNG file from the forum topic below:

Then I execute this command:
python3 test.py --input test.png

Notes:
I also tested the FPEnet trainable model: I trained it with TLT, exported the model, and converted it with tlt-converter, but the same error occurred.

What am I missing?

Thank you
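
(For context: this TensorRT error generally means the engine was built with dynamic input dimensions and the inference script never binds a concrete shape before execution. Below is a minimal sketch of the missing step, assuming TensorRT 7's Python API; the binding name input_face_images:0 and the engine file name are taken from the converter commands later in this thread, and the buffer handling is illustrative rather than the actual test.py:)

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("fpenet_fp32.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# The step that avoids "allInputDimensionsSpecified": bind a concrete
# shape for the dynamic input before executing.
input_idx = engine.get_binding_index("input_face_images:0")
context.set_binding_shape(input_idx, (1, 1, 80, 80))
assert context.all_binding_shapes_specified

# Allocate buffers from the now-concrete binding shapes; real code would
# fill the input buffer with the preprocessed 80x80 face crop.
bindings, host_bufs = [], []
for i in range(engine.num_bindings):
    shape = tuple(context.get_binding_shape(i))
    host = cuda.pagelocked_empty(shape, np.float32)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append((host, dev, engine.binding_is_input(i)))
    bindings.append(int(dev))

stream = cuda.Stream()
for host, dev, is_input in host_bufs:
    if is_input:
        cuda.memcpy_htod_async(dev, host, stream)
# Explicit-batch engines should use execute_async_v2 (no batch_size arg).
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host, dev, is_input in host_bufs:
    if not is_input:
        cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()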

Firstly, can you run fpenet inference inside the TLT docker?

tlt fpenet inference -e <Experiment Spec File> -i <Json File With Images> -m <Trained TLT Model Path> -k <Encode Key> -o <Output Folder> -r <Images Root Directory>
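
For example (with purely illustrative paths; the nvidia_tlt key is the one used later in this thread):

tlt fpenet inference -e specs/fpenet_inference.yaml -i data/inference.json -m models/model.tlt -k nvidia_tlt -o results -r data/images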

I can successfully run tlt fpenet inference from the Jupyter notebook fpenet.ipynb of tlt_cv_samples v1.1.0, but on a different machine with a GTX 1650 GPU…

Should I test the docker on the Jetson Nano instead?

Do you mean you can successfully run tlt fpenet inference from the Jupyter notebook fpenet.ipynb of tlt_cv_samples v1.1.0 using the GTX 1650 GPU?
If yes, please also run python3 test.py --input test.png on the GTX 1650.

I use the container nvidia/tensorrt:20.08-py3 on my GTX 1650 and convert the model with this command:
tlt-converter -k nvidia_tlt -t fp32 -p input_face_images:0,1x1x80x80,1x1x80x80,2x1x80x80 -b 1 -e fpenet_fp32.trt model.tlt.etlt

Then I run the command: python3 test.py --input test.png

But the same error occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1228, condition: allInputDimensionsSpecified(routine)

I'm using the tlt-converter from https://developer.nvidia.com/cuda110-cudnn80-trt71, and the conversion runs successfully without errors.
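
(Side note: a profile whose min/opt/max shapes differ, as in the tlt-converter command above, produces an engine whose input binding has a dynamic batch dimension, which is exactly what triggers allInputDimensionsSpecified when no shape is bound. A quick sketch to verify this on the generated fpenet_fp32.trt:)

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("fpenet_fp32.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List all bindings; a dynamic input typically prints as (-1, 1, 80, 80)
# and must be given a concrete shape via context.set_binding_shape()
# before execution.
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(i, kind, engine.get_binding_name(i), engine.get_binding_shape(i))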

Please retry the above steps inside the tlt 3.0-py3 docker.

I used this container: nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3

A similar error still occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1318, condition: allInputDimensionsSpecified(routine)

Please run with nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 on your GTX 1650. Can you run tlt fpenet inference successfully?

Please generate the TRT engine as below. Note that the optimization profile now uses 1x1x80x80 for min, opt, and max, rather than a 2x1x80x80 maximum shape.
tlt-converter fpenet.etlt -k nvidia_tlt -p input_face_images:0,1x1x80x80,1x1x80x80,1x1x80x80 -b 1 -t fp32 -e fpenet_b1_fp32.trt

It is solved. Thank you!