Hello,
I'm testing the FPEnet model on the following system:
Jetson Nano 2GB
Jetpack 4.5.1
TensorRT 7.1.3
CUDA 10.2
I'm following the testing method above, but I get the following error during context.execute_async:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1227, condition: allInputDimensionsSpecified(routine)
I use model.etlt from the NGC deployable v1.0,
along with the Python script and PNG file from the forum topic below:
Then I execute this command:
python3 test.py --input test.png
Notes:
I also tested the FPEnet trainable model: I trained it with TLT, exported the model, and converted it with tlt-converter, but the same error occurred.
Do you mean you can successfully run TLT FPEnet inference from the Jupyter notebook fpenet.ipynb of tlt_cv_samplesv1.1.0 using a GTX 1650 GPU?
If yes, please run python3 test.py --input test.png on the GTX 1650 too.
I use the container nvidia/tensorrt:20.08-py3 on my GTX 1650 and convert the model with this command:
tlt-converter -k nvidia_tlt -t fp32 -p input_face_images:0,1x1x80x80,1x1x80x80,2x1x80x80 -b 1 -e fpenet_fp32.trt model.tlt.etlt
Then I run python3 test.py --input test.png,
but the same error occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1228, condition: allInputDimensionsSpecified(routine)
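For what it's worth, this TensorRT error usually means the engine was built with a dynamic input dimension (the min/opt/max shapes passed via -p to tlt-converter), so the execution context does not know the actual input shape until it is set explicitly before execute_async. A minimal sketch of that pattern with the TensorRT 7 Python API, assuming binding index 0 is input_face_images:0 with the 1x1x80x80 shape from the command above (prepare_context is a hypothetical helper, not something from test.py):

```python
def prepare_context(engine, input_shape=(1, 1, 80, 80)):
    """Create an execution context and pin the dynamic input shape.

    For an engine built with an optimization profile, calling
    execute_async without set_binding_shape raises the
    'allInputDimensionsSpecified' parameter-check error.
    """
    context = engine.create_execution_context()
    # Resolve the dynamic dimension on binding 0 (input_face_images:0).
    context.set_binding_shape(0, input_shape)
    # Sanity check: all dynamic inputs must now be specified.
    assert context.all_binding_shapes_specified
    return context
```

After this, buffer allocation should use context.get_binding_shape(i) rather than engine.get_binding_shape(i), since the latter reports -1 for the dynamic dimension.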