I am using TLT v3 and fetched the fpenet model using
ngc registry model download-version nvidia/tlt_fpenet:trainable_v1.0
Then, I converted the fpenet.tlt model to model.etlt using fpenet export. After that, using the tlt-converter tool, I created a .trt engine file so I could use it in a Python application.
Here is the tlt-converter command I used:
tlt-converter fpenet.etlt -k nvidia_tlt -p input_face_images:0,1x1x80x80,1x1x80x80,1x1x80x80 -b 1 -t fp32 -e fpenet_b1_fp32.trt
The problem I am facing is that the inference output from the TensorRT engine is not what it should be; compared with the output of fpenet inference, it is not even close. I suspect that either the conversion step or my preprocessing is the cause.
Here is the inference code I am using:
test.py (4.8 KB)
and I run it with this command:
python3 test.py --input p.jpg
The output is this:
However, the output that the fpenet inference command creates by following fpenet.ipynb is this:
You can clearly see that the first output is not sensible at all.
Could you please help me find the problem in the process?
Thanks for your help.