Hi @aurelm95
I think you can debug with the steps below:
1. check whether the UFF model can generate the same output as the .h5 model for the same input
1.1 dump the input and output of the .h5 inference (see the sketch after the option list below)
1.2 feed the dumped .h5 input to trtexec + UFF (reference options and a sample command below) and check whether the output is almost the same as the .h5 inference output
/usr/src/tensorrt/bin/trtexec
...
--uffInput=<name>,X,Y,Z Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
--uffNHWC Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)
...
--loadInputs=spec Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
Input values spec ::= Ival[","spec]
Ival ::= name":"file
...
--dumpOutput Print the output tensor(s) of the last inference iteration (default = disabled)
...
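For step 1.1, here is a minimal sketch assuming a Keras .h5 image classifier; the file names and the NHWC input shape (1, 224, 224, 3) are assumptions, so adjust them to your model:

```python
# Minimal sketch for step 1.1 -- file names and input shape are assumptions.
import numpy as np
from tensorflow import keras

model = keras.models.load_model("model.h5")

# One fixed input so the .h5 and UFF runs see identical data.
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
y = model.predict(x)

# Dump raw binaries: trtexec --loadInputs reads raw binary input files,
# and the reference output is kept for the comparison in step 1.2.
x.tofile("input.bin")
np.asarray(y).tofile("h5_output.bin")
```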
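Then, for step 1.2, feed the dumped input to trtexec. The node names input_1 and predictions/Softmax are placeholders for your model's actual input/output nodes; note that with --uffNHWC the --uffInput dimensions are given in H,W,C order, per the help text above:

```
# input_1 and predictions/Softmax are placeholder node names
/usr/src/tensorrt/bin/trtexec --uff=model.uff \
    --output=predictions/Softmax \
    --uffInput=input_1,224,224,3 --uffNHWC \
    --loadInputs=input_1:input.bin \
    --dumpOutput
```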
If the outputs closely match, it confirms that the UFF model you exported is good.
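To eyeball the comparison, you can print the reference values dumped in step 1.1 next to trtexec's --dumpOutput printout, e.g.:

```python
import numpy as np

# Reference output from the .h5 run in step 1.1.
ref = np.fromfile("h5_output.bin", dtype=np.float32)
print(ref[:10])  # compare against the values printed by --dumpOutput
```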
2. debug the DeepStream + UFF accuracy,
referring to DeepStream SDK FAQ - #21 by mchi (a preprocessing config sketch follows below)
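Accuracy gaps at this stage usually come from the nvinfer preprocessing settings not matching the .h5 preprocessing. Below is a sketch of the relevant [property] keys; all values are placeholders, and key names can vary slightly across DeepStream versions, so check them against the nvinfer documentation:

```
[property]
uff-file=model.uff
uff-input-blob-name=input_1
# infer-dims is DeepStream 5.0+; older releases use uff-input-dims
infer-dims=3;224;224
output-blob-names=predictions/Softmax
# 0=FP32; start with FP32 to rule out INT8/FP16 quantization effects
network-mode=0
# these three must reproduce the .h5 preprocessing (scale, mean, color order)
net-scale-factor=0.00392156862745098
offsets=0;0;0
model-color-format=0
```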