Run PeopleNet with TensorRT

Hello,

We are trying to run the pruned PeopleNet model with TensorRT 7, without DeepStream.

Where can we find good post-processing code for the DetectNet_v2 model?

We have tried some post-processing functions:


Please refer to the post-processing code exposed in C++ in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp.
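
For reference, a rough Python adaptation of that parser is sketched below. The stride of 16 and the box scale of 35.0 are the values commonly used with this family of models (and in the DeepStream sample parser); please verify them against your own setup. Clustering/NMS of the raw boxes is not included here.

import numpy as np

def decode_detectnet_v2(cov, bbox, threshold=0.3, stride=16.0, box_norm=35.0):
    # cov:  coverage/confidence tensor of shape (num_classes, grid_h, grid_w)
    # bbox: box offset tensor of shape (num_classes * 4, grid_h, grid_w)
    num_classes, grid_h, grid_w = cov.shape
    # Grid cell centres in the normalised box space, as in the sample C++ parser.
    cx = (np.arange(grid_w) * stride + 0.5) / box_norm
    cy = (np.arange(grid_h) * stride + 0.5) / box_norm
    detections = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] >= threshold)
        for y, x in zip(ys, xs):
            o = bbox[4 * c:4 * c + 4, y, x]
            x1 = (o[0] - cx[x]) * -box_norm
            y1 = (o[1] - cy[y]) * -box_norm
            x2 = (o[2] + cx[x]) * box_norm
            y2 = (o[3] + cy[y]) * box_norm
            detections.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return detections

The returned coordinates are in the network input resolution (960 x 544), so scale them back to your original image size afterwards.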

Thanks for your answer.

Is the pre-processing similar to Faster R-CNN pre-processing?

Hello,

I have the same problem as the original poster. I am trying to adapt the code of nvdsinfer_custombboxparser.cpp to a simple Python example, but I still cannot properly parse the output.

Does PeopleNet need any specific pre-processing? On the model’s page I have only found “Input: Color Images of resolution 960 X 544 X 3”. I have tried the input both as HWC and CHW, without normalization and with normalization to [0, 1] (but of course, when trying things more or less at random there are many combinations, and I might have made an error somewhere).

Is the output actually formatted as (xmin, ymin, xmax, ymax), the way it seems to be parsed in nvdsinfer_custombboxparser.cpp, or as (xc, yc, w, h), as written on the model’s page?

The outputs’ channel order is also not clear to me. The model’s page lists the bbox output as 60x34x12, which would be gridW x gridH x (numClasses * 4). When I look at nvdsinfer_custombboxparser.cpp, it seems to be parsed in a different order.
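
For reference, this is how I am checking the actual binding shapes reported by the engine (assuming the TensorRT 7 Python API and a serialized engine file; the file name is just an example):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# "peoplenet.engine" is just an example name for the serialized engine file.
with open("peoplenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))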

Is it possible to have a bit more information about how to run the model in TensorRT?
Thanks a lot!

Hi,

I’m now able to get good bounding boxes; I use the post-processing given by Morganh.
And as pre-processing:

  • BGR Images

  • Divide all the pixel values by 255

I get the same results as in TLT inference. However, I have to set the confidence threshold to 0.3 in my code, versus 0.8 for the TLT inference.

I think I have missed something in the pre-processing.

For detectnet_v2 pre-processing with 3 channels, please refer to the snippet below.

import numpy as np

a = np.asarray(img).astype(np.float32)   # img: input image (e.g. PIL) already resized to 960 x 544
a = a.transpose(2, 0, 1) / 255.0         # HWC -> CHW, scale pixel values to [0, 1]


RGB or BGR?

RGB.
But note the transpose: (H, W, C) --> (C, H, W).
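
Putting the answers in this thread together, here is a minimal pre-processing sketch. The resize to 960 x 544 and the added batch dimension are assumptions based on the input size listed on the model card:

import numpy as np
from PIL import Image

def preprocess(image_path, width=960, height=544):
    # Load as RGB and resize to the network input size (assumed from the model card).
    img = Image.open(image_path).convert("RGB").resize((width, height))
    a = np.asarray(img).astype(np.float32)
    # (H, W, C) --> (C, H, W), then scale pixel values to [0, 1].
    a = a.transpose(2, 0, 1) / 255.0
    # Add a batch dimension: (1, 3, 544, 960).
    return np.expand_dims(a, axis=0)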