How to run the purpose-built PeopleNet model on Jetson Nano in my own application?


I am using a Jetson Nano with JetPack 4.4 (recommended on the forum), and I want to use NVIDIA's purpose-built PeopleNet model in my own application (not DeepStream).

For clarity: I downloaded the pruned PeopleNet model from the NGC container, used the TLT converter built for Jetson, and successfully converted the .etlt file to a TensorRT engine. I am able to use the generated engine file in deepstream-app, but I cannot find any reference on how to use it in a custom application.

I want to know what preprocessing PeopleNet uses. The PeopleNet page mentions performing a DBSCAN or NMS operation. I would also like to know how to post-process the model outputs (60 × 34 × 12 and 60 × 34 × 3).

Could anyone help me with this?
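For context, this is the kind of NMS step I mean. A minimal greedy non-maximum suppression sketch in NumPy, purely illustrative and not specific to PeopleNet (the IoU threshold of 0.5 is an assumption):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes  : (N, 4) array of [x1, y1, x2, y2]
    scores : (N,) confidence scores
    Returns the indices of the boxes to keep.
    """
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap box i too much; keep the rest in order
        order = rest[iou <= iou_thresh]
    return keep
```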

Refer to Using FaceDetectIR model in Triton Server

Thanks for the reference. The code you pointed to includes the post-processing step that shows how to parse the model's output.
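For anyone else following along, here is a minimal NumPy sketch of the kind of grid-cell decoding that parser performs. PeopleNet is a DetectNet_v2-style model, so I am assuming the usual DetectNet_v2 conventions here; the constants (stride 16, bbox scale 35, center offset 0.5) are assumptions to verify against the referenced code:

```python
import numpy as np

# Assumed DetectNet_v2 conventions -- verify against the referenced parser:
STRIDE = 16.0     # input pixels per grid cell (960/60 = 544/34 = 16)
BOX_SCALE = 35.0  # bbox normalization scale assumed from TLT training
OFFSET = 0.5      # grid-cell center offset

def decode_gridbox(cov, bbox, conf_thresh=0.4):
    """Decode grid outputs into (x1, y1, x2, y2, score, class) tuples.

    cov  : (num_classes, 34, 60) coverage/confidence tensor
    bbox : (num_classes * 4, 34, 60) bbox regression tensor
    """
    num_classes, grid_h, grid_w = cov.shape
    # Grid-cell center coordinates in input-image pixels
    cx = (np.arange(grid_w) + OFFSET) * STRIDE
    cy = (np.arange(grid_h) + OFFSET) * STRIDE
    detections = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > conf_thresh)
        for y, x in zip(ys, xs):
            # Each class owns 4 consecutive bbox channels: x1, y1, x2, y2
            bx1, by1, bx2, by2 = bbox[c * 4:c * 4 + 4, y, x]
            detections.append((cx[x] - bx1 * BOX_SCALE,
                               cy[y] - by1 * BOX_SCALE,
                               cx[x] + bx2 * BOX_SCALE,
                               cy[y] + by2 * BOX_SCALE,
                               float(cov[c, y, x]), c))
    return detections  # cluster these with DBSCAN or NMS afterwards
```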

I implemented this in my application, but when I run inference on a sample image, the confidence values coming from the model are 0 at all positions. For preprocessing, I multiply the image by the scale factor 0.00392156 (1/255) and convert it to the RGB color space.
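To be concrete, here is a small NumPy sketch of the preprocessing I just described. The 960×544 input resolution is PeopleNet's default, and the BGR→RGB swap assumes the frame comes from OpenCV:

```python
import numpy as np

SCALE = 0.00392156  # 1/255, the scale factor mentioned above

def preprocess(frame_bgr):
    """Convert an HxWx3 uint8 BGR frame to a 1x3xHxW float32 CHW tensor.

    Assumes the frame has already been resized to the network input
    size (960x544 for the default PeopleNet model).
    """
    rgb = frame_bgr[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    chw = rgb.transpose(2, 0, 1) * SCALE            # HWC -> CHW, scale to [0, 1]
    return chw[np.newaxis, ...]                     # add batch dimension
```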

I am attaching the source code here; kindly let me know if I missed anything.
inferPeopleNet.cpp (8.0 KB)

For preprocessing, refer to Run PeopleNet with tensorrt


Hi @Karthee,
Were you able to solve the problem?

Yeah, @Morganh’s answer helped me solve it.

Would it be possible to share your Python/C++ code for the pre/post-processing?

Hi @Karthee,
Did you multiply the images by the scale factor 0.00392156 and then normalize them as described in the link @Morganh gave?

And how did you generate resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine?

And how did you implement this part in your code?

a = np.asarray(img).astype(np.float32)
a = a.transpose(2, 0, 1) / 255.0

I generated the engine file using the tlt-converter tool.
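The exact flags depend on your TLT version, but for illustration a typical invocation looks something like the following. The key `tlt_encode` and the output layer names `output_cov/Sigmoid,output_bbox/BiasAdd` are the ones documented for PeopleNet, but treat every value here as an assumption and verify it against the model card and your TLT version's docs:

```shell
# Illustrative tlt-converter invocation (values are assumptions):
#   -k  NGC model key        -o  output layer names
#   -d  input dims (C,H,W)   -t  precision
#   -m  max batch size       -e  output engine path
./tlt-converter resnet34_peoplenet_pruned.etlt \
    -k tlt_encode \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,544,960 \
    -t fp16 \
    -m 1 \
    -e resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
```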

I used OpenCV’s blobFromImage API to implement that part.