FaceDetection-IR using deepstream sdk

• Hardware Platform (Jetson / GPU): jetson nano
• DeepStream Version : 5.0 DP
• JetPack Version (valid for Jetson only) : 4.4 DP
• TensorRT Version : 7.1

I want to run FaceDetectIR on a Jetson Nano using the DeepStream SDK. I installed the DeepStream SDK, followed the commands below, and got an error:
1. In the directory:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

I run:

mkdir -p ../../models/tlt_pretrained_models/facedetectir && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/resnet18_facedetectir_pruned.etlt \
    -O ../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_facedetectir/versions/pruned_v1.0/files/facedetectir_int8.txt \
    -O ../../models/tlt_pretrained_models/facedetectir/facedetectir_int8.txt

This is part of config_infer_primary_facedetectir.txt, but I don’t have resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine:

tlt-encoded-model=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt
labelfile-path=labels_facedetectir.txt
int8-calib-file=../../models/tlt_pretrained_models/facedetectir/facedetectir_int8.txt
model-engine-file=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine

How do I get this file?

  • The command you mentioned only downloads facedetectir_int8.txt and resnet18_facedetectir_pruned.etlt.
  • The engine file is optional: if it does not exist, the .etlt model is loaded and converted into an engine at runtime.
  • Could you please list what you have done (commands executed) step by step and the exact error you met, so that we can look into your problem precisely?
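To illustrate the second point: the model-engine-file entry in config_infer_primary_facedetectir.txt is effectively a cache path, so the missing .engine file is not an error by itself. Below is a sketch based on the config lines quoted above; the comment describes the usual nvinfer behavior, not output from your setup:

```
# model-engine-file is only a cache hint: if the file is missing, nvinfer
# builds the engine from tlt-encoded-model at startup (this can take several
# minutes on a Nano) and serializes it to this path if the directory is
# writable (under /opt you may need to run with sudo for that).
tlt-encoded-model=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt
int8-calib-file=../../models/tlt_pretrained_models/facedetectir/facedetectir_int8.txt
model-engine-file=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine
```

To run the sample end to end, the usual invocation from the same samples directory is deepstream-app -c deepstream_app_source1_facedetectir.txt (file name as shipped in the tlt_pretrained_models samples folder).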

@ersheng
What are the proper steps to get DeepStream to process video with facedetectir_int8.txt and resnet18_facedetectir_pruned.etlt?
Also, would it be possible to get MQTT messages for face-detection events into AWS using the scenario at https://github.com/aws-samples/aws-iot-greengrass-deploy-nvidia-deepstream-on-edge ?


I’m also deeply interested in instructions on how to obtain tensor metadata not only for SSD but also for DetectNet_v2 models such as FaceDetectIR.


So how do we crop the face with it and then save it? Any ideas?

Do you still need a full answer?
I answered a little about running FaceDetectIR, but I can share a code sample if anyone needs it.

Thank you for sharing!
It worked great with video file input.
Is there code that crops the faces and saves them to a file?
I wasn’t able to get PeopleNet / FaceDetectIR to work with a USB camera (I tried a Stereolabs ZED camera as input), though.
Also, only PeopleNet showed any detections visually.

Hello bro,

I also hit this problem. Can you share the code sample with me? Thanks!

@MingGatsby Which problem exactly are you running into? If you follow the URL mentioned above, you will find the extended steps to run the DeepStream applications.

I don’t understand how to build a pipeline that crops faces with DeepStream.

I want the face images so that I can run recognition on them.

@MingGatsby
Are you able to run the default DeepStream samples PeopleNet / FaceDetectIR?
There is an example of how to save detected objects: Using GStreamer plugin to extract detected object from NVIDIA DeepStream | by Karthick | Medium
I did not try it myself, though it might work for your needs.
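In DeepStream Python apps, the crop itself is just array slicing once you have the frame as a NumPy array (e.g. via pyds.get_nvds_buf_surface) and the bounding box from NvDsObjectMeta.rect_params. Here is a minimal sketch of that one step; the helper name and the clamping logic are my own, not from any official sample:

```python
import numpy as np

def crop_object(frame, left, top, width, height):
    """Clamp a detection bbox to the frame and return the cropped region.

    `frame` is assumed to be an HxWxC array (e.g. the RGBA surface returned
    by pyds.get_nvds_buf_surface); left/top/width/height are assumed to come
    from NvDsObjectMeta.rect_params.
    """
    h, w = frame.shape[:2]
    x1 = max(0, int(left))
    y1 = max(0, int(top))
    x2 = min(w, int(left + width))
    y2 = min(h, int(top + height))
    if x2 <= x1 or y2 <= y1:
        return None  # detection lies entirely outside the frame
    # copy() detaches the crop from the GPU-backed surface buffer
    return frame[y1:y2, x1:x2].copy()
```

This would typically run inside a pad-probe callback on the element after nvinfer, looping over the frame and object metadata lists.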

Try this thread; there is a code sample there.

@pilotfdd
there is a sample that, simply by executing a Python file on the Jetson,
gets cropped faces saved to disk.

However, even with the FaceDetectIR DeepStream example running,
it seems complicated to get images cropped and saved from it without exact steps.