Inference of the emotionnet model on Jetson

Please provide the following information when requesting support.

• Hardware: Xavier
• TLT Version: 3.0

I want to use the emotionnet deployable model on my Jetson Xavier, where I am already running the face detection model using DeepStream. My use case is that whenever a face is detected, the ROI should be fed to the emotionnet model. I went through the whole TLT documentation as well as the NGC page where the emotionnet deployable model card is published. It is mentioned there that the model can be used on edge devices using TensorRT, but there is no concrete documentation for this, and the major emphasis is given to ‘tlt cv inference’. Even after using ‘tlt convert’ with the provided key, the conversion from .etlt to an .engine file fails.

Please help me resolve this issue and suggest a possible way to integrate the emotionnet deployable model into DeepStream.

Can you refer to deepstream_tao_apps/apps/tao_others/deepstream-emotion-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub?

For more info on converting .etlt to .engine, refer to Could not convert Emotionnet from NGC to Triton Plan format - #4 by giangnv.soict.hust
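For reference, the conversion is done on the Jetson itself with the tao-converter (tlt-converter) binary. Below is a minimal sketch of such an invocation; the key (nvidia_tlt), the input tensor name (input_landmarks:0), its 1x136x1 shape and the file names are assumptions, so confirm them against the NGC model card and the thread linked above.

# Hedged sketch: build a TensorRT engine from the deployable emotionnet .etlt on the Jetson.
# File names are placeholders.
# -k : model load key (assumed nvidia_tlt; use the key from the NGC model card)
# -t : target precision for the engine
# -p : optimization profile <tensor>,<min>,<opt>,<max> for the dynamic input
#      (input_landmarks:0 with shape 1x136x1, i.e. 68 x/y keypoints, is an assumption)
# -e : output engine path
$ ./tao-converter emotions.etlt \
      -k nvidia_tlt \
      -t fp16 \
      -p input_landmarks:0,1x136x1,1x136x1,2x136x1 \
      -e emotions.fp16.engine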

Hello @Morganh, thank you for the reply. The repository seems promising. Just one question: can this work with DeepStream 5.1, or will I need DeepStream 6.0 explicitly?

Please use DS6.0.

Hey @Morganh, is there a way I can deploy the emotionnet model from Python code? I am using the DeepStream Python API. I checked the C++ implementation, but it seems to work only on static images. I tried to use the facial landmark model as the secondary detector in my pipeline, but it doesn’t seem to work. Is there a workaround for that?

Actually, you can use Emotion Classification — TAO Toolkit 3.0 documentation to run inference.

May I know the whole pipeline of your experiment?

I want to run the emotion detection model through the Python API. First, I will have a face detector as the primary detector, whose face ROIs are fed to the facial landmark detection model acting as the secondary detector, and finally the emotionnet model runs on the 68 key points provided by the secondary detector. All of this is expected to happen in real time from an RTSP or CSI camera rather than on a static image, and in Python.

After referring to the C++ example for the emotionnet app, it is clear that facenet and the facial landmark detector use the nvinfer plugin, whereas emotionnet is handled by a different plugin (roughly the split sketched below).
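For illustration only, the nvinfer part of that chain can be sketched as a gst-launch pipeline like the one below. The config file names are placeholders for the configs shipped under configs/facial_tao/ in deepstream_tao_apps, the landmark tensors still have to be parsed from the output tensor meta in application code (as the C++ sample does), and the emotion stage is left out because the sample runs it through a custom library rather than nvinfer.

# Hedged sketch: CSI camera -> face detector (PGIE) -> facial landmarks (SGIE), both via nvinfer.
$ PGIE=config_infer_primary_facenet.txt    # placeholder config name
$ SGIE=faciallandmark_sgie_config.txt      # placeholder config name
$ gst-launch-1.0 \
    nvstreammux name=m batch-size=1 width=1280 height=720 live-source=1 ! \
    nvinfer config-file-path=$PGIE ! \
    nvinfer config-file-path=$SGIE ! \
    nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink \
    nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! m.sink_0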

So how do I go about that?

Currently there is no DeepStream Python API for emotion detection. I will sync with the internal team about that.
Also, Emotion Classification — TAO Toolkit 3.0 documentation actually runs emotion inference with Python. We’ll think about how you can leverage it.

Okay, thank you!

Actually, deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com) can run inference against a video file.

An example command:

$ ./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt file:///the_path_to/test.mp4 ./emotion
