Use EmotionNet TAO model in Deepstream pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.4.1
• NVIDIA GPU Driver Version (valid for GPU only)
515.48.07
• Issue Type (questions, new requirements, bugs)
Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Hello all,

I’m trying to use the EmotionNet TAO model in a DeepStream pipeline. I plan to use the nvinfer plugin for inference, but in order to use the model, which comes in .etlt format, I need to provide the input and output tensor names and dimensions in the plugin's configuration. Can you please provide the names and dimensions of the I/O tensors? I managed to find the dimensions, but I could not find the names of the tensors.

Thank you

Not sure if you are aware of this sample: deepstream_tao_apps/apps/tao_others/deepstream-emotion-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. With it, you can use this model without knowing the input/output tensor names.

Thanks for your response.

Yes, I do know about the sample app you linked to. However, that example wraps the model inside a custom lib implementation, and in my current setup, building the .so in the environment where I plan to run the pipeline is a bit tricky. I find it easier and more convenient to use the nvinfer plugin; in addition, all the other models in my pipeline currently use nvinfer, so the entire app would be easier to understand for anyone else looking it over in the future.

Also, I failed to find the I/O tensor names when I first looked at the example you mentioned, but this time I think I found them inside the custom lib implementation file:

  cvcore::ModelInferenceParams eMotionInferenceParams =
  {
    "emotions_fp16_b32.engine",          /**< Path to the engine */
    {"input_landmarks:0"},               /**< Input layer name */
    {"softmax/Softmax:0"},               /**< Output layer name */
  };

I assume these are the ones I was looking for. I’m going to try to use the model with the nvinfer plugin, and if that fails for whatever reason, I’ll fall back to the custom lib method.
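For reference, this is roughly the nvinfer config section I have in mind. It is just a sketch: the .etlt file name, model key, input dimensions, and batch size are placeholders I still need to confirm against the NGC model card; only the two tensor names come from the snippet above.

  [property]
  gpu-id=0
  # Placeholders: the actual .etlt file and load key come from the NGC model card
  tlt-encoded-model=emotions.etlt
  tlt-model-key=nvidia_tlt
  # Tensor names taken from the custom lib snippet above
  uff-input-blob-name=input_landmarks:0
  output-blob-names=softmax/Softmax:0
  # Input dims (C;H;W) and batch size are placeholders until I confirm them
  infer-dims=1;136;1
  batch-size=32
  # FP16 inference, "other" network type; attach raw output tensors and parse them downstream
  network-mode=2
  network-type=100
  output-tensor-meta=1

Since the input tensor is landmarks rather than a frame, I expect I will still need some custom handling to feed it, but that is a separate problem.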

Also, as a suggestion, I think it would be helpful if the dimensions and I/O tensor names were specified in the overview of each available model on NGC, unless there is a reason why they are not currently listed.

Thanks for the help

@semadu
Yes, you can also refer to https://github.com/NVIDIA-AI-IOT/tao_toolkit_recipes/blob/main/tao_forum_faq/FAQ.md#emotionnet.
The usage is similar to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/build_triton_engine.sh.
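For illustration, a tao-converter invocation for this model would look roughly like this (a sketch only; the key, input dimensions, and max batch size are placeholders, please take the real values from the FAQ and the NGC model card):

  # Convert the EmotionNet .etlt into a TensorRT engine with tao-converter.
  # -k: model load key (placeholder), -d: input dims (placeholder), -o: output node name,
  # -m: max batch size, -e: output engine path
  $ ./tao-converter \
      -k nvidia_tlt \
      -t fp16 \
      -d 1,136,1 \
      -o softmax/Softmax:0 \
      -m 32 \
      -e emotions_fp16_b32.engine \
      emotions.etlt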

For any TensorRT engine, you can use polygraphy to inspect the input/output tensors:
$ python -m pip install colored
$ python -m pip install polygraphy --index-url https://pypi.ngc.nvidia.com
$ polygraphy inspect model your.engine

Changing to TAO forum.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.