Emotion classification in Python

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 4070
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.216.01
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Is there any Python application, as a reference, that performs emotion classification using the model from deepstream_tao_apps?
In addition, the C application uses the “cvcore_libs”. Is it possible to use this library from Python?

No.

The cvcore_libs are used by the customized nvdsvideotemplate plugin library; they are not used directly by the app. You can write the app with the Python pyds APIs or the Service Maker Python APIs.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_service_maker_python.html
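For orientation, a minimal pyds skeleton along the lines of the samples linked above might look like the following. This is only a sketch: the pipeline string, file paths and config file names are placeholders, and the SGIE config must have output-tensor-meta enabled for the probe to see tensor output.

# Minimal sketch, not a complete app: element names, file paths and config
# files below are placeholders; adapt them to your own pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib
import pyds

def sgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Batch metadata attached by DeepStream; iterate frames/objects here.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Object meta and SGIE tensor meta can be read from frame_meta here.
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=/path/to/sample.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvinfer name=sgie config-file-path=sgie_config.txt ! fakesink"
)
# output-tensor-meta=1 must be set in the SGIE config for tensor output meta.
sgie = pipeline.get_by_name("sgie")
sgie.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, None)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)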

@Fiona.Chen, thank you for the reply.
In the application that I’m developing, I have already implemented the detection and the landmarks. In a probe, I can access the landmark data, which has a shape of 68x2. But, according to the documentation, the input for the emotion model is 1x136x1. Do you have any tips on how to reshape the output of the landmarks SGIE to match the input of the emotion model? Can I do it using pyds or the Python bindings?

Here is the code snippet for accessing the landmarks:

import ctypes
import numpy as np
import pyds

while l_user_meta:
    try:
        user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
    except StopIteration:
        break
    if user_meta and user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        try:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        except StopIteration:
            break
        frame_outputs = []
        layer_1 = pyds.get_nvds_LayerInfo(tensor_meta, 0)
        layer_2 = pyds.get_nvds_LayerInfo(tensor_meta, 1)
        layer_3 = pyds.get_nvds_LayerInfo(tensor_meta, 2)
        # Convert the NvDsInferLayerInfo buffer to a numpy array
        ptr = ctypes.cast(pyds.get_ptr(layer_2.buffer), ctypes.POINTER(ctypes.c_float))
        layer_2_data = np.ctypeslib.as_array(ptr, shape=(80, 2))
    # Advance to the next user meta entry
    try:
        l_user_meta = l_user_meta.next
    except StopIteration:
        break
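For illustration, the reshape step asked about above could be done on the NumPy side once the landmarks are in an array. This is only a sketch: the 68x2 and 1x136x1 shapes are taken from the question (the snippet above extracts 80x2), so the sizes must match whatever the landmark model actually outputs.

import numpy as np

# Sketch only: "landmarks" stands in for the array extracted from the SGIE
# tensor meta above; here it is filled with placeholder zeros.
landmarks = np.zeros((68, 2), dtype=np.float32)

# Flatten the (68, 2) landmark pairs into 136 values and add the extra
# dimensions expected by the emotion model input (1 x 136 x 1).
emotion_input = landmarks.reshape(1, 136, 1)

# Note: whether the x/y interleaving produced by a C-order reshape matches
# the ordering the emotion model was trained on should be verified separately.
print(emotion_input.shape)  # (1, 136, 1)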

We only provide the cvcore interfaces. You may need to develop the Python binding yourself.
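As a rough sketch of what a hand-rolled binding could look like with ctypes: the library name and the function cvcore_example_process below are hypothetical placeholders, not the real cvcore API, and only illustrate the general pattern.

import ctypes

# Hypothetical placeholders: neither this .so name nor cvcore_example_process
# exist in the real cvcore_libs; they only show the ctypes binding pattern.
lib = ctypes.CDLL("libcvcore_example.so")

lib.cvcore_example_process.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_int]
lib.cvcore_example_process.restype = ctypes.c_int

data = (ctypes.c_float * 4)(0.1, 0.2, 0.3, 0.4)
status = lib.cvcore_example_process(data, len(data))
print("status:", status)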