Thank you, I’ve already seen the NVIDIA-AI-IOT repo, but I wanted to use the deepstream-emotion-app with a USB camera and couldn’t get it working, so I tried a Python script that uses DeepStream with a USB camera and modified it to use an .etlt model that I had already trained with the TAO Toolkit.
Currently I don’t know why, but when I launch the deepstream-emotion-app from NVIDIA-AI-IOT, this error appears:
./deepstream-emotion-app: error while loading shared libraries: libnvcv_faciallandmarks.so: cannot open shared object file: No such file or directory
but I can’t figure out why…
I’m new to TAO / DeepStream and AI in general, so I’ve been struggling since the beginning of my experiments.
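In case it helps anyone with the same error: as far as I understand, it means the dynamic loader can’t find one of the cvcore libraries at runtime. I tried checking that with something like this (the cvcore_libs path below is a guess based on my install; use whatever directory the find command actually prints, then relaunch the app):

# Look for the missing library anywhere on the system
find / -name "libnvcv_faciallandmarks.so" 2>/dev/null

# Assumed location; adjust to the directory found above
export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib/cvcore_libs:$LD_LIBRARY_PATH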
Okay, thank you. Everything should be set up correctly now. I’ve modified some files to fit my component and model, but I still get an error that, I guess, comes from the files I modified.
So, I use this command to start my app:
deepstream-app -c deepstream_usbcam_emotion.txt
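For context, the USB camera source in deepstream_usbcam_emotion.txt looks roughly like this (a sketch of my file; the resolution and framerate are just the values I picked for my webcam):

[source0]
enable=1
# type 1 = CameraV4L2 (USB webcam)
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0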
When the app builds the TensorRT engine, I get this error:
WARNING: [TRT]: onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 4: [network.cpp::validate::3004] Error Code 4: Internal Error (input_landmarks:0: for dimension number 1 in profile 0 does not match network definition (got min=3, opt=3, max=3), expected min=opt=max=1).)
ERROR: Build engine failed from config file
Segmentation fault (core dumped)
I also get one other warning, but I don’t think it’s relevant right now.
I looked at other forum topics about this error and saw that I have to change some dimensions to match the network input, but I can’t find those values.
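Reading the error again, “for dimension number 1 … (got min=3 …), expected min=opt=max=1” seems to mean that the shape nvinfer puts in its optimization profile has a 3 where the exported network has a 1, so the input shape declared in the nvinfer config has to match the model exactly. In my config that would be the infer-dims key, something like this sketch (the 1;136;1 shape is the EmotionNet landmark input, which only became clear to me later in this topic):

[property]
tlt-encoded-model=emotionnet_onnx.etlt
tlt-model-key=<KEY used for training>
# must match the model input exactly: 1 x 136 x 1, not 3 channels
infer-dims=1;136;1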
The emotionnet_onnx.etlt was generated with the Jupyter notebook provided by NVIDIA, using this command:
!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_final

# Remove a pre-existing copy of the etlt, if any.
import os
output_file = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'],
                           "experiment_dir_final/emotionnet_onnx.etlt")
if os.path.exists(output_file):
    os.system("rm {}".format(output_file))

!tao emotionnet export -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt \
                       -o $USER_EXPERIMENT_DIR/experiment_dir_final/emotionnet_onnx.etlt \
                       -t tfonnx \
                       -k $KEY
I’m sorry, I’m new to AI and, as I said before, I’m struggling with it; I don’t know how I’m supposed to know those values or where to find them…
Yes, my model is based on emotion classification… Should I convert it to another format with the TAO Toolkit?
If that’s not possible, how am I supposed to get a model I can use with DeepStream?
I’ve seen the sample, but your last answer made me doubt it, sorry.
I’m not really fluent in English, but the introduction to the nvinfer plugin says:
The plugin also supports the interface for custom functions for parsing outputs of object detectors and initialization of non-image input layers in cases where there is more than one input layer
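If I read that right, those custom functions get plugged in from the nvinfer config file with lines like the sketch below (the library and function names are hypothetical placeholders, not an existing EmotionNet parser). But note the quote says the non-image input initialization only applies when there is more than one input layer, and the landmarks are EmotionNet’s only input.

[property]
# hypothetical custom library implementing the parsing functions
custom-lib-path=/path/to/libnvds_custom_emotion.so
# hypothetical classifier-output parser exported by that library
parse-classifier-func-name=NvDsInferParseCustomEmotion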
Anyway, I wasn’t even aware of the problem of EmotionNet’s input being facial landmarks instead of images, so thank you!
The first and only models I used before this one were YOLOv5 and YOLOv7 with PyTorch; they seem much more beginner friendly.
Oh, I finally understand: as you said, it takes human facial landmarks (1 x 136 x 1) computed from 68 points of (x, y) coordinates.
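Just to be sure I got the shape right: the 68 (x, y) points flatten into a single 136-value vector, something like this minimal sketch (random numbers stand in for real detector output):

import numpy as np

# 68 facial landmark points, each an (x, y) pair; random values
# here only to illustrate the shape, not real landmark output
points = np.random.rand(68, 2).astype(np.float32)

# flatten to the 1 x 136 x 1 input tensor that EmotionNet expects
landmarks = points.reshape(1, 136, 1)
print(landmarks.shape)  # (1, 136, 1)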
So, as you say, even with the right dimensions it still doesn’t work, because the input isn’t an image:
WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'output_cov/Sigmoid' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:12.400054054 14538 0xaaaaed40b800 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'output_bbox/BiasAdd' in engine
0:00:12.449271994 14538 0xaaaaed40b800 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:971> [UID = 1]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:12.450229128 14538 0xaaaaed40b800 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:12.450330317 14538 0xaaaaed40b800 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/emtionnet_pretrained_etlt_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
I don’t know if YOLO would work on this dataset (given the problem of the input not being an image, probably not), but I feel like the YOLO documentation and tooling are less sprawling than TAO and the DeepStream apps; everything seems better centralized, with fewer special cases.
I’m sorry, I know this isn’t the initial subject of this topic, but I just looked at deepstream_tao_apps from NVIDIA-AI-IOT. I had already tried to use it, but I want to use my camera instead of the URI of a file; my camera device is located at /dev/video0, and I haven’t found how to use it.
This is exactly the initial reason for this topic: I used this sample, and it correctly uses the webcam but recognizes cars and people, whereas I wanted it to recognize emotions. That’s why I tried to add the custom .etlt model.
Finally, I only had to use this command:
./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt v4l2:///dev/video0 ./landmarks
The ‘3’ builds a pipeline with a display sink, and ‘v4l2://’ replaces the file URI for live camera streaming.