Hello, I am going to build a face recognition system.
As I understand it, I need to use the deepstream-infer-tensor-meta-test app.
For the PGIE I will use YOLOv5 (because I need to detect more than just faces).
For the SGIE, from my research, the best options are InsightFace (ArcFace) or FaceNet (triplet loss).
Then I get the tensor meta of the face (as I understand it, the tensor meta contains the face embedding, am I right?) and compute the cosine distance against every face in the database.
I can't find any DeepStream implementation examples. Please help me find one.
Also, where can I get pretrained models for recognition, i.e. FaceNet or InsightFace? (I know that is not a DeepStream question.)
Thank you very much. Am I right that face landmarks are not embeddings for recognition, and that I should use the FaceNet output tensor as the embedding for the distance calculation?
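Once the SGIE embedding is available, the database comparison itself is simple. A minimal sketch of the cosine-distance lookup (plain NumPy; the `best_match` helper name and the 0.5 threshold are illustrative assumptions, not from any DeepStream sample):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query: np.ndarray, database: dict, threshold: float = 0.5):
    """Return (name, score) of the closest enrolled face,
    or (None, score) when nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, emb in database.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score
```

ArcFace-style embeddings are usually L2-normalised first, in which case the dot product alone already equals the cosine similarity.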
WARNING: Overriding infer-config batch-size (1) with number of sources (3)
Failed to load config file: No such file or directory
** ERROR: <gst_nvinfer_parse_config_file:1303>: failed
Failed to load config file: No such file or directory
Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/custom_yolo_face/dstest2_pgie_config.txt
please check the config path.
I used another config file, and this time the path was correct. As I understood it, the first number in uff-input-dims is the batch size, not the number of dimensions. But what does uff-input-dims actually mean? From the documentation, for YOLO I should use 3;640;640;0 because I have an RGB 640x640 image.
Well, I can't say exactly what I changed, but now I get this error:
Now playing...
0:00:00.317954582 23234 0x55b4c9c260 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:04.648032550 23234 0x55b4c9c260 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/cv/Desktop/Mask-Detection/Deepstream-app/model.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46
0:00:04.648579934 23234 0x55b4c9c260 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/cv/Desktop/Mask-Detection/Deepstream-app/model.etlt_b1_gpu0_int8.engine
0:00:04.660167696 23234 0x55b4c9c260 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/dstest2_sgie1_config.txt sucessfully
Deserialize yoloLayer plugin: yolo_93
Deserialize yoloLayer plugin: yolo_96
Deserialize yoloLayer plugin: yolo_99
0:00:04.961636709 23234 0x55b4c9c260 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 40001]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/model_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT yolo_93 24x80x80
2 OUTPUT kFLOAT yolo_96 24x40x40
3 OUTPUT kFLOAT yolo_99 24x20x20
0:00:04.961864207 23234 0x55b4c9c260 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1833> [UID = 40001]: Backend has maxBatchSize 1 whereas 3 has been requested
0:00:04.961913105 23234 0x55b4c9c260 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 40001]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/model_b1_gpu0_fp16.engine failed to match config params, trying rebuild
0:00:05.021022300 23234 0x55b4c9c260 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 40001]: Trying to create engine from model files
YOLO config file or weights file is not specified
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.022360214 23234 0x55b4c9c260 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 40001]: build engine file failed
0:00:05.022434521 23234 0x55b4c9c260 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 40001]: build backend context failed
0:00:05.022477883 23234 0x55b4c9c260 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 40001]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 40001]: generate backend failed, check config file settings
0:00:05.023088853 23234 0x55b4c9c260 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:05.023144119 23234 0x55b4c9c260 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:dstensor-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
As I understand it, this is a YOLOv5 error. I used the same config file in deepstream-python-apps and it worked fine. What does uff-input-dims mean? As I saw in the documentation, the format is channel;height;width;input-order (0: NCHW, 1: NHWC).
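That matches the nvinfer documentation: uff-input-dims applies only to UFF models and takes channel;height;width;input-order. The "YOLO config file or weights file is not specified" error in the log above, though, comes from the custom YOLO engine builder, which reads its own keys. A hedged config sketch in the style of the objectDetector_Yolo-type samples (all file paths below are placeholders, not from this thread):

```ini
[property]
# UFF models only -- channel;height;width;input-order (0 = NCHW, 1 = NHWC).
# For a 640x640 RGB input this would be:
# uff-input-dims=3;640;640;0

# The custom YOLO engine builder instead expects its own cfg/weights keys
# (key names as in the objectDetector_Yolo-style samples; placeholder paths):
custom-network-config=yolov5.cfg
model-file=yolov5.wts
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
batch-size=3
```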
Sheesh, after hours of debugging I finally got the pipeline running. But I have a question. The FaceNet model used in the TAO app returns two tensors: a 4x26x46 bbox-coordinate tensor and a 1x26x46 class-confidence tensor. Which tensor do I need to use for recognition?
I know that I need to compare the embeddings from my dataset with the embeddings I get from the model; I can use cosine similarity, for example. But how can I do this with DeepStream? Sure, I can do it every frame in the sgie_pad_buffer_probe function, but is there an easier way?
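Doing it in the SGIE pad probe is the usual pattern. Below is a sketch of the buffer-to-vector step; the pyds calls in the docstring follow the deepstream_python_apps tensor-meta samples but are untested assumptions here, while the function itself is plain ctypes/NumPy:

```python
import ctypes
import numpy as np

def layer_to_embedding(layer_buffer_ptr, num_floats: int) -> np.ndarray:
    """Copy a raw float32 output-layer buffer into a NumPy embedding vector.

    In an SGIE pad probe the pointer would come from pyds, roughly:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
        ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                          ctypes.POINTER(ctypes.c_float))
    (names as in deepstream_python_apps; verify against your version).
    """
    arr = np.ctypeslib.as_array(layer_buffer_ptr, shape=(num_floats,))
    emb = arr.astype(np.float32).copy()
    # L2-normalise so a plain dot product equals the cosine similarity.
    return emb / np.linalg.norm(emb)
```

With the embedding L2-normalised, a dot product against each enrolled vector gives the cosine similarity directly.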
Hi, as I understand it, landmarks show me the positions of facial features: the chin, eyes, and so on. But I don't need the positions of parts of a person's face; I need to know who is in the image. How can I use landmarks for recognition? Do I need to use them as embeddings?
So, can you please explain how landmarks can help me identify a person? I googled and saw that landmarks give the positions of the eyes, eyebrows, etc. Maybe you can point me to some GitHub repos with examples?
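For context: landmarks are not embeddings. In ArcFace-style pipelines they are typically used to align the face crop to a canonical pose before the recognition network computes the embedding, which improves embedding quality. A sketch of that alignment step (the 5-point 112x112 template coordinates are the commonly quoted ArcFace values; treat them as an assumption):

```python
import numpy as np

# Reference 5-point template (left eye, right eye, nose tip, mouth corners)
# commonly used for 112x112 ArcFace crops -- treat these values as an assumption.
ARCFACE_5PT = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float64)

def umeyama(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src landmark points onto dst. Returns a 2x3 affine matrix
    usable with cv2.warpAffine."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection in the recovered rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    M = np.empty((2, 3))
    M[:, :2] = scale * R
    M[:, 2] = t
    return M
```

The returned 2x3 matrix can be passed to cv2.warpAffine to produce the aligned 112x112 crop that is then fed to the recognition network.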