./deepstream-segmentation-app config_infer_primary_yoloV8_face.txt test_img.jpg
Now playing: test_img.jpg,
0:00:03.582079357 124445 0x5623826a6d00 INFO nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 80x80x80
2 OUTPUT kFLOAT 389 80x40x40
3 OUTPUT kFLOAT 397 80x20x20
0:00:03.672320406 124445 0x5623826a6d00 INFO nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:03.674044403 124445 0x5623826a6d00 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:DeepStream-Yolo-Face/config_infer_primary_yoloV8_face.txt sucessfully
Running...
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)80, height=(int)80
nvstreammux: Successfully handled EOS for source_id=0
Got EOS from stream 0
End of stream
Returned, stopping playback
Deleting pipeline
I did not see anything except a black screen.
I tried changing the width and height of nvsegvisual to 80x80, but still nothing.
Can you try to reproduce this issue or guide me on where I'm going wrong? yolov8-face.zip (54.2 MB)
No, I don't want an effect like that.
I'm building a face detection-recognition pipeline and trying various models. It works fine with models that output only bounding boxes, but I want to get the facial landmarks too.
I tried modifying the custom parser for the RetinaFace model and got the face bboxes and landmarks too. I did that by modifying NvDsInferObjectDetectionInfo, but the accuracy was very low, either because of the PyTorch → ONNX → TensorRT conversion or because of the model itself; I'm not sure which.
So I'm just trying various models that can give me bboxes and landmarks. I don't wish to display a seg-mask. What the custom parser in that repo does is use NvDsInferInstanceMaskInfo to populate the landmarks; I just want to access those landmarks.
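For reference, here is a minimal sketch of what a parser like that can look like, assuming the model's decoded output gives, per detection, a bbox plus 5 landmarks stored as (x, y, confidence) float triplets. The function name NvDsInferParseCustomYoloFace, the FaceDet struct, and the landmark layout are assumptions for illustration, not the repo's exact code, and the model-specific decoding of the output layers is omitted:

```cpp
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

// Hypothetical decoded detection: bbox plus 5 facial landmarks packed as
// (x, y, confidence) triplets. This layout is an assumption for illustration.
struct FaceDet {
  float x, y, w, h, conf;
  float landmarks[15];  // 5 * (x, y, conf)
};

extern "C" bool NvDsInferParseCustomYoloFace(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferInstanceMaskInfo> &objectList)
{
  std::vector<FaceDet> dets;  // filled by model-specific decoding (omitted)

  for (const FaceDet &d : dets) {
    NvDsInferInstanceMaskInfo obj{};
    obj.classId = 0;
    obj.detectionConfidence = d.conf;
    obj.left = d.x; obj.top = d.y; obj.width = d.w; obj.height = d.h;

    // Repurpose the instance-mask buffer to carry the landmark triplets;
    // nvinfer copies this buffer into the object's mask_params downstream
    // and releases it afterwards.
    obj.mask = new float[15];
    obj.mask_size = sizeof(d.landmarks);
    std::memcpy(obj.mask, d.landmarks, sizeof(d.landmarks));
    obj.mask_width  = networkInfo.width;   // kept only so consumers can
    obj.mask_height = networkInfo.height;  // rescale the landmark coords

    objectList.push_back(obj);
  }
  return true;
}

// Let nvinfer validate the parse-function prototype at load time.
CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloFace);
```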
That's what I'm saying: when I use the DeepStream-Yolo-Face model, I get nothing but a black screen. I have attached my config files and models. I want to understand where I'm going wrong and how to use this model. I have tried other models, but they don't meet my accuracy needs.
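If the goal is only to read the landmarks rather than to render a segmentation mask, one option is to skip nvsegvisual entirely and attach a pad probe downstream of nvinfer; the landmark buffer then sits in each object's mask_params. Below is a minimal sketch under the same assumed layout as above (5 landmarks as (x, y, conf) float triplets); the probe name landmarks_probe and that layout are assumptions, not part of the repo:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

// Pad probe (e.g. on the nvinfer src pad): read the landmarks that the
// custom parser packed into each object's instance-mask buffer.
static GstPadProbeReturn
landmarks_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      float *lm = obj_meta->mask_params.data;   // the repurposed "mask"
      if (!lm)
        continue;
      guint n_floats = obj_meta->mask_params.size / sizeof (float);

      // Assumed layout: 5 landmarks * (x, y, confidence).
      for (guint i = 0; i + 2 < n_floats; i += 3) {
        g_print ("landmark %u: x=%.1f y=%.1f conf=%.2f\n",
            i / 3, lm[i], lm[i + 1], lm[i + 2]);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

The probe would be attached with gst_pad_add_probe() using GST_PAD_PROBE_TYPE_BUFFER on the nvinfer source pad; note the coordinates may still be in the network's input scale, depending on how the parser filled them.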
I'm closing this topic since there has been no update from you for a while, assuming this issue was resolved.
If you still need support, please open a new topic. Thanks