Custom ArcFace Plugin

Please provide complete information as applicable to your setup.

**• Hardware Platform:** GPU (RTX 4080)
**• DeepStream Version:** 7.0
**• TensorRT Version:**
**• NVIDIA GPU Driver Version:** 535.183.01
**• Issue Type:** "Failed to parse bboxes" and "Could not find output coverage layer for parsing objects"
**• How to reproduce the issue?** I am trying to build a custom parser plugin for an ONNX model.
**• My code:**

#include "nvdsinfer_custom_impl.h"
#include "nvdsmeta.h"
#include <cmath>
#include <cstring>
#include <iostream>
#include <vector>

// L2-normalize `input` into `output`; writes zeros if the input norm is zero.
static void normalizeVector(const float* input, float* output, int length) {
    float norm = 0.0f;
    for (int i = 0; i < length; ++i) {
        norm += input[i] * input[i];
    }
    norm = std::sqrt(norm);
    if (norm > 0) {
        for (int i = 0; i < length; ++i) {
            output[i] = input[i] / norm;
        }
    } else {
        std::memset(output, 0, length * sizeof(float));
    }
}

extern "C" {

bool NvDsInferParseCustomFunc_ArcFace(std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
                                      NvDsInferNetworkInfo const& networkInfo,
                                      NvDsInferParseDetectionParams const& detectionParams,
                                      std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    // Locate the output layer named "embeddings".
    const NvDsInferLayerInfo* embeddingLayer = nullptr;
    for (const auto& layer : outputLayersInfo) {
        if (strcmp(layer.layerName, "embeddings") == 0) {
            embeddingLayer = &layer;
            break;
        }
    }

    if (!embeddingLayer) {
        std::cerr << "Error: Layer 'embeddings' not found" << std::endl;
        return false;
    }

    if (embeddingLayer->inferDims.numDims != 2) {
        std::cerr << "Error: Incorrect dimensions for embeddings" << std::endl;
        return false;
    }

    const int numEmbeddings = embeddingLayer->inferDims.d[0];
    const int embeddingDim = embeddingLayer->inferDims.d[1];
    const float* embeddings = static_cast<const float*>(embeddingLayer->buffer);

    for (int i = 0; i < numEmbeddings; ++i) {
        NvDsInferObjectDetectionInfo object;
        object.classId = -1;
        object.detectionConfidence = 1.0f;
        object.left = object.top = object.width = object.height = 0;

        float* normalizedEmbedding = new float[embeddingDim];
        normalizeVector(&embeddings[i * embeddingDim], normalizedEmbedding, embeddingDim);

        object.detectionConfidence = normalizedEmbedding[0];  // Use the first value as the confidence
        delete[] normalizedEmbedding;  // Free the memory

        objectList.push_back(object);
    }

    return true;
}

// Wrapper that matches the expected function-pointer definition
NvDsInferParseCustomFunc nvds_infer_parse_custom_function = NvDsInferParseCustomFunc_ArcFace;

}

**• Config file:**

[property]
onnx-file=w600k_r50.onnx
model-engine-file=w600k_r50.onnx_b1_gpu0_fp32.engine
gpu-id=0
net-scale-factor=1.0
model-color-format=0
batch-size=1
network-mode=1
num-detected-classes=1 # Kept at 1, since we are working with embeddings of a single class
interval=1
gie-unique-id=4
process-mode=2
operate-on-gie-id=3
operate-on-class-ids=0
network-type=0 # For metadata analysis this stays 0 (Detector), since this is not a classifier
maintain-aspect-ratio=1
symmetric-padding=1
custom-lib-path=deepstream_plugin_facenet/libnvdsinfer_custom_impl_arcface.so
parse-classifier-func-name=NvDsInferParseArcFace # Changed to the function we defined in the code
output-blob-names=683 # Name of the embeddings output layer; it must match the output tensor name of your ONNX model

[class-attrs-all]
pre-cluster-threshold=0.5

**• Output:**

0:00:08.654036859 1166 0x607a1a9a9800 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1244> [UID = 4]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:16.667136866 1166 0x607a1a9a9800 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 4]: deserialized trt engine from :/root/apps/myapp/web/worker/gpu/w600k_r50.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input.1 3x112x112
1 OUTPUT kFLOAT 683 512

ERROR: [TRT]: 3: Cannot find binding of given name: 683 # Name of the embeddings output layer; it must match the output tensor name of your ONNX model
0:00:16.929288589 1166 0x607a1a9a9800 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 4]: Could not find output layer '683 # Name of the embeddings output layer; it must match the output tensor name of your ONNX model' in engine
0:00:16.929327184 1166 0x607a1a9a9800 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 4]: Use deserialized engine model: /root/apps/myapp/web/worker/gpu/w600k_r50.onnx_b1_gpu0_fp32.engine
0:00:16.930140777 1166 0x607a1a9a9800 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 4]: Load new model:/root/apps/myapp/web/worker/gpu/1facenet_config.txt sucessfully

Error when running the pipeline:

0:01:34.502434601 1166 0x6079f93da230 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:60> [UID = 4]: Could not find output coverage layer for parsing objects
0:01:34.502496492 1166 0x6079f93da230 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:736> [UID = 4]: Failed to parse bboxes
Segmentation fault (core dumped)

This error means that your custom parser function is not taking effect. Please refer to the sample code below.

/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp

This error is returned by DetectPostprocessor::parseBoundingBox in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp. The nvinfer plugin is open source, so you can debug it yourself.
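For reference, the sample parser exports its function with C linkage and validates the prototype with a macro from nvdsinfer_custom_impl.h; nvinfer then looks the symbol up by the name given in the config. A minimal sketch along those lines (the function name is taken from your code above):

```cpp
// Sketch based on nvdsinfer_custombboxparser.cpp: the parser must use this
// exact prototype and C linkage so nvinfer can dlsym() it by name.
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomFunc_ArcFace(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList);

// Compile-time prototype check provided by nvdsinfer_custom_impl.h.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomFunc_ArcFace);
```

Note that for network-type=0 (detector) the config key is parse-bbox-func-name, not parse-classifier-func-name, and its value must match the exported symbol exactly. Also, as your log shows, the inline "# …" comment on output-blob-names was read as part of the layer name, so comments need their own lines.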

Thanks very much. I have looked at the file, but I am not trying to output a bbox; I am trying to put an embedding vector into the metadata. Do you have an example for this task?

There is no such sample code. You can configure output-tensor-meta=1, add a probe function on the src pad of the pgie, and attach user-defined meta in that probe function; see the sketch below.
This approach does not require writing a custom parser plugin.
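A rough sketch of such a probe (illustrative, not from a shipped sample): it assumes the ArcFace nvinfer instance (the SGIE here, given process-mode=2) sets output-tensor-meta=1 and network-type=100 (Other) so the raw tensor is attached per object and no bbox parsing is attempted. Field accesses follow deepstream_infer_tensor_meta_test.cpp; names like sgie_src_pad_probe are invented.

```cpp
// Illustrative sketch: read the raw tensor that nvinfer attaches to each
// object when output-tensor-meta=1 is set in the SGIE config.
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"  // NvDsInferTensorMeta, NVDSINFER_TENSOR_OUTPUT_META

static GstPadProbeReturn
sgie_src_pad_probe(GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    GstBuffer* buf = (GstBuffer*)info->data;
    NvDsBatchMeta* batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;

    for (NvDsMetaList* l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
        NvDsFrameMeta* frame_meta = (NvDsFrameMeta*)l_frame->data;
        for (NvDsMetaList* l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
            NvDsObjectMeta* obj_meta = (NvDsObjectMeta*)l_obj->data;
            // With process-mode=2 the tensor meta is attached per object.
            for (NvDsMetaList* l_user = obj_meta->obj_user_meta_list; l_user; l_user = l_user->next) {
                NvDsUserMeta* user_meta = (NvDsUserMeta*)l_user->data;
                if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
                    continue;
                NvDsInferTensorMeta* tmeta = (NvDsInferTensorMeta*)user_meta->user_meta_data;
                for (unsigned int i = 0; i < tmeta->num_output_layers; ++i) {
                    NvDsInferLayerInfo* layer = &tmeta->output_layers_info[i];
                    const float* embedding = (const float*)tmeta->out_buf_ptrs_host[i];
                    unsigned int dim = layer->inferDims.d[0];  // 512 for w600k_r50
                    // ... L2-normalize here and attach as user meta (see below) ...
                    g_print("obj %lu: embedding dim %u, first value %f\n",
                            (unsigned long)obj_meta->object_id, dim, embedding[0]);
                }
            }
        }
    }
    return GST_PAD_PROBE_OK;
}
```

The probe would be attached with gst_pad_add_probe(sgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, sgie_src_pad_probe, NULL, NULL).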

You may want to combine the following two examples; a rough sketch of the combined approach follows the paths below.

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-gst-metadata-test/deepstream_gst_metadata.c

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp
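Combining them, the "add user-defined meta" half could look something like this sketch, using the standard NvDsUserMeta acquire/copy/release pattern from nvdsmeta.h. The meta descriptor string "NVIDIA.ARCFACE.EMBEDDING" and all helper names are invented for illustration.

```cpp
// Illustrative sketch: attach a copy of the embedding to the object as
// custom user meta, called from the probe above for each object.
#include <glib.h>
#include <string.h>
#include "nvdsmeta.h"

#define EMBEDDING_DIM 512

static gpointer copy_embedding_meta(gpointer data, gpointer user_data) {
    NvDsUserMeta* user_meta = (NvDsUserMeta*)data;
    gfloat* dst = (gfloat*)g_malloc(EMBEDDING_DIM * sizeof(gfloat));
    memcpy(dst, user_meta->user_meta_data, EMBEDDING_DIM * sizeof(gfloat));
    return dst;
}

static void release_embedding_meta(gpointer data, gpointer user_data) {
    NvDsUserMeta* user_meta = (NvDsUserMeta*)data;
    g_free(user_meta->user_meta_data);
    user_meta->user_meta_data = NULL;
}

static void attach_embedding(NvDsBatchMeta* batch_meta, NvDsObjectMeta* obj_meta,
                             const float* embedding) {
    NvDsUserMeta* user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
    gfloat* buf = (gfloat*)g_malloc(EMBEDDING_DIM * sizeof(gfloat));
    memcpy(buf, embedding, EMBEDDING_DIM * sizeof(gfloat));

    user_meta->user_meta_data = buf;
    // Register a custom meta type by descriptor string (nvdsmeta.h API).
    user_meta->base_meta.meta_type =
        nvds_get_user_meta_type((gchar*)"NVIDIA.ARCFACE.EMBEDDING");
    user_meta->base_meta.copy_func = copy_embedding_meta;
    user_meta->base_meta.release_func = release_embedding_meta;

    nvds_add_user_meta_to_obj(obj_meta, user_meta);
}
```

Downstream elements can then recover the embedding by matching the same descriptor string when iterating obj_user_meta_list.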


There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.