Run yolov3 tensorRT engine with deepstream

I am trying to run a YOLOv3 TensorRT engine with DeepStream, but have run into a problem. I do not know how to extract the values I want from the feature map, because DeepStream flattens it into one dimension.
By the way, when I test this YOLOv3 engine with Python, the results are correct.
Here is my code. I am confused about the buffer values: how does the multidimensional feature map become one-dimensional? Is there some transformation?

Hi,

The feature map is flattened into one dimension in raster-scan order.
You can find an example in our YOLO parser:

/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp

static std::vector<NvDsInferParseObjectInfo>
decodeYoloV3Tensor(
    ...)
{
    std::vector<NvDsInferParseObjectInfo> binfo;
    for (uint y = 0; y < gridSize; ++y)
    {
        for (uint x = 0; x < gridSize; ++x)
        {
            ...

Thanks.


Yes, this code matches the YOLOv3 TRT engine converted from yolov3.weights, but when I switch from yolov3.weights to yolov3.onnx, it no longer matches. Could this be because ONNX uses NCHW?

I convert yolov3.weights to yolov3.onnx and then to yolov3.trt. When I test with deepstream-app, the output feature-map values differ from those produced by my Python script. I recorded them to txt files and have uploaded part of them.


As you can see, the DeepStream values are generally larger than the Python output, which contains normal values.
Both use the same TensorRT engine model, so I do not know what is wrong with how DeepStream loads the engine.

Hi,

How about the input?
Do you use the same input to get these results? The same color format?

Thanks.