Different confidence for the same image

Hi,

I have created a TensorFlow model for classification and converted it to UFF in order to add it as an SGIE in my DeepStream pipeline.

When I try the model in TensorFlow with the same image I always get the same confidence (as I expect); the problem comes when I test the model in DeepStream. For that I modified dsexample to load, save, and pass the same image through the model. This is my code:

// cv::imread with IMREAD_COLOR returns pixels in BGR order
tmp_mat = cv::imread("img_00000000.jpg", cv::IMREAD_COLOR);
// COLOR_RGB2BGR just swaps the R and B channels (same swap as BGR2RGB)
cv::cvtColor (tmp_mat, *dsexample->cvmat, cv::COLOR_RGB2BGR);

I obtain the following confidences for the same image:

Frame Number = 473 Number of objects = 1 
SGIE confidence: 0.345573
Frame Number = 474 Number of objects = 1 
SGIE confidence: 0.284995
Frame Number = 475 Number of objects = 1 
SGIE confidence: 0.539887
Frame Number = 476 Number of objects = 1 
SGIE confidence: 0.515513
Frame Number = 477 Number of objects = 1 
SGIE confidence: 0.600430

Why is this happening? How can I solve it? I have tried modifying the model and changing layers, but I always get the same problem.

Regards.

Hi,

A common cause is a difference in the input data.

Could you check whether the color format (BGR) is the same as in the TensorFlow pipeline?
Also, do you see the same image output on the display?

Thanks.

Hi,

I checked whether there was any error in the color components, but everything was OK there and I still have the error. Moreover, I understand that even if the color components had been wrong, since I am always loading and adding the same image in dsexample (“img_00000000.jpg”), the same confidence should always come out, shouldn't it?

One important thing I have observed is that when I run the app twice, I obtain identical confidences in each execution (but not identical confidences between frames, even with the same image). That makes me think there is an error loading the image or feeding the SGIE.

This is my code for dsexample.cpp and the .hpp, plus the image that I am loading:

https://drive.google.com/drive/folders/1exk-oiR5Zddlkz-qV-jd_-j6Erh0dElg?usp=sharing

And what do you mean by “do you see the same image output on the display”? What should I expect to see on the display: the video stream or the image that I load?

Regards.

Hi,

Thanks for your feedback.
Let me check your source first.

Thanks.

Hi,

Is there any other modification required for your change?
We have tested your source but hit the following error:

ERROR from dsexample0: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: gstdsexample.cpp(556): get_converted_mat (): /GstPipeline:pipeline/GstBin:dsexample_bin/GstDsExample:dsexample0
Quitting
ERROR from dsexample0: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: gstdsexample.cpp(556): get_converted_mat (): /GstPipeline:pipeline/GstBin:dsexample_bin/GstDsExample:dsexample0
ERROR from dsexample0: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: gstdsexample.cpp(556): get_converted_mat (): /GstPipeline:pipeline/GstBin:dsexample_bin/GstDsExample:dsexample0
ERROR from dsexample0: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: gstdsexample.cpp(556): get_converted_mat (): /GstPipeline:pipeline/GstBin:dsexample_bin/GstDsExample:dsexample0
App run failed

We already confirmed that the image is loaded correctly via cv::imread.
Could you share a complete reproducible source with us?

Thanks.

Hi,

I will attach the complete source in this comment.

Nevertheless, what I am trying to do is one of these two things:

Option 1:

  1. IP Camera
  2. Pgie
  3. Bbox and load my target
  4. Sgie (Person-ReID) with two inputs (I do not know how to add two inputs to an SGIE with nvinfer)
  5. Confidence
  6. Display

Option 2:

  1. Ip Camera
  2. Pgie
  3. Bbox and load my target
  4. Concatenate the bbox and the target in order to have one input (I've done this in dsexample)
  5. Sgie (Person-Reid)
  6. Confidence
  7. Display

The problems that I have are:

  • In Option 1, I don't know how to feed the SGIE with two inputs in order to use my siamese model.
  • In Option 2, as I understand from reading this forum, I cannot concatenate the two bboxes (the detected one and the target) in dsexample and then attach the new dual bbox to the buffer, because the SGIE only works with buffer metadata and crops the bbox from the original frame, so I cannot do any image processing on the bbox. This option is the problem of this post: I am trying to add a new image to the batch, and that is why I get different confidences, because the image I am adding is not doing anything.

If one of these options is possible, please let me know, because I will really work on it.

On the other hand, as I told you, I attached the main scripts of my DeepStream project.

Regards,

Pablo.

Hi,

Please note that the pgie and sgie are open-source components.
You can add your requirement directly to the nvdsinfer component.

/opt/nvidia/deepstream/deepstream-4.0/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp

Ex.

        /* Register the input layer (name, dims and input order). */
        if (!uffParser->registerInput(initParams.uffInputBlobName,
                    uffInputDims, uffInputOrder))
        {
            printError("Failed to register input blob: %s DimsCHW:(%d,%d,%d) "
                "Order: %s", initParams.uffInputBlobName, initParams.uffDimsCHW.c,
                initParams.uffDimsCHW.h, initParams.uffDimsCHW.w,
                (initParams.uffInputOrder == NvDsInferUffInputOrder_kNHWC ?
                 "HWC" : "CHW"));
            return NVDSINFER_CONFIG_FAILED;

        }
        /* Register outputs. */
        for (unsigned int i = 0; i < initParams.numOutputLayers; i++) {
            uffParser->registerOutput(initParams.outputLayerNames[i]);
        }
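For context, the stock nvinfer config file only exposes a single UFF input blob, which is why feeding a second input means touching the nvdsinfer source. A typical single-input fragment looks like this (the blob names and dims below are placeholders for your model):

```
[property]
uff-file=model.uff
uff-input-blob-name=input_1
uff-input-dims=3;224;224;0
output-blob-names=output_1
```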

Thanks.