Hello AastaLLL!
I work together with dimaretunskiy and I’m helping him make a custom C app that uses Yolo v3 tiny on the Jetson Nano.
First of all, thanks for the reply! Setting the correct model color mode did indeed help. Our model is BGR, but even then the results were drastically different from what we got from OpenCV. Our app seemed to perform much worse, even with the tracker in the pipeline.
Yesterday I found out that I’d missed the anchor/mask settings in the YOLO module.
We had them in the cfg file, but from what I understood, the custom YOLO parsing implementation overrides the settings from the cfg.
Specifically I mean this function here:
extern "C" bool NvDsInferParseCustomYoloV3Tiny(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    static const std::vector<float> kANCHORS = {
        10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319};
    static const std::vector<std::vector<int>> kMASKS = {
        {3, 4, 5},
        //{0, 1, 2}}; // as per output result, select {1,2,3}
        {1, 2, 3}};
    return NvDsInferParseYoloV3(
        outputLayersInfo, networkInfo, detectionParams, objectList,
        kANCHORS, kMASKS);
}
Our cfg uses mask values 3,4,5 and 0,1,2, so when I replaced the second mask {1, 2, 3} with the commented-out {0, 1, 2}, the results got much, much better.
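To keep the parser from silently diverging from the cfg again, one option we’re considering is reading the anchors and mask values straight from the cfg lines instead of hard-coding them. A minimal sketch (the parseList helper is ours, not part of the DeepStream sources):

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Parse the comma-separated numbers after "key = " in a darknet cfg line,
    // e.g. "mask = 0,1,2" or the anchors line of a [yolo] section.
    static std::vector<float> parseList(const std::string& line)
    {
        std::vector<float> vals;
        std::stringstream ss(line.substr(line.find('=') + 1));
        std::string tok;
        while (std::getline(ss, tok, ','))
            vals.push_back(std::stof(tok));
        return vals;
    }

    int main()
    {
        // Example lines as they appear in a yolov3-tiny cfg.
        std::string anchorsLine =
            "anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319";
        std::string maskLine = "mask = 0,1,2";

        std::vector<float> anchors = parseList(anchorsLine);
        std::vector<float> mask = parseList(maskLine);

        std::cout << "anchor values: " << anchors.size() << "\n";      // 12
        std::cout << "first mask index: " << (int)mask[0] << "\n";     // 0
        return 0;
    }

The parsed vectors could then be passed to NvDsInferParseYoloV3 in place of kANCHORS/kMASKS.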
Now the results are comparable to OpenCV’s, but they still differ somewhat. We can probably work with that, but we wanted to ask whether we might have missed something else. I’ve followed everything in Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf to set up our model.
Maybe you could point us to other things we should check?
It’s not that we need things to be identical to OpenCV; we just want the detection to work. Another thing I’d like to know is whether we can expect completely identical results from different YOLO implementations, provided we use the same cfg and weights, or whether some deviation is to be expected.
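To put a number on how far the two pipelines diverge, we’ve been thinking of matching detections by IoU rather than eyeballing them. A small sketch (the Box struct mirrors the left/top/width/height fields of NvDsInferParseObjectInfo; the sample values are illustrative):

    #include <algorithm>
    #include <iostream>

    // Axis-aligned box in pixel coordinates (left, top, width, height).
    struct Box { float left, top, width, height; };

    // Intersection-over-union of two boxes; 0 when they do not overlap.
    static float iou(const Box& a, const Box& b)
    {
        float x1 = std::max(a.left, b.left);
        float y1 = std::max(a.top, b.top);
        float x2 = std::min(a.left + a.width, b.left + b.width);
        float y2 = std::min(a.top + a.height, b.top + b.height);
        float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
        float uni = a.width * a.height + b.width * b.height - inter;
        return uni > 0.0f ? inter / uni : 0.0f;
    }

    int main()
    {
        Box deepstreamBox{100, 100, 50, 50};  // detection from our app
        Box opencvBox{102, 98, 52, 51};       // same object from OpenCV
        std::cout << "IoU: " << iou(deepstreamBox, opencvBox) << "\n";
        return 0;
    }

An IoU close to 1 on matched objects would suggest only minor numeric/preprocessing differences rather than a configuration mistake.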
Thanks!