YOLOv3 with SPP module on Jetson Xavier

Good morning, I have a question about the SPP module in YOLOv3.

I have verified that YOLOv3 runs on Jetson Xavier with DeepStream SDK 4.0. However, once the SPP module is added to YOLOv3, the application crashes with a core dump caused by a dimension error.

The SPP module contains 3 max-pooling layers, and after the last one a route layer concatenates 4 different layers. The failure appears to be a dimension mismatch in that route layer: at line 113 of trt_utils.cpp, the assertion `dimension >= 1` fails. When I checked the dimension, it was -1.
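For reference, this is the SPP section of the stock yolov3-spp.cfg: three stride-1 max-pooling layers whose outputs, together with the block input, are joined by a four-way route. The stock DeepStream parser only handles routes with at most two indices, which is why `layers=-1,-3,-5,-6` breaks it:

```ini
### SPP ###
[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6
```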

Is there any way to run the YOLOv3 network with the SPP module?

Hi,

Currently we have verified only the following YOLO models: yoloV2, yoloV2_tiny, yoloV3, and yoloV3_tiny.
YOLOv3-SPP is a new model and needs further investigation.

We have passed this request to our internal team.
It will be checked and prioritized internally.

Thanks.

Hi,

Have you solved this problem already?
I have run yolov3-spp successfully, but the precision decreased a lot.

Hello,

Did you use the DeepStream SDK to run yolov3-spp, or the Python version?
Could you please share some details of how you were able to run yolov3-spp?

Thank You

You need to change two places in yolo.cpp: the "route" branch and the "maxpool" branch. It's a little strange, though: the precision is not very good.
If you have any good solution or idea, please share it with me.

else if (m_configBlocks.at(i).at("type") == "route")
        {
            int cont = 0;
            int idx[6];
            int lac[6];
            size_t found = m_configBlocks.at(i).at("layers").find(",");
            if (found != std::string::npos)
            {
                // Split the comma-separated "layers" value into up to 6 indices
                size_t pos;
                int size = m_configBlocks.at(i).at("layers").size();
                for (int j = 0; j < size; j++)
                {
                    pos = m_configBlocks.at(i).at("layers").find(",", j);
                    if (pos != std::string::npos)
                    {
                        if (cont == 0)
                        {
                            lac[cont] = 0;
                        }
                        idx[cont] = std::stoi(trim(
                            m_configBlocks.at(i).at("layers").substr(lac[cont], pos - lac[cont])));
                        cont = cont + 1;
                        lac[cont] = pos + 1;
                        j = pos + 1;
                    }
                    else
                    {
                        // Last index: no trailing comma
                        idx[cont] = std::stoi(trim(
                            m_configBlocks.at(i).at("layers").substr(lac[cont])));
                        cont = cont + 1;
                        break;
                    }
                }

                // Resolve negative (relative) indices to absolute layer indices
                for (int j = 0; j < cont; j++)
                {
                    if (idx[j] < 0)
                    {
                        idx[j] = tensorOutputs.size() + idx[j];
                    }
                    assert(idx[j] < static_cast<int>(tensorOutputs.size()) && idx[j] >= 0);
                }

                // (The original fixed two-index idx1/idx2 handling is removed.)
                nvinfer1::ITensor** concatInputs
                    = reinterpret_cast<nvinfer1::ITensor**>(malloc(sizeof(nvinfer1::ITensor*) * cont));
                for (int j = 0; j < cont; j++)
                {
                    concatInputs[j] = tensorOutputs[idx[j]];
                }
                nvinfer1::IConcatenationLayer* concat
                    = network->addConcatenation(concatInputs, cont);
                assert(concat != nullptr);
                std::string concatLayerName = "route_" + std::to_string(i - 1);
                concat->setName(concatLayerName.c_str());
                // concatenate along the channel dimension
                concat->setAxis(0);
                previous = concat->getOutput(0);
                assert(previous != nullptr);
                std::string outputVol = dimsToString(previous->getDimensions());

                // set the output volume depth: sum the channels of all route inputs
                channels = getNumChannels(tensorOutputs[idx[0]]);
                for (int j = 1; j < cont; j++)
                {
                    channels = channels + getNumChannels(tensorOutputs[idx[j]]);
                }

                tensorOutputs.push_back(concat->getOutput(0));
                printLayerInfo(layerIndex, "route", "        -", outputVol,
                               std::to_string(weightPtr));
            }
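The index parsing in the route branch above can be distilled into a small standalone helper. This is only an illustrative sketch (`parseRouteLayers` is a hypothetical name, not part of yolo.cpp) showing how a comma-separated `layers=` value is split and how negative (relative) indices are resolved against the number of layers built so far:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Parse a Darknet "layers=" value such as "-1,-3,-5,-6" and convert each
// negative (relative) index into an absolute index into the list of layer
// outputs built so far, as the modified route branch does.
std::vector<int> parseRouteLayers(const std::string& layers, int numOutputs)
{
    std::vector<int> idx;
    size_t start = 0;
    while (start <= layers.size())
    {
        size_t pos = layers.find(',', start);
        std::string token = (pos == std::string::npos)
            ? layers.substr(start)
            : layers.substr(start, pos - start);
        int v = std::stoi(token);          // stoi skips leading whitespace
        if (v < 0) v += numOutputs;        // relative index -> absolute
        assert(v >= 0 && v < numOutputs);
        idx.push_back(v);
        if (pos == std::string::npos) break;
        start = pos + 1;
    }
    return idx;
}
```

For the SPP route `layers=-1,-3,-5,-6`, this yields four valid absolute indices, so the concatenation receives all four input tensors instead of the two the stock parser assumed.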
else if (m_configBlocks.at(i).at("type") == "maxpool")
        {
            // Add same-padding layers for the stride-1 maxpools
            // (size 2 for the tiny models; sizes 5, 9, 13 for SPP)
            std::string poolSize = m_configBlocks.at(i).at("size");
            if ((poolSize == "2" || poolSize == "5" || poolSize == "9" || poolSize == "13")
                && m_configBlocks.at(i).at("stride") == "1")
            {
                m_TinyMaxpoolPaddingFormula->addSamePaddingLayer("maxpool_" + std::to_string(i));
            }
            std::string inputVol = dimsToString(previous->getDimensions());
            nvinfer1::ILayer* out = netAddMaxpool(i, m_configBlocks.at(i), previous, network);
            previous = out->getOutput(0);
            assert(previous != nullptr);
            std::string outputVol = dimsToString(previous->getDimensions());
            tensorOutputs.push_back(out->getOutput(0));
            printLayerInfo(layerIndex, "maxpool", inputVol, outputVol, std::to_string(weightPtr));
        }
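As a sanity check on why the same-padding registration matters: with stride 1 and pad = (size - 1) / 2 for an odd kernel, a max-pool preserves the spatial dimensions, so all SPP branches stay the same size and can be concatenated along the channel axis. A minimal sketch of the standard pooling output formula (`pooledDim` is an illustrative helper, not a DeepStream API):

```cpp
#include <cassert>

// Output spatial size of a pooling layer:
//   out = (in + 2 * pad - size) / stride + 1
// With stride = 1 and pad = (size - 1) / 2, out == in ("same" padding).
int pooledDim(int in, int size, int stride, int pad)
{
    return (in + 2 * pad - size) / stride + 1;
}
```

For a 19x19 feature map, the SPP kernels 5, 9, and 13 with same padding all produce 19x19 outputs; without the padding layer, TensorRT shrinks the map and the route's dimensions no longer match.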

Hi

Thanks for the code. I tried this fix, but loading the weights failed with "number of unused weights left". Maybe we are using different config and weight files. Could you share your yolov3-spp config and weight files if possible? Also the deepstream-app config file you are using, if possible?

Thank You

I have tried it; using the default yolov3-spp.cfg works fine.

Did you modify the [net] section of the .cfg file? Could you share it with me? That might help find the issue.

Since you have already filed a new topic, let's track this issue there directly:
https://devtalk.nvidia.com/default/topic/1063522/deepstream-sdk/tx2-ds4-0-loadeight-