Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 6.1
**• TensorRT Version** 8.2
**• NVIDIA GPU Driver Version (valid for GPU only)** 515
Hello,
I have run into a problem. I trained a ResNet-50 in PyTorch for the SGIE, and I want to get the classification result out of the secondary engine.
My way is:
(1) build ResNet-50 with torchvision
(2) train the network
(3) export the network to ONNX
(4) convert the ONNX model into a TensorRT engine file
(5) load the engine in DeepStream (DS)
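For step (5), the nvinfer SGIE config has to tell DeepStream that the model is a classifier and name its output tensor. A minimal sketch, assuming hypothetical file names and an output blob called `prob` (check the actual tensor name in your ONNX file, e.g. with Netron):

```
[property]
gpu-id=0
# Hypothetical paths/values -- adjust to your setup.
model-engine-file=resnet50_softmax.engine
labelfile-path=labels.txt
batch-size=16
network-mode=0
network-type=1          # 1 = classifier
process-mode=2          # 2 = secondary, operate on PGIE objects
operate-on-gie-id=1
output-blob-names=prob  # must match the ONNX output tensor name
classifier-threshold=0.5
```

If `output-blob-names` or the input dimensions do not match the exported model, the SGIE can run without ever attaching a classification label.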
However, I ran into a strange problem:
(1) The torchvision ResNet-50 does not have a softmax layer. If I append a softmax layer to the end of ResNet-50 and then train it (NN → ONNX → engine → DS), I do get the secondary-engine output label.
(2) But if I train ResNet-50 *without* softmax and only add the softmax layer at the ONNX-export step, as below:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class model_with_softmax(nn.Module):
    def __init__(self):
        super(model_with_softmax, self).__init__()
        self.model = torch.load('saved_model.pth')

    def forward(self, x):
        x = self.model(x)
        x = F.softmax(x, dim=1)  # normalize logits to probabilities
        return x
```
then the ONNX → engine → DS steps all succeed, but I cannot get the output label of the secondary engine!
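A side note on why the presence of softmax should not change *which* class wins: softmax is order-preserving, so the arg-max label is identical with or without it. What does change is the score range; one plausible explanation for the missing label is that the SGIE parser compares scores against `classifier-threshold`, and raw logits need not fall in the [0, 1] range that a probability threshold assumes. A quick pure-Python check of the order-preserving point:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, -1.0, 0.5]   # raw network outputs, no softmax
probs = softmax(logits)

# Softmax preserves the ranking, so the predicted label is unchanged...
assert probs.index(max(probs)) == logits.index(max(logits))
# ...but only the softmaxed scores are valid probabilities summing to 1.
assert abs(sum(probs) - 1.0) < 1e-9
```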
Why do you use the second method? We suggest the first one: add softmax before training, then NN → ONNX → engine → DS.
You can also verify your trained model first with the TAO tools. If its output is correct, then use it in DeepStream.
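Without TAO, one way to sanity-check the conversion is to exercise the ONNX file and the engine directly with `trtexec`, which ships with TensorRT; a sketch, assuming hypothetical file names:

```
# Parse the ONNX file and build an engine outside of DeepStream.
trtexec --onnx=resnet50_softmax.onnx --saveEngine=resnet50_softmax.engine

# Re-load the saved engine to confirm it deserializes and runs cleanly.
trtexec --loadEngine=resnet50_softmax.engine
```

If either command fails or reports unexpected input/output shapes, the problem is in the export/build step rather than in DeepStream.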
Thank you for your response. Yes, I am using the NN → ONNX → engine → DS pipeline.
Over the past few days I tried many, many times, and I found the following pattern:
(1) If I do not add softmax to ResNet-50, DeepStream cannot get the SGIE result.
(2) If I export the network to ONNX with a large input width and height, DeepStream cannot get the SGIE result.
(3) Strangest of all, sometimes if I reboot the system and export the network to ONNX again, DeepStream *can* get the SGIE result.
The reboot behavior confuses me the most. Thank you very much.
I used the TAO method before, but it is inconvenient, especially setting up the environment, so I would prefer the ONNX + TensorRT route.
You should verify that your model is OK before running it in DeepStream. DeepStream itself has nothing to do with rebooting. Is it possible that the reboot affects the ONNX/engine conversion process instead?