• Hardware Platform (Jetson / GPU) : Jetson Nano • DeepStream Version : 5.1 • Jetpack version : 4.5.1 • Issue Type(questions, new requirements, bugs) : Question
Hi,
We trained an image classifier model in PyTorch and converted it from PyTorch to ONNX so we could generate the corresponding engine file for DeepStream.
After running the engine and checking the classifier metadata, we see that the prediction probabilities are in a format we don't understand, for example 2.24324 or 5.43243 (values are usually between 0 and 7).
However, we would like to obtain values between 0 and 1.
Is there a way to convert those PyTorch outputs into 0-1 values?
Please note that we have released some newer packages.
It's recommended to upgrade your device to DeepStream v6.0 or v6.0.1 for a better experience.
In general, TensorRT (the DeepStream inference backend) should output the same values as PyTorch.
Would you mind first checking whether you get the same result with ONNX Runtime or TensorRT?
We also have a Python script that uses TensorRT for inference. Its output values are the same as the values we get in DeepStream (between 0 and 7), like I said here:
However, in the Python script we were able to convert the PyTorch values into values between 0 and 1:
# Raw network output (logits) in PyTorch format
output_pred = model(image)
# Convert the logits to probabilities between 0 and 1
softmax = torch.nn.Softmax(dim=1)
softmax_tensor = softmax(output_pred)
# Index of the highest-scoring class
top_value_index = output_pred.argmax().item()
result_class = labels[top_value_index]
result_class_prob = torch.max(softmax_tensor).item()
The problem is that in DeepStream we don't know how to do that conversion. Is there any way to do it?
It looks like your TensorRT model generates its output right before the softmax layer.
Please note that TensorRT does support the softmax layer, so you can mark the softmax layer as the output directly.
Back to your question: since the softmax operation is order-preserving, taking the argmax before or after the softmax layer gives the same result.
If you want to convert the values into the softmax output, you can try the conversion manually.
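The manual conversion is just the softmax formula applied to the raw class scores. A minimal sketch in plain Python (the example logits here are made up for illustration):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

raw_scores = [2.24324, 5.43243, 0.7]   # example raw network outputs (logits)
probs = softmax(raw_scores)            # values between 0 and 1, summing to 1

# argmax is unchanged by the conversion, since softmax is order-preserving
assert probs.index(max(probs)) == raw_scores.index(max(raw_scores))
```

Note that this needs all class scores for the frame, not just the top one, because every score appears in the denominator.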
The problem is that when we access the DeepStream metadata we can only get one prediction and one probability per frame, so we cannot apply that calculation.
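To illustrate why a single value per frame is not enough (assuming the metadata only exposes the top score): softmax divides by the sum over all class scores, so the same top score can map to very different probabilities depending on the other classes:

```python
import math

def softmax(logits):
    # Standard softmax with max-subtraction for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return [e / sum(exps) for e in exps]

# Same top score (5.43), but different competing scores
a = softmax([5.43, 0.1, 0.2])   # top class dominates
b = softmax([5.43, 5.0, 4.9])   # close competition

print(round(a[0], 3), round(b[0], 3))  # clearly different probabilities
```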
We tried what is said in this post, but it didn't work: