How to convert PyTorch model outputs in DeepStream to probabilities between 0 and 1?

• Hardware Platform (Jetson / GPU) : Jetson Nano
• DeepStream Version : 5.1
• Jetpack version : 4.5.1
• Issue Type(questions, new requirements, bugs) : Question

Hi,

We trained an image classifier with PyTorch and converted it to ONNX so we could generate the corresponding TensorRT engine file for DeepStream.

After running the engine and inspecting the classifier metadata, we see that the prediction probabilities are in a format we don't understand, for example 2.24324 or 5.43243 (values are usually between 0 and 7).
However, we would like to obtain values between 0 and 1.

Is there a way to convert those PyTorch outputs into 0-1 values?

Thanks in advance.

Hi,

Please note that we have released some newer packages.
We recommend upgrading your device to DeepStream 6.0 or 6.0.1 for a better experience.

In general, TensorRT (DeepStream's inference backend) should output the same values as PyTorch.
Would you mind checking whether you get the same results with ONNX Runtime or TensorRT first?

Thanks.

Hi,

We also have a Python script that uses TensorRT for inference. Its output values are the same as the values we get in DeepStream (between 0 and 7). For example, like I said here:

However, in the Python script we were able to convert the raw PyTorch outputs into values between 0 and 1:

import torch

# Get the raw model output (logits)
output_pred = model(image)

# Apply softmax along the class dimension to get values between 0 and 1
softmax = torch.nn.Softmax(dim=1)
softmax_tensor = softmax(output_pred)

top_value_index = output_pred.argmax().item()
result_class = labels[top_value_index]
result_class_prob = torch.max(softmax_tensor).item()

The problem we have is that in DeepStream we don't know how to do that conversion. Is there any way to do it?

Hi,

It looks like your TensorRT model outputs the values right before the softmax layer.
Please note that TensorRT does support the softmax layer, so you can mark the softmax layer as the output directly.

Back to your question: since the softmax operation is order-preserving, taking the argmax before or after the softmax layer gives the same result.

If you want to convert the values into the softmax output, you can apply the conversion manually.

Let res be a k-dimensional vector of logits for the k classes:

import numpy as np

exp_res = np.exp(res)
softmax = exp_res / np.sum(exp_res)
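The conversion above can be checked with a small self-contained sketch. The logit values below are made up for illustration, and the max-subtraction is an extra numerical-stability step beyond the snippet (softmax is unchanged by adding a constant to all logits); the sketch also demonstrates the order-preserving claim:

```python
import numpy as np

# Example logits in the 0-7 range reported in the thread (illustrative values)
res = np.array([2.24324, 5.43243, 0.91, 3.2])

# Numerically stable softmax: subtracting the max avoids overflow in exp
# and does not change the result
exp_res = np.exp(res - np.max(res))
softmax = exp_res / np.sum(exp_res)

print(softmax)        # each value is between 0 and 1
print(softmax.sum())  # sums to 1
print(np.argmax(res) == np.argmax(softmax))  # softmax is order-preserving
```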

Thanks.

Thanks for your response.

The problem is that when we access the DeepStream metadata we can only get one prediction and one probability per frame, so we cannot apply that calculation.

We tried what is said in this post, but it didn't work:

Hi,

Would you mind sharing an example to reproduce this issue?
We want to check this issue internally.

Thanks.

Hi,

We have tested this with our deepstream-test2 example but didn’t find a similar issue.

Frame Number=31 Number of Objects=10 Vehicle_count=8 Person_count=2
result_class_id=9, result_label=silver, result_prob=0.517
result_class_id=2, result_label=sedan, result_prob=0.553
result_class_id=9, result_label=silver, result_prob=0.890
result_class_id=10, result_label=white, result_prob=0.966
result_class_id=16, result_label=nissan, result_prob=0.714
result_class_id=2, result_label=sedan, result_prob=0.743
...

It seems this behavior depends on the model.
Please share your source and model with us so we can check it further.

Thanks.

Hi again,

Thank you for your responses.
In the end, we were able to solve the problem by adding a Softmax layer to the model when converting the PyTorch checkpoint to ONNX:

model = torch.nn.Sequential(
    base_model,
    torch.nn.Softmax(dim=1)
)
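The wrapper can be sanity-checked with a hypothetical tiny stand-in for the trained network (the Linear layer and input shape below are placeholders, not the poster's actual classifier); the wrapped model's outputs then land between 0 and 1 and sum to 1 across the class dimension:

```python
import torch

# Hypothetical stand-in for the trained classifier (4 features, 3 classes)
base_model = torch.nn.Linear(4, 3)

# Append a softmax layer so the exported graph outputs 0-1 probabilities
model = torch.nn.Sequential(
    base_model,
    torch.nn.Softmax(dim=1),
)
model.eval()

x = torch.randn(1, 4)
with torch.no_grad():
    probs = model(x)

print(probs)               # each value is between 0 and 1
print(probs.sum().item())  # sums to 1 across the class dimension
```

This wrapped model object is then what gets passed to torch.onnx.export before building the TensorRT engine, so the engine's output layer is already a softmax.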

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.