What is the meaning of the prediction output of a TensorRT model?

Good afternoon!

I have trained a CNN model in MATLAB. Then, I exported it as an ONNX model to a Jetson Nano.
Finally, I converted it into a TensorRT engine, which I named ecg_net.trt.
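For context, the ONNX-to-TensorRT conversion was done with trtexec, roughly as below (the paths are illustrative, and I may have used additional flags):

```shell
# Build a serialized TensorRT engine from the ONNX model.
# --saveEngine writes the engine to disk for later deserialization.
/usr/src/tensorrt/bin/trtexec --onnx=ecg_net.onnx --saveEngine=ecg_net.trt
```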

With ecg_net.trt, I have tried to run inference with a "dummy batch" as input.
The thing is, the code does not throw any error, but I am not sure the model's output makes sense. Maybe I am skipping a step or misunderstanding the output.

Here is the image of the output:

Here are the files to reproduce the issue:
ecg_net.onnx (946.0 KB)
main.py (4.9 KB)
Any help would be much appreciated.

P.S. It's my first time opening a topic here, so feel free to correct any mistakes I may be making. And again, thanks.


We ran your ONNX model with the trtexec tool, and the output looks good to us.

$ /usr/src/tensorrt/bin/trtexec --onnx=ecg_net.onnx --dumpOutput
&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=ecg_net.onnx --dumpOutput
[01/18/2022-22:02:03] [I] Explanations of the performance metrics are printed in the verbose logs.
[01/18/2022-22:02:03] [I] 
[01/18/2022-22:02:03] [I] Output Tensors:
[01/18/2022-22:02:03] [I] softmax: (1x2)
[01/18/2022-22:02:03] [I] 0.880822 0.119178
&&&& PASSED TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=ecg_net.onnx --dumpOutput
[01/18/2022-22:02:03] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1353, GPU 13366 (MiB)
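To answer the question in the title: the (1x2) "softmax" tensor above is a probability distribution over your two classes. A minimal pure-Python sketch of how to interpret it (the example values are taken from the trtexec dump above):

```python
import math

# trtexec reported a (1x2) "softmax" output tensor: these are class
# probabilities, so they are non-negative and sum to 1.
probs = [0.880822, 0.119178]
assert abs(sum(probs) - 1.0) < 1e-6

# The predicted class is simply the index with the highest probability.
predicted_class = max(range(len(probs)), key=lambda i: probs[i])
print(predicted_class)  # → 0 (first class, with ~88% confidence)

# If an engine instead outputs raw logits (no softmax layer in the graph),
# apply softmax yourself before reading the values as probabilities:
def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]
```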

We guess there is some issue in your main.py.

You can find an example of running TensorRT with an ONNX model below.
Would you mind checking whether anything is missing in your code first?
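As a rough sketch (this is not your main.py; the dummy-input scale and the exact engine bindings are assumptions), a minimal TensorRT 8 Python inference loop over a serialized engine looks roughly like this. It assumes `tensorrt`, `pycuda`, and `numpy` are installed on the Jetson:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401  (creates and activates a CUDA context)
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built from ecg_net.onnx.
with open("ecg_net.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (inputs and outputs).
bindings, host_bufs = [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(shape, dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append((host, dev, engine.binding_is_input(i)))

# Fill the input with a dummy batch (the value range here is an assumption --
# match the preprocessing you used when training in MATLAB).
for host, dev, is_input in host_bufs:
    if is_input:
        host[...] = np.random.rand(*host.shape).astype(host.dtype)
        cuda.memcpy_htod(dev, host)

context.execute_v2(bindings)  # synchronous inference

# Copy outputs back; for ecg_net this should be the (1, 2) softmax tensor.
for host, dev, is_input in host_bufs:
    if not is_input:
        cuda.memcpy_dtoh(host, dev)
        print(host)  # two class probabilities summing to ~1
```

A common cause of meaningless outputs is a mismatch between the preprocessing at inference time and the preprocessing used during training (normalization, channel order, layout), or reading the output buffer before copying it back from the device.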


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.