Error Code 3: API Usage Error condition: std::isfinite(beta)

I am running inference on a Jetson Orin Nano using the container image dustynv/onnxruntime:r35.3.1 on JetPack 5.1.1.

I converted the PyTorch model to ONNX on an amd64 machine using the following line of code: torch.onnx.export(model, dummy_input, onnx_path, verbose=True). The script I run on the Jetson is as follows:

```python
import time

import librosa
import onnxruntime as ort

sess = ort.InferenceSession(
    'model.onnx',
    providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'],
)

# SAMPLE_RATE is defined elsewhere in the script
waveform, _ = librosa.core.load('audio.flac', sr=SAMPLE_RATE, mono=True)
waveform = waveform[None, :]  # add a batch dimension

print(f'--- Working with: {ort.get_device()} ---')

onnx_input = {sess.get_inputs()[0].name: waveform}

init_time = time.time()
onnx_output =, onnx_input)
end_time = time.time()
```

This code works on CPU-only IoT devices, but on the Jetson Orin Nano it never finishes: it hangs and freezes the screen. The exact error is shown in the attached image.
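For reference, the batch-dimension indexing can be sanity-checked in NumPy: indexing with `None` as a separate axis (`a[None, :]`) prepends a dimension, whereas a bare slice starting at `None` (`a[None:]`) is equivalent to `a[:]` and leaves the shape unchanged:

```python
import numpy as np

a = np.zeros(16000, dtype=np.float32)  # mono waveform, hypothetical length

batched = a[None, :]  # None acts as np.newaxis: prepends a batch axis
sliced = a[None:]     # a slice starting at None is just a full slice

print(batched.shape)  # (1, 16000)
print(sliced.shape)   # (16000,)
```

Feeding a 1-D array where the model expects a 2-D (batch, samples) input is an easy mistake to make with this indexing.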


TensorRT Version:
onnxruntime: 1.16.1
GPU Type: Orin Nano
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version: Docker Image dustynv/onnxruntime:r35.3.1
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


Do you see the error with the TensorRT provider or with the CUDA provider?
Could you help check?


Thanks for your response. Can you give me some guidance on how to find out which provider is producing the error? My screen freezes when I run the code, so I cannot investigate further.

Thanks for your attention.


Could you try the two commands below separately to see which one (or both) generates the error?


```python
sess = ort.InferenceSession('model.onnx', providers=['TensorrtExecutionProvider'])
```

```python
sess = ort.InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
```



Thanks for your response. I ran the session with only the TensorRT provider and it generates the error: my screen freezes again. There is no problem when I run with just the CUDA provider; it works that way, but not with the TensorRT provider.

I hope this information helps to solve the problem.



Could you try to reproduce this issue with the trtexec binary?

$ /usr/src/tensorrt/bin/trtexec --onnx=[file]
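If trtexec also fails, running it with `--verbose` prints the parser and builder logs layer by layer, which usually pinpoints the op that trips the check (path as in the container image; `--verbose` is a standard trtexec flag):

```shell
# Verbose parsing/building log: the last layer mentioned before the
# error is usually the unsupported or malformed one.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose
```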

If the same error occurs, please share the model with us.

Thanks for your response. I executed the command you sent, and the error I get is the one attached in the screenshot.

I also attach the link to download the model in onnx format.

Thanks for your help,

Best regards.


Thanks for the testing.

It looks like some layers in your model cannot run with TensorRT.
This might cause an error when you use onnxruntime with TensorrtExecutionProvider.

Please find the TensorRT support matrix below:

