ONNX Runtime Session Fails on Nvidia Jetson – pthread_setaffinity_np Error

• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.1
• TensorRT Version : 10.3
• Issue Type (questions, new requirements, bugs) : question
Hello,

I have an ONNX model stored on my Nvidia Jetson device. I successfully converted it to a TensorRT engine, but before integrating it into a DeepStream pipeline, I would like to test the ONNX model independently.

However, when attempting to create an ONNX Runtime session:

import onnxruntime as ort

# Load the model and create an InferenceSession
session = ort.InferenceSession('path/to/model')

I encounter the following error:

2025-01-15 09:50:47.813701633 [E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 26958, index: 0, mask: {10, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2025-01-15 09:50:47.813721888 [E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 26960, index: 2, mask: {8, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2025-01-15 09:50:47.813839196 [E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 26959, index: 1, mask: {9, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
/opt/rh/gcc-toolset-12/root/usr/include/c++/12/bits/stl_vector.h:1123: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
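Error code 22 is EINVAL from the kernel: the affinity masks ONNX Runtime builds name CPU ids (8, 9, 10) that the scheduler does not consider online. The same failure can be reproduced directly with a few lines of Python (a minimal sketch; the CPU id 4096 is simply an id no real system has, standing in for an offline core):

```python
import errno
import os

# Pinning the current thread/process to a CPU that is not online fails
# with EINVAL (errno 22) -- the same code shown in the ONNX Runtime log.
try:
    os.sched_setaffinity(0, {4096})  # hypothetical core id, offline/nonexistent
except OSError as e:
    print("sched_setaffinity failed:", errno.errorcode[e.errno])
```

This is why the error disappears once all cores are online: the affinity masks then refer only to valid CPUs.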

To resolve this, I tried setting ONNX Runtime session options explicitly and passing them to the session:

session_options = ort.SessionOptions()
session_options.inter_op_num_threads = 1  # threads used to run independent operators in parallel
session_options.intra_op_num_threads = 1  # threads used within a single operator
session = ort.InferenceSession('path/to/model', sess_options=session_options)

However, the error persists.

Question:

How can I resolve this error and successfully run inference on Nvidia Jetson using my ONNX model?

Switching the power mode with sudo nvpmodel -m 0 fixes the issue. Lower power modes on the AGX Orin take some CPU cores offline, and ONNX Runtime's default thread pool fails with EINVAL when it tries to pin worker threads to those offline cores; mode 0 (MAXN) brings all cores back online.
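To confirm that offline cores are the cause before (or after) changing power modes, you can compare the kernel's "present" and "online" CPU lists in sysfs (a minimal, Linux-only sketch; the sysfs paths are standard and nothing Jetson-specific is assumed):

```python
from pathlib import Path

def parse_cpu_list(spec: str) -> set[int]:
    """Parse a sysfs CPU list like '0-3,6' into a set of core ids."""
    cores: set[int] = set()
    for part in spec.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cores.update(range(int(lo), int(hi) + 1))
        else:
            cores.add(int(part))
    return cores

present = parse_cpu_list(Path("/sys/devices/system/cpu/present").read_text())
online = parse_cpu_list(Path("/sys/devices/system/cpu/online").read_text())
offline = present - online

print(f"present: {sorted(present)}  online: {sorted(online)}")
if offline:
    print(f"offline cores {sorted(offline)} -- likely the cause of the "
          "pthread_setaffinity_np failures; sudo nvpmodel -m 0 brings them up")
```

If the "offline" set is empty in the current power mode, the affinity error should not occur.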

