Does onnxruntime use data parallelism or model parallelism?

Hi,

I built onnxruntime with OpenMP on an Nvidia AGX Xavier. When I run an ONNX model (ResNet) on the CPU with onnxruntime, multiple threads are created to execute it. If I set four CPU cores, four threads are created. The following pictures show the result of running ‘pythoncifar10.py’. So I have two questions: 1. Is the number of threads produced by OpenMP equal to the number of CPU cores? 2. When onnxruntime creates multiple threads, does it use data parallelism or model parallelism?

Thanks!


Hi,

This issue is related to the onnxruntime implementation.
You can get better support from the onnxruntime team.

1. This depends on the onnxruntime source you used.
For example, the number of threads is set to the number of CPU cores in this test (see also the sketch after point 2 below).

2. It seems to be operation-level (intra-op) parallelism.
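
In case it is useful, here is a minimal Python sketch of the two threading knobs involved; the model path "resnet.onnx", the thread counts, and the input shape are assumptions, not the exact test referenced in point 1. In an OpenMP build the intra-op thread count is typically taken from OMP_NUM_THREADS (which usually defaults to the number of CPU cores), while intra_op_num_threads / inter_op_num_threads on SessionOptions control operator-level and graph-level parallelism in non-OpenMP builds.

```python
import os

# For an OpenMP build of onnxruntime, the intra-op thread count usually comes
# from OMP_NUM_THREADS; set it before onnxruntime is imported. The value 4
# below is an assumption (matching four CPU cores).
os.environ["OMP_NUM_THREADS"] = "4"

import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
# Threads used inside a single operator (operation-level / intra-op parallelism).
# Typically ignored in OpenMP builds, where OMP_NUM_THREADS applies instead.
opts.intra_op_num_threads = 4
# Threads used to run independent operators concurrently (inter-op parallelism);
# only relevant with the parallel execution mode.
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL
opts.inter_op_num_threads = 2

# "resnet.onnx" is a placeholder path for the ResNet model.
sess = ort.InferenceSession("resnet.onnx", sess_options=opts)

# Dummy CIFAR-10-sized input; the real input name and shape depend on the model.
name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 32, 32).astype(np.float32)
print(sess.run(None, {name: x})[0].shape)
```

Either way, the threads split the work of individual operators (and may run independent operators concurrently); the input batch is not sharded across devices (data parallelism), and the model is not partitioned across devices (model parallelism).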

Thanks.

Thanks for the fast reply! Got it!