OMPi compiler v2.7.0 (support for Jetson GPU offloading)

Hello everybody!

In case anybody is interested, the latest version (2.7.0) of our OMPi OpenMP compiler supports full offloading to the GPU on Jetson boards. Here is the latest release:

We have tested it exhaustively on the Jetson Nano 2GB/4GB, but it should work on all other Jetson boards since it only requires JetPack. By the way, OMPi is an open-source project.
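
To give an idea of what this looks like in practice, here is a minimal sketch of an offloaded region (plain OpenMP C; it is an illustrative example, not taken from the OMPi distribution):

/* vecadd.c -- minimal OpenMP offloading sketch (illustrative) */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float) i;
        b[i] = 2.0f * (float) i;
    }

    /* The loop below is offloaded to the default device (the Jetson GPU). */
    #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[100] = %f\n", c[100]);   /* expect 300.000000 */
    return 0;
}

It should build with the usual gcc-like options, e.g. /usr/local/ompi-2.7/bin/ompicc -o vecadd vecadd.c (assuming the default installation prefix shown later in this thread).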

We presented a paper on this at the ICPP-EMS 2022 workshop.

If you have any questions feel free to contact me!
Ilias

I’m interested…but I have no Nano.

It builds fine and very fast on an AGX Orin running R35.1.0. The CUDA device seems to be detected, but I'm not sure it reports the correct numbers:

/usr/local/ompi-2.7/bin/ompicc --devvinfo
1 configured device module(s): cuda

MODULE [cuda]:
------
OMPi CUDA device module.
Available devices : 1

device id < 1 > { 
  name: Orin (SM v8.7)
  16 multiprocessors
  64 cores per multiprocessor
  1024 cores in total
  1024 maximum thread block size
  31268720 Kbytes of device global memory
}
------

Total number of available devices: 1


sudo /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery 
/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Orin"
  CUDA Driver Version / Runtime Version          11.4 / 11.4
  CUDA Capability Major/Minor version number:    8.7
  Total amount of global memory:                 30536 MBytes (32019169280 bytes)
  (016) Multiprocessors, (128) CUDA Cores/MP:    2048 CUDA Cores
  GPU Max Clock rate:                            1300 MHz (1.30 GHz)
  Memory Clock rate:                             1300 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
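
For reference, the same fields can also be read directly through the CUDA runtime API (a quick sketch for cross-checking; note that the cores-per-multiprocessor figure is not among the cudaDeviceProp fields):

/* devprops.c -- print the properties OMPi reports, via cudaGetDeviceProperties() */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);

    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("name: %s (SM v%d.%d)\n", prop.name, prop.major, prop.minor);
    printf("%d multiprocessors\n", prop.multiProcessorCount);
    printf("%d maximum thread block size\n", prop.maxThreadsPerBlock);
    printf("%zu Kbytes of device global memory\n", (size_t) (prop.totalGlobalMem / 1024));
    return 0;
}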

Could you please share some benchmarks that could be tried with this, if any?
Or should we wait for support of more recent CUDA architectures?

Thanks for sharing

Greetings @Honey_Patouceul,

Indeed, the number of cores per multiprocessor should be 128 for your Orin. The issue is that this number cannot be retrieved programmatically; it has to be kept hard-coded in a function (the CUDA samples do the same, see _ConvertSMVer2CoresDRV). We will update the numbers in the next minor release.
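
For anyone curious, such a lookup is along these lines (a simplified sketch in the spirit of the CUDA samples' _ConvertSMVer2CoresDRV(); the table only lists a few Jetson-relevant architectures):

/* Simplified SM-version-to-cores lookup; real implementations list all architectures. */
static int sm_version_to_cores(int major, int minor)
{
    static const struct { int sm; int cores; } table[] = {
        { 0x53, 128 },  /* Maxwell (Jetson Nano)   */
        { 0x72,  64 },  /* Volta   (Jetson Xavier) */
        { 0x87, 128 },  /* Ampere  (Jetson Orin)   */
        { -1,    -1 },
    };

    for (int i = 0; table[i].sm != -1; i++)
        if (table[i].sm == ((major << 4) + minor))
            return table[i].cores;
    return -1;  /* unknown SM version: the caller must pick a fallback */
}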

The good news is that this does not prevent you from using OMPi with offloading, as the only limit the compiler relies on is the maximum number of threads per block.
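
If you want to be explicit about that limit in your own code, the standard thread_limit clause does the job (a sketch, not OMPi-specific; 1024 is the maximum thread block size reported for your Orin):

/* saxpy.c -- capping each team at the device's maximum thread block size */
#include <stdio.h>

#define N 4096

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 1.0f; }

    /* thread_limit(1024) keeps every team within the reported block-size limit. */
    #pragma omp target teams distribute parallel for thread_limit(1024) map(to: x) map(tofrom: y)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expect 3.000000 */
    return 0;
}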

As far as benchmarks are concerned, you can try the Unibench suite, specifically the tests located in its Polybench folder.

Feel free to contact me through email (i.kasmeridis [at] uoi.gr) for more details!

Ilias
