For my PhD research I am trying to use the cuQuantum and cuTensor libraries with pennylane-lightning-gpu on the NVIDIA Jetson AGX Xavier board for Quantum Machine Learning.
For the ARM64 architecture, the only packages available in the repository are built for SBSA hardware.
Although I have managed to compile everything on my Xavier with these packages and CUDA 11.8, both PennyLane with lightning.gpu and the cuQuantum examples fail as soon as they use the cuStateVec library (the cuQuantum state-vector component).
Do you know of any workaround to run on the Jetson architecture? If not, do you plan to release any packages for Jetson Xavier in addition to the SBSA ones?
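For reference, this is a minimal sketch of the kind of PennyLane script that fails for me (the circuit itself is illustrative; any execution on the lightning.gpu device exercises cuStateVec):

import pennylane as qml

# lightning.gpu is the pennylane-lightning-gpu device backed by cuStateVec;
# creating and executing it is where the failure shows up on the Xavier.
dev = qml.device("lightning.gpu", wires=4)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

print(circuit(0.3))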
I installed nvidia-l4t-core (35.1.0-20220825113828) first, then the cuda (11.8.0-1), nvidia-cudnn8 (5.0.2-b231), tensorrt (8.4.1.5-1+cuda11.4) and nvidia-vpi (5.0.2-b231) packages, and also updated the ld paths accordingly.
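As a quick sanity check that the runtime is picked up after updating the ld paths, something along these lines can be used (a sketch; the libcudart.so.11.0 soname is assumed from the CUDA 11.x packages):

import ctypes

# Load the CUDA runtime that the updated ld path should resolve to
# (soname assumed from the CUDA 11.x install).
cudart = ctypes.CDLL("libcudart.so.11.0")

ver = ctypes.c_int(0)
# cudaRuntimeGetVersion reports e.g. 11080 for CUDA 11.8
err = cudart.cudaRuntimeGetVersion(ctypes.byref(ver))
print("cudaRuntimeGetVersion:", err, ver.value)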
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Xavier"
CUDA Driver Version / Runtime Version 11.4 / 11.8
CUDA Capability Major/Minor version number: 7.2
Total amount of global memory: 31011 MBytes (32517586944 bytes)
(008) Multiprocessors, (064) CUDA Cores/MP: 512 CUDA Cores
GPU Max Clock rate: 1377 MHz (1.38 GHz)
Memory Clock rate: 1377 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 524288 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 98304 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.8, NumDevs = 1
Result = PASS
After that I installed the two local SBSA repos for libcutensor (1.6.1.5-1) and cuquantum (22.11.0.13-1) and built the demos.
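A quick way to sanity-check that the SBSA libraries at least resolve and load on the Jetson is something like the following (a sketch; the .so.1 sonames are assumed from the libcutensor 1.6 and cuquantum 22.11 packages):

import ctypes

# Sonames assumed from the SBSA packages; loading only proves the ELF
# files are found and their dependencies resolve, not that the kernels
# inside actually run on the Xavier's sm_72 iGPU.
for name in ("libcutensor.so.1", "libcustatevec.so.1"):
    try:
        ctypes.CDLL(name)
        print(name, "loaded")
    except OSError as exc:
        print(name, "failed:", exc)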
At a later stage I finished installing PennyLane and built pennylane-lightning-gpu with cuQuantum and CUDA support.
I suspect the problem comes from using binaries built for the SBSA hardware configuration on Jetson.
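One way to check that suspicion would be to list which GPU architectures are embedded in the SBSA binary and see whether sm_72 (the Xavier iGPU) is among them, for example with cuobjdump from the CUDA toolkit (a sketch; the library path is an assumption and depends on where the cuquantum package installed it):

import subprocess

# Path is an assumption; adjust to wherever the SBSA package placed
# libcustatevec on your system.
lib = "/usr/lib/aarch64-linux-gnu/libcustatevec.so.1"

# cuobjdump --list-elf prints one line per embedded cubin, including its
# target architecture (e.g. sm_70, sm_80); sm_72 would be the Xavier iGPU.
out = subprocess.run(["cuobjdump", "--list-elf", lib],
                     capture_output=True, text=True)
print(out.stdout or out.stderr)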
As I told you, this is a very important part of my current PhD research, so I can help with whatever tests you consider appropriate for Jetson support.
OK, I understand. Too bad NVIDIA has no plans for Quantum on the Edge; it is a very interesting area of research and would surely expand the use of cuQuantum. Also, from what I have seen, it would not be hard to port from SBSA to the Jetson architecture.
Thanks @AastaLLL for your help, and if it ever comes up, please do not hesitate to let me know.