ADIOI_Set_lock64 error

Greetings, everyone,

I’m working through the NVIDIA self-paced online course
“Scaling GPU-Accelerated Applications with the C++ Standard Library”
and trying to do Exercise 1 from the course on two Jetson Orin Nanos that I have set up. The course does supply a VM setup, but I thought I would try it out on my own rigs.

I get the following error, which seems to be related to the NFS settings and ADIOI_Set_lock64.

I have added the ‘noac’ mount option, but it doesn’t seem to do anything.

Best Regards,

E(t=0) = 0.000775146
E(t=0.00305176) = 0.0563158
E(t=0.00610352) = 0.0781322
E(t=0.00915527) = 0.0939739
E(t=0.012207) = 0.106721
E(t=0.0152588) = 0.11749
E(t=0.0183105) = 0.126854
E(t=0.0213623) = 0.135153
E(t=0.0244141) = 0.142608
E(t=0.0274658) = 0.149375
E(t=0.0305176) = 0.155564
E(t=0.0335693) = 0.161261
E(t=0.0366211) = 0.166532
E(t=0.0396729) = 0.17143
E(t=0.0427246) = 0.175996
E(t=0.0457764) = 0.180266
Rank 0: local domain 256x256 (0.00104858 GB): 0.758245 GB/s

File locking failed in ADIOI_Set_lock64(fd C,cmd F_SETLKW64/7,type F_WRLCK/1,whence 0) with return value FFFFFFFF and errno 25.
If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the ‘noac’ option (no attribute caching).
ADIOI_Set_lock64:: No locks available
ADIOI_Set_lock:offset 2097176, length 524288
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
[mpiexec@node0] [pgid: 0] got PMI command: cmd=abort exitcode=1

Found the solution!

In the client’s /etc/fstab file, I added ‘local_lock=all’,

so the /etc/fstab entry looks like:

node0:/home/mpiuser/cloud /home/mpiuser/cloud nfs noac,local_lock=all

The resulting mount info, as reported by cat /proc/mounts, is:

node0:/home/mpiuser/cloud /home/mpiuser/cloud nfs rw,sync,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=,mountvers=3,mountport=50902,mountproto=udp,local_lock=all,addr= 0 0
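For anyone following along, the new fstab options can be applied and checked without rebooting. This is a minimal sketch assuming the mount point from the fstab entry above; on a busy mount you may need to stop any processes using it before the umount succeeds.

```shell
# Remount the NFS share so the new fstab options take effect
sudo umount /home/mpiuser/cloud
sudo mount /home/mpiuser/cloud

# Verify that noac and local_lock=all are now active on the mount
grep /home/mpiuser/cloud /proc/mounts | tr ',' '\n' | grep -E 'noac|local_lock'
```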


Could you provide more information about this course? Where are the samples/exercises from?
Also, which JetPack software are you using?

Added the course hyperlink to the initial post.

Some more details on my setup:
Two Jetson Orin Nanos

Package: nvidia-jetpack
Version: 5.1.2-b104
Architecture: arm64

MPI is working; the error above was about getting NFS working properly for MPI. I followed the instructions in the prerequisite MPI Tutorial: Running an MPI cluster within a LAN.
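Once the shared NFS directory is mounted on both nodes, a job can be launched across them with a host file, roughly as the tutorial describes. This is a sketch, not the course's exact command: the host names and the binary name `exercise1` are assumptions from my setup, and the `-f` host-file flag is the MPICH Hydra form.

```shell
# Host file listing both Jetsons (names are from my setup)
cat > hosts <<'EOF'
node0
node1
EOF

# Launch from the shared NFS directory so both nodes see the same binary
mpiexec -f hosts -n 8 ./exercise1
```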

Ubuntu 20.04 required upgrading g++ and gcc to version 13 [2nd post] to get -std=c++2b functionality. This is required because the course makes use of std::views::cartesian_product. The course VM has a header file, cartesian_product.hpp, that provides the same functionality under -std=c++2a (a.k.a. C++20), but the header file was not available for download and no information on installing it was given.
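For reference, here is roughly how I got gcc/g++ 13 onto Ubuntu 20.04. This assumes the ubuntu-toolchain-r/test PPA carries gcc-13 builds for focal (it did when I set this up); adjust if your source differs.

```shell
# Add the toolchain PPA and install gcc/g++ 13
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install -y gcc-13 g++-13

# Point the default gcc/g++ at the new versions
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 130
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-13 130

# Sanity check: should report 13.x
g++ --version
```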

Hope this helps,
