CUDA not accessible for ggc_user

Hi,

we run an AWS Greengrass Python-based Lambda on a Jetson Nano.
We face a problem where the Lambda function does not work under ggc_user and ggc_group (a privileges problem?).
The Lambdas succeed when executed as root, but that's not best practice.

The error we get is:
[2019-07-05T11:55:05.92-04:00][ERROR]-THCudaCheck FAIL file=../aten/src/THC/THCGeneral.cpp line=50 error=38 : no CUDA-capable device is detected

do we need to add some privileges to ggc_user?

thanks a lot,
Martin

Hi,

Based on your log:

no CUDA-capable device is detected

It looks like the torch library wasn't built for the Nano's GPU architecture.
The Nano's GPU compute capability is sm_53. Would you mind checking first whether the environment under ggc_user supports that architecture?

Thanks.

Hi @aastalll,

this error message does not come up when you ssh in as root (sudo su -); then CUDA is detected correctly. That rules out the torch library.

Can you elaborate on how to check whether ggc_user supports the architecture, please? We ssh in as ggc_user instead of root, and that's when the "no CUDA-capable device" problem occurs.

If it's a permission-based problem, you should be able to grant the appropriate permission to the user.
These are the groups that the default user is in on JetPack Ubuntu: adm cdrom sudo audio dip video plugdev lpadmin gdm sambashare

Taking a quick look at the /dev directory, it seems many NVIDIA devices require the "video" group, so make sure the user you're using is a member of that group. (After changing groups, a user needs to log out and back in again for the change to apply.)
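As a sketch, assuming the ggc_user account from this thread, the group fix above would look something like this (run with sudo/root):

```shell
# Check which group owns the NVIDIA device nodes on the Nano (typically "video")
ls -l /dev/nvhost-* /dev/nvmap

# Append the "video" group to ggc_user's supplementary groups
# (-a is important: without it, usermod replaces the existing group list)
sudo usermod -aG video ggc_user

# Verify the membership; it takes effect on the user's next login
id ggc_user
```

After logging back in as ggc_user, the Lambda should be able to open the CUDA device nodes without running as root.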