Hi,
I created a new user account and am trying to deploy a TensorFlow (TF) model.
However, when I run the Python program normally, I get the following error:
2019-02-04 17:43:41.142183: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2019-02-04 17:43:41.142262: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:150] kernel driver does not appear to be running on this host (jetson-0423018054844): /proc/driver/nvidia/version does not exist
Traceback (most recent call last):
  File "/home/projects/SANATA/.venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/projects/SANATA/.venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
    self._extend_graph()
  File "/home/projects/SANATA/.venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs. Registered devices: [CPU,XLA_CPU,XLA_GPU], Registered kernels:
  device='GPU'
This error goes away if I run the program using sudo. I would like to give the user the ability to access the GPU without sudo. How could I do this?
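To be concrete, by "normally" versus "with sudo" I mean invocations roughly like these (the script name here is just a placeholder for my actual program):
python3 my_model.py        # fails with the error above
sudo python3 my_model.py   # runs without the error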
Also, I tried changing the permissions on /dev/nvidia* to 777, as some answers suggested, but still no luck.
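Roughly, the command I used was the following (the exact invocation may have differed slightly):
sudo chmod 777 /dev/nvidia*
Afterwards the device node looks like this: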
ls -l /dev/nvidia*
crwxrwxrwx 1 root root 195, 255 Feb 4 17:23 /dev/nvidiactl