Error from nvidia-smi on AWS instance with GPU

I have installed and loaded the NVIDIA drivers successfully (all four kernel modules).
However, I can't run CUDA applications: I get error number 10 from cudaGetDevice
when trying to run even a simple program such as deviceQuery.
I have also tried running nvidia-smi, which generated the following error:

Unable to determine the device handle for GPU 0000:00:1E.0: The NVIDIA kernel module detected an issue with GPU interrupts. Consult the "Common Problems" Chapter of the NVIDIA Driver README for details and steps that can be taken to resolve this issue.
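For reference, even a minimal check along these lines fails for me (this is just a sketch against the standard CUDA runtime API, not the actual deviceQuery source; the file name check_device.cu is only illustrative):

```
// Minimal repro sketch (assumption: standard CUDA runtime API,
// built with `nvcc check_device.cu -o check_device`).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // On this instance the call fails instead of reporting the GPU.
        printf("cudaGetDeviceCount failed: %d (%s)\n",
               (int)err, cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);

    int dev = -1;
    err = cudaGetDevice(&dev);  // this is where error 10 shows up
    if (err != cudaSuccess) {
        printf("cudaGetDevice failed: %d (%s)\n",
               (int)err, cudaGetErrorString(err));
        return 1;
    }
    printf("Current device: %d\n", dev);
    return 0;
}
```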

The OS is CentOS 6.
The driver version is 390.116, to support CUDA 9.1.
It is a g3.4xlarge instance in AWS.
