This is an EC2 instance of type g4dn.xlarge.
This is what nvidia-smi returns when run on that machine:
$ nvidia-smi
Fri Feb 24 18:37:56 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   38C    P8    15W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
deviceQuery seems to run just fine, as well.
On this particular machine, an app I created that uses OptiX crashes as reported above. On another machine, set up as g4dn.4xlarge, the error does not show up.
Since these instances can only be accessed through a command-line interface, I wasn't able to compile or run the OptiX SDK examples.
I'm at a loss as to why this happens. Do you have any pointers on what to check so this machine can run not just my app, but any app that uses OptiX?
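For reference, below is a minimal headless check I could build and run over SSH to see whether OptiX initializes at all on this machine. This is only a sketch: it assumes an OptiX 7.x SDK whose headers are unpacked under ~/NVIDIA-OptiX-SDK/include and a CUDA toolkit under /usr/local/cuda, and the file name optix_check.cpp is just a placeholder.

// optix_check.cpp - minimal OptiX initialization test (no window/display needed).
#include <cuda.h>
#include <cuda_runtime.h>
#include <optix.h>
#include <optix_function_table_definition.h>  // defines g_optixFunctionTable once
#include <optix_stubs.h>
#include <cstdio>

int main()
{
    // Touch the CUDA runtime so a primary context exists on device 0.
    cudaError_t cuErr = cudaFree(0);
    if (cuErr != cudaSuccess) {
        std::printf("CUDA init failed: %s\n", cudaGetErrorString(cuErr));
        return 1;
    }

    // Load the OptiX entry points from the display driver.
    OptixResult res = optixInit();
    if (res != OPTIX_SUCCESS) {
        std::printf("optixInit failed with code %d\n", static_cast<int>(res));
        return 1;
    }

    // Create a device context on the current CUDA context (0 = use current).
    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    res = optixDeviceContextCreate(0, &options, &context);
    if (res != OPTIX_SUCCESS) {
        std::printf("optixDeviceContextCreate failed with code %d\n", static_cast<int>(res));
        return 1;
    }

    std::printf("OptiX context created successfully.\n");
    optixDeviceContextDestroy(context);
    return 0;
}

Something like g++ -I ~/NVIDIA-OptiX-SDK/include -I /usr/local/cuda/include optix_check.cpp -o optix_check -L /usr/local/cuda/lib64 -lcudart -ldl should build it without any of the SDK's windowing dependencies; if optixInit() or optixDeviceContextCreate() already fails here, that would point to the driver/OptiX setup on the instance rather than to my app.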