I’ve got the SDK examples to build now (see other posts). Here’s an example output:
NVIDIA: could not open the device file /dev/nvidiactl (No such file or directory).
Using device 0: Device Emulation (CPU)
Initializing data for 24000000 samples…
Loading CPU and GPU twisters configurations…
Generating random numbers on GPU…
Generated samples : 24002560
RandomGPU() time : 0.044000
Samples per second: 5.455127E+11
Applying Box-Muller transformation on GPU…
Transformed samples : 24002560
BoxMullerGPU() time : 0.040000
Samples per second : 6.000640E+11
Reading back the results…
Checking GPU results…
…generating random numbers on CPU using reference generator
…applying Box-Muller transformation on CPU
…comparing the results
Max absolute error: 5.283487E+00
L1 norm: 1.000000E+00
Press ENTER to exit…
So it seems the GPU (a 9500GT) is not being used and the CPU is emulating it instead — presumably because /dev/nvidiactl is missing, as the first line of the output says.
The toolkit release notes say X needs to be running for CUDA to work. I don’t have X on this machine and would rather not install it. I have seen a post with a script for initialising the card, but it’s written for RHEL and won’t work as-is on Ubuntu (and my Linux skills aren’t up to converting it, unfortunately).
Can somebody please advise whether there’s a way to get it working without installing X?
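For context, the scripts I’ve seen for this all do roughly the same thing: load the nvidia kernel module and create the /dev/nvidia* device nodes that X would otherwise create. A minimal sketch of that idea follows — it assumes character major number 195 and control minor 255 (the values documented in the NVIDIA Linux driver README), and it must be run as root, e.g. from an init script:

```shell
#!/bin/sh
# Load the NVIDIA kernel module; bail out if that fails.
/sbin/modprobe nvidia || exit 1

# Count NVIDIA controllers reported by lspci (VGA and 3D).
NVDEVS=`lspci | grep -i NVIDIA`
N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`
N=`expr $N3D + $NVGA - 1`

# Create one device node per GPU: /dev/nvidia0, /dev/nvidia1, ...
for i in `seq 0 $N`; do
  mknod -m 666 /dev/nvidia$i c 195 $i
done

# Create the control device the runtime opens first.
mknod -m 666 /dev/nvidiactl c 195 255
```

This is only a sketch of the general technique, not a tested Ubuntu solution — paths and the right place to hook it in at boot may differ from the RHEL version.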