I want to buy a laptop for coding and debugging CUDA software that will be launched on a desktop with a GTX 470.
I have seen many posts on these forums saying that there are no Linux drivers for Fermi mobile GPUs with Optimus technology. Is that true?
If so, what about Windows 7 using Visual Studio 2010?
Is anybody actually using Fermi-enabled laptops for CUDA computing?
Using Fermi-based laptops shouldn't be a problem on Windows 7 with Visual Studio. I read that people had some issues with VS2010, but they were later resolved.
Not to hijack your thread, but if I can add a question to yours: is it possible to do Nexus debugging on a laptop with both integrated graphics and a discrete GF1xx/GT200 GPU? That is, debug the discrete NVIDIA GPU while the IGP handles the screen output. Does this in any way depend on Optimus?
I am not smart enough to answer whether or not Optimus is required. I can only tell you that when I look at my 1215N, there are three stickers: Intel Atom, Nvidia Ion, Nvidia Optimus. So this combination works. (Note that the Ion is equivalent to a GT218, which is compute capability 1.2.) It definitely has limited capability (deviceQuery output below), but it is fine for working on code snippets or learning new functionality.
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
There is 1 device supporting CUDA
Device 0: "ION"
CUDA Driver Version: 3.20
CUDA Runtime Version: 3.20
CUDA Capability Major/Minor version number: 1.2
Total amount of global memory: 431620096 bytes
Multiprocessors x Cores/MP = Cores: 2 (MP) x 8 (Cores/MP) = 16 (Cores)
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Clock rate: 1.09 GHz
Concurrent copy and execution: Yes
Run time limit on kernels: No
Integrated: No
Support host page-locked memory mapping: Yes
Compute mode: Default (multiple host threads can use this device simultaneously)
Concurrent kernel execution: No
Device has ECC support enabled: No
Device is using TCC driver mode: No
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 3.20, CUDA Runtime Version = 3.20, NumDevs = 1, Device = ION
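For what it's worth, you don't need the full SDK deviceQuery sample to check a laptop's GPU: the fields above all come from cudaGetDeviceProperties in the runtime API. Here is a minimal sketch (compiled with nvcc) that prints the same properties the listing shows; the exact fields chosen are just a subset for illustration:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found\n");
        return 1;
    }

    // Query device 0, the same device deviceQuery reports first.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("Device 0: \"%s\"\n", prop.name);
    printf("CUDA Capability Major/Minor version number: %d.%d\n",
           prop.major, prop.minor);
    printf("Total amount of global memory: %lu bytes\n",
           (unsigned long)prop.totalGlobalMem);
    printf("Total amount of shared memory per block: %lu bytes\n",
           (unsigned long)prop.sharedMemPerBlock);
    printf("Total number of registers available per block: %d\n",
           prop.regsPerBlock);
    printf("Warp size: %d\n", prop.warpSize);
    printf("Maximum number of threads per block: %d\n",
           prop.maxThreadsPerBlock);
    printf("Concurrent kernel execution: %s\n",
           prop.concurrentKernels ? "Yes" : "No");
    return 0;
}
```

On a compute capability 1.2 part like the Ion you should see major/minor come back as 1.2 and concurrent kernels as No, matching the listing above (concurrent kernel execution requires Fermi, i.e. compute capability 2.0).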