Is the concurrent rendering ability of a GT 610 better than a GTX 770?

I’ve asked this question on the GeForce forums, but have had no luck yet.
So I’m reposting it here in the hope that someone may know something about my problem.


What’s your operating system, version, and bitness?

Are the system configurations of the two servers identical? (CPUs, RAM, display driver, etc.)
If not, does it behave the same if you swap the GPUs?

If the apps normally run at 60 Hz, are you measuring performance with VSync enabled?
If so, how does it perform with VSync disabled? E.g. under Windows: NVIDIA Control Panel -> Manage 3D Settings -> Global Settings -> Vertical Sync -> Off.

I’m testing under Linux (Ubuntu 14.04, 64-bit).

The two servers differ a little: the CPU and RAM of the machine with the GTX 770 are slightly better than those of the one with the GT 610. Both have 32 GB of RAM (enough for all the applications), and both use the X11 display system.
Besides, I used a CPU benchmark to put load on the CPU while my GPU program was running, but the FPS did not drop as much as when I start another GPU program: with the CPU busy, the FPS drops by less than 5.

Since the two GPU servers come from different providers, I can’t swap the GPUs.

I haven’t tried VSync yet; I’ll test it and report the result later.

What I have found so far is that the GPU driver does matter:
When both GPUs use driver version 304.XX, the result is as described above. But when I upgraded the driver on the GTX 770 to 346.XX, the newest one on the NVIDIA website, the FPS drop became smaller: the minimum FPS when starting another GPU program can be better than 20.
But another problem came up: the maximum number of concurrent GPU programs dropped a lot. With 304.XX it is about 30, while with 346.XX it becomes 12, with a “makeCurrent failed” error.

There is not enough information on what you’re doing exactly in your benchmarks.
Running CPU benchmarks in parallel will take resources away from the driver threads and anything could happen then on different machines.

I remember some explanation for the limited number of OpenGL contexts under Linux but can’t find it right now. I’ve moved the thread to the Linux forum.

It’s a simple benchmark I wrote myself: it creates 1000 threads that each do some random mathematical calculations in a while(1) {} loop.

Thanks a lot for helping.