Quadro K5200 exceeds GeForce Titan X (Pascal) when processing large datasets

Hi - I hope to find some valuable insight…

Here's my issue:

I process large datasets of georeferenced drone imagery using Pix4D Mapper Pro. The app is CUDA friendly!

I have a dual-Xeon HP Z840 with 128 GB of RAM, and I can use either my Quadro K5200 or my GeForce Titan X (Pascal) card. The two cards are never in the machine at the same time, since they use different drivers.

The Titan has more CUDA cores and is the more powerful card on paper, yet the K5200 processes the data faster and without hangups. Why?

Is there a general incompatibility between Xeon machines and GeForce cards? Or does NVIDIA purposely downgrade the capabilities of GeForce cards when they are used in a Xeon machine?

The GeForce card doesn't have the presets in the control panel either. The app-recommended setting under Quadro is '3D App - Game Development'.

Any thoughts or assistance please…Thanks!

I think it would be hard to say without a careful study of the app, preferably by having access to the source code for the app.

No.

No.

Probably best to ask these questions of the developer of the app. It’s possible the app doesn’t fully recognize the newer GPU, and so is behaving differently.

Thanks for your reply. The app fully harnesses CUDA, and the CUDA device shows up as recognized in the app.

Why don't you see many Xeon configs with GeForce options? Also, the GeForce card requires more power than the Quadro card: an 8-pin connector instead of a 6-pin.

Vague assertions on top of a vague description with some conspiracy theories thrown in does not convey enough information to render technical assistance. If this is your own code, use the CUDA profiler to find the bottlenecks. If this is a third-party application, request assistance from the software vendor.

Presumably because system integrators find that the features provided by Xeon processors and Quadro GPUs target roughly the same markets, e.g. engineering workstations. Plus it maximizes their profits: they add a surcharge to the “raw” price of components, so using premium building blocks means more money for the integrator. The target market will bear the higher price.

As for power consumption, that is a question of GPU architecture, the number of CUDA cores in the GPU, the memory used on the card, and operating frequencies. If you look closely, you will find that a Quadro K5200 and a Titan X (Pascal) have quite different specifications. Wikipedia's article on Quadro provides a reasonably accurate matching of Quadro and GeForce models. For the K5200, it lists the GeForce GTX 780 as the closest consumer-grade equivalent.
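To put rough numbers on that spec difference, here is a back-of-the-envelope sketch of theoretical peak single-precision throughput. The core counts and base clocks below come from NVIDIA's published spec sheets and should be treated as approximate; real Pix4D throughput depends on the workload and drivers, which is exactly why the on-paper winner can still lose in practice:

```python
def peak_fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Theoretical FP32 peak: 2 FLOPs per core per cycle (one FMA)."""
    return cuda_cores * 2 * clock_ghz / 1000.0

# Quadro K5200 (Kepler): 2304 CUDA cores at roughly 0.65 GHz base clock
k5200 = peak_fp32_tflops(2304, 0.650)

# Titan X (Pascal): 3584 CUDA cores at roughly 1.417 GHz base clock
titan_x = peak_fp32_tflops(3584, 1.417)

print(f"K5200   ~{k5200:.1f} TFLOPS FP32")
print(f"Titan X ~{titan_x:.1f} TFLOPS FP32")
```

By this crude measure the Titan X should be over three times faster, so a result where the K5200 wins points at the software stack (driver profile, app code paths), not at the hardware.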

I'm also having a similar issue with Pix4D, so I will likely be submitting a ticket with them as well. I posted this info in the consumer community forum and was directed here; I saw this thread and thought I'd chime in.

Although the Titan Xp looks better on paper, it is being slaughtered in comparison to the 1080 Ti in Pix4D. We've tried a number of different drivers and driver configurations for the Titan Xp with no luck. Others are having this issue:

https://support.pix4d.com/hc/en-us/community/posts/115020346003-Quadro-vs-GeForce#gsc.tab=0
https://www.pugetsystems.com/labs/articles/Pix4D-GPU-Comparison-GeForce-Titan-and-Quadro-1085/

We were having a similar issue with the Quadros underperforming, but the driver has a simple option that seems to fix it: '3D App - Game Development' in the global options, and then it runs in Pix4D like a boss. This option is not available in the drivers for the Titan Xp. Is there something we are missing for the Titan Xp, or something else we could try?