Which type (series) of NVIDIA GPU gives the best performance for scientific tasks such as sorting arrays and matrix multiplication?
It depends on a number of factors, such as whether you need 64-bit (double) precision, and so on.
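To make the 64-bit-precision point concrete, here is a minimal CPU-side NumPy sketch (nothing GPU-specific; the values are chosen purely for illustration) of why double precision can matter: accumulating many small addends onto a large running sum silently loses them in float32 but not in float64.

```python
import numpy as np

# float32 carries ~7 significant decimal digits, so a 1e-3 addend is
# below half a unit-in-the-last-place of 1e8 and rounds away on every
# addition; float64 (~16 digits) accumulates the addends correctly.
big, tiny, n = 1e8, 1e-3, 100_000

s32 = np.float32(big)
s64 = np.float64(big)
for _ in range(n):
    s32 += np.float32(tiny)   # rounds back to 1e8 every time
    s64 += np.float64(tiny)

print(s32)   # 1e8: every small addition was lost to rounding
print(s64)   # close to 1e8 + 100
```

If your computation involves this kind of long accumulation (dot products, reductions, iterative solvers), hardware FP64 throughput matters; if single precision suffices, the consumer GTX cards are far more cost-effective.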
If you need ECC or GPUDirect, or you will be using the GPU(s) continuously over a long period, then the Tesla line would be a good choice (e.g. the K40).
If the use is more research-based and the workload will come in spurts, then the GTX Titan Black, GTX 780 Ti, or GTX 980 could work.
My personal favorite is the GTX 780 Ti, because most of my problems are memory-bound and I seem to get the most out of that GPU. The GTX 980 is faster for 32-bit compute but has lower memory bandwidth, and struggles with 64-bit calculations.
what is ECC?
The simplest and most common implementation of ECC, also found in various NVIDIA GPUs, is the variant that provides single-bit error correction and double-bit error detection (SECDED). For some GPUs, the driver implements a mechanism on top of ECC that "retires" portions of the memory that incur repeated bit errors.
ECC is most important in large systems with many GPUs (clusters) or when doing very lengthy calculations, as it guards against silent failures in applications due to bit errors in memory. Memory bit error rates differ based on the type of memory, the age of the device, the likelihood of cosmic ray strikes (which increases with altitude), etc.
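As a toy illustration of what SECDED means (this is not NVIDIA's actual hardware implementation, just the textbook construction), here is an extended Hamming(8,4) code in Python: it corrects any single-bit error and detects, but cannot correct, any double-bit error.

```python
def encode(data4):
    """Encode 4 data bits (list of 0/1) into an 8-bit SECDED codeword:
    Hamming(7,4) plus an overall parity bit at index 0."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = data4
    c[1] = c[3] ^ c[5] ^ c[7]          # parity over positions 1,3,5,7
    c[2] = c[3] ^ c[6] ^ c[7]          # parity over positions 2,3,6,7
    c[4] = c[5] ^ c[6] ^ c[7]          # parity over positions 4,5,6,7
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]  # overall parity
    return c

def decode(c):
    """Return (status, data4). status: 'ok', 'corrected', or 'double-error'."""
    c = list(c)
    s = 0
    for i in range(1, 8):
        if c[i]:
            s ^= i                     # syndrome = XOR of set bit positions
    overall = c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    if s == 0 and overall == 0:
        status = 'ok'                  # clean codeword
    elif overall == 1:                 # odd parity: exactly one bit flipped
        c[0 if s == 0 else s] ^= 1     # flip it back (s==0: parity bit itself)
        status = 'corrected'
    else:                              # s != 0 but even parity: two flips
        return 'double-error', None
    return status, [c[3], c[5], c[6], c[7]]

word = encode([1, 0, 1, 1])
flipped = list(word)
flipped[5] ^= 1                        # simulate a cosmic-ray bit flip
print(decode(flipped))                 # ('corrected', [1, 0, 1, 1])
```

Real GPU ECC works on much wider words (e.g. 64 data bits plus 8 check bits), but the single-correct/double-detect behavior is the same idea.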