Fast k nearest neighbor search using GPU: code now available

Hi everybody,

I am proud to announce today that the code of “Fast k nearest neighbor search using GPU” is now available.
Indeed, I have received many emails asking for the source code used in the paper “Fast k nearest neighbor search using GPU”, presented in the proceedings of the CVPR Workshop on Computer Vision on GPU.

You can obtain the code at: http://www.lix.polytechnique.fr/~garciav/research_knn.php.
It will also be available soon at: http://www.i3s.unice.fr/~garciav/research_knn.php.
However, the location of the code may change in the future; a Google search will allow you to find it.

If you plan to use our code, do not forget to cite our paper in your publications.
If you find possible optimizations or mistakes, feel free to contact me.
I will try to update the code regularly.

I hope our work will be useful for you.

Best GPU regards ;-)

Vincent Garcia

Vincent, thanks for publishing your code. I think this will be a useful tool to many people! :)

If you are going to allow others to use it (and still update it with your own optimizations), why not publish it as an open source project on Google Code (or similar), under a BSD or MIT license? This way, others could make use of it (or even help out with some contributions) without worrying if the project website is going to move around (plus, you can track your source changes and so forth).

Hi,

Actually, the code is already published under a Creative Commons license that allows everybody to use and modify the code.
However, your idea (profquail) is very good, and we are already looking for the most appropriate website to host our code.
I think we will publish it very soon as an open source project.

In the meantime, I am already working on a new faster version of the code :) .

Cheers,

Vincent Garcia

Hi CUDA programmers,

I have just updated the code. I now provide two implementations of the kNN search to fit your needs:

    With indexes: provides for each query point the distances to the k nearest neighbors and the indexes of these neighbors.

    Without indexes: provides for each query point only the distance to the k-th nearest neighbor. This version is faster than the version with indexes because, first, it uses less memory and, second, it does not spend time processing indexes.
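For anyone who wants a CPU baseline to check the GPU results against, here is a minimal brute-force sketch of the two variants in C++ (squared Euclidean distances, then a square root at the end). All function and variable names are illustrative, not part of the released code:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Squared Euclidean distance between two 'dim'-dimensional points.
static float sqDist(const float* a, const float* b, int dim) {
    float ssd = 0.f;
    for (int i = 0; i < dim; i++) {
        float d = a[i] - b[i];
        ssd += d * d;
    }
    return ssd;
}

// "With indexes": distances to the k nearest reference points, plus their indexes.
void knnWithIndexes(const std::vector<std::vector<float>>& ref,
                    const float* query, int dim, int k,
                    std::vector<float>& dist, std::vector<int>& idx) {
    int n = (int)ref.size();
    std::vector<float> all(n);
    for (int j = 0; j < n; j++) all[j] = sqDist(ref[j].data(), query, dim);
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    // Only the k smallest distances need to be sorted.
    std::partial_sort(order.begin(), order.begin() + k, order.end(),
                      [&](int a, int b) { return all[a] < all[b]; });
    dist.assign(k, 0.f);
    idx.assign(k, 0);
    for (int j = 0; j < k; j++) {
        idx[j] = order[j];
        dist[j] = std::sqrt(all[order[j]]);
    }
}

// "Without indexes": only the distance to the k-th nearest neighbor.
float knnKthDistance(const std::vector<std::vector<float>>& ref,
                     const float* query, int dim, int k) {
    std::vector<float> all(ref.size());
    for (size_t j = 0; j < ref.size(); j++)
        all[j] = sqDist(ref[j].data(), query, dim);
    // No index bookkeeping and no full sort, which is why this variant is cheaper.
    std::nth_element(all.begin(), all.begin() + (k - 1), all.end());
    return std::sqrt(all[k - 1]);
}
```

The second variant mirrors why the GPU version without indexes is faster: it never stores or reorders neighbor indexes, only a partial selection over distances.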

Cheers,

Vincent

very interesting work, “simple” algorithm and excellent results.

Thank you for your compliment :-)

I’m working now on a faster version of kNN.
I’ll publish the new versions during the summer.

Vincent

Nice work indeed, I attended the presentation at CVPR(CVGPU) in Alaska last year ;)

N.

Hi all,

I’ve published my code about KNN at http://www.i3s.unice.fr/~creative/KNN/

The provided CUDA code is usable from C, C++, and Matlab programs. I provide 4 implementations of the kNN search to fit your needs:

    KNN CUDA — CUDA implementation of the k-nearest neighbor search.

    KNN CUBLAS — CUBLAS implementation of the k-nearest neighbor search. This version is usually faster than KNN CUDA; it is based on matrix multiplication to compute the distances between points.

    With indexes — provides for each query point the distances to the k nearest neighbors and the indexes of these neighbors.

    Without indexes — provides for each query point only the distance to the k-th nearest neighbor. This version is faster than the version with indexes because, first, it uses less memory and, second, it does not spend time processing indexes.

The CUBLAS version was submitted to ICIP 2010. I hope the reviewers will like my work ;-)

Enjoy my code and feel free to contact me for any question/remark.

Vincent

Just a quick note/question - I’ve noticed some of your float related code uses this notation:

    float ssd = 0;
    for (int i = 0; i < dim; i++) {
        float tmp = tex2D(texA, (float)yIndex, (float)i) - B[i * pB + xIndex];
        ssd += tmp * tmp;
    }

Using ssd = 0 and not ssd = 0.f might cause the code to use doubles instead of floats, hurting performance…

Maybe it's something worth looking at…

eyal

I’ll check soon!

Thanks for your comment :-)

Cheers,

Vincent

Actually, float ssd = 0 will evaluate to float ssd = float(0) where 0 is an integer. It will be a one-time issue of converting an int to a float. If it happens before the main loop, it shouldn’t cause any performance issues. Subsequent uses of ssd will see a float.
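To make the distinction concrete, here is a small hypothetical contrast (function names are illustrative): the int literal in the initializer is converted to float once, in practice at compile time, whereas a double literal inside the loop promotes the arithmetic to double on every iteration, which is the case that actually hurts on float-oriented GPUs:

```cpp
#include <vector>

float sumSquaresFloat(const std::vector<float>& v) {
    float ssd = 0;             // int 0 converted to 0.f once; the loop stays in float
    for (float x : v)
        ssd += x * x * 1.0f;   // pure float arithmetic
    return ssd;
}

float sumSquaresDouble(const std::vector<float>& v) {
    float ssd = 0.f;
    for (float x : v)
        ssd += x * x * 1.0;    // double literal: promotes to double every iteration
    return ssd;                // same result here, but slower per-iteration arithmetic
}
```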

That’s very interesting! The implicit conversion makes sense to me. I don’t think this can cause a big performance issue.

Anyway, writing float ssd = 0.f; is of course better than what I did (thanks to eyalhir74).

I’ll change the code when I have time :-)

Thanks for your comments.

Vincent

Hi gals/guys,

The link to the project “Fast k nearest neighbor search using GPU” cited above is not active anymore.
I’ve just created a GitHub repository to host this project. I hope you’ll like it:

http://vincentfpgarcia.github.com/kNN-CUDA/

:)

Vincent Garcia

“Fast k nearest neighbor search using GPU” will be really useful in clustering algorithms. I would appreciate any information about how to compile it on Linux.

Very easy to read code, thank you Vincent for bumping this thread.

If you ever need contributors to future projects drop me a line.

https://github.com/OlegKonings?tab=repositories