My Numerical Analysis professor assigned a Travelling Salesman Problem as part of my homework. To solve it, I must produce (but not necessarily write) software that finds an efficient path through 19 vertices; the optimal route is ideal but not strictly necessary. I already know ways of doing this, but I would like to take a different approach so that I learn something new. My professor said that we can solve this by writing software in any language we want, and that we may use code not necessarily written by ourselves as long as we cite the source, so I have tremendous freedom in terms of what I can do. For those reasons, I would like to solve this problem on my desktop’s XFX GeForce GTS 250 graphics card.
There are a few things that I would like to know. The first is whether or not this has already been done. My homework is due on December 2, so I would prefer to recycle someone else’s code so that I can start running a TSP solver sooner rather than later. The second is whether or not it is possible to compile the program to run on a CPU. While I will run it on my GPU, I am not sure whether the grader has a CUDA-capable GPU available, so I would like to give him the ability to compile a generic version for his CPU to make grading easier.
However, it looks like it needs some redesigning to use more threads.
You should probably use a smarter algorithm than the brute-force approach.
The latter will give you 19! different permutations to check. Even if you launch the maximum number of threads in CUDA, you still have to check roughly 50,000 permutations per thread.
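To put numbers on that: 19! ≈ 1.2 × 10^17 tours, and assuming the GTS 250's compute-capability-1.x limits (a 65535 × 65535 grid of 512-thread blocks, i.e. about 2.2 × 10^12 threads per launch), that leaves on the order of 50,000 tours per thread. One standard exact alternative that avoids enumerating tours at all is Held-Karp dynamic programming, which runs in O(2^n · n^2) time; for n = 19 that is roughly 10^8 table updates. This is only a minimal CPU sketch, with a random distance matrix standing in for the real homework instance:

```cpp
// Held-Karp dynamic programme: exact TSP in O(2^n * n^2) time.
// dp[S*n + j] = length of the shortest path that starts at city 0, visits
// exactly the cities in bitmask S, and ends at city j (city 0 is always in S).
// The random distance matrix is a placeholder for the real homework instance.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 19;
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> d(1.0, 100.0);
    std::vector<double> dist(n * n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            dist[i * n + j] = (i == j) ? 0.0 : d(rng);

    const double INF = 1e18;
    // About 80 MB for n = 19; switch to float if that is too much memory.
    std::vector<double> dp((std::size_t(1) << n) * n, INF);
    dp[std::size_t(1) * n + 0] = 0.0;            // start at city 0, only city 0 visited
    for (unsigned S = 1; S < (1u << n); ++S) {
        if (!(S & 1u)) continue;                 // every state must contain city 0
        for (int j = 0; j < n; ++j) {
            double cur = dp[std::size_t(S) * n + j];
            if (!(S & (1u << j)) || cur >= INF) continue;
            for (int k = 0; k < n; ++k) {
                if (S & (1u << k)) continue;     // extend the path to an unvisited city k
                std::size_t T = std::size_t(S | (1u << k)) * n + k;
                dp[T] = std::min(dp[T], cur + dist[j * n + k]);
            }
        }
    }
    const std::size_t full = (std::size_t(1) << n) - 1;
    double best = INF;
    for (int j = 1; j < n; ++j)                  // close the tour back to city 0
        best = std::min(best, dp[full * n + j] + dist[j * n + 0]);
    std::printf("optimal tour length: %f\n", best);
    return 0;
}
```

This runs in seconds on a CPU for 19 cities, so it also doubles as a check on whatever the GPU version produces.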
But with only 19 points, you could probably solve it by hand before December 2 without a problem. And at that size, even brute force would probably be faster on a CPU.
A quick Google search for “"traveling salesman problem" cuda” turns up some results. It looks like ant colony optimization is a popular method.
In any case, I'd write a simple algorithm for the CPU first. Just get it working and make sure you can hand it in on time; then, in the remaining time, you can experiment with a better solution. It's often faster and easier to write the code for the CPU and then port it to the GPU anyway.
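For what it's worth, a bare-bones ant colony optimization loop for TSP fits in well under a hundred lines and parallelizes naturally (one ant per thread), which is presumably why it shows up in those CUDA search results. Here is a rough CPU sketch; the random coordinates and every parameter value (ants, iterations, alpha, beta, rho, Q) are illustrative guesses rather than tuned settings:

```cpp
// Minimal ant colony optimization for a small TSP instance.
// Random coordinates and all parameters below are placeholders for illustration.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 19;                      // number of vertices in the assignment
    const int ants = 32, iters = 500;
    const double alpha = 1.0, beta = 5.0, rho = 0.5, Q = 100.0;

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coord(0.0, 100.0), uni(0.0, 1.0);

    // Random city coordinates stand in for the real homework data.
    std::vector<double> x(n), y(n);
    for (int i = 0; i < n; ++i) { x[i] = coord(rng); y[i] = coord(rng); }
    std::vector<std::vector<double>> dist(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            dist[i][j] = std::hypot(x[i] - x[j], y[i] - y[j]);

    std::vector<std::vector<double>> tau(n, std::vector<double>(n, 1.0)); // pheromone
    std::vector<int> bestTour;
    double bestLen = 1e18;

    for (int it = 0; it < iters; ++it) {
        std::vector<std::vector<int>> tours(ants);
        std::vector<double> lens(ants, 0.0);
        for (int a = 0; a < ants; ++a) {
            std::vector<bool> visited(n, false);
            int cur = a % n;               // spread the ants over start cities
            tours[a].push_back(cur);
            visited[cur] = true;
            for (int step = 1; step < n; ++step) {
                // Pick the next city with probability proportional to
                // pheromone^alpha * (1/distance)^beta (roulette-wheel selection).
                std::vector<double> w(n, 0.0);
                double sum = 0.0;
                for (int j = 0; j < n; ++j)
                    if (!visited[j]) {
                        w[j] = std::pow(tau[cur][j], alpha) *
                               std::pow(1.0 / dist[cur][j], beta);
                        sum += w[j];
                    }
                double r = uni(rng) * sum, acc = 0.0;
                int next = -1;
                for (int j = 0; j < n; ++j)
                    if (!visited[j]) { acc += w[j]; if (acc >= r) { next = j; break; } }
                if (next < 0)              // floating-point fallback: any unvisited city
                    for (int j = 0; j < n; ++j) if (!visited[j]) { next = j; break; }
                lens[a] += dist[cur][next];
                cur = next;
                visited[cur] = true;
                tours[a].push_back(cur);
            }
            lens[a] += dist[cur][tours[a][0]];     // close the tour
            if (lens[a] < bestLen) { bestLen = lens[a]; bestTour = tours[a]; }
        }
        // Evaporate pheromone, then deposit along each ant's tour.
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) tau[i][j] *= (1.0 - rho);
        for (int a = 0; a < ants; ++a)
            for (int k = 0; k < n; ++k) {
                int i = tours[a][k], j = tours[a][(k + 1) % n];
                tau[i][j] += Q / lens[a];
                tau[j][i] += Q / lens[a];
            }
    }
    std::printf("best tour length found: %f\norder:", bestLen);
    for (int c : bestTour) std::printf(" %d", c);
    std::printf("\n");
    return 0;
}
```

A common GPU mapping is one ant per thread (or per block) constructing tours in parallel, but that port is a separate exercise.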
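In that spirit, a first CPU version can be as small as a greedy nearest-neighbour construction: it is not optimal, but it compiles anywhere and runs instantly for 19 cities, so you have something working to hand in while you experiment with the GPU. A minimal sketch, where the random coordinates are a stand-in for the actual 19 vertices:

```cpp
// Greedy nearest-neighbour tour: a quick, non-optimal CPU baseline.
// The random coordinates below are placeholders for the real homework data.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 19;
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> coord(0.0, 100.0);
    std::vector<double> x(n), y(n);
    for (int i = 0; i < n; ++i) { x[i] = coord(rng); y[i] = coord(rng); }

    auto dist = [&](int i, int j) { return std::hypot(x[i] - x[j], y[i] - y[j]); };

    std::vector<bool> visited(n, false);
    std::vector<int> tour{0};              // start arbitrarily at city 0
    visited[0] = true;
    double length = 0.0;
    for (int step = 1; step < n; ++step) {
        int cur = tour.back(), next = -1;
        double best = 1e18;
        for (int j = 0; j < n; ++j)        // always hop to the closest unvisited city
            if (!visited[j] && dist(cur, j) < best) { best = dist(cur, j); next = j; }
        tour.push_back(next);
        visited[next] = true;
        length += best;
    }
    length += dist(tour.back(), tour.front());   // return to the start

    std::printf("tour length: %f\norder:", length);
    for (int c : tour) std::printf(" %d", c);
    std::printf("\n");
    return 0;
}
```

If you want better tours on the CPU later, running a few 2-opt improvement passes over this result is an easy next step before you touch CUDA at all.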