I have a piece of code that I previously compiled and executed with gcc without any problems. But when I build and run the same code with CUDA, I get segmentation faults, and I found out it is because of a memory shortage on the host side: when I decrease the size of the data, the segmentation fault goes away, while I never had this problem with gcc. Is there any way I can fix this?
Hi,
my first step would be to calculate the amount of memory you need. Not every variable, of course, but the big arrays and data structures you use.
If you are on Linux, try top to see whether memory size is really the problem.
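For example, on a Linux machine you could take a quick look with the standard procps tools (run these in a second terminal while your program is executing):

```shell
# Overall memory situation in human-readable units
free -h
# One-shot batch snapshot of the top memory consumers
top -b -n 1 -o %MEM | head -n 12
```

If the free memory drops to near zero just before the crash, that supports the memory-shortage theory.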
Second, how can you be sure that the segmentation fault is not caused by a wrong address or similar bug that only happens to surface at the larger size, rather than being a direct out-of-memory problem? In my own projects I have often seen everything go wrong long before reaching the code's real problem.
Perhaps you can post your code or a small example showing the problem?
Thanks for your reply. The problem is, I have already compiled and executed the code with the actual data size (not the reduced one) using GCC, and it works fine; but when I execute the same code as CUDA host code, I get this error.
That’s why I think there is something different about the CUDA configuration that makes this happen.