Memory shortage in Host Code

Hi All,

I have a piece of code that I previously compiled and ran with gcc without any problems. But when I build and run the same code with CUDA, I get segmentation faults, and I found out the cause is a memory shortage on the host side: when I decrease the size of the data, the segmentation fault goes away, while I never had this problem with gcc. Is there any way I can fix this?


My first step would be to calculate the amount of memory you need. Not every variable, of course, just the big arrays and data structures you use.
On Linux, run top to see whether memory size is really the problem.

Second, how can you be sure the segmentation fault isn't caused by a wrong address or some other bug that just happens to surface at this particular data size?

I've often had this in my own projects: everything seemed fine until a different input size exposed a real bug in the code.

Perhaps you can post your code or a small example showing the problem?




Thanks for your reply. The problem is, I have already compiled and run the code with the actual data size (not the reduced one) using GCC and it works fine, but when I run it as CUDA host code, I get this error.

That's why I think there is something different about the CUDA configuration that makes this happen!

Any suggestions?


Run your code through valgrind and it will tell you where your segfault is occurring. Then perhaps you can find out why.

I don’t fully understand what you mean by “execute the code as CUDA host code”. After all, host code is host code, and it is all compiled by gcc anyway.