This question is quite old, but I found this post at the top of the results when I searched for an answer to the same problem, so answering it may help others: unsigned long has different sizes on 64-bit Windows (4 bytes) and 64-bit Linux (8 bytes). Because the GPU's texture hardware only supports 32-bit values, the unsigned long texture functions are disabled when compiling under 64-bit Linux. Using unsigned int instead resolves the problem, since unsigned int is 32 bits (4 bytes) on both systems.
How did I find this? Since the SDK include file texture_fetch_functions.h contains an unsigned long tex1Dfetch function definition on Linux as well, I compiled with nvcc -E to inspect the preprocessor output. The unsigned long tex1Dfetch function was missing on Linux, while the unsigned int tex1Dfetch was still there. That led me to the #if !defined(LP64) statement in texture_fetch_functions.h, which gave me the final clue for the solution.