Decuda error

Hi,
I am using CUDA 2.1 with a GeForce 9800.
I am getting the following error from decuda:

[su1015@tarjan laanwj-decuda-c30bd17]$ python decuda.py -p /home/su1015/project/matrixMul/matrixMul
Traceback (most recent call last):
  File "decuda.py", line 92, in <module>
    main()
  File "decuda.py", line 55, in main
    cu = load(args[0])
  File "/usr/local/pkgs/laanwj-decuda-c30bd17/CubinFile.py", line 258, in load
    inst = [int(x,0) for x in inst]
ValueError: invalid literal for int() with base 0: '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00>\x00\x01\x00\x00\x00`2@\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\xe0\x99\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00@\x008\x00\x08\x00@\x00'

Please help me.

Decuda requires a pre-CUDA 3.0 cubin file as input. From the look of the ELF header in the error message, you are trying to disassemble a host executable; that won't work. You will need to compile the device code with nvcc -cubin and then run decuda on the resulting cubin file, as sketched below.
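For example, something along these lines should work (assuming the kernel source is matrixMul.cu; sm_11 matches a GeForce 9800, adjust the architecture flag for other cards):

nvcc -cubin -arch=sm_11 matrixMul.cu -o matrixMul.cubin
python decuda.py -p matrixMul.cubin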

Registered developers have access to cuobjdump, which can disassemble CUDA binaries up to sm_13. If I recall correctly, the GeForce 9800 is sm_11, so it is covered. Since cuobjdump is fairly new, you would probably have to upgrade to a recent CUDA version, such as CUDA 3.2, to use it successfully (the format of binary files has changed since CUDA 2.1).
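Once you have a toolkit that ships cuobjdump, the invocation would look roughly like this (the -sass flag is from recent releases and may differ in older ones; it dumps the device assembly):

cuobjdump -sass matrixMul.cubin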