How to combine a binary data file with shared object file

I am making a GPU-based function library using CUDA. Some functions in the library need to use one data array. For now, I store the data in a separate binary file, about 250KB in size. But I have to hard-code the binary file's location, so if I just give the compiled .so file to someone else, the path to that binary file will be wrong for them. So I am wondering: can I write the data directly into my generated .so file? Or is there a better way to do it? Thanks

You can use the CUDA bin2c utility to convert your data into a byte/int/longlong array and then link it as needed.

bin2c --version
bin2c: NVIDIA (R) File to C data converter
Copyright (c) 2005-2012 NVIDIA Corporation
Built on Tue_Sep_25_09:17:08_PDT_2012
Cuda compilation tools, release 5.0, V0.2.1221

For example:

bin2c -c foo > foo.c

generates:

#ifdef __cplusplus
extern "C" {
#endif

const unsigned char foo[] = { ... };

#ifdef __cplusplus
}
#endif

Hi,

Thank you for your reply.

What if the binary data file contains floats? bin2c won't produce float-size data.

Treat the generated array as raw bits/bytes and cast it to “float*” or whatever structure or type you require.

The bin2c utility blindly converts a binary file into an array of integer values.

The choice of whether to generate an array of 8-bit, 16-bit, 32-bit or 64-bit integers is up to you. There are options for controlling alignment, trailing byte padding, etc.

See “bin2c -h”.

Hi,

My binary data file contains 4-byte floats. The binary file is named “test”, and I want to read the data into a C file “test.c”. I used the following command:

bin2c test -t int > test.c

Then when I open test.c, it shows something like this:

#ifndef CACODE_H_
#define CACODE_H_

unsigned int imageBytes[] = {
0xbf800000,0x00000000,0xbf800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,
0xbf800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,
0x3f800000,0x00000000,0x3f800000,0x00000000,0xbf800000,0x00000000,0x3f800000,0x00000000,
......
};

#ifdef __cplusplus
}
#endif

But how do I cast it to float? I tried, but the values are not correct. The correct values in the array should be +1, -1, and 0.

Thank you very much.

The code snippet above shows that the data was stored correctly. 0xbf800000 corresponds to -1.0f, and 0x3f800000 corresponds to 1.0f. Can you show the code you used to access this data? I would suggest something like:

float f;
int i;
f = __int_as_float (imageBytes[i]);

Hi,
If I just use

for (int i = 0; i < 10; i++)
    cout << float(imageBytes[i]) << " ";

why does it show something like:
3.21284e+09 0 3.21284e+09 0 1.06535e+09 0 1.06535e+09 0 3.21284e+09 0

The code uses an int-to-float conversion, but what you need is re-interpretation of the bit pattern defined by the integer as a float. For something that works in both host and device code, use a C++ reinterpret_cast.

What if I just change the code like this:

float imageBytes[] = {
0xbf800000,0x00000000,0xbf800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,
0xbf800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,0x3f800000,0x00000000,
0x3f800000,0x00000000,0x3f800000,0x00000000,0xbf800000,0x00000000,0x3f800000,0x00000000,
......
};

In this way, I think it will correctly store the data as float, right?

But why is the print out still wrong?

That won’t work as you desire. For an illustration why this is so, try this:

float foo[3] = {1,2,3};

and print foo[0], foo[1], foo[2]. The result should be self-explanatory. The basic difference you need to consider is between type conversion and the re-interpretation of bit patterns. What you need here is re-interpretation. CUDA has C-style re-interpretation functions __float_as_int() and __int_as_float(), and it also supports C++'s reinterpret_cast. Since I do not know whether you are looking at host code or device code, I suggested using reinterpret_cast, as that will work in both host and device code (it is a standard C++ feature).

Hi njuffa,

Thanks for your comments.

One more question: how does reinterpret_cast work on an array? Can it reinterpret the whole array in one statement?

Thanks

float f = reinterpret_cast<float&>(imageBytes[i]);