Embedding optix-ir

Is it possible to embed optix-ir into an application, and if so, is there an example online to look at?

I have previously embedded PTX following Ingo’s examples, using bin2c to encode the PTX as a char string. Now I wish to migrate to OptiX IR. For this I changed the nvcc flag from -ptx to --optix-ir. When compiling the module I get

Program aborted due to an unhandled Error:
Invalid bitcode signature

from optixModuleCreate at runtime. Hints on how to move forward, and online examples would be appreciated.

Yes, that should just work, unless you did anything with strings, because there can be null characters inside the binary *.optixir.
So if you didn’t also adjust the optixModuleCreate function arguments throughout the code to not use things like ptxCode.c_str(), it won’t work.

And of course the OptiX IR needs to be generated with a CUDA toolkit version which is supported by the installed display driver, or you get errors like this: https://forums.developer.nvidia.com/t/optix-8-module-creation-version-mismatch/272885

For faster compile turnaround times in my OptiX examples, I’m not embedding OptiX module input files but simply storing them in an application-specific folder relative to the executable and loading them as binary from there. That code does not use strings to handle either PTX or OptiX IR input!

The OptiX SDK 8.0.0 has some CMake macros using BIN2C as well. I don’t know if that is actually used, yet.

What is your display driver version?
There was a bug where optixModuleCreate did not adhere to the input code size argument correctly, but that should have been fixed in current R535 drivers (535.92 and newer).

Thanks for the reply.

I am embedding and using the code as

static const unsigned char theoptixir[] = {...}; // from bin2c
auto size = sizeof(theoptixir);
optixModuleCreate(context, &module_compile_options, &pipeline_compile_options,
                          (const char*) theoptixir, size, olog, &sizeof_log, &module_);

It does indeed contain 0x00 very early.
I am using CUDA Toolkit 12.0, OptiX 8.0, display driver 537.58.

I don’t see why that would fail.

Have you tested if the generated *.optixir files themselves work, like when reading them from file with the function readData() I’m using in that example code?

I gave that a short test by converting the raygeneration program of one of my examples with bin2c (from CUDA 12.2.2) and used the embedded code like this and everything kept working.

#ifdef __cplusplus
extern "C" {
#endif

static const unsigned char raygen[] = { ... };

#ifdef __cplusplus
}
#endif


      OPTIX_CHECK( m_api.optixModuleCreate(m_context, &moduleCompileOptions, &pipelineCompileOptions, (const char*) raygen, sizeof(raygen), nullptr, nullptr, &modules[MODULE_ID_RAYGENERATION]) );

Problem solved! I narrowed it down to bin2c by testing with your readData.

bin2c.exe -c --padd 0 --static --type char --name optixir optixir.optix > optixir.c

Removing the padding argument solved the issue. Not sure why padding 0 was needed at some point, but taken from here:

Great! Yeah, I was using the bin2c command line without padd or type because the defaults are fine:

bin2c.exe --static --const --name raygen raygen.optixir

The default alignment is 1, which doesn’t need any padding for char data. Though if it post-added a null char at the end to make the array data a C string (which wouldn’t hurt for the PTX input case, and isn’t required anyway due to the explicit size argument in optixModuleCreate), that is definitely incorrect for binary data.
