I am using the NVIDIA Texture Tools SDK from C, via the low-level API. This is what I have so far:
NvttRefImage nvtt_image = {
.data = linear_rgba,
.width = width,
.height = height,
.depth = 1,
.num_channels = 4,
.channel_swizzle = {NVTT_ChannelOrder_Red, NVTT_ChannelOrder_Green, NVTT_ChannelOrder_Blue, NVTT_ChannelOrder_Alpha},
.channel_interleave = true,
};
int num_tiles;
NvttCPUInputBuffer* nvtt_input_buffer = nvttCreateCPUInputBuffer(&nvtt_image, NVTT_ValueType_UINT8, 1, 4, 4, 1, 1, 1, 1, NULL, &num_tiles);
u32 out_size = width * height;
void* output = tallocate_safe(out_size);
nvttEncodeBC7CPU(&nvtt_input_buffer, false, true, output, false, false, NULL);
nvttDestroyCPUInputBuffer(&nvtt_input_buffer);
The Encode function crashes. I suspect the cause might be that I am running on an old computer (from around 2014), but I cannot confirm it; as far as I can tell I am using the API correctly, but who knows. Note that I am not compressing on the GPU: I am explicitly telling the library to use the CPU. So, assuming my API usage is correct, could the old CPU be the culprit here?
More info:
output is a block of memory aligned to 16 bytes (I also tried allocating it with plain malloc to make sure; it makes no difference).
The debugger reports “access violation reading address 0xffffffffffffffff”.
I am using the DLL from the SDK folder, nvtt30205.dll
CPU: Intel Ivy Bridge, with SSE4.2, SSSE3, and AVX (no FMA)
OS: Windows 10
Here is the call stack as viewed in WinDbg:
