According to the OptiX Programming Guide that should just work:
What 3D type is your gridIndexOffset?
There are multiple ways to express this, for example, making the buffer of buffer IDs a 1D integer array and then casting its elements explicitly:
rtBuffer<int, 1> mip;
// Construct the rtBufferId from the int buffer ID stored in the buffer element:
rtBufferId<lsOptiX::Grid, 3> grids = rtBufferId<lsOptiX::Grid, 3>(mip[0]);
const size_t3 gridIndexOffset = make_size_t3(0, 0, 0);
lsOptiX::Grid grid = grids[gridIndexOffset];
If neither works, does it complain about the 3D index?
The operator on the buffer is implemented inside optix_device.h as templates (around line 390 in OptiX 6.0.0) and should work for any dimension as long as the index components can be converted to size_t.
Note that CUDA 9.2 is not officially supported by OptiX 5. The OptiX Release Notes list CUDA 9.0 as the maximum supported version. If this is not just about the 3D index, I would try with CUDA 9.0.
When installing the CUDA toolkits, please use the Custom option, then disable all display driver components before installing. If the NVIDIA Control Panel of your display driver disappears (bug in the standalone CUDA 9.x toolkit installers), reinstall your current or the newest display driver for your board afterwards.