Volume rendering + perspective correction

I’m using the SDK volume rendering example. However, I’m really struggling to get the volume to sync up with the OpenGL geometry that I overlay on top. It’s close, but not perfect.

I believe this is because I can’t get the perspective information into the eye-space transformation part of the volume view. Can anyone with experience in ray tracing please tell me what I’m doing wrong?

//modified sdk example
__global__ void
d_render(uint *d_output, uint imageW, uint imageH,
         float density, float brightness)
{
	int maxSteps = 500;
	float tstep = 0.01f;
	float3 boxMin = make_float3(-1.0f, -1.0f, -1.0f);
	float3 boxMax = make_float3(1.0f, 1.0f, 1.0f);

	uint x = __umul24(blockIdx.x, blockDim.x) + threadIdx.x;
	uint y = __umul24(blockIdx.y, blockDim.y) + threadIdx.y;

	float u = ((x / (float) imageW)*2.0f-1.0f);
	float v = ((y / (float) imageH)*2.0f-1.0f);

	//what the heck am I doing wrong here???
	float fovy = 0.785398163f;   //45.0 degrees
	float fovx = ((float)imageW/(float)imageH)*fovy;
	//u *= __tanf(fovx/2 * ((y - imageW/2.0f)/(imageW/2.0f)));
	//v *= __tanf(fovy/2 * ((x - imageH/2.0f)/(imageH/2.0f)));
	u *= __tanf(fovx);
	v *= __tanf(fovy);

	// calculate eye ray in world space
	Ray eyeRay;
	eyeRay.o = make_float3(mul(c_invViewMatrix, make_float4(0.0f, 0.0f, 0.0f, 1.0f)));
	eyeRay.d = normalize(make_float3(u, v, -2.0f));
	eyeRay.d = mul(c_invViewMatrix, eyeRay.d);

	//.............
}

Essentially, I tried to follow http://www.unknownroad.com/rtfm/graphics/rt_eyerays.html, but it didn’t work (the image aspect comes out wrong and seems arbitrary).

The math at the site you linked to is bogus; basically, it says that (fovx / fovy) == (imageW / imageH). This is not correct. If you calculate your v coordinate based on fovy, then you probably want u = v * (imageW / imageH).

Thanks for your reply, but I’m confused: why do I need to use v to compute u?

That site did seem wrong. I need to cast rays as determined by fovy (45.0 degrees) and the aspect ratio of my PBO so that, hopefully, it’ll line up with my geometry, which uses the same coordinate system (i.e. a point located at [100,100,100] corresponds to that same voxel within my volume texture).

I feel like the texture normalization & unit cube used by the example may also be messing things up, but I don’t want to screw up the sampling by changing it. Either way, I’ll worry about this once I get the aspect right.
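For reference, the mapping I’m talking about is (I believe) just the sample’s pos*0.5f+0.5f conversion from box space to texture space, i.e. in plain C (function name is mine):

```c
/* Sketch: map a point in the [-1,1]^3 box used by the SDK sample
 * to [0,1]^3 3D texture coordinates. */
void box_to_tex(const float p[3], float tex[3])
{
    for (int i = 0; i < 3; ++i)
        tex[i] = p[i] * 0.5f + 0.5f;
}
```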

For simplicity, the volume rendering sample in the SDK uses a simple projection which won’t match OpenGL.

Here’s some code that should match the OpenGL projection (not necessarily efficient):

__constant__ float4x4 c_invViewMatrix;      // copy inverse view matrix here
__constant__ float4x4 c_invViewProjMatrix;  // copy inverse of (projection * view) matrix here
...
	// calculate eye ray in world space
	Ray eyeRay;
	eyeRay.o = make_float3(mul(c_invViewMatrix, make_float4(0.0f, 0.0f, 0.0f, 1.0f)));

	// transform from clip (-1,1) space back to world space
	float4 p = mul(c_invViewProjMatrix, make_float4(u, v, 0.0f, 1.0f));
	p.x /= p.w;
	p.y /= p.w;
	p.z /= p.w;
	eyeRay.d = normalize(make_float3(p.x - eyeRay.o.x, p.y - eyeRay.o.y, p.z - eyeRay.o.z));