Wrong projection when reimplementing the progressive example

Hello,

I’m currently trying to reimplement the “progressive” example from the SDK in my own framework.
The output in the window should look like this:

But somewhere in my code there is an error regarding the projection of the image, which is why it looks like this:

So my question is: does any of you have an idea where the error could be?

My Code:

void OptiXRenderer::createContext()
{
	//context for pinhole_camera
	...

	//initialize camera parameters
	eye_ = { 3.0f, 2.0f, -3.0f };
	setCameraValues(hfov_org_);

	//creating buffers and programs
	...
}
void OptiXRenderer::Update()
{
	//launch and set texture
	context_->launch(0, width_, height_);
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buffer_->getGLBOId());
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, OptiX_tex_);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei)width_, (GLsizei)height_, 0, GL_BGRA, GL_UNSIGNED_BYTE, nullptr); // BGRA8
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

void OptiXRenderer::setCameraValues( float hfov)
{	
	optix::float3 lookat, up;
	optix::float3 camera_u, camera_v, camera_w;

	lookat = { 0.0f, 0.3f, 0.0f };
	up = { 0.0f, 1.0f, 0.0f };

	const float aspect_ratio = (float)width_ / (float)height_;

	float ulen, vlen, wlen;
	camera_w.x = lookat.x - eye_.x;
	camera_w.y = lookat.y - eye_.y;  /* Do not normalize W -- it implies focal length */
	camera_w.z = lookat.z - eye_.z;

	wlen = sqrtf(dot(camera_w, camera_w));
	camera_u = cross(camera_w, up);
	normalize(camera_u);
	camera_v = cross(camera_u, camera_w);
	normalize(camera_v);
	ulen = wlen * tanf(hfov / 2.0f * 3.14159265358979323846f / 180.0f);
	camera_u.x *= ulen;
	camera_u.y *= ulen;
	camera_u.z *= ulen;
	vlen = ulen / aspect_ratio;
	camera_v.x *= vlen;
	camera_v.y *= vlen;
	camera_v.z *= vlen;

	context_["eye"]->setFloat(eye_);
	context_["U"]->setFloat(camera_u);
	context_["V"]->setFloat(camera_v);
	context_["W"]->setFloat(camera_w);
}

void OpenGLRenderer::Init(OptiXRenderer* optix_renderer)
{
	glViewport(0, 0, optix_renderer->GetWidth(), optix_renderer->GetHeight());

	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();

	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	glGenBuffers(1, optix_renderer->GetPBO());
	// Buffer size must be > 0 or OptiX can't create a buffer from it.
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *optix_renderer->GetPBO());
	glBufferData(GL_PIXEL_UNPACK_BUFFER, optix_renderer->GetWidth() * optix_renderer->GetHeight() * sizeof(unsigned char) * 4, nullptr, GL_STREAM_READ); // BGRA8
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

	glGenTextures(1, optix_renderer->GetOptiXTexture());
	//DP_ASSERT(tex_ != 0);
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, *optix_renderer->GetOptiXTexture());
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
	glBindTexture(GL_TEXTURE_2D, 0);
	
}

//display loop
bool OpenGLRenderer::Do()
{
	glPushAttrib( GL_ENABLE_BIT );
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glEnable(GL_LIGHTING);
	glEnable( GL_BLEND );

	glBindTexture(GL_TEXTURE_2D, *OptiX_tex);
	glEnable( GL_TEXTURE_2D );

	glBegin( GL_TRIANGLE_STRIP );

	glTexCoord2f( 0.0f, 0.0f );
	glVertex3f( -1.0f, -1.0f, -1.0f );

	glTexCoord2f( 0.0f, 1.0f );
	glVertex3f( -1.0f, 1.0f, -1.0f );

	glTexCoord2f( 1.0f, 0.0f );
	glVertex3f( 1.0f, -1.0f, -1.0f );

	glTexCoord2f( 1.0f, 1.0f );
	glVertex3f( 1.0f, 1.0f, -1.0f );

	glEnd();

	glBindTexture( GL_TEXTURE_2D, 0 );
	glDisable( GL_TEXTURE_2D );

	glPopAttrib();

	return true;
}

The display loop and the update function are called from a higher-level class.
It would be great if someone has an idea.

Thank you.

Since the floor looks okay and only the sphere geometry seems flattened out and lying in the same plane as the floor, you either provided incorrect vertex positions for the sphere, e.g. a copy-paste error with position.y == position.z, or something else in your OptiX setup or device code is incorrect.
Neither is part of the provided code excerpts, so it’s not possible to say.

Additionally, I would not recommend placing the texture blit exactly onto the OpenGL far plane, z-depth -1.0f in your case. Since you’re using a unit orthographic projection (all OpenGL matrices are identity), you could simply place it at 0.0f depth or use glVertex2f(), which does that automatically.
You should also definitely not enable lighting for that texture blit. Blending also wouldn’t be needed unless you’re compositing the ray-traced result with something. There is likewise no need to clear the depth buffer if you don’t have depth testing enabled, but it’s unclear what the rest of the code needs.
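
For illustration, a minimal sketch of such a blit under those assumptions (identity matrices, quad placed at 0.0f depth via glVertex2f(), lighting and blending left disabled). The name OptiX_tex is taken from your code above; this is a sketch, not the SDK’s exact display routine:

bool OpenGLRenderer::Do()
{
	glClear(GL_COLOR_BUFFER_BIT); // no depth clear needed without depth testing

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, *OptiX_tex);
	glEnable(GL_TEXTURE_2D); // lighting and blending stay disabled

	glBegin(GL_TRIANGLE_STRIP);
	glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // glVertex2f() implies z == 0
	glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
	glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
	glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
	glEnd();

	glDisable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, 0);

	return true;
}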

Since I’m not quite sure what you mean, I’ll just provide the rest of the code on the OptiX side, except for the material part, which shouldn’t be relevant:

void OptiXRenderer::createContext()
{	
	//context for pinhole_camera
	context_ = optix::Context::create();
	context_->setRayTypeCount(2);
	context_->setEntryPointCount(1);
	context_->setStackSize(320);

	context_["scene_epsilon"]->setFloat(1.e-3f);
	context_["radiance_ray_type"]->setUint(0);
	context_["frame"]->setInt(0u);

	//initialize camera parameters
	eye_ = { 3.0f, 2.0f, -3.0f };
	setCameraValues(hfov_org_);

	//create output buffer
	buffer_ = context_->createBufferFromGLBO(RT_BUFFER_OUTPUT, pbo_);
	buffer_->setFormat(RT_FORMAT_UNSIGNED_BYTE4);
	buffer_->setSize(width_, height_);
	context_["output_buffer"]->set(buffer_);


	//ray generation program
	context_->setRayGenerationProgram(0, context_->createProgramFromPTXFile(&path_to_ray_gen_ptx[0], "pinhole_camera"));
	context_->setExceptionProgram(0, context_->createProgramFromPTXFile(&path_to_ray_gen_ptx[0], "exception"));
	context_["bad_color"]->setFloat(0.0f, 1.0f, 0.0f);

	//miss program
	context_->setMissProgram(0, context_->createProgramFromPTXFile(&path_to_miss_ptx[0], "miss"));
	context_["bg_color"]->setFloat(0.462f, 0.725f, 0.0f);
	//context["envmap"]->setTextureSampler(loadTexture(context, "C:/ProgramData/NVIDIA Corporation/OptiX SDK 3.9.1/SDK/tutorial/data/CedarCity.hdr", {1.0f,1.0f,1.0f}));

	//Random seed buffer
	seed_buffer_ = context_->createBuffer(RT_BUFFER_INPUT);
	seed_buffer_->setFormat(RT_FORMAT_UNSIGNED_INT);
	unsigned int* seeds;
	seed_buffer_->setSize(width_, height_);
	seeds = (unsigned int*)seed_buffer_->map();
	for (unsigned int i = 0; i < width_*height_; ++i)
		seeds[i] = rand();
	seed_buffer_->unmap();
	context_["rnd_seeds"]->set(seed_buffer_);
}

void OptiXRenderer::createGeometry()
{
	optix::GeometryInstance sphere_gi, ground_gi;
	optix::GeometryGroup geom_group;

	//geometry sphere
	sphere_ = context_->createGeometry();
	sphere_->setPrimitiveCount(1u);
	sphere_->setBoundingBoxProgram(context_->createProgramFromPTXFile(&path_to_sphere_ptx[0], "bounds"));
	sphere_->setIntersectionProgram(context_->createProgramFromPTXFile(&path_to_sphere_ptx[0], "intersect"));
	sphere_["sphere"]->setFloat(2.0f, 3.2f, 0.0f, 1.3f);

	/* Ground geometry. */
	ground_ = context_->createGeometry();
	ground_->setPrimitiveCount(1u);
	ground_->setBoundingBoxProgram(context_->createProgramFromPTXFile("G:/ex_mb/TOS-Prototype/build-win32-x64/ptx/TOS-Prototype_generated_parallelogram.cu.ptx", "bounds"));
	ground_->setIntersectionProgram(context_->createProgramFromPTXFile("G:/ex_mb/TOS-Prototype/build-win32-x64/ptx/TOS-Prototype_generated_parallelogram.cu.ptx", "intersect"));
	
	optix::float3 anchor = { -30.0f, 0.0f, 20.0f };
	optix::float3 v1 = { 40.0f, 0.0f, 0.0f };
	optix::float3 v2 = { 0.0f, 0.0f, -40.0f };
	optix::float3 normal = cross(v2, v1);
	normal = normalize(normal);
	float d = dot(normal, anchor);
	v1 *= 1.0f / dot(v1, v1);
	v2 *= 1.0f / dot(v2, v2);
	optix::float4 plane = make_float4(normal, d);

	ground_["plane"]->setFloat(plane);
	ground_["v1"]->setFloat(v1);
	ground_["v2"]->setFloat(v2);
	ground_["anchor"]->setFloat(anchor);

	////geometry instance
	sphere_gi = context_->createGeometryInstance();
	sphere_gi->setGeometry(sphere_);
	sphere_gi->setMaterialCount(1);
	sphere_gi->setMaterial(0, ground_matl_);

	ground_gi = context_->createGeometryInstance();
	ground_gi->setGeometry(ground_);
	ground_gi->setMaterialCount(1);
	ground_gi->setMaterial(0, ground_matl_);

	//geometry group & acceleration
	geom_group = context_->createGeometryGroup();
	geom_group->setChildCount(2);
	geom_group->setChild(0, sphere_gi);
	geom_group->setChild(1, ground_gi);
	geom_group->setAcceleration(context_->createAcceleration("NoAccel", "NoAccel"));

	//attach to context
	context_["top_object"]->set(geom_group);

}

Ok, so you’re using a parametric sphere primitive. I thought this might be a tessellated sphere.

From the given sources I can’t say what’s wrong at first glance. I fear you need to keep debugging this yourself.

Ok, thank you nevertheless.

The code is pretty similar to the example code; could it still be that the problem is on the OptiX side?

You have the working OptiX ambient occlusion example code inside the SDK.
If you can’t get the same mechanism to work inside your own OpenGL framework, you need to identify what exactly is different, for example the enabled lighting and blending, which could easily result in pure white output when blitting a texture.

What I do in such cases is to debug both projects at the same time, single-stepping inside the debugger through both until I find something different in the non-working project, then change that to what I know is working and step through again, until the problem is solved. It sounds tedious, but it is the easiest method in such cases to spot an error in the new code.

Only if that doesn’t reveal the problem, you could also use the usual debug methods I linked to in the post “How could I debug the code in cuda files in a Optix project?” on the NVIDIA OptiX Developer Forums and see if the OptiX side actually does everything correctly.

I would first look at the texture blit itself, then at the image data, then at the OptiX code.
If the lighting and blending didn’t screw up the image, look at the data in the output buffer without using OpenGL interop: map the buffer and inspect it in a debugger memory window to see whether there are actually only green and white values and no greyscale ambient occlusion values. If yes, the OptiX side or the variables sent to it are incorrect.
The same could be done by printing out the output buffer results inside the ray generation program for some launch index where you expect ambient occlusion to happen.
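
As a rough sketch of that host-side check, assuming the buffer setup from your createContext() above (RT_FORMAT_UNSIGNED_BYTE4, i.e. 4 bytes per pixel in BGRA order); checkOutputBuffer() is a hypothetical helper, not something from the SDK:

void OptiXRenderer::checkOutputBuffer()
{
	RTsize w, h;
	buffer_->getSize(w, h);

	// RT_FORMAT_UNSIGNED_BYTE4 -> 4 bytes per pixel, mapped directly without GL interop.
	const unsigned char* pixels = static_cast<const unsigned char*>(buffer_->map());

	// Sample a horizontal line through the middle of the image.
	const RTsize y = h / 2;
	const RTsize step = (w >= 16) ? w / 16 : 1;
	for (RTsize x = 0; x < w; x += step)
	{
		const unsigned char* p = pixels + 4 * (y * w + x);
		printf("pixel(%d, %d) = B %d  G %d  R %d  A %d\n",
			(int)x, (int)y, p[0], p[1], p[2], p[3]);
	}

	buffer_->unmap();
}

The device-side variant would be rtPrintf() inside the ray generation program, with printing enabled on the context (rtContextSetPrintEnabled) and optionally restricted to a single launch index via rtContextSetPrintLaunchIndex.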

Ok, thank you, I’ll try that.