Objects appearing in the wrong order after scaling

So I have noticed a bug: when I put multiple instances of a sphere geometry into the scene, they sometimes appear in the wrong order.
Basically I have this sphere code as a geometry:

#define NOMINMAX
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

using namespace optix;

rtDeclareVariable(float3, normal, attribute normal, );
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );

RT_PROGRAM void intersect(int primIdx)
{
	auto o = ray.origin - make_float3(0.f,0.f,0.f);
	auto r = 1.f;
	auto dir = normalize(ray.direction);
	
	auto b = dot(o, dir);
	auto c = dot(o, o) - r * r;

	auto disc = b * b - c;

	if (disc >= 0) {
		auto sdisc = sqrtf(disc);
		auto t = -b - sdisc;
		auto check_second = true;
		if (rtPotentialIntersection(t)) {
			normal = o + dir * t;
			if (rtReportIntersection(0)) check_second = false;
		}
		if (check_second) {
			t = -b + sdisc;
			if (rtPotentialIntersection(t)) {
				normal = o + dir * t;
				rtReportIntersection(0);
			}
		}
	}
}

RT_PROGRAM void boundingbox(int primIdx, float result[6])
{
	result[0] = -1.f;
	result[1] = -1.f;
	result[2] = -1.f;
	result[3] = 1.f;
	result[4] = 1.f;
	result[5] = 1.f;
}

These programs are uploaded into the Geometry, which is then instantiated, put into a GeometryGroup, wrapped in a Transform, and then put under the big top Group.

And the issue is that if I have a sphere close to the camera with 0.5 scaling, and a big sphere in the background with 5.0 scaling, the small sphere should always be in front of the big one. (Like in the second upload.)
But if I move backwards, the small sphere sinks into the big sphere and then disappears completely. (First upload.)

What am I doing wrong? This sphere thing should be so simple compared to the other things I have done in OptiX already.


Hi adam95,

I don’t see anything wrong with your intersection program. The problem might lie in the scene graph construction or the acceleration structure handling. Are you moving things in the scene dynamically in between launches, or setting everything up once and rendering only a single frame? Are you always marking your accel dirty after making any transform or scene graph changes?
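
For reference, updating a Transform between launches would then look roughly like this (just a sketch; "topGroup" stands in for whatever your top-level Group variable is):

transform->setMatrix(false, M.getData(), nullptr);  // or true, depending on your matrix layout
topGroup->getAcceleration()->markDirty();           // the acceleration above the Transform needs a rebuild after any change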

It might help to inspect the t values for a couple of pixels, one corresponding to each sphere, in your intersection program, to determine whether your transforms are positioning them in space the way you think they are. It might also help to orient your camera to the side or top, or to rotate the camera around one of the spheres, to get more points of view and see if that gives you more clues. Sometimes debugging issues like this with only a dolly in/out can be confusing.
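
The quickest way to look at the t values is usually to write the hit distance out as a color, something like this (a rough sketch; I'm assuming a per-ray payload struct, here called PerRayData with a float3 result member, so adapt the names to your code):

rtDeclareVariable(float, t_hit, rtIntersectionDistance, );
rtDeclareVariable(PerRayData, prd, rtPayload, );

RT_PROGRAM void closest_hit()
{
	// visualize the reported hit distance as a grayscale value; scale to taste
	prd.result = make_float3(t_hit * 0.1f);
}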


David.

The scene is static; all I move around is the camera. The objects are where I expect them to be, except they still somehow overlap each other.

I did the t rendering, and apparently the smaller sphere thinks it’s farther away than the larger one, even though it’s actually closer in the scene.

Here is the instantiation code; it’s only called before the first frame is rendered.

GeometryInstanceDescriptor Scene::instanciateGeometry(optix::Geometry geometry, std::vector<optix::Material>& materials, optix::Matrix4x4 M) {
	auto geometryInstance = context->createGeometryInstance();
	geometryInstance->setMaterialCount(materials.size());
	geometryInstance->setGeometry(geometry);
	for (int i = 0; i < materials.size(); i++) {
		geometryInstance->setMaterial(i, materials[i]);
	}

	auto geometryGroup = context->createGeometryGroup();
	geometryGroup->addChild(geometryInstance);
	geometryGroup->setAcceleration(context->createAcceleration("Sbvh"));
	
	optix::Transform transform = context->createTransform();
	transform->setMatrix(true, M.getData(), nullptr);
	transform->setChild(geometryGroup);

	auto group = context["top_object"]->getGroup();
	group->addChild(transform);
	group->getAcceleration()->markDirty();

	return GeometryInstanceDescriptor{ geometryInstance, geometryGroup, transform };
}
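
For reference, the two spheres from the screenshots are set up roughly like this (sphereGeometry, sphereMaterials and the positions are just placeholders here, but the 0.5 and 5.0 scales are the ones I described):

instanciateGeometry(sphereGeometry, sphereMaterials,
	optix::Matrix4x4::translate(make_float3(0.f, 0.f, -2.f)) * optix::Matrix4x4::scale(make_float3(0.5f)));
instanciateGeometry(sphereGeometry, sphereMaterials,
	optix::Matrix4x4::translate(make_float3(0.f, 0.f, -20.f)) * optix::Matrix4x4::scale(make_float3(5.f)));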

What’s really awkward about this is that I have a working triangle mesh renderer, with textures, shadows, and ambient occlusion, but the spheres refuse to work…

I know how it feels to have something complex working while something simple breaks. :)

How are you verifying that the objects are where you expect them to be? The t rendering is some evidence that the small sphere isn’t sitting where expected, right? Have you looked directly at both matrices and double-checked that it’s not a matrix transpose problem or an index error somewhere, all the boring stuff?
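
Even just printing the raw values that go into setMatrix can be revealing (a quick sketch):

const float* m = M.getData();
for (int row = 0; row < 4; ++row)
	printf("% .3f  % .3f  % .3f  % .3f\n", m[4 * row + 0], m[4 * row + 1], m[4 * row + 2], m[4 * row + 3]);
// optix::Matrix4x4 is row-major, so the scale should sit on the diagonal and the translation in the last column.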


David.

I can move the camera around in real time. The matrices look good too.

It’s like the sphere gets placed in the correct spot, but the t value is wrong, so sometimes they swap places along the ray.

I think it has something to do with scaling. I should be allowed to change the size of an object through the Transform node, right?

I think I figured out the issue.
OptiX expects the program to calculate the world-space t value, not the object-space t value,
which is kind of ruined by me normalizing the direction vector.

On the other hand, simply removing that normalize call doesn’t work either, for some reason.
So I ended up dividing the reported t by the length of the direction vector, and that seems to have fixed it.
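
Concretely, the relevant part of the intersection program now looks roughly like this (only the first root shown; the second one gets the same treatment):

auto l = length(ray.direction);
auto dir = ray.direction / l;          // still normalized for the b*b - c form of the quadratic
...
auto tn = -b - sdisc;                  // distance along the normalized direction
if (rtPotentialIntersection(tn / l)) { // report the t that matches the un-normalized ray.direction
	normal = o + dir * tn;
	...
}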

This seems to work, but at this point I’m really confused.

I recently hit that as well when writing a custom geometric primitive intersection inside OptiX 7, and I assume OptiX 6 works the same way.
Though I’m pretty sure OptiX 5 and before always normalized the ray direction.

  • The intersection program works in object coordinates. More precisely, for OptiX < 7 that means the rtCurrentRay semantic variable is in object coordinate space.
  • In OptiX 7, optixGetObjectRayOrigin() and optixGetObjectRayDirection() return the ray inverse-transformed by the current transformation matrix. That means if there is a scaling transform above the custom geometry, that ray direction is not normalized. (You also have access to optixGetWorldRayOrigin() and optixGetWorldRayDirection(), but they might be more expensive to get in the intersection program because of the required transformation.)
  • optixGetRayTmin(), optixGetRayTmax() and the intersection distance in optixReportIntersection(tHit, …) are not touched at all. That means they are all in world coordinates, because they are set by the optixTrace() call, normally inside the raygen and closest hit programs.

Scaling the tHit by the inverse of the object space ray direction length is actually the right thing to do when the intersection distance was calculated with a normalized ray direction beforehand.
I needed to do that myself because I built an ortho-normal basis with that ray direction, which was wrong with an unnormalized vector; then I hit the same wrong intersection distance issue, which completely screwed up my lighting because the shadow rays all started at the wrong position.

The calculation fails in the given sphere intersection program with an unnormalized ray direction vector because that code is optimized for the a == 1.0 case of the quadratic formula, which only holds for a normalized direction.
See this post: https://devtalk.nvidia.com/default/topic/1030431/intersection_sphere-in-sphere-cu/
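
In other words, if you solve the full quadratic a*t^2 + 2*b*t + c = 0 with the un-normalized object space direction (a = dot(D, D)), the resulting root is already the t that rtTrace/optixTrace works with and nothing needs to be rescaled afterwards. A rough sketch in the OptiX 6 style of the program above (unit sphere at the origin, only the nearer root shown):

auto O = ray.origin;
auto D = ray.direction;            // object space direction, possibly scaled, not normalized
auto a = dot(D, D);
auto b = dot(O, D);
auto c = dot(O, O) - 1.f;
auto disc = b * b - a * c;
if (disc >= 0.f) {
	auto sdisc = sqrtf(disc);
	auto t = (-b - sdisc) / a;     // parameter along the original, un-normalized direction
	if (rtPotentialIntersection(t)) {
		normal = O + D * t;        // point on the unit sphere == geometric normal
		rtReportIntersection(0);
	}
}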

Thanks. Now that does make a lot more sense than whatever I did.

I compared two intersection programs within the OptiX 7.0.0 SDK:

OptiX 7.0.0 SDK optixSphere optixSphere.cu

const float3 dir  = optixGetObjectRayDirection();  // "object" space   not-normalized, right?
const float  l = 1 / length( dir );
const float3 D = dir * l;                          // normalizing the direction
...
root1 = ( -b - sdisc );   // here "root1" is not multiplied with the inverse "object" length;
// so obviously a scale of 1.0 assumed, because there is no transform added to this GAS;
optixReportIntersection(root1,...);  // passing "t" in "world" space

OptiX 7.0.0 SDK optixWhitted geometry.cu

const float3  ray_dir  = optixGetWorldRayDirection();  // "world" space; normalized, right?
float  l = 1 / length(ray_dir);                        // l here would always be 1.0; so it seems to be redundant
float3 D = ray_dir * l;
...
float root1 = (-b - sdisc);
float root11= 0; // if not refined
t = (root1 + root11) * l;     // here multiplied with inverse "world" length of ray direction 
                              // (but since its 1.0; it seems to be redundant)
optixReportIntersection( t, ...);  // passing "t" in "world" space

And if optixGetWorldRayDirection() returns a normalized ray direction in world coordinates, using “* l” is redundant here, isn’t it?

But in optixSphere.cu t_hit is passed to optixReportIntersection without that “* l”, obviously because a scale of 1.0 is assumed, since in that scene only the GAS is present without any transforms. So that part is clear now.
But in the Whitted sample it is multiplied with the inverse world-space ray direction length, and when that direction is normalized, the factor would always be 1.0.

Right, the optixWhitted example is not using transforms either (see initLaunchParams() line: state.params.handle = state.gas_handle;) and the scaling is redundant.

Calculating everything in world space there is at least consistent and produces the required intersection distance in this case, but that would be more expensive when there were actual instances with scaling transforms inside the scene. Though that would get really different and much slower when handling the individual spheres in object space with instances placing them. Don’t do that.

The OptiX SDK examples are sometimes very special cases and normally not easily interchangeable.

Thank you very much for the clarification.