resizeBuffer?

Hi, I have been studying OptiX intensively.

Some questions came up while studying, and I would like to ask them here.

I have modified the OptiX tutorial sample to perform the calculations I want, and the calculations are progressing more smoothly than I expected.

I haven't yet mentioned what I do and how I modified the code.

I study radiative heat transfer, computing view factors with OptiX ray tracing.
I calculated the view factor using classic ray tracing and reproduced the accuracy of the older papers.
But the error is larger than I expected, so we think an extended analysis through finer subdivision is necessary.

I compute the projection of the required object based on the hit area, and redo the calculation at higher resolution for only that region.

There are still some problems, but they split into a mathematical part and a coding part. What I would like to ask about in this topic is the buffer size.

Here is the resize code I use:
1. Read the buffer and calculate hit area of object

void PR_T(const char* filename, RTbuffer buffer)
{
	int i, j;
	RTsize B_W, B_H;
	void *Data;
	
	RT_CHECK_ERROR(rtBufferMap(buffer, &Data));
	RT_CHECK_ERROR(rtBufferGetSize2D(buffer, &B_W, &B_H));
	
	CORE *TMP = (CORE*)Data;
	const int W = static_cast<int>(B_W);
	const int H = static_cast<int>(B_H);
		
	viewfactor *RE;
	RE = (viewfactor*)malloc(sizeof(viewfactor)*W*H);
	
	float total_viewfactor = 0.0f;
	unsigned int total_hit = 0;
	
	unsigned int minx = 0;
	unsigned int miny = 0;
	unsigned int maxx = 0;
	unsigned int maxy = 0;
	unsigned int distx = 0;
	unsigned int disty = 0;

	for (i = 0; i < W; i++)
	{
		for (j = 0; j < H; j++)
		{
			RE[i*H + j].x = TMP[i*H + j].point.x;
			RE[i*H + j].y = TMP[i*H + j].point.y;
			RE[i*H + j].z = TMP[i*H + j].point.z;
			RE[i*H + j].VFT	= TMP[i*H + j].VFT;
			RE[i*H + j].hit	= TMP[i*H + j].hit;
			RE[i*H + j].idx = TMP[i*H + j].index.x;
			RE[i*H + j].idy = TMP[i*H + j].index.y;
			
			if(RE[i*H + j].hit==true)
			{
				total_hit ++;
				if (W - RE[i*H + j].idx > minx) minx = W - RE[i*H + j].idx;
				if (H - RE[i*H + j].idy > miny) miny = H - RE[i*H + j].idy;
				if (RE[i*H + j].idx > maxx) maxx = RE[i*H + j].idx;
				if (RE[i*H + j].idy > maxy) maxy = RE[i*H + j].idy;
			}
			total_viewfactor += RE[i*H + j].VFT;
		}
	}
	minx = W - minx;
	miny = H - miny;
	distx = maxx - minx+1;
	disty = maxy - miny+1;

	if (sub == true)
	{
		ARS(distx, disty, minx, miny);
	}
	free(RE);
	RT_CHECK_ERROR(rtBufferUnmap(buffer));
}

2. ResizeBuffer

void ARS(int w, int h, int iw, int ih)
{
	W = (int) w*SDX;
	H = (int) h*SDY;
	context["AW"]->setUint(SDX);
	context["AH"]->setUint(SDY);
	context["WW"]->setUint(w);
	context["HH"]->setUint(h);
	context["WS"]->setUint(iw);
	context["HS"]->setUint(ih);
	sutil::ensureMinimumSize(W, H);
	sutil::resizeBuffer(GOBUF(), W, H);
}

3. related variables, structures

uint32_t     W = 400u;		
uint32_t     H = 400u;
//////////////////////////////////////////////////////////////////////////////////////
unsigned int SDX =6;
unsigned int SDY =6;
bool sub = true;
//////////////////////////////////////////////////////////////////////////////////////
struct viewfactor
{
	bool hit;
	float VFT;
	float x;
	float y;
	float z;
	unsigned int idx;
	unsigned int idy;
};

This code works well for typically computed values.
But sometimes it does not work, and when that happens it produces the following error:

OptiX Error: ‘Unknown error (Details: Function “_rtContextLaunch2D” caught exception: Encountered a CUDA error: cudaDriver().CuEventSynchronize( m_event ) returned (719): Launch failed)’

The problem is that it only fails sometimes. I am wondering what steps I need to take to track this down.

“Launch failed” is a generic CUDA error. The provided information is not sufficient to tell what’s going wrong.

Please always list the following system configuration information when asking about OptiX issues:
OS version, installed GPU(s), VRAM amount, display driver version, OptiX major.minor.micro version, CUDA toolkit version used to generate the input PTX, host compiler version.

Your rtBufferUnmap() is at the wrong place!
If sub == true, your ARS() function resizes the buffer you have currently mapped.
That’s illegal and should fail with RT_ERROR_ALREADY_MAPPED if you reach that code.

That wouldn’t have happened if you mapped and unmapped buffers for the shortest possible time.
That means: move the rtBufferGetSize2D() call and all temporary host allocations out of the map/unmap block.
The buffer only needs to be mapped directly before the for-loops and can be unmapped immediately afterwards, before the ARS() call.
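A sketch of that ordering, using the OptiX 6 host API names from the original code (not compilable standalone; error handling via RT_CHECK_ERROR as in the original, loop body omitted):

```cpp
RTsize B_W, B_H;
RT_CHECK_ERROR(rtBufferGetSize2D(buffer, &B_W, &B_H));  // query size while unmapped

// Temporary host allocation outside the map/unmap block.
viewfactor* RE = (viewfactor*)malloc(sizeof(viewfactor) * B_W * B_H);

void* Data = 0;
RT_CHECK_ERROR(rtBufferMap(buffer, &Data));             // map as late as possible
// ... run the two for-loops over the mapped data here ...
RT_CHECK_ERROR(rtBufferUnmap(buffer));                  // unmap before any resize

if (sub)
    ARS(distx, disty, minx, miny);                      // safe: buffer is no longer mapped
free(RE);
```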

Note that after resizing a buffer in OptiX, the data inside it is undefined and for input buffers you need to upload it again.

Other things:

  • Your buffer indexing looks transposed. Why aren’t your indices [j * W + i]?
  • Also calculate that index once.
  • Buffers have a row-major memory layout. Running over width in the inner for-loop would be more efficient for memory accesses.
  • I would reorder the fields in your viewfactor structure to lie on their required CUDA alignment offsets.
    [url]https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#vector-types[/url]
    Replacing bool hit; with int hit; and adjusting the device code would be sufficient in your case.

I don’t expect any of the above changes to solve your issue, though. Well, maybe the incorrect resize location, since sub == true, but that should have resulted in an error during the resize and in invalid accesses or out-of-bounds exceptions in the device code.

  • Please make sure you have an exception program set which can indicate any device code errors and have OptiX exceptions enabled during debug mode. That will impact compile and runtime performance!
  • Make sure you have the stack sizes set correctly. Note that OptiX 6 changed the API for that!
  • Enable the usage report callback for additional information.
    (Search the OptiX forum for the above topics.)
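For reference, the host-side setup for those three points might look like this with the OptiX 6 C API (a sketch, not compilable standalone; exception_program, usage_report_callback, and the depth and verbosity values are assumptions for illustration):

```cpp
// Exception program and exceptions (debug builds only: this costs compile
// and runtime performance).
RT_CHECK_ERROR(rtContextSetExceptionProgram(context, 0, exception_program));
RT_CHECK_ERROR(rtContextSetExceptionEnabled(context, RT_EXCEPTION_ALL, 1));

// OptiX 6 stack size API: recursion depths instead of a byte count.
RT_CHECK_ERROR(rtContextSetMaxTraceDepth(context, 2));
RT_CHECK_ERROR(rtContextSetMaxCallableProgramDepth(context, 2));

// Usage report callback for additional diagnostics (level 3 is the most verbose).
RT_CHECK_ERROR(rtContextSetUsageReportCallback(context, usage_report_callback, 3, NULL));
```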

Thanks for the advice.
I am checking my problem against the advice you gave me.

Inspired by your advice, I think I will study even harder.

In fact, there are a few more important issues.

It is part of box_closest_hit_radiance() in the tutorial:

float3 ffnormal = faceforward(world_shade_normal, -ray.direction, world_geo_normal);

Sometimes this ‘ffnormal’ turns out to be the normal of another surface, as if the hit point had penetrated through the opaque surface of the object.

A similar case occurs when the hit point lies on the boundary of the geometry. Of course, this problem seems to arise naturally and is not specific to OptiX.
 
I work with people, but I always feel that I study and work alone.
I am happy that there is someone I can ask. Thank you.

Sure, coplanar faces cannot be resolved with a single ray test.
Depending on the floating point precision and traversal order it’s either one or the other.
That’s the same effect as depth bleeding in a rasterizer.

Here’s one possible solution:
[url]https://devtalk.nvidia.com/default/topic/930666/radiation-physics-problems[/url]

The Raytracing Gems Book [url]http://www.realtimerendering.com/raytracinggems/[/url] contains an article about self-intersection avoidance, but that won’t help with coplanar faces either.
The better solution would be to not have any coplanar faces in the scene, but let all geometries be a bounding surface between two volumes.

Thank you. This is what I have been looking for and need. I do not know whether it will solve the problem I have now, but I think it will answer many of the fundamental questions I had.