TX2 segmentation fault

hi,
i’m trying to convert a float* to a Mat.

float* imgRGBA;
Mat dst(height, width, CV_32FC1, imgRGBA);
imshow("img", dst);

this code results in a segmentation fault:
[cuda] registered 4915200 byte openGL texture for interop access (640x480)
segmentation fault (core dumped)

what should i do to solve this error?

Hi @imini, can you show the code that sets the imgRGBA pointer? My guess is that it’s CUDA device memory, which isn’t accessible from the CPU the way Mat and imshow() try to use it.

If you are capturing imgRGBA from camera, change the call to this:

camera->CaptureRGBA(&imgRGBA, 1000, true);  // 'true' enables ZeroCopy

Specifying true for the optional third parameter to CaptureRGBA() puts the image in mapped ZeroCopy memory, which both the CPU and GPU can access. So you can then use the memory from the CPU.

hi, thank you so much for your help.
https://github.com/dusty-nv/jetson-inference/blob/master/examples/detectnet-camera/detectnet-camera.cpp
here is the code that sets imgRGBA pointer.

like you said, i specified true.
imshow() runs well now,
but the problem is that with imshow("img", dst), i can only see a white window.
i cannot see my dst image…

I think you may need to create your Mat object as CV_32FC4 (4-channel float) instead of CV_32FC1 (single-channel float).

i’ve changed CV_32FC1 to CV_32FC4, but i still have the same problem…

Hmm. Before you make your Mat object, can you insert a call to cudaDeviceSynchronize()?

And do you know if imshow() supports floating-point formats?

cudaDeviceSynchronize();
Mat dest(480, 640, CV_32FC4, imgRGBA);
imshow("pic", dest);

i’ve tried this, but the same problem…

you mean imshow("pic", imgRGBA)…?
that gives an error, so i don’t think imshow supports floating-point formats

You might want to use cvCvtColor to convert your Mat from floating-point to 8-bit unsigned char format.

I’m so sorry, but i don’t understand well yet.
where should i use cvCvtColor…?

Between Mat dst() and imshow()

Mat dest(480, 640, CV_32FC4, imgRGBA);
<-- cvCvtColor() here
imshow("pic", dest);

what about cvCvtColor()'s arguments?

Let me give it a try here to see if I can get it working.

BTW, is there a reason not to just use glDisplay? It would avoid the extra overhead.

because to combine this with my code,
i need to convert imgRGBA to Mat type

OK, I did a test with OpenCV and found the problem:

  • OpenCV expects floating-point images to have pixel values in the range [0,1], but jetson-inference uses pixels in the range [0,255]
  • OpenCV expects the image to be in BGR format, but jetson-inference uses RGB format

With this code, it is working:

#include "cudaNormalize.h"  // include this above

CUDA(cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0, 255),
                       (float4*)imgRGBA, make_float2(0, 1),
                       imgWidth, imgHeight));

CUDA(cudaDeviceSynchronize());
cv::Mat cv_image(cv::Size(imgWidth, imgHeight), CV_32FC4, imgRGBA);
cv::Mat cv_image2(cv::Size(imgWidth, imgHeight), CV_8UC3);
cv::cvtColor(cv_image, cv_image2, cv::COLOR_RGBA2BGR);
cv::imshow("Display window", cv_image2);
cv::waitKey(0);

oh, you helped me so much that i was able to fix the error!
thanks a lot!!

No problem, glad you got it working!
