Cannot use OpenCV to display an image from jetson.utils.gstCamera

Hi,

I want to use jetson-inference for object detection together with some OpenCV functions.

First of all, I use jetson.utils.gstCamera and its CaptureRGBA() method to grab my image.
Then I want to ask the user to define a ROI by clicking on an image, so I grab a first frame with CaptureRGBA(), and I would like to use an OpenCV mouse-event callback to display the image and ask the user to click on it.
My problem is that the image I try to display is blank. But when I pass the same image to net.Detect() and jetson.utils.glDisplay().RenderOnce(), the image is correct.

Here is my code:

import cv2
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "0")
display = jetson.utils.glDisplay()

imgFromCamera, width, height = camera.CaptureRGBA(zeroCopy=1)
img = jetson.utils.cudaToNumpy(imgFromCamera, width, height, 4)
cv2.imshow("image", img)
cv2.waitKey(0)

With that code, I get a white image in my OpenCV window.
I tried cv2.cvtColor(img, cv2.COLOR_RGBA2RGB) and also cv2.COLOR_RGBA2GRAY, but I always get the same result.

Can anyone help me?
Thanks,
JI

Hi JI, you may want to try a jetson.utils.cudaDeviceSynchronize() call after CaptureRGBA().

Does cv2.cvtColor() know the input image is floating point with values between 0.0 and 255.0?
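For context, cv2.imshow() assumes a 32-bit float image is in the 0.0–1.0 range and multiplies it by 255 for display, so float pixels that already span 0–255 saturate to white. A minimal sketch of the two possible fixes, assuming img is the float32 RGBA array from cudaToNumpy():

import cv2
import numpy as np

# option 1: rescale the float image into the [0, 1] range imshow expects
cv2.imshow("image", img / 255.0)

# option 2: cast to uint8 so imshow treats the values as 0-255
cv2.imshow("image", img.astype(np.uint8))
cv2.waitKey(0)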

Thank you for your answer.

OK, I will try your solution this Monday.
When I check the values of the img returned by cudaToNumpy(), I see that the alpha channel has values between 0 and 1. I think OpenCV's imshow() expects values between 0 and 255, so I will try multiplying the alpha channel by 255 and displaying the image again.

Why do I need to call cudaDeviceSynchronize() after CaptureRGBA()?

It’s because CUDA is used for colorspace conversion of the camera image inside CaptureRGBA(), so you want to wait for the GPU to finish before using the image on the CPU. You didn’t need it with the OpenGL display because that also runs on the GPU.
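In code, the pattern looks roughly like this (a sketch, using the camera object from earlier in the thread):

import jetson.utils

# CaptureRGBA() launches the colorspace conversion on the GPU asynchronously
imgCuda, width, height = camera.CaptureRGBA(zeroCopy=1)

# block until the GPU has finished writing the frame...
jetson.utils.cudaDeviceSynchronize()

# ...so the CPU-side numpy view sees fully-converted pixels
img = jetson.utils.cudaToNumpy(imgCuda, width, height, 4)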

Hi,
I did some tests and got the same result.

After CaptureRGBA(), I call cudaDeviceSynchronize().
Right after that instruction, I print the img and get strange values like:

[[[ 30.420702  12.350568   13.881562  1.]
  [ 29.424608  11.354474   12.8854685 1.]
  [ 25.59961   18.60205    11.825624  1.]
  ...
  [ 24.723047   5.1916413  -4.2075005 1.]
  [ 28.129686   3.8489065  -6.231563  1.]
  [ 32.11406    7.8332815  -2.247188  1.]]

Why do I get negative values?

In OpenCV, imshow() needs a BGR image, so I use cvtColor() to convert my image from RGBA to BGR. I get the same result: the image is blank, and only a few pixels have color (yellow or red?).

This works for me, but it may not be perfect and I didn’t find any other way:

import cv2
import numpy as np
import jetson.utils

img, width, height = camera.CaptureRGBA(zeroCopy=True)
jetson.utils.cudaDeviceSynchronize()
# create a numpy ndarray that references the CUDA memory;
# it won't be copied, but uses the same memory underneath
aimg = jetson.utils.cudaToNumpy(img, width, height, 4)
# print("img shape {}".format(aimg.shape))
aimg1 = cv2.cvtColor(aimg.astype(np.uint8), cv2.COLOR_RGBA2BGR)
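If it helps, displaying the converted frame afterwards is just the usual HighGUI call:

cv2.imshow("frame", aimg1)
cv2.waitKey(1)  # give the HighGUI event loop a chance to actually draw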

I’m hoping to find a way to read from gstCamera as numpy so we don’t have to do this, because otherwise we need cudaToNumpy() to process the frame and then cudaFromNumpy() to classify… and that sucks!

I remembered we have jetcam (https://github.com/NVIDIA-AI-IOT/jetcam), an easy-to-use Python camera interface for NVIDIA Jetson.
It works well so far, and I only need cudaFromNumpy() for classification.

If you wish to do pre-processing in OpenCV, you may want to just use OpenCV for capture as well. Then you can apply your ROI, and only need to do the cudaFromNumpy() conversion once.
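A rough sketch of that pipeline, assuming a V4L2 camera at index 0; cv2.selectROI() is one simple way to let the user drag out the region, and the Detect(img, width, height) signature matches the legacy API used in this thread:

import cv2
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

cap = cv2.VideoCapture(0)   # capture with OpenCV instead of gstCamera
ret, frame = cap.read()     # frame is uint8 BGR
if not ret:
    raise RuntimeError("failed to read a frame from the camera")

# let the user drag a rectangle to define the ROI (ENTER/SPACE confirms)
x, y, w, h = cv2.selectROI("select ROI", frame)
roi = frame[y:y+h, x:x+w]

# convert the ROI to float32 RGBA and upload it to CUDA memory once
rgba = cv2.cvtColor(roi, cv2.COLOR_BGR2RGBA).astype(np.float32)
cudaImg = jetson.utils.cudaFromNumpy(rgba)

detections = net.Detect(cudaImg, w, h)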

Thanks Dusty, that’s exactly what I’m thinking of doing.

By the way, much appreciate the work you’ve done on jetson-inference (https://github.com/dusty-nv/jetson-inference), the Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

Thanks, it works for me :)

That’s good to know.
Keep an eye out for weird colours, though, because I don’t think this is 100% right yet, and I’m too new to this to be sure.

cheers

The proposed solution is not working for me. I am using:

import cv2
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "0")
display = jetson.utils.glDisplay()

img, width, height = camera.CaptureRGBA(zeroCopy=True)
jetson.utils.cudaDeviceSynchronize()
numpyImg = jetson.utils.cudaToNumpy(img, width, height, 4)
aimg1 = cv2.cvtColor(numpyImg.astype(np.uint8), cv2.COLOR_RGBA2BGR)
cv2.imwrite('tst.png', aimg1)

The result is https://www.dropbox.com/s/ptpji7emxwli6zr/tst.png?dl=0

Any idea what is wrong here?

No idea, but you might want to save the CUDA frame too, for comparison.
Also, try to dump/print the two arrays, numpyImg and aimg1, to check how the conversion works and whether it makes sense in your case.

You use:

jetson.utils.cudaToNumpy(img, width, height, 4)

Are your width/height in line with the camera resolution, 1280×720?

Hi, the camera resolution is 1280×720.

I printed the min/max for each channel of numpyImg:

R: -4.542187 / 91.46133
G: -2.893652 / 76.25795
B: -21.268595 / 40.266094
A: 1.0 / 1.0

aimg1 has a min/max of 0 and 255 for each channel.

You should dump the print of both via print(numpyImg) and print(aimg1), as you want to see how the floats are converted to ints.
That’s where your problem might be, as the conversion may lose some of the data.
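In particular, astype(np.uint8) wraps out-of-range floats around (a negative value like -4.5 ends up near 255), which would explain stray saturated pixels. Clipping before the cast avoids that, for example:

import numpy as np
import cv2

# clamp to the valid 8-bit range before casting; otherwise negative
# values and values above 255 wrap around and show up as stray colors
clipped = np.clip(numpyImg, 0, 255).astype(np.uint8)
aimg1 = cv2.cvtColor(clipped, cv2.COLOR_RGBA2BGR)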

Did you already compare the two saved images? You can also save to file straight from CUDA with jetson.utils.saveImageRGBA(opt.filename, cuda_mem, opt.width, opt.height).
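For example, a sketch using the variables from the failing snippet above:

# save the raw CUDA frame next to the OpenCV-converted one for comparison
jetson.utils.saveImageRGBA('tst_cuda.png', img, width, height)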

It works for me, thank you!