Jetson-utils renderOnce blank when called in thread

I’m collecting data to quantify the improvement from @dusty_nv's jetson-utils vs OpenCV's imshow. The test examples run with no issues, confirming a proper install, but the same functions aren't working when called from my threaded capture program.

Let's say I have a numpy array named pixels and I've successfully opened a glDisplay named display. The following is called in my frame consumer thread.

imshow(pixels)

shows my streaming output with no problem. I then take pixels and convert it to CUDA memory with:

# wrap the BGR numpy array in a CUDA image
bgr_img = jetson.utils.cudaFromNumpy(pixels, isBGR=True)
# allocate an RGB image of the same size and convert BGR -> RGB
rgb_img = jetson.utils.cudaAllocMapped(width=bgr_img.width, height=bgr_img.height, format='rgb8')
jetson.utils.cudaConvertColor(bgr_img, rgb_img)
# render the converted frame to the display
display.RenderOnce(rgb_img, width, height)

However, my rendered output is only black. My image is smaller than my screen resolution, but that shouldn't be the issue, because the test case with a CSI camera correctly shows the input image.

I tried adding the synchronize function, but that didn't fix it either.
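For reference, a minimal sketch of what that attempt looks like, assuming the synchronize call refers to jetson.utils.cudaDeviceSynchronize() placed after the color conversion:

jetson.utils.cudaConvertColor(bgr_img, rgb_img)
jetson.utils.cudaDeviceSynchronize()   # wait for the CUDA conversion to finish before rendering
display.RenderOnce(rgb_img, width, height)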

One more detail I noticed: I only see the following output on the terminal for the first frame. Not sure if that's the expected behavior.

[OpenGL] creating 2064x1544 texture (GL_RGB8 format, 9560448 bytes)

Hi @spacebaseone, you can try this instead:

display.RenderOnce(rgb_img, width=width, height=height, normalize=0, format='rgb8')

If that doesn’t work, I recommend changing to the more recent videoOutput interface that is demonstrated here:

https://github.com/dusty-nv/jetson-utils/blob/916023814a1ac1401380c5d4313454c78c0a5f40/python/examples/video-viewer.py
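A minimal sketch of that interface (paraphrasing the linked example; the stream URIs here are just placeholders):

import jetson.utils

input = jetson.utils.videoSource("csi://0")        # placeholder input URI
output = jetson.utils.videoOutput("display://0")   # render to an OpenGL window

while output.IsStreaming():
    img = input.Capture()    # returns a cudaImage
    output.Render(img)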

I'll check it out. I'm reading in bytes from a non-V4L2 USB camera, so I'll need to look into how to go from my numpy arrays to one of the accepted URIs / URI-like inputs.

Also, specifying the input arguments explicitly didn't fix it.

You can skip videoSource, since you have your own custom input method producing numpy arrays, and just use videoOutput for rendering.
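For example, a rough sketch of that approach, reusing your existing cudaFromNumpy / cudaConvertColor steps (the display URI and variable names are just assumptions):

import jetson.utils

output = jetson.utils.videoOutput("display://0")   # OpenGL display sink

# in the frame consumer thread, for each BGR numpy frame 'pixels':
bgr_img = jetson.utils.cudaFromNumpy(pixels, isBGR=True)
rgb_img = jetson.utils.cudaAllocMapped(width=bgr_img.width, height=bgr_img.height, format='rgb8')
jetson.utils.cudaConvertColor(bgr_img, rgb_img)
output.Render(rgb_img)   # replaces display.RenderOnce()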