Custom bounding boxes for detectNet

Is there any way I can draw my own custom bounding boxes on the detectNet demo within the Jetson inference package?

I have tried using OpenCV to initialize the camera feed instead of the GStreamer pipeline that comes with the original demo, but the issue is that OpenCV runs at a much lower FPS than the original demo.

On the original detectNet demo I get around 22-23 FPS, but using OpenCV drops the FPS to 10-12.

Is there any way I can run OpenCV on the GPU to increase the FPS, or could I edit the bounding boxes drawn by the original detectNet?

Hi @zerksinthegreat, you can use the cudaRectFill() function to draw your own bounding boxes: jetson-utils/cudaOverlay.h at c9cfddba393013cefc7bf0316c512cdf6676b50e · dusty-nv/jetson-utils · GitHub
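For intuition, here is a rough CPU sketch of what a rectangle fill like that does. The real cudaRectFill() operates on CUDA device memory in a kernel; this numpy helper is hypothetical and for illustration only:

```python
import numpy as np

def rect_fill(img, x1, y1, x2, y2, color):
    """CPU sketch of a rectangle fill: alpha-blend a solid color into an
    HxWx4 RGBA numpy image in place. Illustrative only -- the actual
    cudaRectFill() in jetson-utils runs on the GPU image buffer."""
    color = np.asarray(color, dtype=np.float32)
    alpha = color[3] / 255.0
    region = img[y1:y2, x1:x2, :3].astype(np.float32)
    img[y1:y2, x1:x2, :3] = (alpha * color[:3] + (1 - alpha) * region).astype(img.dtype)
    return img

# fill a 4x4 green box into a blank 10x10 RGBA frame
frame = np.zeros((10, 10, 4), dtype=np.uint8)
rect_fill(frame, 2, 2, 6, 6, (0, 255, 0, 255))
```

The advantage of doing this on the GPU is that the image never has to leave CUDA memory, so you avoid the copy and color-conversion overhead that slows the OpenCV path down.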

If you are using Python, you can use cudaToNumpy() and then process the buffer with cv2.

That way you can still capture with jetson.utils.videoSource(), and then pass the image to OpenCV after the detection processing.
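As a minimal sketch of that pattern, the helper below converts detectNet-style detections into the integer corner tuples that cv2.rectangle() expects. The Left/Top/Right/Bottom attribute names come from jetson.inference's detection results; the stub object here just stands in so the example runs without a Jetson:

```python
from types import SimpleNamespace

def detections_to_rects(detections):
    """Convert detectNet-style detections into integer (x1, y1, x2, y2)
    tuples suitable for cv2.rectangle(). Assumes each detection exposes
    Left/Top/Right/Bottom attributes, as jetson.inference detections do."""
    return [(int(d.Left), int(d.Top), int(d.Right), int(d.Bottom))
            for d in detections]

# stand-in detection object for illustration (no Jetson required)
demo = [SimpleNamespace(Left=10.4, Top=20.9, Right=110.0, Bottom=220.5)]
rects = detections_to_rects(demo)   # -> [(10, 20, 110, 220)]
```

On the device you would then draw each tuple onto the numpy view from cudaToNumpy() with something like cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2).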

Thank you for your response.
It seems like the OpenCV output stream drastically lowers the FPS from 22 to 12. Is it possible to convert the buffer back to CUDA from numpy after OpenCV has drawn the bounding boxes?

My only issue is that I am receiving this error with this conversion method:

[OpenGL] failed to create X11 Window.

The program runs but fails to display the output stream. I have set up the code as follows:

    # at the top of the script:
    # import cv2 as cv
    # import numpy as np
    # import jetson.inference
    # import jetson.utils

    img = input.Capture()
    detections = net.Detect(img)

    # map the CUDA image into a numpy array
    frame = jetson.utils.cudaToNumpy(img)
    frame = cv.cvtColor(frame, cv.COLOR_RGBA2BGR).astype(np.float32)

    # opencv processing ...

    # convert back to RGBA and upload to a CUDA image for display
    frame = cv.cvtColor(frame, cv.COLOR_BGR2RGBA).astype(np.float32)
    img = jetson.utils.cudaFromNumpy(frame)
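For what it's worth, the RGBA/BGR conversions in that loop are just channel reorders, so the round trip through OpenCV does preserve the color data. A pure-numpy sketch of the same reordering (no Jetson or OpenCV needed; an HxWx4 uint8 RGBA buffer is assumed, matching what cudaToNumpy() gives for a uchar4 image):

```python
import numpy as np

# emulate a cudaToNumpy() frame: HxWx4 uint8 RGBA (assumed format)
rgba = np.random.randint(0, 256, (4, 6, 4), dtype=np.uint8)

# RGBA -> BGR (what cv.COLOR_RGBA2BGR does): reverse the first three
# channels and drop alpha
bgr = rgba[..., [2, 1, 0]].copy()

# BGR -> RGBA (cv.COLOR_BGR2RGBA): reverse back and restore a fully
# opaque alpha channel
alpha = np.full(bgr.shape[:2] + (1,), 255, dtype=np.uint8)
back = np.concatenate([bgr[..., [2, 1, 0]], alpha], axis=-1)

# the color channels survive the round trip; alpha is reset to opaque
assert np.array_equal(back[..., :3], rgba[..., :3])
```

So the FPS drop comes from the CPU copies and per-frame conversions, not from any loss in the conversion itself.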

You have the cudaFromNumpy() in there, so to get around the X11 error I would try importing cv2 after you import jetson.utils (or if that doesn’t work, after you create the videoOutput object). For some reason, importing cv2 seems to mess with the OpenGL driver bindings.