multiple camera stitch in detectnet-camera

If I were using OpenCV, I could just load multiple streams,
capLeft = VideoStream(src=0).start()
capRight = VideoStream(src=1).start()
then grab the frames
frameLeft = capLeft.read()
frameRight = capRight.read()
and stitch them together
vis = np.concatenate((frameLeft, frameRight), axis=1)

In detectnet-camera, I can open multiple cameras by varying the camera index:
camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)
but how do I stitch them together into one img before feeding it to net.Detect()?

Hi,

Since you are using the OpenCV interface, you can check whether this stitcher function meets your requirement directly:
https://docs.opencv.org/master/d8/d19/tutorial_stitcher.html

Thanks.

  1. I am not looking for smart stitching, as I am using more than 2 cameras and smart stitching starts to get confused after two.
  2. I tried to feed a cv2 "frame" to the NVIDIA example (e.g. net.Classify()), but it seems to need more than raw image values:

Traceback (most recent call last):
File "imagenet-console.py", line 59, in
class_idx, confidence = net.Classify(frame, width, height)
Exception: jetson.inference -- imageNet.Classify() failed to get image pointer from PyCapsule container

  3. I am working with Python, not C++.
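For reference, the exception above happens because Classify() expects a CUDA image capsule, not a raw NumPy/cv2 array. A minimal sketch of the conversion, assuming the float32 RGBA layout that CaptureRGBA() produces and that jetson.utils.cudaFromNumpy() is available to wrap the array (the device-side calls are left as comments since they need a Jetson; the channel reorder is done with plain NumPy here rather than cv2):

```python
import numpy as np

# A stand-in for a cv2 frame: BGR channel order, uint8, height x width x 3.
frame_bgr = np.zeros((480, 640, 3), dtype=np.uint8)
frame_bgr[..., 2] = 255  # pure red in BGR order

# Reorder BGR -> RGB, convert to float32, and append an opaque alpha
# channel to match the RGBA layout the inference API works with.
frame_rgb = frame_bgr[..., ::-1].astype(np.float32)
alpha = np.full(frame_rgb.shape[:2] + (1,), 255.0, dtype=np.float32)
frame_rgba = np.concatenate([frame_rgb, alpha], axis=2)

# On the Jetson, this array can then be wrapped for inference:
# cuda_img = jetson.utils.cudaFromNumpy(frame_rgba)
# class_idx, confidence = net.Classify(cuda_img, 640, 480)
```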

Really, I just need to know how to concatenate multiple NVIDIA images:
img1, width1, height1 = camera1.CaptureRGBA()
img2, width2, height2 = camera2.CaptureRGBA()

into one

Hi,

This will require some updates to the script.

After getting the images from the cameras, you can concatenate them together
and update the width/height passed to the inference call (depending on how you stitch the images):
https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/detectnet-camera.py#L61
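The steps above can be sketched in Python. This assumes the CUDA capsules from CaptureRGBA() are first converted with jetson.utils.cudaToNumpy() and converted back with jetson.utils.cudaFromNumpy() (both exist in the jetson-inference Python bindings); the device-side calls are left as comments, so the concatenation and the width update are shown with plain NumPy arrays:

```python
import numpy as np

def stitch_side_by_side(frames):
    """Concatenate same-height RGBA frames left to right (axis=1)."""
    return np.concatenate(frames, axis=1)

# Stand-ins for camera1.CaptureRGBA() / camera2.CaptureRGBA(); on the
# Jetson you would first convert each returned capsule, e.g.:
# array1 = jetson.utils.cudaToNumpy(img1, width1, height1, 4)
h, w = 720, 1280
frame_left = np.zeros((h, w, 4), dtype=np.float32)
frame_right = np.ones((h, w, 4), dtype=np.float32)

stitched = stitch_side_by_side([frame_left, frame_right])

# The inference call then needs the combined width (height is unchanged
# for a horizontal stitch):
stitched_height, stitched_width = stitched.shape[0], stitched.shape[1]
# cuda_img = jetson.utils.cudaFromNumpy(stitched)
# detections = net.Detect(cuda_img, stitched_width, stitched_height)
```

For a vertical stack you would instead use axis=0 and pass the combined height with the unchanged width.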

Thanks.