VPI pipeline with CSI camera and output render on screen

I am new to this, so I am sorry if this is an obvious question, but I cannot seem to find an answer. I am trying to understand the proper way to process camera streams using VPI 1.0, and I was wondering where I can get the source code for the VPI Remap Demo v1.1, especially the parts that load input from the cameras and display the result on screen.

I tried the VPI Python API but I am stuck; see the code below:

import jetson.inference
import jetson.utils
import vpi
import numpy as np
from PIL import Image

# create display window
display = jetson.utils.glDisplay()

# create camera device
camera = jetson.utils.gstCamera(1920, 1280, '0')

# open the camera for streaming
camera.Open()

# capture frames until user exits
while display.IsOpen():
    image, width, height = camera.CaptureRGBA()
    copied_img = jetson.utils.cudaAllocMapped(width=image.width, height=image.height, format=image.format)
    jetson.utils.cudaMemcpy(copied_img, image)
    arr = jetson.utils.cudaToNumpy(copied_img)
    input = vpi.asimage(np.uint8(arr))
    with vpi.Backend.CUDA:
        output = input.convert(vpi.Format.U8).box_filter(5, border=vpi.Border.ZERO)
    display.RenderOnce(jetson.utils.cudaFromNumpy(np.array(Image.fromarray(output.cpu()))), width, height)
    display.SetTitle("{:s} | {:d}x{:d} | {:.1f} FPS".format("Camera Viewer", width, height, display.GetFPS()))

# close the camera
camera.Close()

I used jetson.utils to open a camera and read an image (which is returned as a cudaImage). I converted it to a VPI image and did the processing, but I could not get it to render back using the jetson.utils renderer.

I tried the same with OpenCV, but I do not understand how to convert a VPIImage into an image that can be used with cv2.imshow. Is there a full Python API reference with all the VPI functions somewhere? I could not find that either.

Ideally, if the source code for the VPI demos were published somewhere, that would be very helpful. Kindly let me know how to proceed.

Hi,

You can find several examples of OpenCV ↔ VPI interop in the folder below (both C++ and Python versions are included):

/opt/nvidia/vpi1/samples
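
For reference, a rough sketch of that kind of OpenCV ↔ VPI loop might look like the following (this is not copied from the samples; the capture source, filter size, and window handling are just placeholders):

import cv2
import vpi

# Placeholder capture source; on Jetson you would normally pass a
# GStreamer pipeline string instead of a plain device index.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()                      # BGR uint8 numpy array
    if not ok:
        break

    # OpenCV handles the grayscale conversion, VPI does the filtering.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    with vpi.Backend.CUDA:
        blurred = vpi.asimage(gray).box_filter(5, border=vpi.Border.ZERO)

    # .cpu() exposes the VPI result as a numpy array that cv2.imshow accepts.
    cv2.imshow('VPI box filter', blurred.cpu())
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()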

We don’t currently have an example for jetson-utils + VPI.
We will check whether we can share one with you later.

Thanks.

Thank you, but the problem I am having with OpenCV is high latency as well as low FPS for a simple box-filter application. With the old VisionWorks box filter and the NVXIO rendering method I was able to get close to 60 FPS on a 1920x1080 video stream, and I cannot reach that frame rate with OpenCV; I have tried both the Python and C++ implementations. This is why I am looking at using the Jetson Multimedia API along with the VPI library, to get as close as possible to the performance I need. Are there any alternative approaches I should follow to get maximum performance, with no unnecessary image conversions or memory copies?

Hi,

We can use jetson-utils with VPI without issue.
Since the box filter only supports grayscale images, please remember to convert the image back to RGB for rendering.

Below is an example for your reference:

import numpy as np
import jetson.utils
import vpi


# create the display window and camera source
display = jetson.utils.glDisplay()

camera = jetson.utils.gstCamera(1920, 1280, '0')
camera.Open()

while display.IsOpen():
    # capture a frame without an extra copy and wrap it as a VPI image
    frame, width, height = camera.CaptureRGBA(zeroCopy=1)
    input = vpi.asimage(np.uint8(jetson.utils.cudaToNumpy(frame)))

    with vpi.Backend.CUDA:
        # the box filter works on grayscale, so convert to U8 first,
        # then back to RGB8 for rendering
        output = input.convert(vpi.Format.U8)
        output = output.box_filter(11, border=vpi.Border.ZERO).convert(vpi.Format.RGB8)
        vpi.clear_cache()

    display.RenderOnce(jetson.utils.cudaFromNumpy(output.cpu()), width, height)
    display.SetTitle("{:s} | {:d}x{:d} | {:.1f} FPS".format("Camera Viewer", width, height, display.GetFPS()))

camera.Close()

Thanks.

Hello AastaLLL,

Thanks again for this. The code works and gives good FPS, but I believe there are synchronization issues: the images are very choppy (see the attached video), and a certain set of pixels lags behind the rest of the image, which makes the output unusable. I know about locking and unlocking images in VPI; is that something we can use from Python? I also saw a vpi.stream.sync() option; can we submit streams from Python? FYI, I have JetPack 4.6 with VPI 1.1.12 and Python 3.6.9. Also, is there documentation for the Python VPI interface?
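
In case it helps clarify what I mean, here is a sketch of what I imagine explicit stream handling would look like (assuming vpi.Stream and its sync() method are exposed by these Python bindings; I have not verified this):

import numpy as np
import vpi

# synthetic grayscale frame standing in for the camera capture
frame = np.zeros((1080, 1920), dtype=np.uint8)

stream = vpi.Stream()

# submit the work to an explicit stream on the CUDA backend...
with vpi.Backend.CUDA, stream:
    blurred = vpi.asimage(frame).box_filter(11, border=vpi.Border.ZERO)

# ...then block until it has finished before reading the result on the CPU
stream.sync()
result = blurred.cpu()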

Also, on another note, what would be an ideal way to do simple image arithmetic with VPI images (addition, subtraction, multiplication, etc.)?
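
The only approach I can think of is to drop back to NumPy through the .cpu() views used elsewhere in this thread, roughly like this (just a sketch, not tested):

import numpy as np
import vpi

# two synthetic U8 images standing in for real VPI results
a = vpi.asimage(np.full((480, 640), 100, dtype=np.uint8))
b = vpi.asimage(np.full((480, 640), 50, dtype=np.uint8))

# .cpu() exposes each image as a NumPy array, so ordinary NumPy arithmetic
# works; cast through int16 to avoid uint8 wrap-around on subtraction
diff = (a.cpu().astype(np.int16) - b.cpu().astype(np.int16)).clip(0, 255).astype(np.uint8)

# wrap the result back into a VPI image for further processing
diff_img = vpi.asimage(diff)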

Hi,

The blocking effect is caused by the rounding process.
For example, the same issue can be reproduced with the following script:

while display.IsOpen():
    frame, width, height = camera.CaptureRGBA(zeroCopy=1)
    input = np.uint8(jetson.utils.cudaToNumpy(frame))
    display.RenderOnce(jetson.utils.cudaFromNumpy(input), width, height)
    display.SetTitle("{:s} | {:d}x{:d} | {:.1f} FPS".format("Camera Viewer", width, height, display.GetFPS()))

To fix this, you can either align all of the data to uint8 right from cudaToNumpy, or extract the grayscale image yourself before calling vpi.asimage() with format=vpi.Format.F32.
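
For example, a sketch of the second option applied to the loop above, reusing the camera/display setup from the earlier example (the grayscale extraction and format handling here are untested assumptions, not verified code):

while display.IsOpen():
    frame, width, height = camera.CaptureRGBA(zeroCopy=1)

    # keep the capture in float and extract a grayscale plane ourselves,
    # so no uint8 rounding happens before the data reaches VPI
    rgba = jetson.utils.cudaToNumpy(frame)                  # float32 RGBA
    gray = rgba[..., :3].mean(axis=2).astype(np.float32)

    input = vpi.asimage(gray, format=vpi.Format.F32)
    with vpi.Backend.CUDA:
        # let VPI handle the float -> uint8 conversion, then filter and
        # go back to RGB8 for rendering, as in the earlier example
        output = input.convert(vpi.Format.U8)
        output = output.box_filter(11, border=vpi.Border.ZERO).convert(vpi.Format.RGB8)

    display.RenderOnce(jetson.utils.cudaFromNumpy(output.cpu()), width, height)
    display.SetTitle("{:s} | {:d}x{:d} | {:.1f} FPS".format("Camera Viewer", width, height, display.GetFPS()))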

Thanks.
