Slow camera streaming with a Raspberry Pi camera on Jetson Nano


I am running a deep learning model (YOLOv4-tiny plus a social distancing algorithm) on a Jetson Nano. I am streaming a Raspberry Pi camera over CSI on the Jetson Nano for people detection. However, the streaming is too slow.

This is the code:

vs = cv2.VideoCapture('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=800, height=600, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1', cv2.CAP_GSTREAMER)
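One common way to speed up a pipeline like this is to capture at a lower resolution, since `nvvidconv`/`videoconvert` then move far less data per frame. Below is a minimal sketch of a helper that builds such a pipeline string; the function name and default parameters are my own choices, not from the original post:

```python
def gst_pipeline(width=640, height=480, fps=21, flip=2):
    """Build an nvarguscamerasrc pipeline string for cv2.VideoCapture.

    Lower width/height reduce the data nvvidconv and videoconvert must
    process per frame; drop=1 with max-buffers=1 keeps only the newest
    frame so a slow consumer does not back up the pipeline.
    """
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! "
        "appsink drop=1 max-buffers=1 sync=false"
    )

# Pass cv2.CAP_GSTREAMER explicitly so OpenCV uses the GStreamer backend:
# vs = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
```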

# Get video height, width and fps
height = int(vs.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(vs.get(cv2.CAP_PROP_FRAME_WIDTH))
fps = int(vs.get(cv2.CAP_PROP_FPS))

# Set scale for birds eye view
# Bird's eye view will only show ROI
scale_w, scale_h = utills.get_scale(width, height)

fourcc = cv2.VideoWriter_fourcc(*"XVID")
output_movie = cv2.VideoWriter("./output_vid/distancing.avi", fourcc, fps, (width, height))
bird_movie = cv2.VideoWriter("./output_vid/bird_eye_view.avi", fourcc, fps, (int(width * scale_w), int(height * scale_h)))
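Note that `CAP_PROP_FPS` often reports 0 for GStreamer appsink pipelines, and a `VideoWriter` created with fps=0 produces an unplayable file. A small guard like the sketch below (helper name and fallback value are my own) avoids that; the fallback should match the framerate in the pipeline caps:

```python
def safe_fps(reported_fps, fallback=30):
    """Return a usable fps for cv2.VideoWriter.

    GStreamer appsink pipelines frequently report 0 (or NaN) for
    CAP_PROP_FPS, so fall back to the framerate requested in the
    pipeline caps instead of writing a broken file.
    """
    try:
        fps = int(reported_fps)
    except (TypeError, ValueError):
        return fallback
    return fps if fps > 0 else fallback

# fps = safe_fps(vs.get(cv2.CAP_PROP_FPS))
```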
points = []
global image

count = 0
while True:

    (grabbed, frame) = vs.read()
    if not grabbed:
        break
    frame = cv2.resize(frame, (416, 416))
    (H, W) = frame.shape[:2]
    # first frame will be used to draw ROI and horizontal and vertical 180 cm distance (unit length in both directions)
    if count == 0:
        while True:
            image = frame
            cv2.imshow("image", image)
            cv2.waitKey(1)
            if len(mouse_pts) == 8:
                break
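Since the detector runs slower than the camera delivers frames, another common fix is to decouple capture from inference: read frames on a background thread and let the model always pick up the newest one. Here is a minimal sketch; the class name and interface are hypothetical, and `source` only needs a `cv2.VideoCapture`-like `read() -> (ok, frame)` method:

```python
import threading

class LatestFrameReader:
    """Read from a capture source on a background thread, keeping only
    the most recent frame so slow inference never causes frames to
    queue up behind the model.
    """

    def __init__(self, source):
        self.source = source
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        # Continuously overwrite self.frame with the newest capture.
        while self.running:
            ok, frame = self.source.read()
            if not ok:
                self.running = False
                break
            with self.lock:
                self.frame = frame

    def read(self):
        # Return (ok, latest_frame) without blocking on the camera.
        with self.lock:
            return self.frame is not None, self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

Usage would be `reader = LatestFrameReader(vs)` and then `ok, frame = reader.read()` inside the detection loop.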

Kindly looking for your support to accelerate the streaming. Another question: is it faster to stream the camera over USB than over CSI?


For optimal performance, we would suggest using the DeepStream SDK. You can install the package and check:


For running YOLOv4-tiny, please refer to
Jetson/L4T/TRT Customized Example -
