Hi everyone, I managed to install and use the NVIDIA Docker container to access the camera and process the feed with pretrained networks (jetson-inference/aux-docker.md at master · dusty-nv/jetson-inference (github.com)), but it felt like I was using prebuilt tools and everything was a black box to me. My task was to receive video/images from the camera and send them to the host via RTP. I used OpenCV and GStreamer for this; while researching the web I did not find any proper guide on GStreamer, so I am attaching what I've done here.
import sys
import cv2

def read_cam():
    # Capture from the CSI camera via GStreamer (nvarguscamerasrc),
    # downscale to 960x616 and convert to BGR so OpenCV can use the frames.
    cap = cv2.VideoCapture(
        'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3280, height=2464, '
        'format=(string)NV12, framerate=21/1 ! nvvidconv flip-method=0 ! '
        'video/x-raw, width=960, height=616, format=(string)BGRx ! videoconvert ! '
        'video/x-raw, format=(string)BGR ! appsink',
        cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        print("pipeline open failed")
        sys.exit(1)

    w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    print('Src opened, %dx%d @ %d fps' % (w, h, fps))

    # Encode to H.264 with the hardware encoder and stream over RTP/UDP
    # to the host (adjust host/port for your network).
    gst_out = ('appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! '
               'video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! '
               'rtph264pay pt=96 config-interval=1 ! '
               'udpsink host=192.168.1.62 port=15000')
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
    if not out.isOpened():
        print("Failed to open output")
        sys.exit(1)

    while True:
        ret_val, img = cap.read()
        if not ret_val:
            break
        out.write(img)
        cv2.waitKey(1)

    cap.release()
    out.release()
    print("successfully exit")

if __name__ == '__main__':
    read_cam()
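To view the stream on the host, a stock gst-launch pipeline works; the port and payload type must match the udpsink settings in the script above. This is a sketch assuming the host has GStreamer with the libav plugins installed (for a software H.264 decoder):

```shell
# Receive the RTP/H.264 stream sent by the Jetson (port 15000, pt=96)
# and display it in a window; assumes gstreamer1.0-libav is installed.
gst-launch-1.0 -v udpsrc port=15000 \
    caps="application/x-rtp, media=(string)video, encoding-name=(string)H264, payload=(int)96" \
    ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink
```

Because plain RTP carries no SPS/PPS out of band, the `config-interval=1` on the sender side is what lets this receiver start decoding mid-stream.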
Parts of the code were probably taken from NVIDIA answers in other topics.
My hardware: NVIDIA Jetson Nano 2GB, Raspberry Pi camera v2.
The script works on both Python 2 and Python 3.
During development I got stuck on a few problems related to OpenCV's GStreamer support under Python 3, or OpenCV being unavailable on the board; in these cases I would suggest simply reflashing the SD card and booting the Nano from scratch.
Thanks everyone, I hope this post will be helpful for someone.