How to get video-viewer.py to receive a stream from a Raspberry Pi

I have the following running on a Raspberry Pi Zero W:

raspivid -t 0 -w 1296 -h 730 -fps 30 -b 2000000 -awb auto -n -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=0.0.0.0 port=8554

The Pi's IP address is set to a static 10.0.0.15.
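
As a quick sanity check, a few lines of Python on the Jetson can confirm the Pi's TCP port is reachable (a minimal sketch, assuming the static address and port above; it only proves the socket accepts connections, not that valid H.264/GDP data is flowing):

import socket

# Connect to the Pi's tcpserversink and try to read a few bytes (assumed 10.0.0.15:8554)
with socket.create_connection(("10.0.0.15", 8554), timeout=3) as s:
    try:
        data = s.recv(4096)
        print("connected, received %d bytes" % len(data))
    except socket.timeout:
        print("connected, but no data within 3 seconds")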

On the Jetson Nano (JetPack 4.4, OpenCV 4.5.2 compiled with CUDA support), I am able to receive the video feed from the Pi without lag using the following Python code:


import cv2
print(cv2.__version__)

dispW = 1296
dispH = 730
flip = 2

# tested working parameters
camSet = 'tcpclientsrc host=10.0.0.15 port=8554 ! gdpdepay ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv flip-method='+str(flip)+' ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+',format=BGR ! appsink drop=true sync=false'

cam = cv2.VideoCapture(camSet)

while True:
    ret, frame = cam.read()
    if not ret:
        break
    cv2.imshow('nanoCam', frame)
    #cv2.moveWindow('nanoCam', 0, 0)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

I would like to use the video-viewer.py tool from jetson-inference, but I am having zero luck!

I have tried running it as "python3 video-viewer.py rtp://10.0.0.15:8554 --input-codec=h264" and no dice.

(I have a Reolink IP camera, and video-viewer.py works with that stream on the Jetson Nano, although the video is very delayed.)

I'm getting the following error. If anyone can point me in the right direction, I would appreciate the help (I am not very familiar with GStreamer and everything going on under the hood):

test@nano-desktop:~/jetson-inference/build/aarch64/bin$ python3 video-viewer.py rtp://10.0.0.15:8554 --input-codec=h264
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for 10.0.0.15
[gstreamer] gstDecoder -- resource discovery not supported for RTP streams
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] udpsrc port=8554 multicast-group=10.0.0.15 auto-multicast=true caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtph264depay ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from rtp://10.0.0.15:8554

gstDecoder video options:

-- URI: rtp://10.0.0.15:8554
     - protocol: rtp
     - location: 10.0.0.15
     - port: 8554
-- deviceType: ip
-- ioType: input
-- codec: h264
-- width: 0
-- height: 0
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[OpenGL] glDisplay -- X screen 0 resolution: 2560x1440
[OpenGL] glDisplay -- X window resolution: 2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
     - protocol: display
     - location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 2560
-- height: 1440
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstDecoder -- failed to set pipeline state to PLAYING (error 0)
video-viewer: failed to capture video frame
video-viewer: shutting down…
video-viewer: shutdown complete

Hi @therock112, can you try changing the tcpserversink to udpsink and removing the gdppay element? video-viewer's rtp:// input receives plain RTP over UDP (see the udpsrc pipeline in your log), so the Pi needs to push the stream to the Jetson's address. Also insert the IP address of your Jetson where it says $JETSON_IP:

raspivid -t 0 -w 1296 -h 730 -fps 30 -b 2000000 -awb auto -n -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$JETSON_IP port=8554

And then run video-viewer like this:

python3 video-viewer.py rtp://@:8554 --input-codec=h264
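
(If it's useful for cross-checking outside of video-viewer, an OpenCV receiver for that same udpsink sender might look like the sketch below; the caps mirror the udpsrc pipeline from your log and the decoder from your working tcpclientsrc string, but this exact pipeline is untested here.)

import cv2

# Hypothetical udpsrc-based receiver matching the udpsink sender above
camSet = ('udpsrc port=8554 caps="application/x-rtp,media=(string)video,'
          'clock-rate=(int)90000,encoding-name=(string)H264" ! '
          'rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! '
          'video/x-raw,format=BGRx ! videoconvert ! '
          'video/x-raw,format=BGR ! appsink drop=true sync=false')

cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
ret, frame = cam.read()
print("got a frame" if ret else "no frame")
cam.release()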

I made the recommended changes and am still getting the "failed to capture video frame" error.

I tried the Python video-viewer.py, which just sits there, and then the compiled C++ video-viewer, which gives the following output.

Any suggestions?

test@nano-desktop:~/jetson-inference/build/aarch64/bin$ ./video-viewer rtp://@:8554 --input-codec=h264
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for 127.0.0.1
[gstreamer] gstDecoder -- resource discovery not supported for RTP streams
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] udpsrc port=8554 multicast-group=127.0.0.1 auto-multicast=true caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtph264depay ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from rtp://@:8554

gstDecoder video options:

-- URI: rtp://@:8554
     - protocol: rtp
     - location: 127.0.0.1
     - port: 8554
-- deviceType: ip
-- ioType: input
-- codec: h264
-- width: 0
-- height: 0
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[OpenGL] glDisplay -- X screen 0 resolution: 2560x1440
[OpenGL] glDisplay -- X window resolution: 2560x1440
[OpenGL] glDisplay -- display device initialized (2560x1440)
[video] created glDisplay from display://0

glDisplay video options:

-- URI: display://0
     - protocol: display
     - location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 2560
-- height: 1440
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> udpsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc0
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame
video-viewer: failed to capture video frame

If you run ifconfig on your Jetson, do you see the RX packets/data increasing after you start streaming from the Pi?
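
Alternatively to watching ifconfig, a short Python listener can confirm whether RTP packets are actually arriving on port 8554 (a rough sketch; run it instead of video-viewer, not alongside it, since only one process can bind the port):

import socket

# Bind the RTP port and wait briefly for a datagram from the Pi
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 8554))
sock.settimeout(5)
try:
    data, addr = sock.recvfrom(2048)
    print("received %d bytes from %s" % (len(data), addr[0]))
except socket.timeout:
    print("no packets within 5 seconds")
finally:
    sock.close()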

In the meantime, since you already have it working with cv2.VideoCapture(), you can use the cudaFromNumpy() function to convert the frames to CUDA images, like in this example:

https://github.com/dusty-nv/jetson-utils/blob/c373f49cf21ad2cae7e4d7da7c41f4fd6473958f/python/examples/cuda-from-cv.py#L45

What I would really like to do is feed the video stream from the Raspberry Pi into the detectnet code.

Yep, definitely seeing a ton of packets coming into the Jetson Nano from the Pi. Not sure why the Python code can't display the video stream.

Since you have the video decoding working through cv2.VideoCapture, try modifying the script like so:

# input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)  # comment this out
output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv+is_headless)

cam = cv2.VideoCapture(camSet)
rgb_img = None

while True:
    ret, cv_frame = cam.read()

    # wrap the BGR OpenCV frame (a numpy array) as a CUDA image
    bgr_img = jetson.utils.cudaFromNumpy(cv_frame, isBGR=True)

    # allocate the RGB buffer once and reuse it on subsequent frames
    if rgb_img is None:
        rgb_img = jetson.utils.cudaAllocMapped(width=bgr_img.width,
                                               height=bgr_img.height,
                                               format='rgb8')

    # convert BGR -> RGB for the network
    jetson.utils.cudaConvertColor(bgr_img, rgb_img)

    detections = net.Detect(rgb_img)
    output.Render(rgb_img)

It's working!! Thank you @dusty_nv!!

For anyone interested, run the following on the Nano; the sender runs on a Raspberry Pi Zero W with WiFi.

Use the TCP streamer posted above and receive with the code below, with detection. It has about a 0.5 to 1 second delay, not bad!

import argparse
import sys

import cv2
import jetson.inference
import jetson.utils

print(cv2.__version__)

dispW = 1296
dispH = 730
flip = 2

# fill in the streamer's IP address and server port below
camSet = 'tcpclientsrc host=IP_OF_STREAMER port=YOUR_SRVR_PORT ! gdpdepay ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv flip-method='+str(flip)+' ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+',format=BGR ! appsink drop=true sync=false'

parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.",
                                 formatter_class=argparse.RawTextHelpFormatter, epilog=jetson.inference.detectNet.Usage() +
                                 jetson.utils.videoSource.Usage() + jetson.utils.videoOutput.Usage() + jetson.utils.logUsage())

parser.add_argument("input_URI", type=str, default="", nargs='?', help="URI of the input stream")
parser.add_argument("output_URI", type=str, default="", nargs='?', help="URI of the output stream")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are: 'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use")

is_headless = ["--headless"] if sys.argv[0].find('console.py') != -1 else [""]

try:
    opt = parser.parse_known_args()[0]
except:
    print("")
    parser.print_help()
    sys.exit(0)

# load the object detection network
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)

output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv+is_headless)

cam = cv2.VideoCapture(camSet)
rgb_img = None

while True:
    ret, cv_frame = cam.read()
    if not ret:
        break

    # wrap the BGR OpenCV frame as a CUDA image, then convert it to RGB
    bgr_img = jetson.utils.cudaFromNumpy(cv_frame, isBGR=True)

    if rgb_img is None:
        rgb_img = jetson.utils.cudaAllocMapped(width=bgr_img.width, height=bgr_img.height, format='rgb8')

    jetson.utils.cudaConvertColor(bgr_img, rgb_img)

    detections = net.Detect(rgb_img)
    output.Render(rgb_img)

    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
