FLIR Lepton Streaming from Pi Zero to Nano

Hello folks,

I have a Logitech C920 connected to the USB port on the Pi Zero, streaming video to the Jetson Nano over UDP. This works flawlessly, and I am able to do image recognition and classification on the video that is streamed to the Nano.

Next, I checked the Lepton 3.5 connected to the Nano via a PureThermal Mini. I am able to capture the video using OpenCV. This was literally plug and play, super easy.
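For context, opening it locally was as simple as this (a minimal sketch, assuming the PureThermal enumerates as V4L2 device 0 on the Nano):

import cv2

# The PureThermal Mini presents the Lepton as a standard UVC/V4L2 camera,
# so OpenCV can open it directly by device index.
cam = cv2.VideoCapture(0)
ret, frame = cam.read()
if ret:
    cv2.imshow('Lepton', frame)
    cv2.waitKey(0)
cam.release()
cv2.destroyAllWindows()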

Now I’m trying to stream the video from the Lepton/PureThermal connected to the USB port of the Raspberry Pi Zero.

I’m able to launch the video on the Pi using GStreamer. There are no errors, so I assume it launched properly (pipeline set to PLAYING, GStreamer clock set, etc.).

What I can’t figure out is how to capture this video stream on the Nano.

Has anyone tried capturing the FLIR Lepton stream via streaming protocols/GStreamer?

Thank you in advance for your help.

Hi,
You may use RTSP streaming. Please refer to
Jetson Nano FAQ
Q: Is there any example of running RTSP streaming?

The server is the Raspberry Pi Zero. You can run test-launch on it to launch an RTSP server. The nvv4l2h264enc element can be replaced with the Pi Zero’s hardware encoder or the software encoder x264enc. The Jetson Nano is the client, and you can run

$ gst-launch-1.0 uridecodebin uri=rtsp://<SERVER_IP_ADDRESS>:8554/test ! nvoverlaysink
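For completeness, the server side on the Pi would look something like this (a sketch; test-launch is the example binary from gst-rtsp-server, and the pipeline string here is an assumed USB-camera configuration, not a verified PureThermal one):

$ ./test-launch "( v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"

test-launch requires the RTP payloader to be named pay0, and it serves the stream at rtsp://<SERVER_IP_ADDRESS>:8554/test.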

Thanks, DaneLLL.

This is what I use for the Logitech C920 connected to the USB port on the Pi Zero, streaming to my Nano, and it works:

Launch pipeline on the Pi:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! jpegenc ! rtpjpegpay ! udpsink host=192.168.0.187 port=5000

Receive Pipeline on nano:

camSet='udpsrc port=5000 ! application/x-rtp, encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink sync=false'
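In the Python script, that camSet string is passed to cv2.VideoCapture like this (a sketch, assuming OpenCV was built with GStreamer support; passing cv2.CAP_GSTREAMER explicitly makes the backend choice unambiguous):

import cv2

camSet = ('udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! '
          'rtpjpegdepay ! jpegdec ! videoconvert ! appsink sync=false')
cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)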

Now I’m connecting the PureThermal 2 with the Lepton 3.5 to the USB port (in place of the Logitech camera).

Using this pipeline on the Pi, I am able to launch on the Pi side without any errors:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=RGB16 ! videoscale ! video/x-raw,width=640,height=480 ! videoconvert ! rndbuffersize max=65000 ! udpsink host=192.168.0.187 port=5000

I can’t figure out what the receive pipeline should look like, probably because I’m unsure how to encode and decode the formats that the PureThermal 2 supports: BGR, RGB16, UYVY, GRAY8, etc.

How do you suggest I capture this on my Nano?

Thank you for your help.

Micky

Hi,
On PI Zero, you can try a pipeline like

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=UYVY,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=I420 ! xvimagesink
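The exact formats, resolutions, and frame rates the device exposes can be listed with v4l2-ctl (a standard check, assuming the v4l-utils package is installed on the Pi); use it to pick caps that v4l2src will accept:

$ v4l2-ctl --list-formats-ext -d /dev/video1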

Per your description, format=UYVY should be correct. You need to set correct width, height, and framerate to fit the sensor mode. If you can see video preview, please try

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=UYVY,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=I420 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

And on Jetson Nano, you can run

$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false
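If the Pi Zero cannot encode in real time with the defaults, the standard x264enc properties tune=zerolatency and speed-preset=ultrafast are worth adding (a suggestion, not a verified PureThermal configuration):

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=UYVY,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency speed-preset=ultrafast ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000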

Thanks, DaneLLL.

I can launch the camera preview on the Pi only with ximagesink instead of xvimagesink, but it does launch.

As for streaming, the closest I’ve gotten is this:

Using this pipeline, it looks like the camera launches, but it shows “Redistribute latency”. I do see the camera blinking, which means it’s on and capturing; I know this from launching the camera on the Pi when connected to a monitor.

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! rndbuffersize max=65000 ! udpsink host=192.168.0.187 port=5000
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Redistribute latency…

And the closest I’ve come to capturing the stream on the Nano is that it opens in BLOCKING MODE; that is what the console displays, but the stream itself never appears. I think we are close, but I can’t figure this out.

Using this pipeline (modified a bit from the pipeline I use for streaming from the Raspberry Pi camera):

camSet='udpsrc port=5000 ! gdpdepay ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv flip-method='+str(flip)+' ! video/x-raw,format=UYVY ! videoconvert ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+',format=I420 ! appsink drop=true sync=false'

Full code on the nano is this (simple video capture)

#############################################################
import cv2
import numpy as np

#Dimensions for raspberry pi camera version 2 is 1280 x 720
dispW=640
dispH=480
flip=4 # flip method 4 to flip image horizontally, otherwise this camera gives a mirror image

#Open the raspberry pi camera attached to jetson nano
#camSet='nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'

#To use the raspberry pi camera instead of USB camera connected to raspberry pi for wireless streaming use the line below
#camSet='tcpclientsrc host=192.168.0.43 port=8554 ! gdpdepay ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv flip-method='+str(flip)+' ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+',format=BGR ! appsink drop=true sync=false'

#For USB camera connected to raspberry pi. Don't change line below! ITS WORKING FINALLY!
#camSet='udpsrc port=5000 ! clock-rate=90000,payload=96 ! rtph263pdepay queue-delay=0 ! ffdec_h263 ! xvimagesink ! appsink sync=false'

#Testing for USB purethermal camera connected to raspberry pi.
#camSet='udpsrc port=5000 ! clock-rate=90000, payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! appsink sync=false'
camSet='udpsrc port=5000 ! gdpdepay ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv flip-method='+str(flip)+' ! video/x-raw,format=UYVY ! videoconvert ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+',format=I420 ! appsink drop=true sync=false'
cam=cv2.VideoCapture(camSet)

#Display the camera in a window named "Thermal Video"
while True:
    ret,frame=cam.read()
    #flipped=cv2.flip(frame, flipCode=1) #have to have this otherwise get horizontal mirror image
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1)==ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

##########################################################################
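One note on the loop above: cam.read() returns ret=False when no frame has arrived, and calling cv2.imshow on the resulting empty frame raises an assertion error. A guarded version of the loop (a sketch, independent of which pipeline string is used):

import cv2

cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)  # camSet as defined above
if not cam.isOpened():
    raise RuntimeError('VideoCapture could not open the GStreamer pipeline')

while True:
    ret, frame = cam.read()
    if not ret:
        # No decoded frame yet; skip instead of passing an empty frame to imshow
        continue
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()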

Any other things I can try? I know I’m almost there, any help is appreciated!

Thank you,
Micky

Hi,
On Jetson Nano, are you able to see video playback when running:

$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

Would like to make sure the UDP stream can be received/decoded correctly on Jetson Nano.

If I run the pipeline in the Nano console, I get the following (it doesn’t seem to be decoded properly):

micky@micky:~$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false
Setting pipeline to PAUSED …
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
WARNING: from element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Could not decode stream.
Additional debug info:
gstrtpbasedepayload.c(466): gst_rtp_base_depayload_handle_buffer (): /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0:
Received invalid RTP payload, dropping
(the same WARNING repeats several more times)
ERROR: from element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: The stream is in the wrong format.
Additional debug info:
gstrtph264depay.c(1270): gst_rtp_h264_depay_process (): /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0:
NAL unit type 27 not supported yet
Execution ended after 0:00:04.310688626
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

Hi,
Please run the pipeline on Pi Zero and try again:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640, height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

Generally we don’t link in the rndbuffersize plugin; the pipeline should work without it.
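If UDP packet size was the underlying concern, the usual knob is the mtu property on rtph264pay (a standard RTP payloader property, default 1400 bytes), rather than rndbuffersize:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay mtu=1400 ! udpsink host=192.168.0.187 port=5000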

And please share your release version( $ head -1 /etc/nv_tegra_release ).

Hi DaneLLL,

The reason for the rndbuffersize is that when I was running it locally on the Nano (for testing the thermal camera) it was giving an error, and on searching I found that it had to do with specifying the buffer size, hence the element. It does run without it, but sometimes it causes an issue.

Anyway, just tried your pipeline and this is the response I get:

Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Redistribute latency…

Same as before. I think the camera is streaming (fast blinking), but: 1) I have noticed that when the “Redistribute latency” line appears with other cameras, it usually indicates an error. 2) Even if it is indeed streaming, how do we capture it on the Nano? That is the main issue: the correct pipeline to receive the stream.

I have tried multiple versions with changes for trial and error to see if any pipeline works, but no luck.

Thank you for your continued help, I appreciate your time very much.

Micky

Also, the release version:

R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t210ref, EABI: aarch64, DATE: Fri Oct 16 19:44:43 UTC 2020

Hi,
So the Lepton/PureThermal is connected to the USB port of the Raspberry Pi Zero, and you can successfully run this pipeline on the Pi Zero:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

The Jetson Nano has IP 192.168.0.187. When you run the pipeline on the Nano:

$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

You cannot see video preview. Is this correct?

Hi DaneLLL,

See attached screenshot please.

The window on the right is the Pi Zero (SSH).

The window on the left is the Nano.

The camera launches, as you can see, but it shows “Redistribute latency…”

On the Nano, you can see the output:

(gst-launch-1.0:24007): GStreamer-CRITICAL **: 23:15:32.988: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
WARNING: erroneous pipeline: syntax error

Hi,
When copying the pipeline,

'application/x-rtp,encoding-name=H264,payload=96'

becomes

‘application/x-rtp,encoding-name=H264,payload=96’

This leads to syntax error. Please correct the marks.

Hmm, strange. So I fixed that syntax, and it took a little while, but for the first time I did see the video preview! I have hope now! I had to reboot the Nano because there was no option to close the almost full-size window. Video was super slow, but I’m just happy to see a stream.

Should I try the OpenCV capture now, and what pipeline do you recommend, given that the above preview works?

See attached picture for the output

Thank you so much, I think we are almost there!

Micky

Hi,
You may try this python sample on Jetson Nano:
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
You would need to modify the GStreamer pipeline in cv2.VideoCapture() accordingly.

Hi DaneLLL,

This is what I used (I replaced the RTSP URL from the example you quoted). It doesn’t work; see the output attached.

###########################################################
import cv2
import numpy as np

cam=cv2.VideoCapture("rtspsrc location=rtsp://192.168.0.43:5000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink ")

#Display the camera in a window named "Thermal Video"
while True:
    ret,frame=cam.read()
    #flipped=cv2.flip(frame, flipCode=1) #have to have this otherwise get horizontal mirror image
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1)==ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

##########################################################

The output, if you can’t read it from the picture, is this:

#####################################################
micky@micky:~/Desktop/pyPro$ /usr/bin/python3 /home/micky/Desktop/pyPro/Flir/flir-stream-2.py
Opening in BLOCKING MODE
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module rtspsrc0 reported: Could not open resource for reading and writing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ERROR:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): rtspsrc location=rtsp://192.168.0.43:5000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink in function 'icvExtractPattern'

Traceback (most recent call last):
File "/home/micky/Desktop/pyPro/Flir/flir-stream-2.py", line 29, in <module>
cv2.imshow('Thermal Video', frame)
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/highgui/src/window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
##########################################################

What else do you think I should modify in the OpenCV capture?

Thank you,
Micky

@DaneLLL

The pipeline you suggested yesterday to launch the camera on the Pi, which I tested and which worked, uses UDP.

The example you asked me to try today on the Nano receives an RTSP stream. Wouldn’t this be an issue?

Just so you know, I am using this pipeline on the Pi to launch the camera:

gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

and yesterday I used the following pipeline on the Nano to see the preview:

gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

Thanks,
Micky

@DaneLLL

If I use the UDP source to capture the video stream (a modified version of what worked yesterday on the Nano CLI), then I get an error:

######################################################

Using UDP source to receive the video stream

camSet= ‘udpsrc port=5000 ! ‘application/x-rtp,encoding-name=H264,payload=96’ ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink appsink’

cam=cv2.VideoCapture(camSet)

#Display the camera in a window named "Thermal Video"
while True:
    ret,frame=cam.read()
    #flipped=cv2.flip(frame, flipCode=1) #have to have this otherwise get horizontal mirror image
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1)==ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
#######################################################

Output:

micky@micky:~/Desktop/pyPro$ /usr/bin/python3 /home/micky/Desktop/pyPro/Flir/flir-stream-2.py
File “/home/micky/Desktop/pyPro/Flir/flir-stream-2.py”, line 24
camSet= ‘udpsrc port=5000 ! ‘application/x-rtp,encoding-name=H264,payload=96’ ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink appsink’
^
SyntaxError: invalid syntax

The syntax error points underneath the "application/x-rtp…" part.

see attached:

Hi,
Please ensure you can run the gst-launch-1.0 command:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink

If the above pipeline works, please try the string:

camSet="udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink"

Hi @DaneLLL

I tried opening the camera using the pipeline above, but it doesn’t open; the console just shows the “Opening in BLOCKING MODE” and BlockType = 261 messages, and no video appears.

Please see attached the CLI on pi and nano side by side.

So far, only the following command has allowed me to get a preview (from one of your earlier posts):
gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

So I know it is doable; I’m just not getting the proper arguments for the pipeline.

Thank you for your continued support, I appreciate it very much.

Micky