FLIR Lepton Streaming from Pi Zero to Nano

Also, the release version:

R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t210ref, EABI: aarch64, DATE: Fri Oct 16 19:44:43 UTC 202

Hi,
So the Lepton/PureThermal is connected to the USB port of the Raspberry Pi Zero, and you can successfully run this pipeline on the Pi Zero:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

The Jetson Nano has IP 192.168.0.187. When you run this pipeline on the Nano:

$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

You cannot see the video preview. Is this correct?

Hi DaneLLL,

See attached screenshot please.

The window on the right is the Pi Zero (SSH); the window on the left is the Nano.

The camera launches, as you can see, but it shows the redistribute latency message …

On the nano, you can see the output:

(gst-launch-1.0:24007): GStreamer-CRITICAL **: 23:15:32.988: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
WARNING: erroneous pipeline: syntax error

Hi,
When copying the pipeline,

'application/x-rtp,encoding-name=H264,payload=96'

becomes

‘application/x-rtp,encoding-name=H264,payload=96’

This leads to syntax error. Please correct the marks.
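As a sketch, copied commands can be sanitized programmatically before use; the helper below (my own, not from this thread) maps the typographic quotes that forums and word processors commonly substitute back to the plain ASCII quotes the shell and GStreamer expect:

```python
# Hedged sketch: normalize "smart" quotes that forums/word processors
# substitute for the plain ASCII quotes a shell command needs.
SMART_QUOTES = {
    "\u2018": "'",  # left single quotation mark
    "\u2019": "'",  # right single quotation mark
    "\u201c": '"',  # left double quotation mark
    "\u201d": '"',  # right double quotation mark
}

def sanitize_command(cmd: str) -> str:
    """Replace typographic quotes with plain ASCII equivalents."""
    for smart, plain in SMART_QUOTES.items():
        cmd = cmd.replace(smart, plain)
    return cmd
```

Running a pasted pipeline through this first avoids the "syntax error" shown above.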

Hmm, strange. So I fixed that syntax, and it took a little while, but for the first time I did see the video preview! I have hope now! I had to reboot the Nano because there was no option to close the almost-full-screen window. The video was super slow, but I'm just happy to see a stream.

Should I try the OpenCV capture now, and what pipeline do you recommend, knowing that the above preview works?

See the attached picture for the output.

Thank you so much, I think we are almost there!

Micky

Hi,
You may try this Python sample on Jetson Nano:
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
You would need to modify the GStreamer pipeline in cv2.VideoCapture() accordingly.

Hi DaneLLL,

This is what I used (I replaced the RTSP URL from the example you quoted). It doesn't work. See the output attached.

###########################################################
import cv2
import numpy as np

cam = cv2.VideoCapture("rtspsrc location=rtsp://192.168.0.43:5000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink ")

# Display the camera in a window named "Thermal Video"
while True:
    ret, frame = cam.read()
    # flipped = cv2.flip(frame, flipCode=1)  # have to have this, otherwise I get a horizontally mirrored image
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

##########################################################

The output, if you can't read it from the picture, is this:

#####################################################
micky@micky:~/Desktop/pyPro$ /usr/bin/python3 /home/micky/Desktop/pyPro/Flir/flir-stream-2.py
Opening in BLOCKING MODE
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module rtspsrc0 reported: Could not open resource for reading and writing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ERROR:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): rtspsrc location=rtsp://192.168.0.43:5000 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink in function 'icvExtractPattern'

Traceback (most recent call last):
  File "/home/micky/Desktop/pyPro/Flir/flir-stream-2.py", line 29, in <module>
    cv2.imshow('Thermal Video', frame)
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/highgui/src/window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
##########################################################

What else do you think I should modify in the openCV capture?

Thank you,
Micky

@DaneLLL

The pipeline to launch the camera on the Pi that you suggested yesterday, which I tested and which worked, was using UDP.

The example you asked me to try today on the Nano to receive the stream uses RTSP… wouldn't this be an issue?

Just so you know, I am using this pipeline on the Pi to launch the camera:

gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! x264enc ! h264parse ! rtph264pay ! udpsink host=192.168.0.187 port=5000

and yesterday I used the following pipeline on the Nano to see the preview:

gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

Thanks,
Micky

@DaneLLL

If I use the UDP source to capture the video stream (using a modified version of what worked yesterday on the Nano CLI), then I get an error:

######################################################

Using UDP source to receive the video stream

camSet= ‘udpsrc port=5000 ! ‘application/x-rtp,encoding-name=H264,payload=96’ ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink appsink’

cam=cv2.VideoCapture(camSet)

# Display the camera in a window named "Thermal Video"
while True:
    ret, frame = cam.read()
    # flipped = cv2.flip(frame, flipCode=1)  # have to have this, otherwise I get a horizontally mirrored image
    cv2.imshow('Thermal Video', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
#######################################################

Output:

micky@micky:~/Desktop/pyPro$ /usr/bin/python3 /home/micky/Desktop/pyPro/Flir/flir-stream-2.py
File “/home/micky/Desktop/pyPro/Flir/flir-stream-2.py”, line 24
camSet= ‘udpsrc port=5000 ! ‘application/x-rtp,encoding-name=H264,payload=96’ ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink appsink’
^
SyntaxError: invalid syntax

The syntax error points underneath the "application/x-rtp…" part.

see attached:

Hi,
Please ensure you can run the gst-launch-1.0 command:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink

If the above pipeline works, please try the string:

camSet="udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink"
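Building on that string, a minimal capture script might look like the sketch below. The helper name and defaults are mine; the display loop is only defined, not called, since it needs the live UDP stream and an OpenCV build with GStreamer support on the Nano:

```python
def build_udp_pipeline(port=5000, payload=96):
    """Assemble the udpsrc-to-appsink GStreamer string for cv2.VideoCapture."""
    return (
        f"udpsrc port={port} ! "
        f"application/x-rtp,encoding-name=H264,payload={payload} ! "
        "rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink"
    )

def show_stream():
    # Imported here so the string builder above works without OpenCV installed.
    import cv2
    cam = cv2.VideoCapture(build_udp_pipeline(), cv2.CAP_GSTREAMER)
    while True:
        ret, frame = cam.read()
        if not ret:
            break
        cv2.imshow("Thermal Video", frame)
        if cv2.waitKey(1) == ord("q"):
            break
    cam.release()
    cv2.destroyAllWindows()
```

Passing cv2.CAP_GSTREAMER explicitly keeps OpenCV from falling back to the CAP_IMAGES backend, which is what produced the "can't find starting number" error earlier in this thread.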

Hi @DaneLLL

I tried opening the cam using the pipeline above, but it doesn’t open. Blocking mode 261.

Please see attached the CLI on pi and nano side by side.

So far, only the following command has allowed me to get a preview (from one of your earlier posts):

gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! nvoverlaysink sync=false

So, I know it is doable, just not getting the proper arguments for the pipeline.

Thank you for your continued support, I appreciate it very much.

Micky

Hi,
Please check if you can run this Python code.
rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov is a public link; please make sure you can successfully connect to it and see video playback. Then try to replace the string with udpsrc ! rtph264depay ! …

Hi @DaneLLL,

This does not work.

But I have RTSP streaming from my own security camera, and this is what I use (using nanocamera from PyPI):

I have already tried using the RTSP URL for the Raspberry Pi in the hope that nanocamera would work with it too, but it doesn't. Maybe I should port-forward the Raspberry Pi and see?

###########################################################################
import cv2

from nanocamera.NanoCam import Camera

import nanocamera as nano

if __name__ == '__main__':
    # requires the RTSP location. Something like this: rtsp://localhost:8888/stream
    # For RTSP camera, the camera_type=2.
    # This only supports H.264 codec for now

    # a location for the rtsp stream
    rtsp_location = "192.168.0.168:554/HighResolutionVideo"
    # Create the Camera instance
    camera = nano.Camera(camera_type=2, source=rtsp_location, width=960, height=720, fps=21)
    print('RTSP Camera is now ready')
    while True:
        try:
            # read the camera image
            frame = camera.read()
            # display the frame
            cv2.imshow("Video Frame", frame)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
        except KeyboardInterrupt:
            break

    # close the camera instance
    camera.release()

    # remove camera object
    del camera

#########################################################################

Thanks
Micky

Hi,
We tested the case below and it works fine.
[Linux PC]
Start the UDP server:

$ gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480 ! x264enc ! h264parse ! rtph264pay ! udpsink host=10.19.107.174 port=5000 sync=0

[Jetson Nano with IP 10.19.107.174]

$ gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder ! nvoverlaysink

Maybe you can try running the Linux PC command on the Pi Zero and check whether the Nano can receive it.

Hi @DaneLLL

When I execute this on the Pi Zero, it seems to be good (test video, not the camera), because the GStreamer clock seems to be set properly.

On the Nano, after waiting almost 5 minutes, the test video did launch… extremely slowly, and the CLI also said that there is a timestamp error or the computer is too slow. Please see the attachment for the Nano output.

What do you suggest next?

Thank you,
Micky

@DaneLLL

Please see the attachment for the Nano output.

The Lepton camera only supports BGR, RGB8, RGB16, GRAY8, etc… there seems to be no H.264 support… does that even make sense? Or should we be able to encode with the NVIDIA library?

Thanks,
Micky

Hi,
If the source is not in NV12 or I420, you would need to do the conversion through the videoconvert plugin. It will look like:

v4l2src ! videoconvert ! video/x-raw,format=NV12 ! nvvidconv ! x264enc ! ...

hi @DaneLLL

I got some help and finally got it to work perfectly. See attached for screenshot.

This is what worked:

###########################################################################
PIPELINES FOR LEPTON3.5/PURETHERMAL
###########################################################################
on NANO:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG ! rtpjpegdepay ! jpegdec ! xvimagesink

on PI Zero:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! videoscale ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=9/1 ! jpegenc ! rtpjpegpay ! udpsink host=192.168.0.187 port=5000
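If an OpenCV capture is still wanted on top of this working MJPEG stream, the receiver pipeline above should translate to an appsink string along these lines. This is a sketch under my own assumptions (helper name is mine; jpegdec's output is converted to BGR for OpenCV via videoconvert), not something tested in this thread:

```python
def build_mjpeg_pipeline(port=5000):
    """udpsrc-to-appsink string for the MJPEG variant of the working pipeline."""
    return (
        f"udpsrc port={port} ! "
        "application/x-rtp,encoding-name=JPEG ! "
        "rtpjpegdepay ! jpegdec ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink"
    )

# Usage on the Nano (needs an OpenCV build with GStreamer support):
# cam = cv2.VideoCapture(build_mjpeg_pipeline(), cv2.CAP_GSTREAMER)
```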

Thank you for continuing to troubleshoot.

Micky
