Hi, I am fairly new to streaming webcams to Jetson devices. I am currently attempting to stream a Raspberry Pi camera from a 64-bit Bullseye OS to a Xavier NX for some object detection. I have successfully set up a TCP stream using libcamera's getting-started example, but I have had trouble getting that pipeline to mesh with @dusty_nv 's jetson-inference docker container. My guess is that a TCP pipe isn't a supported stream type; unfortunately, I haven't been able to find any examples of pipes set up between this new 64-bit RPi OS and the video streams accepted by dusty's docker container. Any help with this would be much appreciated. I am happy to post some CLI readouts if needed.
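For reference, the TCP stream I have working follows the libcamera streaming example, roughly like this on the Pi (the exact flags may differ slightly from what I ran):
$ libcamera-vid -t 0 --inline --listen -o tcp://0.0.0.0:8888
which, per the libcamera docs, can then be opened from another machine in VLC as tcp/h264://<piIP>:8888.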
Hi,
We would suggest launching the source as RTSP. This should work fine.
If you need TCP, you can run RTSP over the TCP transport and customize the code, as in:
Jetson-inference input RTSP over UDP - #2 by dusty_nv
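For a quick command-line check of RTSP over the TCP transport (a sketch, assuming an H.264 stream), you can force TCP on rtspsrc like:
$ gst-launch-1.0 rtspsrc location=rtsp://<myserverIP>:8554/test protocols=tcp ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! nv3dsink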
Hi @DaneLLL,
Thank you for your response. Are there any examples you could provide that use libcamera to create an RTSP pipe with GStreamer 1.18.4 on the Pi side feeding GStreamer 1.16.3 on the Jetson side? For whatever reason my Jetson will not update past 1.16.3. I am interested beyond just these specific GStreamer versions too, and would wipe my Jetson to get it up to 1.18 if needed. Maybe a working gst-rtsp-server example using a CSI or USB cam? Any advice would be much appreciated!
I'm not sure it is the camera's issue; maybe it's libcamera? When I run gst-inspect-1.0 libcamerasrc I don't see a section about protocols at all. I have seen someone say that rtsp-simple-server might be able to facilitate this connection. Would that potentially be compatible with jetson-inference?
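From what I have read, rtsp-simple-server listens on port 8554 and accepts an RTSP publisher, so on the Pi something like this might publish the camera to it (a sketch, assuming the libcamerasrc and rtspclientsink GStreamer elements are available):
$ gst-launch-1.0 libcamerasrc ! video/x-raw,width=1280,height=720 ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast ! rtspclientsink location=rtsp://127.0.0.1:8554/test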
Hi,
A quick method is to run test-launch to create an RTSP server. Please refer to the steps in the Jetson Nano FAQ.
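The FAQ example uses Jetson encoder elements; on the Pi side the equivalent would be a software-encoded launch string, for example (a sketch, assuming the libcamerasrc element is available; adjust the caps for your camera):
$ ./test-launch "libcamerasrc ! video/x-raw,width=1280,height=720,framerate=30/1 ! videoconvert ! x264enc tune=zerolatency bitrate=2000 speed-preset=superfast ! rtph264pay name=pay0 pt=96"
test-launch then serves the stream at rtsp://<piIP>:8554/test.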
Hi @DaneLLL,
Thank you for your response. I tried test-launch and was able to get it to fire up successfully on my Pi after following this fix. However, when I attempt to receive the stream on my Jetson, I am met with the lack of nvoverlaysink. I tried using nvdrmvideosink and nv3dsink instead and am met with the decoding errors below.
gst-launch-1.0 uridecodebin uri=rtsp://<myserverIP>:8554/test ! nvdrmvideosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://<myserverIP>:8554/test
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Unhandled error
Additional debug info:
gstrtspsrc.c(6585): gst_rtspsrc_send (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Service Unavailable (503)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
gst-launch-1.0 uridecodebin uri=rtsp://<myserverIP>:8554/test ! nv3dsink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://<myserverIP>:8554/test
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Unhandled error
Additional debug info:
gstrtspsrc.c(6585): gst_rtspsrc_send (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Service Unavailable (503)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
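Since both sinks fail with the same 503, my next step is to re-run test-launch on the Pi with verbose GStreamer logging, which I understand should show whether the server-side pipeline is failing to start:
$ GST_DEBUG=3 ./test-launch "<launch string from above>"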
I moved on to attempting the test pipeline through the docker container and have copied the result below.
root@ubuntu:/jetson-inference# imagenet rtp://<myserverIP>:8554 --input-codec=h264
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for <myserverIP>
[gstreamer] gstDecoder -- resource discovery not supported for RTP streams
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] udpsrc port=8554 multicast-group=<myserverIP> auto-multicast=true caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtph264depay ! h264parse ! nvv4l2decoder ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from rtp://<myserverIP>:8554
------------------------------------------------
gstDecoder video options:
------------------------------------------------
-- URI: rtp://<myserverIP>:8554
- protocol: rtp
- location: <myserverIP>
- port: 8554
-- deviceType: ip
-- ioType: input
-- codec: h264
-- width: 0
-- height: 0
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 1
[TRT] TensorRT version 8.4.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT] Registered plugin creator - ::ProposalDynamic version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT] Registered plugin creator - ::CoordConvAC version 1
[TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 220, GPU 5075 (MiB)
[TRT] [MemUsageChange] Init builder kernel library: CPU +131, GPU +250, now: CPU 370, GPU 5232 (MiB)
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] found engine cache file /usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.8401.GPU.FP16.engine
[TRT] found model checksum /usr/local/bin/networks/bvlc_googlenet.caffemodel.sha256sum
[TRT] echo "$(cat /usr/local/bin/networks/bvlc_googlenet.caffemodel.sha256sum) /usr/local/bin/networks/bvlc_googlenet.caffemodel" | sha256sum --check --status
[TRT] model matched checksum /usr/local/bin/networks/bvlc_googlenet.caffemodel.sha256sum
[TRT] loading network plan from engine cache... /usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.8401.GPU.FP16.engine
[TRT] device GPU, loaded /usr/local/bin/networks/bvlc_googlenet.caffemodel
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 255, GPU 5141 (MiB)
[TRT] Loaded engine size: 14 MiB
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +345, GPU +363, now: CPU 603, GPU 5504 (MiB)
[TRT] Deserialization required 4300778 microseconds.
[TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +13, now: CPU 0, GPU 13 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +2, now: CPU 604, GPU 5505 (MiB)
[TRT] Total per-runner device persistent memory is 94720
[TRT] Total per-runner host persistent memory is 112976
[TRT] Allocated activation device memory of size 3612672
[TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +4, now: CPU 0, GPU 17 (MiB)
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 72
[TRT] -- maxBatchSize 1
[TRT] -- deviceMemory 3612672
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'data'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3
-- dim #1 224
-- dim #2 224
[TRT] binding 1
-- index 1
-- name 'prob'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1000
-- dim #1 1
-- dim #2 1
[TRT]
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 prob binding index: 1
[TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000
[TRT]
[TRT] device GPU, /usr/local/bin/networks/bvlc_googlenet.caffemodel initialized.
[TRT] loaded 1000 class labels
[TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized.
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
Opening in BLOCKING MODE
[gstreamer] gstDecoder -- failed to set pipeline state to PLAYING (error 0)
imagenet: shutting down...
imagenet: shutdown complete.
root@ubuntu:/jetson-inference#
Any further advice would be greatly appreciated.
I tried opening the stream in VLC and could not open it either, even though I am met with
stream ready at rtsp://127.0.0.1:8554/test
on the Pi.
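One thing I notice in my log above is that I passed an rtp:// URI to imagenet while the server is serving RTSP; if jetson-inference accepts rtsp:// URIs directly, perhaps the invocation should instead be something like:
$ imagenet rtsp://<myserverIP>:8554/test --input-codec=h264
Would that be the correct form?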
Hi,
If you have a laptop or host PC running Windows, you can try the VLC player: open the network stream rtsp://<myserverIP>:8554/test in VLC.
If it works with VLC on Windows, please try this command on the Jetson device:
$ gst-launch-1.0 uridecodebin uri=rtsp://<myserverIP>:8554/test ! fakesink
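If this pipeline reaches PLAYING without the 503 error, the RTSP server side is working and the remaining issue is in the decoding or display elements on the Jetson.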