I’m working on a computer vision application using the Jetson Orin Nano 8GB Developer Kit, and I’m encountering an issue while trying to save recorded video files from an RTSP stream using the DeepStream pipeline.
Setup Details:
- Device: Jetson Orin Nano 8GB Developer Kit
- JetPack Version: 5.1.5 (L4T 35.5.0)
- DeepStream Version: (please fill in, e.g., DeepStream 6.3)
- Python Version: 3.8
- GStreamer Backend: Using DeepStream elements (nvv4l2decoder, nvstreammux, nvvideoconvert, nvinfer, nvdsosd, etc.)
- Encoder Used: Software encoder (x264enc/avenc_h264) instead of nvv4l2h264enc, since the Orin Nano doesn't include hardware encoder support.
I can successfully connect to and process both RTSP streams in real time with my DeepStream pipeline. However, when I try to record the processed output to disk, no output file is ever created.
Since the Jetson Orin Nano has no hardware video encoder (so nvv4l2h264enc is unavailable), I'm using a software encoder such as x264enc or avenc_h264. Decoding, inference, and on-screen display all work fine, but the recording step either fails silently or produces invalid files.
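For reference, this is roughly the software-encode recording branch I expect deepstream-app to build for a file sink with enc-type=1. This is only a sketch: the element names are the standard GStreamer/DeepStream plugins, but the exact bin deepstream-app constructs internally may differ, and the x264enc parameters are my guesses.

```python
# Sketch of the expected software-encode recording branch for [sink1]
# (enc-type=1, container=1 -> MP4). Element names are standard
# GStreamer/DeepStream plugins; parameters are assumptions for illustration.
record_branch = " ! ".join([
    "nvvideoconvert",                           # NVMM -> system memory for the CPU encoder
    "capsfilter caps=video/x-raw,format=I420",  # x264enc expects raw system-memory frames
    "x264enc bitrate=4000 speed-preset=ultrafast tune=zerolatency",  # x264enc bitrate is in kbit/s
    "h264parse",
    "qtmux",                                    # MP4 container (container=1)
    "filesink location=output.mp4 sync=false",
])
print(record_branch)
```

If anyone can confirm whether deepstream-app builds something equivalent to this when enc-type=1 on Orin Nano, that would already help.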
This is the deepstream_app.txt file I am using:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=2
width=1920
height=720
gpu-id=0
nvbuf-memory-type=0

# Source 0 (RTSP camera 1)
[source0]
enable=1
type=3
uri=rtsp://admin:aiindia%40123@192.168.1.69:554/Streaming/Channels/101
num-sources=1
gpu-id=0
cudadec-memtype=0
# RTSP-specific tuning
latency=200
drop-on-latency=1

# Source 1 (RTSP camera 2)
[source1]
enable=1
type=3
uri=rtsp://admin:aiindia%40123@192.168.1.64:554/Streaming/Channels/101
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=200
drop-on-latency=1

[streammux]
gpu-id=0
batch-size=2
batched-push-timeout=40000
width=1280
height=720
live-source=1
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
config-file=config_infer_primary_mobile.txt
gie-unique-id=1
nvbuf-memory-type=0
interval=0

[osd]
enable=1
gpu-id=0
border-width=2
text-size=12
text-color=1;1;1;1
text-bg-color=0.3;0.3;0.3;1
nvbuf-memory-type=0

[sink0]
enable=1
type=2 # EglSink (on-screen)
sync=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3 # 3 = File sink
container=1 # 1=MP4, 2=MKV
codec=1 # 1=H264, 2=H265
enc-type=1 # 0=HW encoder, 1=SW
sync=0
bitrate=4000000 # 4 Mbps
profile=0 # H264 baseline profile
output-file=output.mp4
source-id=0 # which stream to save (0 = first camera)

[sink2]
enable=0
type=3 # 3 = File sink
container=1 # 1=MP4, 2=MKV
codec=1 # 1=H264, 2=H265
enc-type=1 # 0=HW encoder, 1=SW
sync=0
bitrate=4000000 # 4 Mbps
profile=0 # H264 baseline profile
output-file=/home/jetson/Desktop/Resnet_working (copy)/Resnet_working/Records/output.mp4
source-id=1 # which stream to save (0 = first camera)
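As a side note, this is the quick sanity check I use to read values back from the config. It is only a rough approximation, since deepstream-app parses the file with GLib's GKeyFile rather than Python's configparser, and the excerpt below is pasted in by hand.

```python
import configparser

# Hand-pasted excerpt of the [sink1] group from my deepstream_app.txt;
# a rough sanity check only, since deepstream-app uses GLib's GKeyFile parser.
excerpt = """
[sink1]
enable=0
type=3
container=1
codec=1
enc-type=1
sync=0
bitrate=4000000
profile=0
output-file=output.mp4
source-id=0
"""

cfg = configparser.ConfigParser()
cfg.read_string(excerpt)
print(cfg.getint("sink1", "enable"))        # whether this sink is active
print(cfg.get("sink1", "output-file"))      # where the recording should land
```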
Any guidance, configuration suggestions, or working sample pipelines would be greatly appreciated.
Thank you!
— Tanish Jain