• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 535.183.06
• Issue Type Questions (syncing issue with RTSP input using splitmuxsink)
• How to reproduce the issue ?
I’m using DeepStream to process an RTSP stream, with splitmuxsink saving the output into chunks every 5 seconds. When the input is a video file, the frame count per chunk remains consistent, and I can accurately extract specific frames by calculating which chunk and frame index to use. However, when the input is an RTSP stream, I’m facing challenges due to the following:
- Frame Count Variability: The number of frames per chunk varies with RTSP input, likely due to frame drops and network-induced inconsistencies.
- Frame Number Mismatch: Even when I try to track the frame number (e.g., by counting frames as they’re processed), the resulting frame numbers in the chunks don’t match the frame numbers reported by DeepStream (e.g., those sent to Kafka). This makes it challenging to extract a specific frame by frame number.
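For reference, the chunk/frame arithmetic that works for file input is essentially this constant-FPS mapping (the FPS value here is illustrative, not my actual source rate); it is exactly what falls apart once RTSP drops frames:

```python
# Sketch: how I locate a frame when the input is a local file.
# Assumes a constant frame rate and fixed-length chunks -- both
# assumptions break with RTSP, where frames can be dropped.
CHUNK_SECONDS = 5  # matches splitmuxsink max-size-time (5 s)
FPS = 30           # illustrative; my real source FPS differs

def locate_frame(frame_num, fps=FPS, chunk_seconds=CHUNK_SECONDS):
    """Map a global frame number to (chunk index, frame index in chunk)."""
    frames_per_chunk = fps * chunk_seconds
    return frame_num // frames_per_chunk, frame_num % frames_per_chunk

print(locate_frame(310))  # with 150 frames/chunk: (2, 10) -> chunk 2, frame 10
```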
Pipeline Details:
- The input source is an RTSP stream.
- splitmuxsink is configured with:
  - location: ./raw/raw_%05d.mp4
  - max-size-time: 5 seconds
  - async-finalize: true
- Queues are placed before splitmuxsink for buffering.
- Full pipeline graph and config can be found here:
general-params:
  save-graph: true
plugins:
  - name: "stream-1" # RTSP input source
    type: "source"
    path: "rtsp://localhost:8554/stream" # RTSP stream URL
    drop-frame-interval: 0
    connected-to:
      - "streammux"
      - "save-source" # Save the source video to a file
      - "save-chunked-source" # Save the source video to chunks
  - name: "save-chunked-source"
    type: "splitmuxsink"
    add-queues-between-elements: true
    enabled: true # Enable to save the output video to a file
    location: "./raw/raw_%05d.mp4" # Location to save the video chunks
    max-size-time: 5000000000 # 5-second chunks
    bitrate: 2000000
  - name: "save-source"
    type: "savesink"
    add-queues-between-elements: true
    enabled: true # Enable to save the output video to a file
    location: "./raw.mp4"
    sync: 0
    async: 0
    qos: 0
    bitrate: 2000000
  - name: "streammux"
    type: "nvstreammux"
    width: 1280
    height: 720
    batched-push-timeout: 33000
    file-loop: 0
    nvbuf-memory-type: 3
    connected-to:
      - "gie-elements"
  - name: "gie-elements"
    type: "custom-branch"
    add-queues-between-elements: true
    branch-plugins:
      - name: "vehicle-inference"
        type: "gie"
        config-file-path: "./config/vehicle_type_detection_yolo_config/pgie_vehicle_type_detection_config.txt"
        probes:
          - type: "bbox-color"
            pad: "src"
            parameters:
              class_colors:
                person: [1.0, 0.0, 0.0] # Red for person
                car: [0.0, 1.0, 0.0] # Green for car
                3: [0.0, 0.0, 1.0] # Blue for motorbike (by ID)
                bus: [1.0, 1.0, 0.0] # Yellow for bus
                truck: [1.0, 0.0, 1.0] # Magenta for truck
      - name: "vehicle-tracker"
        type: "nvtracker"
        config-file-path: "./config/vehicle_tracker_config/dstest2_tracker_config.txt"
      - name: "nvanalytics"
        type: "nvdsanalytics"
        config-file-path: "./config/nvanalytics_config/nvanalytics_config.txt"
        probes:
          - type: "analytics"
            pad: "src"
            parameters:
              interval: 10
              objects_to_count:
                - label: 'vehicle'
                  id: 2
                - label: 'person'
                  id: 0
      - name: "color-inference"
        type: "gie"
        config-file-path: "./config/vehicle_color_classification_efficientdet_config/sgie_color_config.txt"
      - name: "make-inference"
        type: "gie"
        config-file-path: "./config/vehicle_make_classification_resnet18_config/config_vehiclemake.txt"
      - name: "plate-inference"
        type: "gie"
        config-file-path: "./config/lpd_detection_yolo_config/sgie_license_plate_config.txt"
      - name: "plate-text-inference"
        type: "gie"
        config-file-path: "./config/lprnet_classification_resnet_config/sgie_lpr_config.txt"
        probes:
          - type: "latency"
            pad: "src"
            parameters:
              interval: 10
              measure_latency: false
    connected-to:
      - "middle-elements"
  - name: "middle-elements"
    type: "custom-branch"
    add-queues-between-elements: true
    branch-plugins:
      - name: "nvdslogger"
        type: "nvdslogger"
      - name: "tiler"
        type: "nvmultistreamtiler"
        height: 720
        width: 1280
        probes:
          - type: "event-message-vehicle"
            pad: "sink"
            parameters:
              interval: 10
          - type: "event-message-person"
            pad: "sink"
            parameters:
              interval: 10
      - name: "nvvidconv"
        type: "nvvideoconvert"
        nvbuf-memory-type: 3
      - name: "nvosd"
        type: "nvdsosd"
        osd-process-mode: 0
        osd-display-text: 1
        probes:
          - type: "trackervisualizer"
            pad: "sink"
            parameters:
              interval: 10
              measure_latency: false
              max_history: 200
              timeout_frames: 10
              line_width: 2
              objects_to_draw_tracking:
                - 'car'
          - type: "save-frame"
            pad: "sink"
            parameters:
              frame_dir: "./tmp"
              save_interval: 200
    connected-to:
      - "save"
      - "display"
      - "msg"
  - name: "save"
    type: "savesink"
    add-queues-between-elements: true
    enabled: true # Enable to save the output video to a file
    location: "./file.mp4"
    sync: 0
    async: 0
    qos: 0
    bitrate: 2000000
  - name: "display"
    type: "displaysink"
    add-queues-between-elements: true
    enabled: true
    sync: 1
    async: 0
    qos: 0
  - name: "msg"
    type: "custom-branch"
    enabled: true
    branch-plugins:
      - name: "queuemsgconv1"
        type: "queue"
      - name: "msgconv1"
        type: "msgconv"
        config: "/opt/nvidia/deepstream/deepstream/sources/python/stream_service/config/nvmsgconv_config/msgconv_config.txt"
        payload-type: 0
        msg2p-lib: "/opt/nvidia/deepstream/deepstream/sources/python/stream_service/lib/custom_libraries/libs/nvmsgconv/libnvds_msgconv.so"
      - name: "broker1"
        type: "msgbroker"
        proto-lib: "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so"
        conn-str: "localhost;9092;deepstream-metadata"
        sync: 0
        topic: "deepstream-metadata"
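Since frame counting is unreliable here, one direction I’m experimenting with is addressing frames by PTS instead of frame index. With the 5 s max-size-time above, a rough chunk lookup would be the following sketch (my own code, not DeepStream API; I’m aware splitmuxsink actually cuts at the first keyframe after the threshold, so real boundaries can drift):

```python
# Sketch: map an absolute buffer PTS (nanoseconds) to a chunk index
# and a time offset inside that chunk. first_pts_ns is the PTS of the
# first buffer that reached splitmuxsink (recorded separately by me).
CHUNK_NS = 5_000_000_000  # splitmuxsink max-size-time

def locate_by_pts(buf_pts_ns, first_pts_ns, chunk_ns=CHUNK_NS):
    """Return (chunk index, offset in seconds within that chunk)."""
    rel = buf_pts_ns - first_pts_ns
    return rel // chunk_ns, (rel % chunk_ns) / 1e9

print(locate_by_pts(12_500_000_000, 0))  # -> (2, 2.5): chunk 2, 2.5 s in
```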
Requirement details:
I need to be able to reliably extract a specific frame from these splitmuxsink chunks. Are there recommended approaches or configurations for achieving this when working with variable frame rates or inconsistent frame counts from RTSP? Any advice on synchronizing frame numbers between DeepStream output (such as the Kafka messages) and the split files would also be highly appreciated.
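What I have been prototyping so far is a pad probe that appends (frame_num, buf_pts) pairs as frames pass through, so that a frame number reported in a Kafka message can be resolved to a timestamp even when frames were dropped upstream. The lookup side of that idea looks like this (all names are my own sketch, not DeepStream API):

```python
import bisect

def pts_for_frame(records, frame_num):
    """records: list of (frame_num, pts_ns) pairs, sorted by frame_num,
    appended by a pad probe as frames pass through the pipeline.
    Returns the PTS for a frame number, or None if that frame never
    reached the probe (e.g. it was dropped upstream)."""
    nums = [n for n, _ in records]
    i = bisect.bisect_left(nums, frame_num)
    if i < len(nums) and nums[i] == frame_num:
        return records[i][1]
    return None

records = [(0, 0), (2, 66_000_000), (5, 166_000_000)]  # frames 1, 3, 4 dropped
print(pts_for_frame(records, 2))  # -> 66000000
print(pts_for_frame(records, 3))  # -> None (frame was dropped)
```

Is building on something like this the right direction, or is there a built-in mechanism for it?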
Thanks!