• Hardware Platform (GPU): NVIDIA GeForce GTX 1650
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version: 525.147.05 / CUDA Version: 12.0
• Issue Type: question
I am building a pipeline that will consume up to 16 video streams, dump them into HLS files, and send the detections to MQTT. I am now trying to build a test setup with one video. I know that not all of the video streams can be hardware-encoded, since the number of encoding sessions on the GPU is limited. Therefore, I am trying to get a working pipeline with both a hardware encoder (nvv4l2h264enc) and a software encoder (x264enc).
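For clarity, here is a single-stream sketch of the topology I am aiming for, written as a Gst.parse_launch description (the file names and the nvinfer config path are placeholders, and the nvmsgbroker properties from the full code below are omitted for brevity):
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One decoded stream batched by nvstreammux, then inference and OSD, then a tee
# that feeds an MQTT metadata branch and an HLS recording branch.
sketch = Gst.parse_launch(
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 "
    "! nvinfer config-file-path=pgie_config.txt "
    "! nvvideoconvert ! nvdsosd ! tee name=t "
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "t. ! queue ! nvmsgconv ! nvmsgbroker "
    "t. ! queue ! nvv4l2h264enc ! h264parse config-interval=-1 "
    "! hlssink2 target-duration=4 location=seg%05d.ts playlist-location=playlist.m3u8"
)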
I was following the examples from GitHub, but now I am stuck because the software encoder stalls the video processing.
The working pipeline with the hardware encoder looks as follows:
PDF: graph_w_hw_encoder.pdf (24.0 KB)
and the code:
import os
import sys
import pathlib

import yaml

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

# bus_call and osd_sink_pad_buffer_probe are the usual helpers from the
# deepstream_python_apps samples (omitted here).


def main(args):
    # Standard GStreamer initialization
    Gst.init(None)

    ##############################################################################################
    ### Start parsing and check config file
    ##############################################################################################
    # Parse config file
    config_file = args.config
    config = yaml.safe_load(config_file.open("r", encoding="utf-8"))
    number_sources = len(config["streams"])
    print(f"Number of sources connected: {number_sources}")
    stream_names = [stream["name"] for stream in config["streams"]]
    print("\n".join(f"{i}: {name}" for i, name in enumerate(stream_names)))
    ##############################################################################################
    ### Pipeline
    ##############################################################################################
    print("Creating Pipeline \n")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    ##############################################################################################
    ### Elements
    ##############################################################################################
    # Create file source
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    print("Input uri:", config["streams"][0]["camera-stream-url"])
    # Take just the first video stream (index 0)
    source.set_property("location", config["streams"][0]["camera-stream-url"])
    ##############################################################################################
    # Since the data format in the input file is an elementary h264 stream, we need an h264parse
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")
    ##############################################################################################
    # Use nvv4l2decoder for hardware-accelerated decode on the GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")
    ##############################################################################################
    # Although batch-size is 1, nvstreammux must be there: the decoder cannot be linked to pgie directly.
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")
    streammux.set_property("batch-size", 1)
    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    # streammux.set_property("live-source", 1)
    ##############################################################################################
    nvstreamdemux = Gst.ElementFactory.make("nvstreamdemux", "nvstreamdemux")
    if not nvstreamdemux:
        sys.stderr.write(" Unable to create NvStreamDemux \n")
    ##############################################################################################
    # Use nvinfer to run inference on the decoder's output; inference behaviour is set through the config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    pgie.set_property("config-file-path", "./deepstream/models/dstest1_pgie_config.txt")
    ##############################################################################################
    # Use converters to convert from NV12 to RGBA as required by nvosd
    preosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "preosd_nvvidconv")
    if not preosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    postosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "postosd_nvvidconv")
    if not postosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    ##############################################################################################
    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    ##############################################################################################
    proto_lib = "/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so"
    conn_str = "localhost;1883;wrs"
    cfg_file = "/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_mqtt.txt"
    msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
    if not msgbroker:
        sys.stderr.write("Unable to create msgbroker \n")
    msgbroker.set_property("proto-lib", proto_lib)
    msgbroker.set_property("conn-str", conn_str)
    msgbroker.set_property("config", cfg_file)
    # msgbroker.set_property("sync", False)
    ##############################################################################################
    # print("Output uri:", config["streams"][0]["user-stream-directory"])
    stream_directory = pathlib.Path(config["streams"][0]["user-stream-directory"])
    stream_directory.mkdir(exist_ok=True)
    # Full-resolution detection stream
    output_directory = stream_directory / "detection"
    output_directory.mkdir(exist_ok=True)
    ##############################################################################################
    framerate = 25      # fps of the input stream
    keyframe_secs = 4   # keyframe (IDR) interval in seconds
    bitrate = 4000      # kbit/s
    ####
    hw_encoder = Gst.ElementFactory.make("nvv4l2h264enc", "hw_encoder")
    if not hw_encoder:
        sys.stderr.write(" Unable to create hw_encoder \n")
    # iframeinterval is specified in frames (like key-int-max), not seconds
    hw_encoder.set_property("iframeinterval", framerate * keyframe_secs)
    hw_encoder.set_property("bitrate", bitrate * 1000)  # nvv4l2h264enc expects bit/s
    ####
    x264enc = Gst.ElementFactory.make("x264enc", "x264enc")
    if not x264enc:
        sys.stderr.write(" Unable to create x264enc \n")
    x264enc.set_property("key-int-max", framerate * keyframe_secs)  # keyframe interval in frames
    x264enc.set_property("bitrate", bitrate)  # x264enc expects kbit/s
    ##############################################################################################
    h264parse = Gst.ElementFactory.make("h264parse")
    if not h264parse:
        sys.stderr.write("Unable to create h264parse \n")
    # If this is not set, the stream won't be saved in chunks!
    # -1 inserts SPS/PPS with every IDR frame
    h264parse.set_property("config-interval", -1)
    ##############################################################################################
    caps_raw = Gst.caps_from_string("video/x-raw, format=I420, width=1280, height=720")
    capsfilter_raw = Gst.ElementFactory.make("capsfilter", "capsfilter_raw")
    if not capsfilter_raw:
        sys.stderr.write("Unable to create capsfilter \n")
    capsfilter_raw.set_property("caps", caps_raw)
    ##############################################################################################
    max_recording_time = 2000  # seconds
    # https://gstreamer.freedesktop.org/documentation/hls/hlssink2.html?gi-language=c
    sink = Gst.ElementFactory.make("hlssink2", "hlssink2")
    if not sink:
        sys.stderr.write("Unable to create hlssink2 sink \n")
    sink.set_property("send-keyframe-requests", False)
    sink.set_property("target-duration", keyframe_secs)
    sink.set_property("playlist-length", max_recording_time // keyframe_secs)
    sink.set_property("max-files", max_recording_time // keyframe_secs)
    sink.set_property("location", f"{output_directory}/%05d.ts")
    sink.set_property("playlist-location", f"{output_directory}/playlist.m3u8")
    ##############################################################################################
    msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
    if not msgconv:
        sys.stderr.write(" Unable to create msgconv \n")
    msgconv.set_property("config", "/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_msgconv.txt")
    msgconv.set_property("payload-type", 0)
    tee = Gst.ElementFactory.make("tee", "nvsink-tee")
    if not tee:
        sys.stderr.write("Unable to create tee \n")
    queue1 = Gst.ElementFactory.make("queue", "nvtee-que1")
    if not queue1:
        sys.stderr.write("Unable to create queue1 \n")
    queue2 = Gst.ElementFactory.make("queue", "nvtee-que2")
    if not queue2:
        sys.stderr.write("Unable to create queue2 \n")
    ##############################################################################################
    ### Add elements to pipeline and link them
    ##############################################################################################
    print("Adding elements to Pipeline \n")
    # Add all elements to the pipeline
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(preosd_nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(postosd_nvvidconv)
    pipeline.add(tee)
    pipeline.add(queue1)
    pipeline.add(queue2)
    pipeline.add(msgconv)
    pipeline.add(msgbroker)
    pipeline.add(hw_encoder)
    pipeline.add(h264parse)
    # pipeline.add(nvstreamdemux)
    # pipeline.add(capsfilter_nvmm)
    pipeline.add(capsfilter_raw)
    pipeline.add(x264enc)
    pipeline.add(sink)

    # Link them all together
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to get source pad of decoder \n")
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write("Unable to get the sink pad of streammux \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(preosd_nvvidconv)
    preosd_nvvidconv.link(nvosd)
    nvosd.link(tee)
    queue1.link(msgconv)
    msgconv.link(msgbroker)
    # HARDWARE ENCODER ####
    queue2.link(hw_encoder)
    hw_encoder.link(h264parse)
    h264parse.link(sink)
    ########################
    # SOFTWARE ENCODER ####
    # queue2.link(postosd_nvvidconv)
    # postosd_nvvidconv.link(capsfilter_raw)
    # capsfilter_raw.link(x264enc)
    # x264enc.link(h264parse)
    # h264parse.link(sink)
    ################
    queue1_sink_pad = queue1.get_static_pad("sink")
    queue2_sink_pad = queue2.get_static_pad("sink")
    tee_msg_pad = tee.get_request_pad("src_0")
    tee_render_pad = tee.get_request_pad("src_1")
    if not tee_msg_pad or not tee_render_pad:
        sys.stderr.write("Unable to get request pads\n")
    tee_msg_pad.link(queue1_sink_pad)
    tee_render_pad.link(queue2_sink_pad)
    # msgbroker.link(sink)
    ##############################################################################################
    ### Initiate pipeline
    ##############################################################################################
    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add a probe to get informed of the generated metadata; we add the probe to
    # the sink pad of the osd element, since by that time the buffer will have
    # got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    ##############################################################################################
    ### Start pipeline
    ##############################################################################################
    # List the sources
    print("Now playing...")
    # for i, name in enumerate(stream_names):
    #     print(i, ":", name)
    print("Starting pipeline \n")
    # Start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        if dot_dir := os.environ.get("GST_DEBUG_DUMP_DOT_DIR", None):
            dot_file = pathlib.Path(dot_dir) / "pipeline-graph.dot"
            print(f"Saving pipeline graph to {dot_file}")
            Gst.debug_bin_to_dot_file(
                pipeline, Gst.DebugGraphDetails.NON_DEFAULT_PARAMS, dot_file.stem
            )
        loop.run()
    except KeyboardInterrupt:
        pass
    # Cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)
And this works. However, if I comment out the "Hardware Encoder" part and uncomment the "Software Encoder" part, the pipeline stops after 3 or 4 frames with no error message.
At GST_DEBUG level 5, the only line I see that makes sense is the last one:
0:00:06.779105920 140 0x55b1a87029e0 DEBUG queue_dataflow gstqueue.c:1520:gst_queue_loop:<nvtee-que2> queue is empty
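For reference, a minimal pipeline that exercises the same software-encoder branch in isolation would look roughly like this (videotestsrc stands in for the DeepStream part; the caps, parser, and hlssink2 settings mirror the code above, and the output paths are placeholders):
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Software-encoder branch only: raw I420 video -> x264enc -> h264parse -> hlssink2.
# tune=zerolatency keeps x264enc from buffering many frames internally.
test = Gst.parse_launch(
    "videotestsrc num-buffers=300 "
    "! video/x-raw, format=I420, width=1280, height=720, framerate=25/1 "
    "! x264enc bitrate=4000 key-int-max=100 tune=zerolatency "
    "! h264parse config-interval=-1 "
    "! hlssink2 target-duration=4 location=seg%05d.ts playlist-location=playlist.m3u8"
)
test.set_state(Gst.State.PLAYING)
bus = test.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
test.set_state(Gst.State.NULL)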
Can you help me understand why the video processing pauses and stops working?
Many thanks in advance!