Hello,
I want to use NVIDIA hardware to receive multiple FHD RTSP streams and to encode the processed streams using the same hardware.
I work in a Python environment on Ubuntu.
My two simple questions are:
- How do I build a GStreamer decode pipeline that gives me access to the image data?
- How do I set up an encoder pipeline to stream frames out?
Decoder Side
I looked at the examples in GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications,
but they do not provide access to the image data on the decoder side.
I tried to modify the existing decoder GStreamer pipeline from
'rtspsrc name=m_rtspsrc ! rtph264depay name=m_rtph264depay ! avdec_h264 name=m_avdech264 ! videoconvert name=m_videoconvert ! videorate name=m_videorate ! appsink name=m_appsink'
to
'rtspsrc name=m_rtspsrc ! rtph264depay name=m_rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw,width=1920,height=1080,format=RGBA ! appsink name=m_appsink'
With the new pipeline, the "new-sample" signal is never emitted when I connect it with, for example,
self.sink.connect("new-sample", self.new_buffer, self.sink)
where self.new_buffer should collect a frame.
The solution in
works, but it is not flexible enough for my needs.
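For reference, here is a minimal sketch of the decoder side I am aiming for. The element properties and the callback are my own guesses (untested); the detail I suspect matters is that appsink only fires "new-sample" when emit-signals=true is set on it.

```python
# Hypothetical decoder sketch. The appsink properties are assumptions;
# the key point is that "new-sample" is only emitted when
# emit-signals=true is set on the appsink element.

DECODE_LAUNCH = (
    "rtspsrc name=m_rtspsrc ! "
    "rtph264depay name=m_rtph264depay ! h264parse ! "
    "nvv4l2decoder ! nvvideoconvert ! "
    "video/x-raw,width=1920,height=1080,format=RGBA ! "
    "appsink name=m_appsink emit-signals=true max-buffers=1 drop=true sync=false"
)

def new_buffer(sink, user_data):
    """Pull one decoded RGBA frame out of the appsink."""
    # Imported lazily so this module can be loaded without GStreamer installed.
    from gi.repository import Gst

    sample = sink.emit("pull-sample")
    if sample is None:
        return Gst.FlowReturn.ERROR
    buf = sample.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if ok:
        frame_bytes = mapinfo.data  # raw RGBA bytes, 1920*1080*4 per frame
        # ... process frame_bytes here ...
        buf.unmap(mapinfo)
    return Gst.FlowReturn.OK

# Usage (requires PyGObject and a running GLib main loop):
#   pipeline = Gst.parse_launch(DECODE_LAUNCH)
#   sink = pipeline.get_by_name("m_appsink")
#   sink.connect("new-sample", new_buffer, sink)
#   pipeline.set_state(Gst.State.PLAYING)
```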
Encoder Side
I tried to modify

self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' + caps_str + \
    ' ! videoconvert' \
    ' ! video/x-raw,format=I420' \
    ' ! x264enc tune=zerolatency speed-preset={} threads=0 {} '.format(speed_preset, key_int_max) + \
    ' ! rtph264pay config-interval=1 pt=96 name=pay0' \
    ''
to
self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' + caps_str + \
' ! nvvideoconvert ! video/x-raw,width=1920,height=1080,format=RGBA '\
' ! nvv4l2h264enc ! h264parse!'\
' ! rtph264pay config-interval=1 pt=96 name=pay0' \
''
but it failed too.
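Two things look suspicious to me in my failing version, so here is the launch string I would try next. This is an untested sketch: I am assuming that nvv4l2h264enc wants NV12 input in NVMM (device) memory rather than system-memory RGBA, and the doubled "!" around h264parse has to go. The caps_str value is only an example.

```python
# Hypothetical corrected encoder launch string. Assumptions: the hardware
# encoder takes NV12 frames in NVMM memory (not system-memory RGBA), and
# the stray "!" after h264parse in the failing version is removed.
# caps_str describes the appsrc output; this value is just an example.
caps_str = "caps=video/x-raw,format=RGBA,width=1920,height=1080,framerate=30/1"

launch_string = (
    "appsrc name=source is-live=true block=true format=GST_FORMAT_TIME "
    + caps_str +
    " ! nvvideoconvert"
    " ! video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12"
    " ! nvv4l2h264enc"
    " ! h264parse"
    " ! rtph264pay config-interval=1 pt=96 name=pay0"
)
```

The string would then be handed to the RTSP media factory via set_launch, exactly as in the working x264enc version.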
I would be happy to get concrete guidance on how to build stable encoder and decoder pipelines with access to NVIDIA hardware from Python.
Thank you
Rami