No, c) the deepstream-app modifications haven’t been accomplished yet. I’m trying to extend the C source (found a few ideas here, here and here :-) and will report back on this thread. Thanks, you helped me a lot in fixing the configuration issues.
To get an understanding of where to start with the modifications, I’m describing the findings so far.
Custom dGPU: create a pipeline from a USB camera with format MJPG to RTSP streaming with software-encoded h264.
Without inferencing
- Ia) This works: compiled the source test-launch.c from gst-rtsp-server/examples at master · GStreamer/gst-rtsp-server · GitHub inside the DeepStream 6.4 Docker container:
./test-launch --gst-debug=3 '( v4l2src device=/dev/video0 ! image/jpeg,format=MJPG,width=640,height=480,framerate=30/1 ! jpegparse ! rtpjpegpay name=pay0 pt=96 )'
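To verify the stream from a second shell, a receiving pipeline like the following can be used; rtsp://127.0.0.1:8554/test is the test-launch default port and mount point, not something configured above:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink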
- Ib1) This does not work, even though the sink is only a fakesink:
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1,format=MJPG ! fakesink
and
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1,format=MJPG ! jpegparse ! jpegdec ! x264enc tune=zerolatency ! mpegtsmux ! fakesink
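When a pipeline like this fails, raising the debug level for the capture element usually shows where caps negotiation breaks, and v4l2-ctl (from v4l-utils, assuming it is installed in the container) lists what the camera actually offers:
GST_DEBUG=3,v4l2src:6 gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1,format=MJPG ! fakesink
v4l2-ctl --device=/dev/video0 --list-formats-ext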
- Ib2) These do not work either (?? marks the element chain still to be researched; a hedged candidate is sketched after this list):
- start with image/jpeg and output h264 for RTSP streaming:
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1,format=MJPG ! ?? ! rtph264pay name=pay0 pt=96
- or start with video/x-h264 and output h264 for RTSP streaming:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-h264,width=640,height=480,framerate=30/1 ! ?? ! rtph264pay name=pay0 pt=96
- or start with video/x-raw with format=MJPG and output h264 for RTSP streaming:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1,format=MJPG ! ?? ! rtph264pay name=pay0 pt=96
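A hedged candidate for the ?? in the first variant, not yet verified on this setup: decode the JPEG and re-encode in software, i.e. jpegparse ! jpegdec ! videoconvert ! x264enc tune=zerolatency ! h264parse. Since rtph264pay name=pay0 only makes sense inside an RTSP server, the test-launch wrapper from Ia) would carry it:
./test-launch '( v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1 ! jpegparse ! jpegdec ! videoconvert ! x264enc tune=zerolatency ! h264parse ! rtph264pay name=pay0 pt=96 )'
The second variant would only need h264parse in place of ??, and only if the camera offers video/x-h264 natively; the third variant is questionable, because MJPG is not a valid format for video/x-raw caps.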
With inferencing
- IIa) This works: (no example yet)
- IIb) This does not work on the custom dGPU, but luckily it works on Jetson. The example is from here and here:
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=640,height=480,framerate=30/1,format=MJPG' ! jpegparse ! jpegdec ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt batch-size=1 ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! nvdrmvideosink ! nveglglessink
- According to the thread “I noticed that the nvegltransform plugin is no longer available in deepstream 6.2?”, DeepStream 6.2 and above no longer offer nvegltransform; it has been replaced with nv3dsink (see also Troubleshooting — DeepStream 6.4 documentation).
The example above (IIb) starts with image/jpeg, but does not yet end with h264 output for RTSP streaming. The following adaptation of the pipeline works with a fakesink after the final nvvideoconvert; so far I did not find a valid pipeline ending using h264parse, rtph264pay, udpsink, or any other element (a hedged candidate is sketched after the pipeline below):
gst-launch-1.0 v4l2src device=/dev/video0 ! 'image/jpeg,width=640,height=480,framerate=30/1,format=MJPG' ! jpegparse ! jpegdec ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt batch-size=1 ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! fakesink
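A hedged candidate ending, not verified on this setup: keep the buffers in NVMM memory and use the DeepStream hardware encoder nvv4l2h264enc before h264parse and rtph264pay, again inside test-launch so that the RTSP server picks up pay0:
./test-launch '( v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1,format=MJPG ! jpegparse ! jpegdec ! videoconvert ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt batch-size=1 ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 )'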
- Ending the pipeline with
! nvdsosd ! nvvideoconvert ! x264enc ! rtph264pay ! udpsink
results in “udpsink0: client ::1:5400, reason: Error sending message: Cannot assign requested address […] error sending UDP packets”. This might be a Docker-related issue with localhost.
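The ::1 in the message is the IPv6 loopback, which is often not reachable inside a Docker container with default networking. A hedged first thing to try is forcing an explicit IPv4 host on udpsink (deepstream-app’s own RTSP output reportedly uses the multicast host 224.224.255.255 on port 5400):
! nvdsosd ! nvvideoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5400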
Attachments:
launch pipeline and screen output (3.1 KB)
gst-inspect (579 Bytes)
gst-inspect-output (9.1 KB)