GPU: CUDA 12.2, DeepStream 6.3
I have a problem when I use the GStreamer plugin to rotate the input video from an RTSP source. I’m using the code from /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-demux-multi-in-multi-out, and I’ve added the following GStreamer plugin to this code:
rotate = Gst.ElementFactory.make("gltransformation", "gltransformation")
# Set properties
rotate.set_property("rotation-x", -10)
rotate.set_property("ortho", True)
# Link the elements
streammux.link(queue1)
queue1.link(rotate)
rotate.link(pgie)
However, I’m encountering an error:
Hi @mahmoudhedi.ghorbel
You also need to add glupload and gldownload to transform the memory from video/x-raw into video/x-raw(memory:GLMemory). Additionally, gltransformation will not work after the nvstreammux because, at that point, the memory is batched NVMM. If you want to rotate the sources for the inference, you should do it before batching. If you want to rotate the result, you need to add a debatching element such as nvstreamdemux, nvmultistreamtiler, or nvdsosd (for batch-size=1).
For example:
uridecodebin3 uri="..." ! \
nvvideoconvert ! \
glupload ! \
gltransformation rotation-x=-10 ortho=true ! \
gldownload ! \
nvvideoconvert ! \
mux.sink_0 nvstreammux name=mux ... ! \
nvinfer ... ! \
...
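Since the apps in this thread are Python, here is a minimal, untested sketch of the same pre-batching idea using Gst.parse_launch. The sample file URI, the mux settings, and the tiler/sink tail are placeholder choices for illustration, not part of the original example:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Gst.parse_launch uses the same syntax as gst-launch-1.0, including the
# "mux.sink_0" request-pad reference and delayed linking for uridecodebin3.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 batched-push-timeout=40000 ! "
    "nvmultistreamtiler width=1280 height=720 ! nvvideoconvert ! nveglglessink "
    "uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! "
    "nvvideoconvert ! glupload ! gltransformation rotation-x=-10 ortho=true ! "
    "gldownload ! nvvideoconvert ! mux.sink_0"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
The rotation happens per source, before mux.sink_0, so the batched NVMM memory downstream of nvstreammux is never touched by the GL elements.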
miguel.taylor gave an awesome solution.
As an alternative solution, you can use nvdspreprocess. You can refer to the deepstream-preprocess-test sample.
I have sample code for reading from an RTSP stream and displaying the video:
rotate_change.txt (3.3 KB)
But when I add glupload and gldownload to transform the memory from video/x-raw into video/x-raw(memory:GLMemory), it does not show me the video; it gets stuck at this point:
@mahmoudhedi.ghorbel
Can you share the pipeline?
Here is the pipeline:
rtspsrc -> rtph264depay -> avdec_h264 -> glupload -> gltransformation -> gldownload -> textoverlay -> autovideosink
@mahmoudhedi.ghorbel
Is this pipeline working for you?
gst-launch-1.0 \
videotestsrc is-live=true ! \
glupload ! \
gltransformation rotation-x=45 ortho=true ! \
gldownload ! \
videoconvert ! \
xvimagesink
Another pipeline you can test:
gst-launch-1.0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
nvvidconv ! \
glupload ! \
gltransformation rotation-x=45 ortho=true ! \
gldownload ! \
nvvidconv ! \
xvimagesink
In this pipeline you can replace the file:// URI with your RTSP stream to test with that as well.
Let me know if any of the pipelines fail.
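If it is easier to test from Python, the same substitution can be sketched with Gst.parse_launch; the rtsp:// URI below is only a placeholder, and this assumes nvvidconv is available on your platform:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Same test pipeline as above, with a placeholder RTSP URI instead of file://
pipeline = Gst.parse_launch(
    "uridecodebin3 uri=rtsp://CAMERA_IP:554/stream ! "
    "nvvidconv ! glupload ! gltransformation rotation-x=45 ortho=true ! "
    "gldownload ! nvvidconv ! xvimagesink"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)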
This pipeline is working:
gst-launch-1.0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
nvvideoconvert ! \
glupload ! \
gltransformation rotation-z=10 ! \
gldownload ! \
nvvideoconvert ! \
xvimagesink
However, when using RTSP, it is not working. I have added the following Python code to make it work:
def on_pad_added(element, pad):
    # uridecodebin creates its src pads dynamically, so link to
    # nvvideoconvert's sink pad once the new pad appears.
    sink_pad = nvvideoconvert.get_static_pad("sink")
    pad.link(sink_pad)

uridecodebin.connect("pad-added", on_pad_added)
I am currently facing an issue when adding the following elements to rotate two RTSP inputs in my code (the same code as /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-demux-multi-in-multi-out), using glupload ! gltransformation rotation-z=10 ! gldownload:
for i in range(number_sources):
    print("Creating source_bin ", i, " \n ")
    uri_name = input_sources[i]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)

    # Per-source rotation chain:
    # nvvideoconvert -> glupload -> gltransformation -> gldownload -> nvvideoconvert
    nvvideoconvert1 = Gst.ElementFactory.make("nvvideoconvert", f"nvvideoconvert1_{i}")
    gltransformation = Gst.ElementFactory.make("gltransformation", f"gltransformation{i}")
    if not gltransformation:
        sys.stderr.write("Unable to create gltransformation \n")
    gltransformation.set_property("rotation-z", -10)
    glupload = Gst.ElementFactory.make("glupload", f"glupload{i}")
    if not glupload:
        sys.stderr.write("Unable to create glupload \n")
    gldownload = Gst.ElementFactory.make("gldownload", f"gldownload{i}")
    if not gldownload:
        sys.stderr.write("Unable to create gldownload \n")
    nvvideoconvert2 = Gst.ElementFactory.make("nvvideoconvert", f"nvvideoconvert2_{i}")

    pipeline.add(nvvideoconvert1)
    pipeline.add(glupload)
    pipeline.add(gltransformation)
    pipeline.add(gldownload)
    pipeline.add(nvvideoconvert2)

    nvvideoconvert1.link(glupload)
    glupload.link(gltransformation)
    gltransformation.link(gldownload)
    gldownload.link(nvvideoconvert2)

    # Request a sink pad on the muxer and link the end of the chain to it
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = nvvideoconvert2.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
I need assistance adding the rotation to my code. Thank you.
@mahmoudhedi.ghorbel
I encountered an issue when combining nvvideoconvert with the GL elements. I couldn’t get it to work with any of the memory types supported by nvvideoconvert. However, the default memory type on nvvidconv worked correctly. Could you try switching your conversion elements to nvvidconv?
Note that you will get issues when mixing nvvidconv and DeepStream, so if you want to do that you may need to force a conversion with nvvideoconvert just to copy the buffer into a memory type that works with DeepStream.
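For reference, here is a rough sketch of that suggestion applied to one source of the loop above, assuming nvvidconv is available on your platform. The helper names (make, build_rotation_chain) and the NVMM capsfilter before the muxer are my own illustrative choices, not something confirmed in this thread:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
# Assumes Gst.init() has already been called by the surrounding application.

def make(factory, name):
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        raise RuntimeError(f"Unable to create {factory} ({name})")
    return elem

def build_rotation_chain(pipeline, index, rotation_z=-10):
    """Build nvvidconv -> glupload -> gltransformation -> gldownload -> nvvidconv
    -> nvvideoconvert -> capsfilter(NVMM) for source number `index`."""
    conv_in = make("nvvidconv", f"conv_in_{index}")        # default memory works with the GL elements
    upload = make("glupload", f"glupload_{index}")
    rotate = make("gltransformation", f"gltransformation_{index}")
    rotate.set_property("rotation-z", rotation_z)
    download = make("gldownload", f"gldownload_{index}")
    conv_out = make("nvvidconv", f"conv_out_{index}")
    to_nvmm = make("nvvideoconvert", f"to_nvmm_{index}")   # copy into DeepStream-friendly memory
    caps = make("capsfilter", f"nvmm_caps_{index}")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM)"))

    elements = (conv_in, upload, rotate, download, conv_out, to_nvmm, caps)
    for elem in elements:
        pipeline.add(elem)
    for left, right in zip(elements, elements[1:]):
        left.link(right)

    # Link source_bin -> conv_in and caps -> streammux request pad in the caller,
    # exactly as in the original loop.
    return conv_in, caps
The only change from the earlier loop is which converters wrap the GL elements, plus the extra nvvideoconvert copy back into NVMM memory before nvstreammux.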
Hi, does your problem still exist?
I have implemented a sample as you requested. Here is the code; it’s a fun sample.
deepstream_demux_multi_in_multi_out_new.py (18.5 KB)
Hello,
Thank you for the example, but it’s not displaying the video. I want to view the video directly from an RTSP link. The code seems to be stuck at this stage:
I work on a headless server. If you want to show the video on a monitor, change this part of the code:
print("Creating EGLSink \n")
sink = make_element("nvvideoencfilesinkbin", i)
out_name = "sink_%u.h264" % i
sink.set_property("output-file", out_name)
sink.set_property("bitrate", 4000000)
sink.set_property("codec", 1)
to:
print("Creating EGLSink \n")
sink = make_element("nveglglessink", i)
Another point: judging from your error log ("no PTS"), there seems to be something wrong with the RTSP stream.
You can try the following CLI. I have tried it and it works fine; then modify the code to match your requirements.
python3 deepstream_demux_multi_in_multi_out_new.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
If I work with file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4, I can save or display the video. However, if I change the video or use an RTSP stream, I encounter this error when I use sink = make_element("nveglglessink", i) to show the stream:
When I try to save the video instead, I can’t display it; it does not provide any frames to read. I don’t know what the problem is.
I want to rotate an RTSP stream, then run detection, tracking, and classification, and display the video to test my work with the models.
So can you share your video? I have tested with the sample mp4 and an RTSP stream; both work fine.
If I use the file path file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4, I’m unable to display it, and I encounter the following error:
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2397): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference: streaming stopped, reason not-negotiated (-4)
The file does not provide any frames to read when I attempt to save it.
It runs fine on my GPU. What is your GPU model? What if you just run the following command?
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! nvvideoconvert ! nvv4l2h264enc bitrate=2000000 ! h264parse ! filesink location=out.264
Thank you, but I would like to apply that to RTSP in the code for multiple sources with detection and classification. I tested it with an RTSP stream, but it didn’t work.
I have tested that both local files and RTSP streams work normally, even the sample_1080p_h264.mp4 you mentioned, and I think it makes no difference to uridecodebin.
You said that the program cannot process sample_1080p_h264.mp4 when running on your machine, so I want to know whether the codec on your machine works normally. You’d better check whether the RTSP stream has anything special about it.
Thank you. I solved the problem by reading the RTSP stream, with the rotation already applied, from another local RTSP source.