The above pipeline works, but I think there are a lot of unnecessary plugins in it. What I want to achieve is very simple: just send the image to rtspclientsink with the given height and width, at 1 frame per second.
It also takes a lot of CPU. Eventually I want to be able to view the stream on localhost. What am I doing wrong?
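For reference, the goal described above can be sketched as a minimal software-only pipeline: decode the JPEG once, repeat it as a 1 fps video stream with imagefreeze, and publish it with rtspclientsink. This is a sketch, not a tested answer from the thread; it assumes GStreamer with the rtspclientsink element installed and an RTSP server already listening, and the URL `rtsp://127.0.0.1:8554/test` is an assumption.

```shell
# Decode a single JPEG, repeat it at 1 fps, encode with x264 tuned for
# low CPU, and publish to a local RTSP server (server URL assumed).
gst-launch-1.0 filesrc location=/home/2.jpg \
  ! jpegdec \
  ! imagefreeze \
  ! video/x-raw,framerate=1/1 \
  ! videoconvert \
  ! x264enc tune=zerolatency speed-preset=ultrafast \
  ! h264parse \
  ! rtspclientsink location=rtsp://127.0.0.1:8554/test
```

At 1 fps, x264enc with `speed-preset=ultrafast` should keep CPU use modest even in pure software.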
Sorry for missing this topic. Is it still an issue now?
jpegdec, videoconvert, and x264enc can be replaced by NVIDIA hardware-accelerated plugins, which can save a lot of CPU resources.
Thank you for your reply.
Yes, it's still an issue.
Let’s think about it for a moment: all I want to do is take an image from a single filesrc and stream it via RTSP or RTMP. Why are all these intermediate plugins, which consume so many resources, necessary?
Currently I’m learning the C language to understand the official tutorials, but I’m literally in hell trying to use all of these plugins just to produce a single stream containing a single image that should be refreshed every x seconds…
Waiting for an answer!
You can use HW JPEG decoding and HW H264 encoding. Here is a sample:
gst-launch-1.0 filesrc location=/home/2.jpg ! jpegparse ! nvv4l2decoder ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! nvv4l2h264enc bitrate=1000000 ! filesink location=test.264
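Since the original question asked for an RTSP stream rather than a file, the hardware pipeline above could be pointed at rtspclientsink instead of filesink. This variant is a sketch: it keeps the NVIDIA elements from the sample, adds imagefreeze so the single decoded frame is repeated at 1 fps, and assumes an RTSP server at `rtsp://127.0.0.1:8554/test` (the URL is an assumption, not from the thread).

```shell
# Hardware JPEG decode + H264 encode, repeating the single frame at
# 1 fps and publishing to an RTSP server (server URL assumed).
gst-launch-1.0 filesrc location=/home/2.jpg ! jpegparse ! nvv4l2decoder \
  ! nvvideoconvert ! imagefreeze ! video/x-raw,framerate=1/1 \
  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' \
  ! nvv4l2h264enc bitrate=1000000 ! h264parse \
  ! rtspclientsink location=rtsp://127.0.0.1:8554/test
```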
I know I’m inside the NVIDIA forums :D And it’s quite embarrassing, but I want to run the above pipeline on a Raspberry Pi 4 using Debian 11, or any operating system that doesn’t depend on NVIDIA hardware.
Also, correct me if I’m wrong.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
As you know, the NVIDIA HW plugins depend on an NVIDIA GPU, so this issue is outside the scope of DeepStream. You could try asking in the Raspberry Pi 4 community.
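For anyone landing here with the same question on a Raspberry Pi 4: the Pi has its own hardware H264 encoder exposed through V4L2, so a non-NVIDIA sketch of the same idea is possible with the v4l2h264enc element from gst-plugins-good. This is an assumption-laden sketch, not a verified answer from the thread: v4l2h264enc availability depends on the Pi's kernel and firmware, the level caps filter is a commonly needed workaround on the Pi, and the RTSP URL is an assumption.

```shell
# Single JPEG to 1 fps RTSP stream using the Raspberry Pi's V4L2
# hardware H264 encoder (element availability and URL assumed).
gst-launch-1.0 filesrc location=/home/2.jpg \
  ! jpegdec \
  ! imagefreeze \
  ! video/x-raw,framerate=1/1 \
  ! videoconvert \
  ! v4l2h264enc \
  ! 'video/x-h264,level=(string)4' \
  ! h264parse \
  ! rtspclientsink location=rtsp://127.0.0.1:8554/test
```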