I am using GStreamer to transcode a video into multiple scales/bitrates. All is working great on the original-size output (1080p), but on lower-scale video, such as 720p, elements with straight lines become warped, as if the scaler were simply dropping every n-th row from the video instead of calculating averages.
So, the question is: is it possible to change the scaler behaviour, and if so, how?
These are example frames of the gst output. Notice the text, and especially the logo in the upper-right corner:
This is the pipeline being used (caps strings quoted so the parentheses survive the shell):

gst-launch-1.0 souphttpsrc location=http://192.168.0.1/mpegts ! tsdemux name=input \
  input.audio_0_013f ! parsebin ! queue ! tee name=a1 \
  input.audio_0_01a3 ! parsebin ! queue ! tee name=a2 \
  a1. ! queue ! mux. \
  a2. ! queue ! mux. \
  input.video_0_00db ! decodebin ! deinterlace ! tee name=video \
  video. ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1048,height=576,format=I420' ! nvv4l2h264enc maxperf-enable=1 control-rate=0 peak-bitrate=1635779 bitrate=1363149 preset-level=1 profile=High insert-sps-pps=1 ! h264parse ! queue ! mux. \
  video. ! nvvidconv ! 'video/x-raw(memory:NVMM),width=654,height=360,format=I420' ! nvv4l2h264enc maxperf-enable=1 control-rate=0 peak-bitrate=1258291 bitrate=1048576 preset-level=1 profile=High insert-sps-pps=1 ! h264parse ! queue ! mux. \
  mpegtsmux name=mux ! tcpserversink port=12345
Correct me if I’m wrong, but as I understand it, this pipeline is using nvv4l2h264enc to resize the stream, not nvvidconv. On nvvidconv I did not see scaling options, only cropping, so I did not use interpolation-method.
Yes, this does solve the issue. Trying different interpolation-method levels does not seem to affect performance.
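For anyone landing here later, here is a minimal sketch of one scaling branch with interpolation-method set explicitly on nvvidconv. The property values are those reported by the Jetson build of nvvidconv as I understand them (0 Nearest, 1 Bilinear, 2 5-Tap, 3 10-Tap, 4 Smart, 5 Nicest); confirm them on your release with gst-inspect-1.0, and adjust caps and encoder settings to your setup:

```shell
# Downscale 1080p -> 720p with an explicit interpolation method on nvvidconv.
# interpolation-method=5 selects "Nicest" on the Jetson nvvidconv plugin;
# run `gst-inspect-1.0 nvvidconv` on your release to verify the enum values.
gst-launch-1.0 videotestsrc num-buffers=100 ! \
  'video/x-raw,width=1920,height=1080' ! \
  nvvidconv interpolation-method=5 ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,format=I420' ! \
  nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4
```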
Can you describe the major differences between these methods? Nearest and bilinear are quite self-explanatory, but the other methods are a bit unclear. Especially “nicest”: does it pick one of the above (0-4) methods based on frame contents, or is it just called “nicest” because some programmer was really proud of his code?
@Honey_Patouceul won’t this extra queueing reduce performance? I am already experiencing some odd behaviour with the current pipeline: after an extended period of uninterrupted transcoding, CPU and system memory usage increase constantly. At the beginning the process uses ~20% CPU and ~10% RAM; after ~6 hours CPU has increased to ~50% and RAM to almost 40%.
Are you sure the increasing CPU and RAM usage is related to the queue plugins?
I just use queues for each parallel output of tee/demux and for each parallel input of mux as a general rule.
Without these, I think it may work in some cases, but may have synchronization issues.
Not sure how it can affect performance, but I think it may also allow a subpipeline to be run on a different core.
That’s only my own understanding, and I may be wrong (you may have deeper knowledge of queue than I do), but for your pipeline I’d try removing the queues before tees a1 and a2 and adding queues after the video tee.
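To make the suggestion concrete, a sketch of that restructuring, assuming your original element names and encoder settings; only the queue placement around the tees changes, and I have not tested this:

```shell
# Queues removed before tees a1/a2; a queue added on each branch after the video tee.
gst-launch-1.0 souphttpsrc location=http://192.168.0.1/mpegts ! tsdemux name=input \
  input.audio_0_013f ! parsebin ! tee name=a1 \
  input.audio_0_01a3 ! parsebin ! tee name=a2 \
  a1. ! queue ! mux. \
  a2. ! queue ! mux. \
  input.video_0_00db ! decodebin ! deinterlace ! tee name=video \
  video. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1048,height=576,format=I420' ! nvv4l2h264enc maxperf-enable=1 control-rate=0 peak-bitrate=1635779 bitrate=1363149 preset-level=1 profile=High insert-sps-pps=1 ! h264parse ! queue ! mux. \
  video. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),width=654,height=360,format=I420' ! nvv4l2h264enc maxperf-enable=1 control-rate=0 peak-bitrate=1258291 bitrate=1048576 preset-level=1 profile=High insert-sps-pps=1 ! h264parse ! queue ! mux. \
  mpegtsmux name=mux ! tcpserversink port=12345
```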
Someone with better knowledge may comment further. You may want to create a separate topic, since this one is solved and will therefore get less visibility.