The combination of the two pipelines above works well. Needless to say, the two pipelines run in separate applications connected through ROS.
Problem: I want to use a Jetson for the first (encoder) pipeline, so I replaced ‘vaapih265enc max-bframes=0 keyframe-period=30’ with ‘omxh265enc iframeinterval=30 quant-b-frames=0’. After this replacement I had to add ‘h265parse’ to the second (decoder) pipeline to make the application work. I don’t want to add the ‘h265parse’ element because it adds ~40 ms of extra latency to decoding.
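The replacement described above can be sketched as gst-launch lines. Only the encoder elements and their properties come from my actual pipelines; the surrounding `videotestsrc` and `appsink` are hypothetical placeholders:

```shell
# Original desktop encoder pipeline (sketch; source/sink elements are placeholders):
gst-launch-1.0 videotestsrc ! vaapih265enc max-bframes=0 keyframe-period=30 ! appsink

# Jetson replacement that triggers the h265parse requirement on the decode side:
gst-launch-1.0 videotestsrc ! omxh265enc iframeinterval=30 quant-b-frames=0 ! appsink
```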
How can I efficiently replace ‘vaapih265enc max-bframes=0 keyframe-period=30’ with ‘omxh265enc’?
Additional info: I am forcing ‘omxh265enc’ to output byte-stream instead of hvc1, because vaapih265enc outputs byte-stream by default. So the appsink of the omxh265enc pipeline has byte-stream caps.
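One way to pin that output format is a caps filter after the encoder. This is a sketch; the exact caps string and surrounding elements are my assumptions, not quoted from the thread:

```shell
# Force Annex-B byte-stream output from omxh265enc via a caps filter (assumed syntax):
gst-launch-1.0 videotestsrc ! omxh265enc iframeinterval=30 quant-b-frames=0 ! \
    'video/x-h265,stream-format=(string)byte-stream' ! appsink
```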
We would suggest outputting stream-format=byte-stream from both omxh265enc and nvv4l2h265enc. In our SQA tests we use the h265parse plugin; constructing the pipeline with h265parse is more stable.
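Following that suggestion, the decoder side would look roughly like this. It is a sketch only: the `appsrc` caps and the `nvv4l2decoder`/`nvvidconv`/`autovideosink` elements are assumptions, not taken from the thread:

```shell
# Decoder pipeline with h265parse in front of the hardware decoder (assumed elements):
gst-launch-1.0 appsrc caps='video/x-h265,stream-format=byte-stream' ! \
    h265parse ! nvv4l2decoder ! nvvidconv ! autovideosink
```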
@junyan.he shared his opinion there. He suspects that frame alignment causes this problem: ‘vaapih265enc’ outputs exactly one complete frame per GstBuffer, so each buffer is already frame-aligned, whereas ‘omxh265enc’ may mix data from several frames in one buffer, using the AU delimiter as the boundary, so a single GstBuffer may not be frame-aligned.
What I understood: omxh265enc may output a GstBuffer that contains some data belonging to one frame and some data belonging to the next frame.
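To check whether this is actually happening, one could count H.265 access-unit delimiters (AUD, NAL unit type 35) in each buffer pulled from the appsink: a frame-aligned buffer should contain at most one AUD. This is my own diagnostic sketch based on the H.265 Annex-B framing, not something from the thread:

```python
def nal_units(buf: bytes):
    """Yield (offset, nal_unit_type) for each NAL in an Annex-B (byte-stream) buffer."""
    i, n = 0, len(buf)
    while i < n - 3:
        # Scan for a 00 00 01 start code (a 4-byte 00 00 00 01 code also matches here).
        if buf[i:i + 3] == b"\x00\x00\x01":
            hdr = i + 3
            if hdr < n:
                # H.265 NAL header byte 0: forbidden_zero_bit(1) | nal_unit_type(6) | ...
                yield hdr, (buf[hdr] >> 1) & 0x3F
            i = hdr
        else:
            i += 1

def is_frame_aligned(buf: bytes) -> bool:
    """Heuristic: a frame-aligned buffer contains at most one AUD (NAL type 35)."""
    AUD_NUT = 35
    return sum(1 for _, t in nal_units(buf) if t == AUD_NUT) <= 1
```

Feeding each GstBuffer from the omxh265enc appsink through `is_frame_aligned` would confirm or rule out @junyan.he's theory (note that an encoder may also emit no AUDs at all, which this heuristic treats as aligned).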
If you think this is the issue, can you suggest a property of ‘omxh265enc’ that forces it to produce frame-aligned output?