IFrameConsumer vs IBufferOutputStream

Hi,

The 10_camera_recording Argus sample changed from using IFrameConsumer (v28.3) to IBufferOutputStream (v32.1). What is the benefit of IBufferOutputStream compared to the IFrameConsumer approach?

We are experiencing occasional frame drops when the system is heavily loaded; sometimes we can’t consume frames fast enough. We use frame->getNumber() to detect gaps, but IBufferOutputStream doesn’t seem to have any similar function. Is there a way to detect dropped frames with IBufferOutputStream?
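For context, the gap check we use today is roughly the following (a minimal sketch; lastNumber is our own variable, error handling is omitted, and it needs Argus/Argus.h, EGLStream/EGLStream.h and cstdio):

// Minimal sketch of our drop detection with IFrameConsumer.
Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
if (iFrame)
{
    uint64_t number = iFrame->getNumber();
    if (lastNumber != 0 && number != lastNumber + 1)
        printf("dropped %llu frame(s)\n",
               (unsigned long long)(number - lastNumber - 1));
    lastNumber = number;
}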

Thank you.

David

The upcoming r32.2 release has performance improvements that should fix your problem.

Performance improvements are always good, but system load has many variables; I don’t think frame drops can ever be completely avoided. r32.2 is good news, but my questions still remain.

What is the benefit of IBufferOutputStream compared to the IFrameConsumer approach?

Can I get a frame sequence number with IBufferOutputStream like frame->getNumber()?

The new release fixes the frame drop issue even under CPU stress.
getNumber() is not implemented in the IBufferOutputStream class.

One more remains :-)

What is the benefit of IBufferOutputStream compared to the IFrameConsumer approach?

@ShaneCCC this seems like a very poor answer: promising a fix in a new release, which will likely change everything again - until NVIDIA decides to discontinue the product because it never got working up to the promised specs. Are there any developer-preview releases of that magically working upcoming version? The TK1, TX1, and TX2 systems have been out for years now, yet none of them can be considered stable and production ready for imaging applications.

The OP’s question is very much valid and I would like to hear the answer as well. As you know, the TX2 claims 3x 2160p30 encode performance, yet its VI interfaces can accept up to 3x 2160p60 from three CSI camera modules. What will happen if somebody enables such a pipeline? The 1.4 Gpx/s figure in the datasheet is not high enough to cover 3x 4K60 (1.59 Gpx/s), and knowing the encoder will definitely not keep up either, one can assume that this will result in dropped frames.
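Spelling out the arithmetic behind that figure (it only comes to 1.59 Gpx/s if 4K means DCI 4K, 4096 x 2160; UHD 3840 x 2160 would give about 1.49 Gpx/s):

3 sensors x 4096 x 2160 px x 60 fps ~= 1.59 Gpx/s  >  1.4 Gpx/s VI limit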

No performance tuning will get you around this - the limits are set by the hardware. They were there right from the start, yet no solid mechanism was designed in to handle the excess load in an application-informed way.

Can the queues be sized to a larger length? Some applications might not need a realtime, low-latency mode, but can tolerate higher latency in exchange for fewer drops and a better decision about which frame to omit, if a drop must occur. This also points to how it should be done: if there are FIFOs in the processing pipeline, allow the application to register a handler/callback that is informed when a frame is not accepted into a FIFO and is therefore dropped. Or an advanced mode, where a pre-notification is issued and the application can decide which frame(s) will be sacrificed - see the sketch below.
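To make that concrete, a purely hypothetical interface (nothing like this exists in Argus today; all names are invented for illustration) could look like:

#include <cstdint>

// Hypothetical sketch only - NOT an existing Argus API.
class IDropObserver
{
public:
    // Plain mode: called after a frame was rejected by a full FIFO and dropped.
    virtual void onFrameDropped(uint64_t frameNumber) = 0;

    // Advanced mode: pre-notification; the application returns the frame
    // number it prefers to sacrifice from the [oldest, newest] range.
    virtual uint64_t onDropPending(uint64_t oldest, uint64_t newest) = 0;

protected:
    ~IDropObserver() {}
};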

On the encoder side, it would be very helpful if one could insert a bubble (an empty, no-data frame) that is reflected in the output as an empty P frame, allowing a fixed-FPS stream with proper frame placement even if some frames are missing (and replaced by bubbles) - see the hypothetical sketch below.
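Again purely hypothetical (queueBubble() is invented for illustration; no such call exists in the current encoder API), the idea boils down to:

// Hypothetical - invented for illustration. Queues a zero-data "bubble" so
// the encoder emits an empty/skipped P frame in the missing frame's slot,
// keeping the output stream at a fixed FPS.
encoder->queueBubble(missingFrameTimestampUs);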

Currently, camera samples in MMSDK use IImageNativeBuffer->copyToNvBuffer to extract buffers from the EGLStream, which incurs an extra memory copy (a blit by VIC). Changing to IBufferOutputStream updates the camera samples in MMSDK to use application-managed buffers, which requires zero buffer copies - see the sketch below.
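For reference, a rough sketch of that application-managed buffer path as I understand it from the v32.1 10_camera_recording sample (from memory - check the sample for the exact sequence; Argus namespace assumed, error handling omitted):

// Create a BUFFER-type output stream backed by application-managed buffers.
UniqueObj<OutputStreamSettings> streamSettings(
    iCaptureSession->createOutputStreamSettings(STREAM_TYPE_BUFFER));
IBufferOutputStreamSettings *iStreamSettings =
    interface_cast<IBufferOutputStreamSettings>(streamSettings);
iStreamSettings->setBufferType(BUFFER_TYPE_EGL_IMAGE);

UniqueObj<OutputStream> stream(
    iCaptureSession->createOutputStream(streamSettings.get()));
IBufferOutputStream *iBufferStream = interface_cast<IBufferOutputStream>(stream);

// Wrap an application-allocated NvBuffer (exposed as an EGLImage) as an
// Argus Buffer, so captures land directly in our memory - no VIC blit.
UniqueObj<BufferSettings> bufferSettings(iBufferStream->createBufferSettings());
IEGLImageBufferSettings *iEglSettings =
    interface_cast<IEGLImageBufferSettings>(bufferSettings);
iEglSettings->setEGLImage(eglImage);      // e.g. from NvEGLImageFromFd()
iEglSettings->setEGLDisplay(eglDisplay);
UniqueObj<Buffer> buffer(iBufferStream->createBuffer(bufferSettings.get()));

// Capture loop: releaseBuffer() hands the buffer to libargus, acquireBuffer()
// blocks until a capture has been written into it.
iBufferStream->releaseBuffer(buffer.get());
Buffer *filled = iBufferStream->acquireBuffer();
// ... hand the underlying NvBuffer to the encoder, etc. ...
iBufferStream->releaseBuffer(filled);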

I also found that I can get a frame sequence number via Ext::IInternalFrameCount even with IBufferOutputStream:

// Metadata must be enabled on the buffer output stream settings first.
iStreamSettings->setMetadataEnable(true);
...
IBuffer *iBuffer = interface_cast<IBuffer>(buffer);
const CaptureMetadata *metadata = iBuffer->getMetadata();
auto fc = interface_cast<const Ext::IInternalFrameCount>(metadata);
if (fc)
    std::cout << fc->getInternalFrameCount() << '\n';