As shown in the following figure, capture works well with a single sensor (4000×3000 @ 25 fps), no matter how many groups (app-bin) follow the tee.
But when capturing with two sensors (4000×3000 @ 25 fps), the frame rate drops when there are more than 2 groups (queue and bin).
Does nvarguscamerasrc have a buffer space limit? And do problems occur when it captures from more than one sensor?
I've set all my clocks to the highest. Is there any way to diagnose what's wrong?
Could you modify the value below, rebuild libgstnvarguscamerasrc.so, and try again?
You should be able to get the source code from the download center.
./gstnvarguscamerasrc.cpp:#define MAX_BUFFERS 8
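For example, the define could be raised before rebuilding; 16 below is only an illustrative value to try, not a recommended setting, and a larger pool will use more buffer memory:

```cpp
// gstnvarguscamerasrc.cpp -- enlarge the consumer buffer pool.
// 16 is an assumed trial value; tune it to your memory budget.
#define MAX_BUFFERS 16
```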
Do you know how to speed up nvarguscamerasrc consumer processing?
In the function `bool StreamConsumer::threadExecute(GstNvArgusCameraSrc *src)` (gstnvarguscamerasrc.cpp), I printed the consumption time.
It was fast when capturing with a single camera, no matter how many groups (queue-appbin) followed the tee.
But it was frequently over 40 ms when capturing with two cameras:
Do you have any suggestions about this problem?
Have you tried boosting the system with nvpmodel and jetson_clocks?
It is running at max clocks now, and I've identified where the problem is, but I don't know how to solve it.
It is in the function NvBufferTransform (consumer_thread, gstnvarguscamerasrc.cpp): it takes very little time when capturing with only one camera,
but it takes much longer when capturing with two cameras.
Do you know how to solve it?
Sorry, I don't have an idea for it now.