Libargus CUDA EGLStream, used for TensorRT inference?

I am using an RPiV2 camera connected via CSI. I'm trying to capture an image and send it to a TensorRT model while minimizing the number of buffer copies needed. There will also be a second consumer running, encoding a higher-resolution video at the same time the model is running.

  • This PowerPoint presentation suggests that a CUDA EGLStream consumer is possible. Is there any integration with TensorRT such that the acquired image can be fed directly to a model?
  • The EGLStream documentation describes the “Mailbox” and “FIFO” stream operation modes. Ideally, the consumer feeding TensorRT would operate in mailbox mode (it’s a real-time application), while the video-encoding consumer would use FIFO, since I want to drop no frames and produce a lossless, full-resolution video. However, I cannot find documentation on the StreamType parameter of ICaptureSession::createOutputStreamSettings(), or on which methods are available on the EGL output stream settings object (I cannot find any EGL documentation at all, actually). Is this where I would set Mailbox / FIFO? (There is a rough sketch of what I have in mind after this list.)
  • Finally, I am not very experienced with this; am I overthinking the cost of buffer copies? Would the impact be negligible?
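
For concreteness, here is a minimal sketch of the pipeline I have in mind, assuming the IEGLOutputStreamSettings / IEGLOutputStream interfaces from the Argus headers and the CUDA driver’s EGLStream interop calls. Error handling is omitted, the resolutions and FIFO length are arbitrary, and runInference() is just a placeholder for my own preprocessing kernel plus a TensorRT enqueue:

```cpp
// Sketch: two Argus EGL output streams (mailbox for inference, FIFO for encode)
// and a CUDA consumer that maps acquired frames so they can feed TensorRT.
#include <Argus/Argus.h>
#include <cuda.h>
#include <cudaEGL.h>
#include <vector>

using namespace Argus;

int main()
{
    // --- Producer side (libargus) ---
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);

    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    // STREAM_TYPE_EGL selects an EGLStream-backed output stream; the returned
    // settings object is then queried for IEGLOutputStreamSettings, which is
    // (I assume) where the mailbox/FIFO behaviour is chosen.
    UniqueObj<OutputStreamSettings> inferSettings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iInferSettings =
        interface_cast<IEGLOutputStreamSettings>(inferSettings);
    iInferSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iInferSettings->setResolution(Size2D<uint32_t>(1280, 720));
    iInferSettings->setMode(EGL_STREAM_MODE_MAILBOX);   // latest-frame-wins for inference

    UniqueObj<OutputStreamSettings> encodeSettings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iEncodeSettings =
        interface_cast<IEGLOutputStreamSettings>(encodeSettings);
    iEncodeSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iEncodeSettings->setResolution(Size2D<uint32_t>(3280, 2464));
    iEncodeSettings->setMode(EGL_STREAM_MODE_FIFO);      // queue frames, drop nothing
    iEncodeSettings->setFifoLength(4);

    UniqueObj<OutputStream> inferStream(iSession->createOutputStream(inferSettings.get()));
    UniqueObj<OutputStream> encodeStream(iSession->createOutputStream(encodeSettings.get()));

    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(inferStream.get());
    iRequest->enableOutputStream(encodeStream.get());
    iSession->repeat(request.get());

    // --- Consumer side (CUDA) for the inference stream ---
    cuInit(0);
    CUdevice dev;   cuDeviceGet(&dev, 0);
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);
    CUstream cudaStream; cuStreamCreate(&cudaStream, CU_STREAM_DEFAULT);

    EGLStreamKHR eglStream =
        interface_cast<IEGLOutputStream>(inferStream)->getEGLStream();

    CUeglStreamConnection conn;
    cuEGLStreamConsumerConnect(&conn, eglStream);

    for (;;)
    {
        CUgraphicsResource resource = nullptr;
        // Blocks (up to the timeout, in microseconds) until a frame is available.
        if (cuEGLStreamConsumerAcquireFrame(&conn, &resource, &cudaStream, 33000)
                != CUDA_SUCCESS)
            break;

        CUeglFrame frame;
        cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);

        // frame.frame.pPitch[i] are device pointers to the image planes -- no copy
        // back to the host should be needed. Placeholder for my own preprocessing
        // kernel + IExecutionContext::enqueueV2() call:
        // runInference(frame, cudaStream);

        cuEGLStreamConsumerReleaseFrame(&conn, resource, &cudaStream);
    }

    cuEGLStreamConsumerDisconnect(&conn);
    return 0;
}
```

Is this roughly the right way to set the per-stream mode, and is handing frame.frame.pPitch[] to TensorRT the intended zero-copy path?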

Thank you in advance, and sorry if these questions are a bit too high level.

Hi,

You can check our DeepStream SDK first.
It should meet most of your requirements:

Thanks.