TensorRT binding from NvBuffer

Hello,

Is there a better / more efficient way to convert an (Argus) NvBuffer into a TensorRT input binding than manually copying the image from one memory to the other?
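
To make this concrete, the most naive version of such a copy would look roughly like this (a minimal sketch, simplified to a single plane and with no error handling; `bindingDev` and `bindingSize` just stand in for my engine's input binding; in practice I do the copy/conversion with a CUDA kernel, see below):

```cpp
#include "nvbuf_utils.h"
#include <cuda_runtime.h>

// Naive baseline: map the NvBuffer on the CPU and memcpy it into the
// device memory of the TensorRT input binding.
void copyNvBufferToBinding(int dmabuf_fd, void *bindingDev, size_t bindingSize)
{
    void *plane0 = nullptr;

    // Map plane 0 of the NvBuffer into CPU address space and sync caches.
    NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read, &plane0);
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &plane0);

    // Host -> device copy into the TensorRT input binding.
    cudaMemcpy(bindingDev, plane0, bindingSize, cudaMemcpyHostToDevice);

    NvBufferMemUnMap(dmabuf_fd, 0, &plane0);
}
```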

Environment

Platform: Jetson Xavier NX
Jetpack version: 4.6.3
TensorRT Version: 8.0.1
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: L4T 32.7

Hi,

We are moving this post to the Jetson Xavier NX forum to get better help.

Thank you.

Hi,

Please refer to DeepStream for more information:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

Thanks.

In my application I want to use CSI cameras to capture frames via the Argus API. However, these cameras are triggered by a hardware trigger to sync them up with other data streams. Therefore, I can get relatively long delays between frames (> 1 min), and I also don’t really want to give up the per-frame handling of the captured images.

I currently use a CUDA kernel to convert the NvBuffer containing a YUV image into the kLINEAR input binding containing an RGB image.
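
Roughly, that conversion path looks like this (a heavily simplified sketch without error handling; `convertNV12ToPlanarRGB` is a stand-in for my actual kernel and assumes an NV12 pitch-linear NvBuffer):

```cpp
#include "nvbuf_utils.h"
#include "cudaEGL.h"
#include <cuda_runtime.h>

// Placeholder NV12 -> planar float RGB kernel (BT.601-ish, not my exact kernel).
// Assumes the Y and UV planes share the same pitch.
__global__ void convertNV12ToPlanarRGB(const unsigned char *yPlane,
                                       const unsigned char *uvPlane,
                                       int pitch, float *rgbOut,
                                       int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    float Y = yPlane[y * pitch + x];
    float U = uvPlane[(y / 2) * pitch + (x & ~1)]     - 128.0f;
    float V = uvPlane[(y / 2) * pitch + (x & ~1) + 1] - 128.0f;

    int plane = width * height;
    rgbOut[0 * plane + y * width + x] = Y + 1.402f * V;
    rgbOut[1 * plane + y * width + x] = Y - 0.344f * U - 0.714f * V;
    rgbOut[2 * plane + y * width + x] = Y + 1.772f * U;
}

// Map the NvBuffer (dmabuf fd) into CUDA via EGL and convert it directly
// into the kLINEAR TensorRT input binding (rgbBindingDev).
void runConversion(int dmabuf_fd, float *rgbBindingDev,
                   int width, int height, cudaStream_t stream)
{
    EGLImageKHR eglImage = NvEGLImageFromFd(EGL_NO_DISPLAY, dmabuf_fd);

    CUgraphicsResource resource = nullptr;
    cuGraphicsEGLRegisterImage(&resource, eglImage,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame eglFrame;
    cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);

    dim3 block(32, 8);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    convertNV12ToPlanarRGB<<<grid, block, 0, stream>>>(
        (const unsigned char *)eglFrame.frame.pPitch[0],   // Y plane
        (const unsigned char *)eglFrame.frame.pPitch[1],   // interleaved UV plane
        eglFrame.pitch, rgbBindingDev, width, height);
    cudaStreamSynchronize(stream);

    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(EGL_NO_DISPLAY, eglImage);
}
```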

So, is there a (more) efficient way to do this without the overhead and abstractions of the GStreamer/DeepStream libraries?
Or is there a possibility to offload this operation to a hardware accelerator like the PVA or VIC?

Hi,

DeepStream extends the functionality of GStreamer.

We recommend using the library since it is optimized for Jetson.
It also utilizes onboard hardware like the PVA and VIC.

Thanks.

I would need to control three cameras in one “capture session” (so they use the same capture parameters) and to have per-frame handling of the incoming data.
Is this possible with GStreamer?

Also, is there documentation somewhere for the nvarguscamerasrc plugin, which I think I would need to use when I switch to GStreamer?

If this behavior is not achievable with GStreamer, is there a better way of copying the data?

Hi,

Could you share more information about your use case?

What kind of parameters do you need for the three cameras?
Also, is any synchronization required between the cameras?

Thanks.

I have three cameras which should take a panoramic picture together. All cameras are triggered by a hardware trigger generated by an external component (to sync up the cameras and the generated images with other data streams).

The three cameras should use the same configuration (white balance, exposure, gain, CCM, gamma, …) for a capture.

My current solution manages all three cameras in one Argus CaptureSession to ensure that the captures are taken with the same settings. I then convert the images into the ARGB32 format (using NvBufferTransform) and use a CUDA kernel to copy the ARGB32 frames into the (linear) binding used for the inference.
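
Simplified, the per-frame path looks roughly like this (error handling omitted; the `copyARGBToPlanarFloat` kernel mentioned in the comment is just a placeholder name for my own copy kernel and is not shown):

```cpp
#include "nvbuf_utils.h"

// Allocate the intermediate ARGB32 buffer once (pitch-linear so it can be
// mapped into CUDA afterwards).
int createARGBBuffer(int width, int height)
{
    NvBufferCreateParams params = {0};
    params.width = width;
    params.height = height;
    params.payloadType = NvBufferPayload_SurfArray;
    params.layout = NvBufferLayout_Pitch;
    params.colorFormat = NvBufferColorFormat_ARGB32;
    params.nvbuf_tag = NvBufferTag_NONE;

    int dst_fd = -1;
    NvBufferCreateEx(&dst_fd, &params);
    return dst_fd;
}

// Per frame: VIC-accelerated YUV -> ARGB32 conversion; afterwards the ARGB32
// buffer is mapped into CUDA (via EGL, as above) and copyARGBToPlanarFloat
// writes it into the linear TensorRT input binding.
void convertFrame(int src_fd, int dst_fd)
{
    NvBufferTransformParams transform = {0};
    transform.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform.transform_filter = NvBufferTransform_Filter_Smart;

    NvBufferTransform(src_fd, dst_fd, &transform);
}
```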

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.