HSB (Holoscan Sensor Bridge) integration

I asked the same question in the Holoscan SDK forum but couldn't get any response, so I want to try my luck here.

Our current design uses a Jetson AGX Orin with sensors connected through MIPI-CSI. The GStreamer application uses key DeepStream components for inference, tracking, and IVA. We are now evaluating the possibility of using HSB to connect image sensors over Ethernet; this would allow a much larger distance between the Jetson and the image sensor, which some of our use cases require.

I briefly read through the Holoscan documentation and tried some examples. Although it has some operators for inference and similar tasks, it currently lacks the diverse functionality that GStreamer plugins provide. I would still need to use GStreamer/DeepStream as the main framework, but I need to take the video stream from an HSB sensor.

So my questions:

  1. Does NVIDIA have any plan to provide a GStreamer plugin, similar to nvarguscamerasrc, for interfacing with HSB sensors? This would be hugely useful.
  2. If not, what's the best way forward? I am thinking of using Holoscan operators to receive the packets and perform the hardware ISP, then sending the memory buffer to a GStreamer pipeline's appsrc. Is this feasible?
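To make option 2 concrete, here is a rough sketch of the hand-off I have in mind. The Holoscan and GStreamer entry points are stand-ins (illustrative names, not real SDK calls); the point is a small bounded queue that decouples the Holoscan graph thread from the GStreamer streaming thread and drops frames rather than letting appsrc back up:

```python
# Sketch of a Holoscan -> GStreamer hand-off via a bounded queue.
# holoscan_sink_compute() and appsrc_need_data() are placeholders for
# a Holoscan operator's compute() and an appsrc "need-data" callback.
import queue

frame_queue = queue.Queue(maxsize=4)  # small bound: drop rather than lag

def holoscan_sink_compute(frame_bytes):
    """Would be called from the Holoscan graph for every ISP-processed frame."""
    try:
        frame_queue.put_nowait(frame_bytes)
    except queue.Full:
        pass  # drop this frame; never block the sensor pipeline

def appsrc_need_data():
    """Would run in the GStreamer streaming thread on appsrc 'need-data'."""
    return frame_queue.get(timeout=1.0)

# Quick self-check with a fake 8-byte "frame":
holoscan_sink_compute(b"\x00" * 8)
assert appsrc_need_data() == b"\x00" * 8
```

Whether this queue-based bridge is the recommended pattern, or whether there is a lower-overhead path, is exactly what I'm asking about.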

Thanks for your feedback.

Can you tell us which functions you think are lacking in the Holoscan SDK?

HSB is specifically designed for the Holoscan project; it is not a standard Jetson component. There is no plan to provide a GStreamer plugin for HSB sensors.

If you can get the YUV/RGB frame data out of Holoscan, you can use appsrc to feed it into a GStreamer/DeepStream pipeline. There is a DeepStream sample that demonstrates how to feed external video frames into a DeepStream pipeline, although it currently works with normal system-memory GstBuffers: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-appsrc-test
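When pushing buffers into appsrc yourself, you also need to set fixed caps and per-frame timestamps. A stdlib-only sketch of that bookkeeping follows; the format, resolution, and frame rate are illustrative assumptions, and the actual `Gst` calls (e.g. `appsrc.emit("push-buffer", buf)`) are left as comments:

```python
# Bookkeeping for feeding frames into a DeepStream pipeline through
# appsrc: a caps string plus monotonically increasing PTS/duration.
GST_SECOND = 1_000_000_000  # nanoseconds, the same unit as Gst.SECOND

def appsrc_caps(width, height, fps, fmt="RGBA"):
    """Caps string to set on the appsrc element before starting."""
    return (f"video/x-raw,format={fmt},width={width},height={height},"
            f"framerate={fps}/1")

def frame_timestamps(frame_index, fps):
    """PTS and duration (in ns) to set on each pushed buffer."""
    duration = GST_SECOND // fps
    return frame_index * duration, duration

# e.g. caps for a 1080p30 stream out of the HSB ISP path:
caps = appsrc_caps(1920, 1080, 30)

# Where the real push would happen (PyGObject, not executed here):
#   buf = Gst.Buffer.new_wrapped(frame_bytes)
#   buf.pts, buf.duration = frame_timestamps(i, 30)
#   appsrc.emit("push-buffer", buf)
```

Alternatively, set the `do-timestamp` property on appsrc and let GStreamer stamp buffers on arrival; explicit timestamps as above give more deterministic pacing.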

Thanks for the reply, Fiona.
The functionality provided by Holoscan operators, including those from HoloHub, is still a small subset of what GStreamer offers today.
For example, I need to do object detection, tracking, and analytics (dsanalytics) on the stream, but Holoscan only provides inference. I also need to take snapshots, JPEG-encode them, and save them to disk, yet Holoscan has no JPEG encoder, nor text overlay. And there are many other such tasks.

I personally feel HSB shouldn't belong to the Holoscan ecosystem; instead it should be application agnostic, so that it can be used from different NVIDIA SDKs. l4t-camera might be a better home for it.

Text overlay is supported by the HolovizOp operator (see Class HolovizOp - NVIDIA Docs).

For JPEG encoding and other features, you may ask in the Holoscan forum: Latest Healthcare & Life Sciences / Holoscan SDK topics - NVIDIA Developer Forums

HSB is Holoscan-specific hardware; L4T does not support it.