Using the vimbasrc camera plugin in DeepStream

Hardware: Orin AGX
Jetpack: 5.0.2
Running the deepstream-l4t:6.0.1-triton Docker container

We have successfully used vimbasrc in GStreamer pipelines.
It is a source plugin that allows GStreamer to take input from specific cameras.

We tend to use video/x-raw, format=GRAY8 because of the color limitations of our camera.
These are the other possible caps of the source pad of the element:
video/x-raw format: { I420, YV12, YUY2, UYVY, AYUV, VUYA, RGBx, BGRx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR, Y41B, Y42B, YVYU, Y444, v210, v216, Y210, Y410, NV12, NV21, GRAY8, GRAY16_BE, GRAY16_LE, v308, RGB16, BGR16, RGB15, BGR15, UYVP, A420, RGB8P, YUV9, YVU9, IYU1, ARGB64, AYUV64, r210, I420_10BE, I420_10LE, I422_10BE, I422_10LE, Y444_10BE, Y444_10LE, GBR, GBR_10BE, GBR_10LE, NV16, NV24, NV12_64Z32, A420_10BE, A420_10LE, A422_10BE, A422_10LE, A444_10BE, A444_10LE, NV61, P010_10BE, P010_10LE, IYU2, VYUY, GBRA, GBRA_10BE, GBRA_10LE, BGR10A2_LE, GBR_12BE, GBR_12LE, GBRA_12BE, GBRA_12LE, I420_12BE, I420_12LE, I422_12BE, I422_12LE, Y444_12BE, Y444_12LE, GRAY10_LE32, NV12_10LE32, NV16_10LE32, NV12_10LE40 }
video/x-bayer format: { (string)bggr, (string)grbg, (string)gbrg, (string)rggb }

We could use advice on how to implement that same plugin as an input for a DeepStream pipeline.

What’s your expected pipeline in DeepStream? Normally you can use videoconvert/nvvideoconvert to convert the color format after the src plugin.

In GStreamer, the base pipeline looks something like this:

vimbasrc camera="camera_ID" ! video/x-raw,format=GRAY8 ! queue ! videoscale ! video/x-raw,width=2322,height=1690 ! videoconvert ! video/x-raw,format=I420 ! x264enc ! rtph264pay

In the past we have split that pipeline in half and used OpenCV to grab frames, run computer vision models on them, overlay the resulting information, and then send the frames onward to the rest of the pipeline.
We'd like to try to use DeepStream to achieve the same or a similar result, but with multiple vimbasrc elements as input.
(i.e. [application], [source0], [source1], [streammux], [primary-gie], [tiled-display], [osd], [sink0])

Any advice would be appreciated, since our experience with DeepStream is quite limited.

You mean you want to use a config file like samples\configs\deepstream-app\source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt? You can't use it directly, because we only support the original GStreamer plugins there. You can refer to a pipeline like the one below and write the demo code yourself.

gst-launch-1.0 \
nvstreammux name=m (other parameters as needed) ! pgie ! tiler ! osd ! sink \
vimbasrc camera=xxx ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420' ! queue ! m.sink_0 \
vimbasrc camera=xxx ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420' ! queue ! m.sink_1
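
To make that concrete, here is a fuller sketch with the pgie/tiler/osd/sink placeholders filled in with typical DeepStream elements (the camera IDs, resolution, batch size, and config path are placeholders, and the display sink may differ on your setup; this is a sketch, not a verified pipeline):

gst-launch-1.0 \
nvstreammux name=m batch-size=2 width=2322 height=1690 batched-push-timeout=40000 ! \
nvinfer config-file-path=pgie_config.txt ! \
nvmultistreamtiler rows=1 columns=2 ! \
nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink \
vimbasrc camera=cam0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420' ! queue ! m.sink_0 \
vimbasrc camera=cam1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=(string)I420' ! queue ! m.sink_1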

Thank you for the advice.
At first I was trying to configure or modify NVIDIA's DeepStream apps to accommodate our use case.
What I did instead was to generate a .dot file while running the DeepStream sample app that most closely resembled our use case.
I used that to see which DeepStream plugins are used and in what order.
Now I am building our own pipeline from the relevant plugins and making progress doing so.
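
In case it helps anyone else, a sketch of how the .dot dump can be done (this assumes graphviz is installed for the conversion step, and the config file name is just an example):

export GST_DEBUG_DUMP_DOT_DIR=/tmp/ds-dot
mkdir -p /tmp/ds-dot
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
# a .dot file is written per pipeline state change; convert them to images
dot -Tpng -O /tmp/ds-dot/*.dot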

Here are some of the files that nvinfer relies on in one of NVIDIA’s reference apps:
model-file=resnet10.caffemodel
proto-file=resnet10.prototxt
model-engine-file=resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=cal_trt.bin
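
For context, these keys live in the [property] group of the nvinfer config file; trimmed down, the relevant part of the reference config looks roughly like this (not a complete config, values taken from the sample app):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=resnet10.caffemodel
proto-file=resnet10.prototxt
model-engine-file=resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4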

As far as I can understand, the 'model-file' (in this case in Caffe format) is used to generate the 'model-engine-file' (in this case a TensorRT engine).

We have our own TensorRT models that were generated from ONNX models.
Do you have advice on how to implement either of those in nvinfer, and to what extent I'd need to remove or edit the other files nvinfer uses?

You can use the ONNX model directly in the config file. You can refer to the link below:
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov5_tao/pgie_yolov5_config.txt

Are TRT engine files that we converted previously (in our app, before working with DeepStream) not suitable for use with a multi-video GStreamer pipeline?

We actually use multiple TRT files to achieve different capabilities at the same time. So do all of them now need to be ONNX?

Once converted, can we hopefully use the output engine files the next time the app is started, rather than doing a fresh conversion every time and incurring that delay?

The first time, you can configure the ONNX model in the config file directly:

onnx-file=yourmodel.onnx

After that, it will generate an engine file. If you don't change the usage environment, that engine can always be reused:

model-engine-file=yourmodel.onnx_b1_gpu0_fp16.engine

If you have other new problems, please open a new topic. Thanks

And we cannot just use an existing TRT engine that we created before on this machine? One that was created before we started trying to use DeepStream? Or perhaps the generated engine file you mention has something specific to DeepStream going on?

You can always use the engine file on the same machine, as long as the related software has not been upgraded. What specific problems did you encounter in your use case?
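
If the existing engine does match (same GPU, same TensorRT/DeepStream versions, same batch size and precision), a minimal sketch is to point nvinfer at it directly in the [property] group; the path here is hypothetical, and nvinfer will fall back to rebuilding only if a source model (onnx-file, model-file, etc.) is also configured:

[property]
# hypothetical path to an engine built earlier on this same machine
model-engine-file=/opt/models/your_model_b1_gpu0_fp16.engine
# batch size and precision mode should match how the engine was built
batch-size=1
network-mode=2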
