How to run inference with a Basler camera

Hello,

I am trying to run inference with a Basler acA5472-17UC using the examples from this repository.

I have installed all the required drivers from the pypylon repository and the camera works fine:
https://github.com/basler/pypylon

I just need to run inference with the pretrained models using the Basler, but I don't know how to do it. Could someone share how to get this working?

Hi,

jetson-inference uses GStreamer for the camera input.
Would you mind checking whether your camera can be opened with GStreamer first?

We found several GStreamer plugins for Basler,
but we don't have a Basler camera to verify this on our side.

Thanks.

You can use https://gitlab.com/zingmars/gst-pylonsrc.git and adapt the deepstream-test2 example for this.
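
If the plugin builds correctly, a quick way to confirm the camera is reachable through GStreamer is a test pipeline from the command line. This assumes gst-pylonsrc installs an element named pylonsrc; adjust the element name if your build differs:

gst-launch-1.0 pylonsrc ! videoconvert ! autovideosink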


I have been trying both solutions, but I don't think I understand what I should do.
I can use the examples from the pypylon repository, like this one:

https://github.com/basler/pypylon/blob/master/samples/opencv.py

And the Basler works fine. The problem is the image format: jetson-inference expects RGBA images wrapped in a PyCapsule, and when I try to reverse the process the image comes out all white.

It would be great if I could run inference with camera = jetson.utils.gstCamera(1280, 720, "/dev/video0") or something like that, but it does nothing and crashes my Jetson.
I can see my Basler camera with lsusb, but it doesn't appear with v4l2-ctl --list-formats-ext.
In fact, with that last command I can only see

Index       : 0
	Type        : Video Capture
	Pixel Format: 'BG10'
	Name        : 10-bit Bayer BGBG/GRGR
		Size: Discrete 2592x1944
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1458
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.008s (120.000 fps)

which makes me think this is the webcam integrated on my Jetson.

Finally, I solved the problem:

https://github.com/mjack3/jetson-inference/blob/master/detectnet-basler3.py
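
Roughly, the idea is to grab frames with pypylon, convert them to RGBA, and hand them to detectNet through jetson.utils.cudaFromNumpy. A simplified sketch along those lines (not the exact code from the repo; the model name and parameters are illustrative, and the exact calls can differ between jetson-inference versions):

import cv2
import numpy as np
import jetson.inference
import jetson.utils
from pypylon import pylon

# load a pretrained detection network (model name is illustrative)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# open the first Basler camera found and start grabbing
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

# convert the camera's native pixel format to BGR for OpenCV
converter = pylon.ImageFormatConverter()
converter.OutputPixelFormat = pylon.PixelType_BGR8packed
converter.OutputBitAlignment = pylon.OutputBitAlignment_MsbAligned

while camera.IsGrabbing():
    grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grab.GrabSucceeded():
        frame = converter.Convert(grab).GetArray()       # BGR uint8 numpy array
        rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)   # jetson-inference wants RGBA
        rgba = rgba.astype(np.float32)                   # older API expects float4
        cuda_img = jetson.utils.cudaFromNumpy(rgba)      # copy into CUDA memory
        detections = net.Detect(cuda_img, rgba.shape[1], rgba.shape[0])
        print("detected {} objects".format(len(detections)))
    grab.Release()

camera.StopGrabbing()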

Anyway, I would like to know whether there is a way to make it work with something similar to

camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")

I think this is the best way


Hi camposromeromiguel, unless the Basler camera provides a V4L2 interface (e.g. a /dev/video* device), I don't think it's possible. Basler cameras seem to be accessed through their Pylon library instead.

Hi,

is there any update on how to do something like this?

Thanks

Hi dusty_nv.

When we use the Basler library and inject a stream into GStreamer on the Jetson, won't this force us to use a CPU buffer and slow down the process? I would like to understand what I am talking about :D so that I can ask Basler to add support for hardware buffers in GStreamer and NVMM memory.

Typically V4L2 uses normal CPU system memory, and needs one memory copy to get the frame into a CUDA buffer. One copy may not be perfectly ideal, but it isn't terribly slow in most cases either. I think you mean NVMM memory instead of NVMe.

An alternative is to just use pypylon for the camera interface instead of my videoSource interface, like @camposromeromiguel did above:

I agree. I have been able to read the image using pypylon. The problem is what comes after. I could probably do it like camposromeromiguel, but I want to send it to a multifilesink and a UDP sink, because it is so neat to use queues and split the stream in two. I want both the appsink and the UDP sink in the same stream, so it would be nice if Basler managed to get it into GStreamer directly.

But I still have issues injecting it into GStreamer. Once I have the numpy array, I just don't know what to do with it.

Here is an example of using GStreamer’s appsrc element from Python to push a numpy array to a custom GStreamer pipeline:
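
Something along these lines (a minimal sketch, not a complete program; it assumes the python3-gi GStreamer bindings are installed, and the caps, resolution, and sink element are just placeholders):

import numpy as np
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

width, height, fps = 640, 480, 30

# appsrc pushing raw RGB frames into a simple display pipeline
pipeline = Gst.parse_launch(
    "appsrc name=src is-live=true format=time "
    "caps=video/x-raw,format=RGB,width={w},height={h},framerate={f}/1 "
    "! videoconvert ! autovideosink".format(w=width, h=height, f=fps))

appsrc = pipeline.get_by_name("src")
pipeline.set_state(Gst.State.PLAYING)

for i in range(300):
    frame = np.full((height, width, 3), i % 255, dtype=np.uint8)   # dummy frame
    buf = Gst.Buffer.new_wrapped(frame.tobytes())                  # wrap the numpy data
    buf.pts = buf.dts = i * Gst.SECOND // fps                      # timestamp the buffer
    buf.duration = Gst.SECOND // fps
    appsrc.emit("push-buffer", buf)

appsrc.emit("end-of-stream")
pipeline.set_state(Gst.State.NULL)

In a real setup the dummy frame would be replaced by the numpy array grabbed from pypylon, and the display sink by your tee into the appsink and UDP sink branches.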

Thank you, I will try that. I did it in a virtual Ubuntu machine and did not get it to work fully, but I will try again with a fresh set of eyes in the morning. I will let you know if this solves it.

I have now tried the GStreamer appsrc with the Basler camera, and the caps are:
appsrc format=RGB, framerate=30/1, width=2048, height=2048 ! videoconvert ! autovideoconvert
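
(Written out fully, the intended caps are something like appsrc ! video/x-raw,format=RGB,width=2048,height=2048,framerate=30/1 ! videoconvert ! autovideoconvert, i.e. raw RGB at 2048x2048 and 30 fps.)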

Not to focus too much on the pipeline string, but the pypylon read, the conversion to a numpy array, and the insertion into GStreamer are extremely slow. I can't use this at all. I don't believe the VM I use is the limiting factor, because I reduced the bandwidth on the camera to 60 MB/s and the camera still lags a lot.