And the Basler works fine. The problem is the format of the image, because jetson-inference expects RGBA images wrapped in a PyCapsule, and when I want to reverse the process the image comes out all white.
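What I mean by reversing the process is roughly the following (a simplified sketch, not my exact code; the sizes are just example values for the older cudaToNumpy signature). My guess is that the float 0-255 values coming back are what makes the image look white, so it has to be clipped and cast back to uint8 before saving or displaying:

import jetson.utils
import numpy as np
import cv2

# cuda_img is the RGBA PyCapsule that comes back from the network
arr = jetson.utils.cudaToNumpy(cuda_img, 1280, 720, 4)

# the array is float32 in the 0-255 range; shown as-is it appears all white
arr = np.clip(arr, 0, 255).astype(np.uint8)
bgr = cv2.cvtColor(arr, cv2.COLOR_RGBA2BGR)
cv2.imwrite("frame.jpg", bgr)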
It would be great if I could run inference with camera = jetson.utils.gstCamera(1280, 720, "/dev/video0") or something like that, but it does nothing and crashes my Jetson.
I can see my Basler camera with lsusb, but it doesn't appear with v4l2-ctl --list-formats-ext.
In fact, with this last command I can see:
Index : 0
Type : Video Capture
Pixel Format: 'BG10'
Name : 10-bit Bayer BGBG/GRGR
    Size: Discrete 2592x1944
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 2592x1458
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.008s (120.000 fps)
which makes me think this is the webcam integrated on my Jetson.
Hi camposromeromiguel, unless the Basler camera provides a V4L2 interface (e.g. a /dev/video* device), I don't think it's possible. The Basler cameras seem to be accessed through the Pylon library they use.
When we use the Basler lib and inject a stream into GStreamer on the Jetson, won't this force us to use a CPU buffer and hence slow down the process? I would like to know what I am talking about :D so that I can ask Basler to add the possibility of handing hardware buffers to GStreamer and NVMM memory.
Typically V4L2 uses normal CPU system memory, and it takes one memory copy to get the frame into a CUDA buffer. One copy may not be perfectly ideal, but it isn't terribly slow in most cases either. (And on terminology: that is NVMM memory, not to be confused with NVMe storage.)
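For reference, the kind of pipeline where that single copy happens looks something like this (just an illustrative gst-launch line using the standard L4T elements; the source caps depend on your sensor):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1280,height=720 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink

v4l2src delivers its frames in CPU system memory, and nvvidconv is the element that copies/converts them into NVMM buffers that the downstream hardware blocks can consume.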
An alternative is to just use pypylon for the camera interface instead of my videoSource interface, like @camposromeromiguel did above:
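Something along these lines should be a reasonable starting point (a rough, untested sketch; the model name, converter settings, and Detect() arguments are just examples, and depending on the jetson-inference version you may need to convert the frame to float RGBA first):

from pypylon import pylon
import jetson.inference
import jetson.utils

# grab frames from the Basler with pypylon and convert them to RGB numpy arrays
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

converter = pylon.ImageFormatConverter()
converter.OutputPixelFormat = pylon.PixelType_RGB8packed

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

while camera.IsGrabbing():
    grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grab.GrabSucceeded():
        img = converter.Convert(grab).GetArray()      # HxWx3 uint8, RGB
        cuda_img = jetson.utils.cudaFromNumpy(img)    # copy the frame into a CUDA capsule
        detections = net.Detect(cuda_img, img.shape[1], img.shape[0])
        print(len(detections), "objects detected")
    grab.Release()

camera.StopGrabbing()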
I agree. I have been able to read the image using pypylon. The problem is what comes after. I could probably do it like camposromeromiguel, but I want to take it to a multifilesink and a UDP sink, because it is so neat to use queues and split the stream in two. I want both an appsink and a UDP sink in the same stream, so it would be nice if Basler managed to get it into GStreamer directly.
But I still have issues with the injection into GStreamer. Once I have the numpy array, I just don't know what to do with it.
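One way to do it is to feed the numpy frames into an appsrc and split the stream with a tee. A rough sketch (untested; the element choices like jpegenc/x264enc, the file pattern, and the host/port are just placeholders):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

WIDTH, HEIGHT, FPS = 2048, 2048, 30

# appsrc -> tee, one branch writing JPEG files, one branch streaming H.264 over UDP
pipeline = Gst.parse_launch(
    "appsrc name=src is-live=true format=time ! videoconvert ! tee name=t "
    "t. ! queue ! jpegenc ! multifilesink location=frame_%05d.jpg "
    "t. ! queue ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000"
)

src = pipeline.get_by_name("src")
src.set_property("caps", Gst.Caps.from_string(
    "video/x-raw,format=RGB,width=%d,height=%d,framerate=%d/1" % (WIDTH, HEIGHT, FPS)))

pipeline.set_state(Gst.State.PLAYING)

frame_count = 0

def push_frame(rgb_array):
    # rgb_array is the HxWx3 uint8 numpy array you get from the pypylon converter
    global frame_count
    data = rgb_array.tobytes()
    buf = Gst.Buffer.new_allocate(None, len(data), None)
    buf.fill(0, data)
    buf.pts = frame_count * Gst.SECOND // FPS
    buf.duration = Gst.SECOND // FPS
    src.emit("push-buffer", buf)
    frame_count += 1

On the Jetson itself you would probably swap x264enc for the hardware encoder (e.g. nvvidconv ! nvv4l2h264enc), but the appsrc/tee structure stays the same.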
Thank you, I will try that. I did it in a virtual Ubuntu machine and did not get it to work fully, but I will try again with a fresh set of eyes in the morning. I'll let you know if this solves it.
I have now tried the GStreamer appsrc with the Basler camera, and the caps are:
appsrc format=RGB, framerate=30/1, width=2048, height=2048 ! videoconvert ! autovideoconvert
Not to focus too much on the pipeline string, but the pypylon read, the conversion to a numpy array, and the insertion into GStreamer is extremely slow. I can't use this at all. I don't believe the VM I use is the limiting factor, because I reduced the bandwidth on the camera to 60 MB/s and the camera still lags a lot.
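Just a note on the caps above: appsrc expects a full video/x-raw caps string rather than bare properties, so something like this (sizes taken from your post, autovideosink just for testing) is closer to what it wants:

appsrc caps="video/x-raw,format=RGB,width=2048,height=2048,framerate=30/1" ! videoconvert ! autovideosink

That alone won't fix the speed, though, since the per-frame numpy conversion and the copy into the appsrc buffer still happen on the CPU.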