Blackfly USB3 Image Processing on AGX Xavier

I am trying to set up an image acquisition pipeline using an AGX Xavier and a FLIR Blackfly BFS-U3-63S4C-C. This camera is a Jetson Partner Supported Camera over the USB3 interface. I was wondering if you had any resources for setting up the image acquisition + processing on the Jetson. From what I understand, the onboard ISP on the Jetson is only accessible through the Argus API for CSI cameras. Are there any other ways to set up an Image Signal Processor on the Jetson with this USB3 camera from FLIR?


hello rishj099,

you’re correct that the onboard ISP is only accessible through the Argus API for CSI cameras.

may I know what additional image processing you would like to achieve?
thanks

As the FLIR camera provides raw images, I was looking at demosaicing, denoising, and auto white balance as possible steps for processing the raw sensor readings.
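For concreteness, here is roughly the kind of pipeline I have in mind, sketched with OpenCV (the file name, bit depth, and Bayer pattern below are assumptions, not the camera's actual output format):

```python
import cv2
import numpy as np

# Read one raw Bayer frame; resolution matches the BFS-U3-63S4C (3072x2048),
# but the file name, bit depth, and Bayer pattern are assumptions.
raw = np.fromfile("frame.raw", dtype=np.uint16).reshape(2048, 3072)
raw8 = (raw >> 4).astype(np.uint8)                 # 12-bit -> 8-bit

# Demosaic (assuming an RGGB pattern)
bgr = cv2.cvtColor(raw8, cv2.COLOR_BayerRG2BGR)

# Denoise
denoised = cv2.fastNlMeansDenoisingColored(bgr)

# Gray-world auto white balance
b, g, r = cv2.split(denoised.astype(np.float32))
gray = (b.mean() + g.mean() + r.mean()) / 3.0
balanced = cv2.merge([
    np.clip(b * gray / b.mean(), 0, 255),
    np.clip(g * gray / g.mean(), 0, 255),
    np.clip(r * gray / r.mean(), 0, 255),
]).astype(np.uint8)
```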


hello rishj099,

the ISP pipeline only supports raw content coming in over CSI; you cannot feed raw files from memory into the tuning process.

there’s a sample that captures images from a V4L2 device, 12_camera_v4l2_cuda, but it targets a YUV type of camera.
please contact the sensor vendor for further support.
thanks
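(for reference, a rough Python analogue of that sample's V4L2 capture path; the device node and YUYV format here are assumptions:)

```python
import cv2

# Rough Python analogue of the 12_camera_v4l2_cuda sample's capture path;
# the device node and pixel format are assumptions, adjust for your system.
cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_CONVERT_RGB, True)   # let OpenCV convert YUYV -> BGR

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("v4l2", frame)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
```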

Understood, I will reach out to FLIR and ask for support. Do you have any experience with, or thoughts on, using something like this as an alternative?


hello rishj099,

I’ve never tested Fastvideo before.
nevertheless, you may refer to the Camera Architecture Stack; cameras using the USB interface can only go through v4l2src.
thanks
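(for illustration, such a v4l2src pipeline can be consumed from Python through OpenCV's GStreamer backend; the device node, caps, and resolution below are assumptions:)

```python
import cv2

# Hypothetical v4l2src pipeline; adjust device, caps, and resolution to your camera.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=1920,height=1080,framerate=30/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

ok, frame = cap.read()
if ok:
    print("got frame:", frame.shape)
cap.release()
```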


Unfortunately, the NVIDIA GStreamer / DeepStream pipeline doesn’t seem to integrate well with industrial cameras. I’m also working with a FLIR Blackfly camera (USB3 Vision) and have to rely either on the FLIR drivers, which require C/C++ coding to work (and I don’t know how to connect these to the NVIDIA pipeline), or go through an industrial computer vision provider (I have chosen MVTec Halcon) to get images processed.
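(As a side note, FLIR's Spinnaker SDK also ships a Python binding, PySpin; a minimal acquisition sketch, assuming the SDK is installed and exactly one camera is attached, looks roughly like this:)

```python
import PySpin

# Minimal Spinnaker/PySpin acquisition loop; assumes one attached camera.
system = PySpin.System.GetInstance()
cam_list = system.GetCameras()
cam = cam_list.GetByIndex(0)
cam.Init()

cam.BeginAcquisition()
for _ in range(10):
    image = cam.GetNextImage(1000)       # 1000 ms grab timeout
    if not image.IsIncomplete():
        frame = image.GetNDArray()       # numpy view of the raw frame data
        print(frame.shape, frame.dtype)
    image.Release()
cam.EndAcquisition()

cam.DeInit()
del cam
cam_list.Clear()
system.ReleaseInstance()
```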

I’ve just found that the Halcon library, from version 20.11, will support the Python programming language, which will hopefully make things easier in the future. For now I’m very sad that I can’t use the NVIDIA tool suite for my industrial project.

I had some hopes for the Aravis project (GitHub - AravisProject/aravis: A vision library for genicam based cameras) and aravissrc, but in my eyes it doesn’t look stable enough to be used in production, and I’m not sure all the camera configuration parameters will be supported.
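If anyone wants to experiment with it anyway, an aravissrc pipeline can be opened the same way as v4l2src through OpenCV (untested sketch; the camera-name value, caps, and resolution are all assumptions):

```python
import cv2

# Hypothetical aravissrc pipeline for a GenICam/USB3 Vision camera;
# camera-name, caps, and resolution are assumptions.
pipeline = (
    'aravissrc camera-name="FLIR-12345678" ! '
    "video/x-raw,format=GRAY8,width=3072,height=2048,framerate=30/1 ! "
    "videoconvert ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
```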

Hi @b2prix21,

I am also interested in using the FLIR Blackfly GigE cameras with the NVIDIA Jetson Xavier. Could you elaborate on the C/C++ coding work needed to connect to the NVIDIA pipeline? What image formats does the FLIR driver output? How do you read the camera frames into your pipeline? Is it with the Argus API, Video4Linux, or something else? Did you face any bandwidth issues reading from the FLIR drivers?

Sorry for the barrage of questions, but I am in the decision making process whether to go with these cameras or not.

Best,
S


Hi @sandiprmlc0

Sorry for the late reply. I must have turned notifications for the Nvidia forums off.

We’re using Blackfly USB3 cameras now with the MVTec Halcon library. The library provides the driver for the camera to capture the images. I then use Halcon to do CV and send about 100 cropped images per second to a TensorRT model.
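Schematically, the hand-off to TensorRT looks like this; below is a trimmed-down sketch rather than my actual code, assuming a prebuilt single-input/single-output engine and the TensorRT 7/8-era bindings API (the engine path is hypothetical):

```python
import numpy as np
import pycuda.autoinit              # creates/owns a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

# Load a prebuilt engine; "model.engine" is a hypothetical path.
logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate buffers from the engine's binding shapes (TRT 7/8-era API).
in_shape = tuple(engine.get_binding_shape(0))
out_shape = tuple(engine.get_binding_shape(1))
d_in = cuda.mem_alloc(int(np.prod(in_shape)) * 4)    # float32
d_out = cuda.mem_alloc(int(np.prod(out_shape)) * 4)
h_out = np.empty(out_shape, dtype=np.float32)

def infer(crop):
    """Run one preprocessed float32 crop (shaped like in_shape) through the engine."""
    cuda.memcpy_htod(d_in, np.ascontiguousarray(crop, dtype=np.float32))
    context.execute_v2([int(d_in), int(d_out)])
    cuda.memcpy_dtoh(h_out, d_out)
    return h_out
```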

I’m also using a GStreamer + DeepStream pipeline to display a live video feed in the browser through a v4l2loopback device.
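Pushing frames into the loopback node from Python also goes through OpenCV's GStreamer backend; a minimal sketch, where the /dev/video10 node, resolution, and frame rate are assumptions:

```python
import cv2
import numpy as np

# Write BGR frames into a v4l2loopback device; path, size, and fps are assumptions.
out = cv2.VideoWriter(
    "appsrc ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video10",
    cv2.CAP_GSTREAMER, 0, 30.0, (1920, 1080), True,
)
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)    # stand-in for a real frame
out.write(frame)
out.release()
```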

Haven’t coded any C/C++ for any of this - all done in Python (with optimized C code underneath of course).

Hope that helps.