Hello, I am currently using my Jetson AGX with 4 FLIR Blackfly S cameras to capture .raw images. The FLIR cameras require FLIR's Spinnaker API for access and control. At the moment I capture several .raw images from the cameras and then append them using OpenCV's VideoWriter, but this process is quite slow, so I was wondering if anyone could point me to information on implementing similar video encoding using the Multimedia API that comes with the AGX.
Please share information about the camera for reference:
$ v4l2-ctl --list-formats-ext
If it reports the common YUV422 format, you can run
12_camera_v4l2_cuda to capture frame data into an NvBuffer and then convert it to YUV420 for video encoding. If it is another format, a possible solution is to capture frame data into a CUDA buffer and implement the format conversion in CUDA. For this solution you can refer to
v4l2cuda, which demonstrates how to capture frame data into a CUDA buffer.
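To make the conversion step concrete, here is a minimal CPU-side sketch (NumPy, for illustration only) of what such a CUDA kernel has to do. It assumes the packed 4:2:2 data arrives in YUYV byte order (Y0 U Y1 V); verify the byte order your camera actually emits before porting this to a kernel:

```python
import numpy as np

def yuyv_to_i420(frame: np.ndarray) -> np.ndarray:
    """Convert one packed YUYV frame of shape (H, W*2) bytes to planar I420."""
    h, w2 = frame.shape
    w = w2 // 2
    quads = frame.reshape(h, w // 2, 4)    # [Y0, U, Y1, V] per pair of pixels
    y = quads[:, :, [0, 2]].reshape(h, w)  # full-resolution luma plane
    u = quads[:, :, 1]                     # chroma, half horizontal resolution
    v = quads[:, :, 3]
    # 4:2:2 -> 4:2:0: drop every other chroma row (nearest-neighbour;
    # averaging adjacent rows would give slightly better quality)
    u = u[0::2, :]
    v = v[0::2, :]
    return np.concatenate([y.ravel(), u.ravel(), v.ravel()])

# Tiny 2x2-pixel example: two YUYV quads, one per row
frame = np.array([[10, 128, 20, 128],
                  [30, 128, 40, 128]], dtype=np.uint8)
i420 = yuyv_to_i420(frame)
print(i420.size)  # 4 luma bytes + 1 U + 1 V = 6
```

In the real pipeline each output plane would be written into the corresponding NvBuffer plane (respecting its pitch) rather than concatenated into one array.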
The jetson_multimedia_api samples are in
Hey DaneLLL, the camera does not show up under /dev/video# → the return from
$ v4l2-ctl --list-formats-ext
is
Failed to open /dev/video0: No such file or directory
Through the camera’s API I can set it to pass .raw images (YUV422Packed among others, 16 bits/pixel) to the AGX. If I do this, are there any samples here (Jetson Linux API Reference: Main Page) that may help me pass the .raw frames into a CUDA buffer? Sorry, I’ve never done anything like this before.
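For a rough idea of the buffer sizes and bandwidth involved: YUV422Packed is 16 bits (2 bytes) per pixel. The resolution and frame rate below are assumptions for illustration only; substitute your camera's actual mode:

```python
# Back-of-the-envelope buffer/bandwidth check for packed 4:2:2 frames.
# 2448x2048 @ 30 fps is an assumed Blackfly S mode, not a measured value.
width, height = 2448, 2048
bytes_per_pixel = 2                # YUV422Packed: 16 bits/pixel
fps = 30

frame_bytes = width * height * bytes_per_pixel
throughput_mb_s = frame_bytes * fps / 1e6

print(frame_bytes)       # bytes per frame
print(throughput_mb_s)   # MB/s per camera (x4 cameras in this setup)
```

This is why writing raw frames to disk and re-encoding with OpenCV is slow: four cameras at these rates quickly saturate storage bandwidth, whereas the hardware encoder consumes frames directly from device memory.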
It looks like the camera does not support V4L2. One possible solution is to create a CUDA buffer, check whether you can capture frame data into it directly, and then:
- Create an NvBuffer in NvBufferColorFormat_YUV420 or NvBufferColorFormat_NV12
- Implement CUDA code to convert the frame data and write it into the NvBuffer
- Do the video encoding.
For video encoding please refer to 01_video_encode.
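A small CPU-side illustration (NumPy, for clarity only) of the difference between the two buffer formats named in the first step, which determines how the CUDA conversion code must write its chroma output:

```python
import numpy as np

# NvBufferColorFormat_YUV420 (I420) stores three separate planes: Y, U, V.
# NvBufferColorFormat_NV12 stores a Y plane followed by one plane of
# interleaved U,V byte pairs. Total size is identical; only layout differs.
h, w = 4, 4
y = np.full((h, w), 100, dtype=np.uint8)
u = np.full((h // 2, w // 2), 110, dtype=np.uint8)
v = np.full((h // 2, w // 2), 120, dtype=np.uint8)

# I420: Y plane, then U plane, then V plane
i420 = np.concatenate([y.ravel(), u.ravel(), v.ravel()])

# NV12: Y plane, then interleaved UVUV... plane
uv = np.empty((h // 2, w), dtype=np.uint8)
uv[:, 0::2] = u
uv[:, 1::2] = v
nv12 = np.concatenate([y.ravel(), uv.ravel()])

print(i420.size, nv12.size)  # 24 24 — same bytes, different layout
```

In the real pipeline the planes are not contiguous: each NvBuffer plane has its own pitch, which the CUDA kernel must respect when writing.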
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.