Accessing Pixel Values in Real Time

I am looking to use a TX1 for the following:

 (1) capture an image from the TX1 camera
 (2) examine the image for a black object against a white background
 (3) if present, compute object position (in pixels)
 (4) repeat as fast as possible
 (5) use object trajectory to do some stuff

Given pixel data in, for example, an array, I have experience writing algorithms to process image data and do some stuff with it. What I cannot figure out is how to access the image data in the first place. I have no experience with the image capture part, and I have found the Nvidia documentation (API specifications, webinars, API reference documentation, …) to be frustratingly unhelpful and largely inscrutable.

JetPack was recently installed, so the software is up to date. I started examining the Argus sample code “oneShot” line by line, but got nowhere.

ANY help with any of the following would be tremendous:

  1. What is the best approach to access pixel data?
  2. Is modifying “oneShot” using Argus and EGLStreams the right approach?
  3. Conceptually, where does accessing the pixel data fit in? {e.g., in oneShot, after an image is captured [iFrame->getFrame()], or does a specific consumer have to be established?}
  4. If the pixel values are in an array, what is the best way to process image data quickly? There seem to be several packages that could work.
  5. Where should a beginner start? Is there documentation, website, textbook, blog, … that provides a readable account of the image capture procedures (e.g., those provided by Argus)?

1. It depends on how you capture. If you want high resolution and a high frame rate, capturing in MJPEG format through V4L2 is better. In that case you need a JPEG decoder and a buffer converter, and you can use mmap() to access the pixel data.
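
To make the V4L2 route concrete, here is a minimal sketch of the capture-and-mmap flow. The device path /dev/video0 and the 1920x1080 MJPEG format are assumptions, not from this thread; error handling is abbreviated, and the sketch stops at the point where you hold encoded JPEG bytes, so the decode step still follows.

```cpp
// Minimal V4L2 MJPEG capture sketch (error handling abbreviated).
// Assumes a camera on /dev/video0 that can deliver 1920x1080 MJPEG.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/video0", O_RDWR);

    // Request 1920x1080 MJPEG frames.
    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    // Ask the driver for 4 buffers and mmap each one.
    v4l2_requestbuffers req = {};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    void *ptrs[4];
    size_t lengths[4];
    for (unsigned i = 0; i < req.count; ++i) {
        v4l2_buffer buf = {};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        lengths[i] = buf.length;
        ptrs[i] = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);   // hand the buffer to the driver
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    // Dequeue one frame: ptrs[buf.index] now holds buf.bytesused bytes
    // of encoded JPEG data that still needs decoding to raw pixels.
    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_DQBUF, &buf);
    printf("got %u bytes of MJPEG in buffer %u\n", buf.bytesused, buf.index);
    ioctl(fd, VIDIOC_QBUF, &buf);       // re-queue for the next frame

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    for (unsigned i = 0; i < req.count; ++i)
        munmap(ptrs[i], lengths[i]);
    close(fd);
    return 0;
}
```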

2. As you can see in the sample ‘oneShot’, it returns a JPEG image, so you need an NvJPEGDecoder to get the pixel data in Y-Cb-Cr format. The Y plane contains the luminance information, which should help you find the ‘black object against a white background’.
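
For example, the decode step might look like the sketch below. It assumes the NvJPEGDecoder class from tegra_multimedia_api (check the exact signatures in the headers shipped with your JetPack); jpeg_data/jpeg_size are hypothetical inputs standing in for the encoded bytes you got from Argus or V4L2, and the darkness threshold of 32 is arbitrary.

```cpp
// Sketch: decode an in-memory JPEG to planar YUV and scan the Y plane.
// jpeg_data/jpeg_size are hypothetical inputs (the encoded frame bytes).
#include "NvJpegDecoder.h"
#include "NvBuffer.h"

void find_dark_pixels(unsigned char *jpeg_data, unsigned long jpeg_size)
{
    NvJPEGDecoder *dec = NvJPEGDecoder::createJPEGDecoder("jpegdec");

    NvBuffer *buffer = nullptr;
    uint32_t pixfmt = 0, width = 0, height = 0;
    // decodeToBuffer() yields a CPU-accessible NvBuffer in Y/Cb/Cr planes.
    if (dec->decodeToBuffer(&buffer, jpeg_data, jpeg_size,
                            &pixfmt, &width, &height) < 0)
        return;

    // Plane 0 is luminance (Y): black pixels have small values.
    NvBuffer::NvBufferPlane &y = buffer->planes[0];
    for (uint32_t row = 0; row < y.fmt.height; ++row) {
        unsigned char *p = y.data + row * y.fmt.stride;
        for (uint32_t col = 0; col < y.fmt.width; ++col)
            if (p[col] < 32) { /* candidate object pixel at (col, row) */ }
    }

    delete buffer;
    delete dec;
}
```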

3. I don't understand this question.

4. Use CUDA to process the pixels in parallel.
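
As an illustration, here is a small CUDA kernel sketch that thresholds the Y plane and accumulates the centroid of the dark pixels with atomics; the kernel name, threshold, and launch configuration are all illustrative, not taken from any NVIDIA sample.

```cpp
// Sketch: threshold the Y plane on the GPU and accumulate the centroid
// of all "black" pixels with atomics. Names and threshold are illustrative.
#include <cuda_runtime.h>

__global__ void darkCentroid(const unsigned char *y, int width, int height,
                             int pitch, unsigned char threshold,
                             unsigned long long *sumX,
                             unsigned long long *sumY,
                             unsigned long long *count)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col >= width || row >= height)
        return;

    // Dark pixel: add its coordinates to the running sums.
    if (y[row * pitch + col] < threshold) {
        atomicAdd(sumX, (unsigned long long)col);
        atomicAdd(sumY, (unsigned long long)row);
        atomicAdd(count, 1ULL);
    }
}

// Host side: the object position is (sumX / count, sumY / count) once
// count > 0. Launch with a grid that covers the image, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   darkCentroid<<<grid, block>>>(d_y, width, height, pitch, 32,
//                                 d_sumX, d_sumY, d_count);
```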

5. Do some research on color spaces and V4L2.

Hi
There is a face detection sample that may help, for your reference:
/tegra_multimedia_api/sample/11_camera_face_regnize

I cannot find the directory you mention

‘/tegra_multimedia_api/sample/11_camera_face_recognize’

Did you mean there could be something helpful in

‘/tegra_multimedia_api/sample/11_camera_object_identification’?

There is a separate face detection directory, ‘argus_facedetect.dir’. If I could not decipher ‘oneShot’, I’m not sure I’ll be able to do anything with more sophisticated code, but I’ll try. Is there a specific file you were thinking of with example code that accesses the pixel values?

Yes, it’s ‘/tegra_multimedia_api/sample/11_camera_object_identification’. Apologies for the incorrect information.

Hi, maybe you can refer to my code on GitHub: https://github.com/nglee/sobel_cuda_tx1
This code captures an image from the Jetson TX1 Developer Kit’s on-board camera using the Argus library and applies the Sobel edge detection algorithm to it.

You should look at the code from line 318 of main.cpp: https://github.com/nglee/sobel_cuda_tx1/blob/master/main.cpp

This is my approach for getting the raw pixel data and manipulating it afterwards; a condensed sketch follows the list.

  1. Create an NvBuffer for the captured image (the image is of type EGLStream::Image).
  2. mmap the contents of the NvBuffer into memory.
  3. Create an OpenCV cv::Mat object that refers to the mmapped region.
  4. Manipulate your cv::Mat object.
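
For reference, a condensed sketch of those four steps might look like this. It is not the exact code from the repo; the interface and function names follow tegra_multimedia_api and nvbuf_utils.h and may differ between JetPack versions, and the threshold value is arbitrary.

```cpp
// Condensed sketch of the four steps above (not the exact repo code).
// 'image' is the EGLStream::Image acquired from a captured frame.
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include <nvbuf_utils.h>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat darkMaskFromImage(EGLStream::Image *image,
                          uint32_t width, uint32_t height)
{
    // 1. Create an NvBuffer (dmabuf fd) backed by the captured image.
    auto *iNativeBuffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(image);
    int fd = iNativeBuffer->createNvBuffer(
        Argus::Size2D<uint32_t>(width, height),
        NvBufferColorFormat_YUV420, NvBufferLayout_Pitch);

    // 2. mmap plane 0 (the Y plane) into CPU-visible memory.
    void *yPlane = nullptr;
    NvBufferMemMap(fd, 0, NvBufferMem_Read, &yPlane);
    NvBufferMemSyncForCpu(fd, 0, &yPlane);

    // 3. Wrap the mapped region in a cv::Mat (no copy); the row pitch
    //    comes from the buffer parameters, not from 'width'.
    NvBufferParams params;
    NvBufferGetParams(fd, &params);
    cv::Mat gray(height, width, CV_8UC1, yPlane, params.pitch[0]);

    // 4. Manipulate the cv::Mat, e.g. mark dark pixels as object
    //    candidates. threshold() allocates a fresh Mat, so 'mask'
    //    remains valid after the mapping is released below.
    cv::Mat mask;
    cv::threshold(gray, mask, 32, 255, cv::THRESH_BINARY_INV);

    NvBufferMemUnMap(fd, 0, &yPlane);
    NvBufferDestroy(fd);
    return mask;
}
```

Note that the cv::Mat created in step 3 only borrows the mmapped memory, so any result you need to keep past the unmap must be cloned or computed into a separately allocated Mat, as the threshold step does here.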