Face recognition on Jetson Orin Nano developer kit

Hi. I want to run a face recognition pipeline on the NVIDIA Jetson Orin Nano 8 GB developer kit.
I am using IMX219 cameras. Over the past two months I developed the face recognition pipeline on a Raspberry Pi with an AI HAT, using models like YOLOv8n, ArcFace, and a ReLU6 WIDER FACE detector from DeGirum.
I don't know much about Jetson and its frameworks.
Can anyone help me with this?
The goal of the pipeline is stated below:

real-time video and frame capture from the camera → person detection → face detection → face recognition → coordinate estimation of where the person is, via pixel mapping and camera calibration.
I want to run this pipeline at up to a 10 m range with approximately 10 people in the frame, at 30 FPS for smooth output.
Any kind of help is appreciated. Thank you :)

*** Please note that this reply is generated by LLM automatically ***

To run a face recognition pipeline on the NVIDIA Jetson Orin Nano 8GB developer kit, you'll need to leverage the kit's capabilities and available frameworks. Given your experience with Raspberry Pi and models like YOLOv8n, ArcFace, and the WIDER FACE detector, you can adapt your pipeline to the Jetson Orin Nano. Here's a comprehensive guide to help you achieve your goal:

Hardware and Software Requirements

  1. NVIDIA Jetson Orin Nano 8GB developer kit
  2. IMX219 cameras (natively supported via the CSI connectors on the Jetson Orin Nano developer kit)
  3. JetPack SDK (includes NVIDIA Libargus Camera API, CUDA, cuDNN, and TensorRT)
  4. A compatible Linux distribution (Ubuntu 20.04 with JetPack 5.x, or Ubuntu 22.04 with JetPack 6.x)

Face Recognition Pipeline

To achieve real-time video and frame capture, face detection, face recognition, coordinate estimation, and camera calibration, you can follow this pipeline:

  1. Video Capture: Use the NVIDIA Libargus Camera API (or a GStreamer pipeline built on it) to capture video frames from the IMX219 cameras.
  2. Face Detection: Utilize a face detection model like YOLOv8n, which runs well on the Jetson Orin Nano. Use TensorRT to optimize the model for the Orin Nano's GPU.
  3. Face Recognition: Employ a face recognition model like ArcFace or your WIDER FACE-trained detector/embedder. These models can likewise be optimized with TensorRT and run on the GPU.
  4. Coordinate Estimation: Use the camera calibration data to estimate the coordinates of the detected faces. You can use the OpenCV library to perform camera calibration and coordinate estimation.
  5. Pixel Mapping: Use the estimated coordinates to map the faces to the pixel coordinates in the frame.
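As a sketch of steps 4–5, the example below back-projects a detected face's pixel position into a bearing angle and an approximate ground-plane distance. It is pure Python with no OpenCV dependency; the intrinsics (`FX`, `FY`, `CX`, `CY`) and camera height are hypothetical placeholders for your own calibration results, and it assumes a pinhole camera over flat ground:

```python
import math

# Hypothetical intrinsics for a 1280x720 stream -- replace with values
# obtained from your own camera calibration (e.g. OpenCV calibrateCamera).
FX, FY = 1000.0, 1000.0   # focal lengths in pixels
CX, CY = 640.0, 360.0     # principal point
CAM_HEIGHT_M = 1.5        # camera mounting height above the ground, meters

def pixel_to_bearing(u, v):
    """Horizontal/vertical angles (radians) of the ray through pixel (u, v)."""
    return math.atan2(u - CX, FX), math.atan2(v - CY, FY)

def ground_distance(v, cam_pitch_rad=0.0):
    """Approximate distance to a ground-plane point seen at image row v.

    Assumes flat ground and a forward-facing camera with a small downward
    pitch; only rays angled below the horizon intersect the ground.
    """
    angle = math.atan2(v - CY, FY) + cam_pitch_rad  # downward ray angle
    if angle <= 0:
        return None  # ray at or above the horizon: no ground intersection
    return CAM_HEIGHT_M / math.tan(angle)

# Example: a face box whose bottom edge sits at pixel (900, 600)
az, el = pixel_to_bearing(900.0, 600.0)
dist = ground_distance(600.0, cam_pitch_rad=math.radians(5))
```

At a 10 m range this flat-ground approximation is sensitive to pitch calibration errors, so it is worth validating against measured ground-truth positions.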

Optimizing the Pipeline for the Jetson Orin Nano

To achieve a sustained 30 FPS with smooth output, you'll need to optimize the pipeline for the Jetson Orin Nano's hardware. Here are some tips:

  1. Use TensorRT: Optimize your face detection and recognition models using TensorRT to leverage the Jetson Orin Nano’s GPU.
  2. Leverage CUDA: Use CUDA to accelerate computationally intensive tasks like face detection and recognition.
  3. Use cuDNN: Utilize cuDNN to accelerate deep learning computations.
  4. Optimize Memory Access: Minimize memory access and optimize data transfer between the CPU and GPU to reduce latency.
  5. Use Multi-Threading: Use multi-threading to parallelize tasks and improve overall performance.
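A minimal sketch of tip 5, using only the Python standard library: a capture thread feeds frames into a bounded queue while a worker thread runs detection, so capture is never blocked by inference. `grab_frame` and `detect_faces` are stub placeholders for your camera and TensorRT code, not real APIs:

```python
import queue
import threading

frames = queue.Queue(maxsize=4)  # small buffer keeps latency bounded
results = []

def grab_frame(i):
    # Placeholder: replace with real camera capture (e.g. via GStreamer).
    return f"frame-{i}"

def detect_faces(frame):
    # Placeholder: replace with TensorRT-accelerated detection/recognition.
    return (frame, "faces")

def capture_loop(n_frames):
    for i in range(n_frames):
        frames.put(grab_frame(i))   # blocks if the worker falls behind
    frames.put(None)                # sentinel: end of stream

def inference_loop():
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(detect_faces(frame))

t_cap = threading.Thread(target=capture_loop, args=(8,))
t_inf = threading.Thread(target=inference_loop)
t_cap.start(); t_inf.start()
t_cap.join(); t_inf.join()
```

The bounded queue is the key design choice: when inference falls behind, capture blocks (or, in a real pipeline, drops frames) instead of accumulating unbounded latency.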

Comparison with Raspberry Pi

While the Raspberry Pi is a great platform for prototyping, the Jetson Orin Nano offers significant performance advantages for computer vision and deep learning tasks. Here’s a rough estimate of the performance differences:

  • Face detection: Jetson Orin Nano (30 FPS) vs. Raspberry Pi (10-15 FPS)
  • Face recognition: Jetson Orin Nano (20-25 FPS) vs. Raspberry Pi (5-10 FPS)

Keep in mind that these estimates depend on the specific models, optimizations, and implementation details.

Additional Tips and Resources

  1. NVIDIA Developer Forums: Leverage the NVIDIA developer forums for Jetson Orin Nano-specific questions and discussions.
  2. JetPack SDK Documentation: Refer to the JetPack SDK documentation for detailed information on the NVIDIA Libargus Camera API, TensorRT, CUDA, and cuDNN.
  3. OpenCV: Use OpenCV for camera calibration, coordinate estimation, and pixel mapping.
  4. Face Recognition Models: Explore other face recognition models like FaceNet, VGGFace2, or Light-CNN, which may offer better performance or accuracy for your specific use case.

By following this guide and leveraging the Jetson Orin Nano’s capabilities, you should be able to achieve your goal of running a face recognition pipeline with real-time video and frame capture, face detection, face recognition, coordinate estimation, and camera calibration.


Hi,

We don't have a face recognition example due to privacy concerns,
but you can find other recognition samples in our DeepStream SDK.

Thanks.

It's good to hear that.
When I use the RPi 5 with the AI HAT, I use DeGirum public models from Hailo and run them directly in my project without any conversion process. But on the Jetson Orin Nano it is so hectic, and I don't know how to run YOLOv8n or the face recognition models: where to download them, how to convert them, and what the standard way of doing it is. Please explain this process from the beginning.
On the other hand, I am facing an issue with OpenCV and GStreamer: the camera does not open with OpenCV, but when I run the nvgstcapture-1.0 command it works properly.
Don't judge me; I am new here and don't know much about these cool devices and this environment.
Any help will be appreciated.
Thank you

Hi,

On Jetson, it's recommended to use DeepStream for camera capture and display,
since the OpenCV path involves extra memory copies and CPU-side processing, which may not be optimal.
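That said, a common reason `cv2.VideoCapture` fails to open a Jetson CSI camera (while `nvgstcapture-1.0` works) is that OpenCV's default backend cannot drive the IMX219 through the ISP; the usual workaround is to pass OpenCV an explicit `nvarguscamerasrc` GStreamer pipeline, which requires an OpenCV build with GStreamer support. A sketch of building that pipeline string (the parameter defaults are illustrative):

```python
def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30, flip=0):
    """GStreamer pipeline string for a Jetson CSI camera (e.g. IMX219).

    nvarguscamerasrc captures through the ISP into NVMM memory;
    nvvidconv copies/converts into CPU-accessible BGRx, and
    videoconvert produces the BGR layout OpenCV expects.
    """
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink drop=1"
    )

# Usage on the Jetson (requires OpenCV built with GStreamer support):
# cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
```

Note the `nvvidconv`/`videoconvert` stages are exactly the NVMM-to-CPU copy mentioned above, which is why DeepStream (keeping frames in GPU/NVMM memory end to end) is the more efficient path.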

Thanks.