Backend and image transfer protocol for Jetson Nano

I’m planning to work on a project using Jetson Nano and the YOLOv8 model for object detection. As I don’t have much experience with Jetson, I’m facing some issues and would appreciate any advice.

Here’s a brief overview of my system: images and videos will be sent to the Jetson Nano from another module for detection. The model will be loaded on the Jetson, but I want to be able to reload it via an API call whenever it’s retrained. I’m considering using a Python framework like Flask to build a backend server on the Jetson that handles all of these tasks, including logging and storing images.
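
To make this concrete, here’s a rough sketch of the Flask backend I have in mind; the endpoint names, the weights path, and the upload directory are placeholders I haven’t settled on yet:

```python
# Rough sketch of the planned Flask backend (paths and endpoint names are placeholders).
import logging
from pathlib import Path

from flask import Flask, request, jsonify
from ultralytics import YOLO

app = Flask(__name__)
logging.basicConfig(level=logging.INFO, filename="server.log")
log = logging.getLogger("jetson-backend")

UPLOAD_DIR = Path("uploads")        # where incoming images are stored
UPLOAD_DIR.mkdir(exist_ok=True)
MODEL_PATH = "weights/best.pt"      # placeholder path to the YOLOv8 weights
model = YOLO(MODEL_PATH)

@app.route("/detect", methods=["POST"])
def detect():
    """Receive an image, store it, run YOLOv8, and return the detections."""
    file = request.files["image"]
    path = UPLOAD_DIR / file.filename
    file.save(str(path))
    log.info("stored image %s", path)

    results = model(str(path))      # run inference on the saved file
    boxes = results[0].boxes
    return jsonify({
        "classes": boxes.cls.tolist(),
        "confidences": boxes.conf.tolist(),
        "boxes": boxes.xyxy.tolist(),
    })

@app.route("/reload-model", methods=["POST"])
def reload_model():
    """Reload the weights after the model has been retrained."""
    global model
    model = YOLO(MODEL_PATH)
    log.info("model reloaded from %s", MODEL_PATH)
    return jsonify({"status": "reloaded"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The idea is that the retraining pipeline would POST to /reload-model after publishing new weights, so the Jetson picks them up without a restart.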

Here are my questions:

  1. Which Python framework would be best practice for my use case?
  2. For transmitting images to the Jetson Nano, should I use ROS 2 or WebSocket? (The module sending the images is a robot running Android.) A rough sketch of the WebSocket option I’m considering is included after this list.
  3. Which library should I use for logging?
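
For question 2, this is the kind of minimal WebSocket receiver I have in mind on the Jetson side; the port, save directory, and frame naming are placeholders, and the Android robot would push each JPEG frame as a binary message:

```python
# Sketch of a WebSocket receiver on the Jetson: every binary message from the
# robot is stored as a JPEG frame. Port and directory names are placeholders.
import asyncio
from pathlib import Path

import websockets  # pip install websockets

SAVE_DIR = Path("incoming")
SAVE_DIR.mkdir(exist_ok=True)

async def receive_images(websocket, path=None):
    # "path" keeps compatibility with older websockets versions that pass it.
    frame_id = 0
    async for message in websocket:
        if isinstance(message, bytes):
            out = SAVE_DIR / f"frame_{frame_id:06d}.jpg"
            out.write_bytes(message)
            frame_id += 1

async def main():
    # Listen on all interfaces so the Android module can reach the Jetson.
    async with websockets.serve(receive_images, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```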

Hi,

For YOLOv8, it’s recommended to switch your platform to the Xavier or Orin series, since Ultralytics no longer supports Python 3.6 (the default Python 3 on the Jetson Nano’s JetPack 4 software stack).

1. For vision-based AI, it’s recommended to try our DeepStream SDK (a minimal pipeline sketch is included after this list).
If the Orin series (like Orin Nano) is used, we also provide Jetson Platform Services, which should be very helpful for your use case.
https://docs.nvidia.com/jetson/jps/moj-overview.html

2. If you use Jetson Platform Services, there is an IoT gateway that allows data to be shared between the Jetson and a phone.

3. For Jetson Platform Services, you can log in with AWS Cognito.
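
As a rough illustration of the DeepStream suggestion in 1., the sketch below drives a DeepStream inference pipeline from Python via GStreamer’s Gst.parse_launch. The input file and the nvinfer config file name are placeholders, and a real application would normally be built on the deepstream_python_apps samples instead.

```python
# Minimal sketch only: assumes DeepStream and its GStreamer plugins are installed,
# and that "sample_720p.h264" and "config_infer_primary.txt" (an nvinfer config
# pointing at your detection model) exist. Both file names are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_str = (
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)
pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error occurs, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```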

Below is a talk on Jetson Platform Services for your reference.

Thanks.
