Orin AGX Developer Kit - Getting started

Hello,

I’ve just bought an Orin AGX Developer Kit and want to get started with it. I was trying to take the course below: https://developer.nvidia.com/embedded/learn/jetson-ai-certification-programs#course_outline but noticed that it is for the Jetson Nano (the Docker container indicated for the classes is only available for an older JetPack version).

Should I emulate a Jetson Nano with the AGX to take the course? Any further recommendations for someone starting from scratch are highly appreciated.

Thank you in advance!

Hi @jvitordm, as you have found, the “Getting Started with AI on Jetson Nano” DLI course only has containers released for JetPack 4 and the Jetson Nano. What I would recommend is proceeding with the Hello AI World portion of the course, which has containers available for JetPack 5 and runs natively on AGX Orin. The emulation package for AGX Orin is for emulating other Orin devices (like Orin NX and Orin Nano), not the original Jetson Nano.
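In case a concrete starting point helps, here is a rough sketch of what the Hello AI World Python API looks like once the jetson-inference container (or a native build) is running on JetPack 5. The network name and camera path below are placeholder examples, not specific to any setup:

```python
# Minimal Hello AI World classification loop (sketch, untested here)
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")       # pretrained ImageNet classifier
camera = jetson.utils.videoSource("/dev/video0")   # placeholder camera path -- adjust for your device
display = jetson.utils.videoOutput("display://0")  # render window on the desktop

while display.IsStreaming():
    img = camera.Capture()                         # frame is captured directly into CUDA memory
    class_id, confidence = net.Classify(img)       # run classification on the GPU
    display.Render(img)
    display.SetStatus("{:s} ({:.1f}%)".format(net.GetClassDesc(class_id), confidence * 100))
```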

If you really wanted to, in theory you could download the Jupyter notebooks for that DLI course from a mirror on GitHub and either install Jupyter on your Orin or run them in the l4t-ml container (which comes with JupyterLab pre-installed). Running the DLI course like this is untested/unsupported, however, and you would likely need to install some additional pip3 packages into the container (like jetcam, IIRC) to get the notebooks to run.

Either way, good luck and wish you all the best getting started with your devkit!


Thank you for the prompt reply, @dusty_nv!

Hi @dusty_nv!

I was starting with your tutorial (Jetson AI Fundamentals - S3E1 - Hello AI World Setup - YouTube) and have a quick question: I’m using a Basler camera (Basler dart daA3840-45uc (CS-Mount) - Area Scan Camera), which I’m able to access from a Python file through GStreamer with the pipeline below:

"pylonsrc device-index=0 pfs-location=/home/daA3840-45uc_bayer.pfs ! video/x-bayer, width=3840, height=2160, format=rggb, framerate=12/1 ! bayer2rgb ! videoconvert ! appsink"

How can I access this camera when running the examples from your video in the terminal?

Thank you in advance!

Hi @jvitordm, you could write your own script that uses cv2.VideoCapture() with your pipeline, then use jetson.utils.cudaFromNumpy() to convert the frames to CUDA memory, and then process them with the imageNet/detectNet classes the same way.
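Roughly like this (untested sketch; I’ve added BGR caps before the appsink so OpenCV can read the frames, lowered the capture resolution per the note below, and the detection model is just an example, imageNet works the same way):

```python
# Sketch: feeding a pylonsrc GStreamer pipeline into detectNet via OpenCV (untested)
import cv2
import jetson.inference
import jetson.utils

# Your pipeline, lowered to 1080p and with BGR caps added before the appsink
pipeline = ("pylonsrc device-index=0 pfs-location=/home/daA3840-45uc_bayer.pfs ! "
            "video/x-bayer, width=1920, height=1080, format=rggb, framerate=12/1 ! "
            "bayer2rgb ! videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)  # example network

while True:
    ret, frame = cap.read()                       # BGR numpy array from the appsink
    if not ret:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # jetson-inference expects RGB channel order
    cuda_img = jetson.utils.cudaFromNumpy(rgb)    # copy the frame into CUDA memory
    detections = net.Detect(cuda_img)
    for d in detections:
        print(net.GetClassDesc(d.ClassID), d.Confidence)
```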

Also, if you can, I would recommend lowering your resolution a bit, because the models only run at around 224x224 or 300x300 resolution, so 4K input is typically overkill.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.