Setting up Hello AI World Containers for JetPack 5.1.3

Hello, I’m having some trouble running the correct container for the Hello AI World tutorial, as the current documentation only provides a guide up through JetPack 5.1.1. Could I get some guidance on how to find and run the correct container for this version of JetPack? Should I clone a different repository, or pull something directly through Docker? Apologies if this is a frequent question about firmware versions, but any help would be much appreciated.
Thanks,
Elliot

Hi,

Please use r35.4.1 for JetPack 5.1.3, since it is a minor release from JetPack 5.1.2.

https://hub.docker.com/r/dustynv/jetson-inference/tags
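For anyone finding this thread later, here is a minimal sketch of the two options, based on the tag recommended above (confirm the exact tag on the Docker Hub page, as it may change for newer releases):

```shell
# Option 1: pull the tag recommended above for JetPack 5.1.3
# (r35.4.1, carried over from JetPack 5.1.2) directly with Docker:
TAG="dustynv/jetson-inference:r35.4.1"
sudo docker pull "$TAG"

# Option 2: clone the jetson-inference repo and let its run script
# select and start the container matching the detected L4T version:
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh
```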

Thanks.

I believe it automatically pulled and ran that container, although I wasn’t sure it was working correctly, since I didn’t see the UI for choosing which models to download that appears in Dusty’s tutorial. Thank you for the confirmation.

I am having a problem seeing my camera output, however. When I run docker/run.sh, I get the message “[OpenGL] failed to open X11 server connection.”, and although the camera is connected, I can’t see the live video feed. I saw that other users with this problem edited some of the Python files directly to resolve it, but I believe they were building from source. What would be the best option for dealing with this? Thanks very much.

edit: I got it fixed. I had to run the command on the Jetson with a monitor attached, rather than over SSH.
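For others who hit the same error over SSH, a sketch of a possible workaround, assuming a desktop session is already running on the Jetson’s local display (the :0 display number is an assumption and may differ on your setup):

```shell
# Untested sketch: when logged in over SSH, point the app at the
# Jetson's local display instead of the SSH session's forwarded one,
# so the GL window opens on the attached monitor:
export DISPLAY=:0   # assumes the local desktop is on display :0
docker/run.sh
```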

Hi @eclark.pgh, glad you got it working! Yes, the GL/GLX window won’t work over SSH with X11 forwarding, because it uses CUDA interoperability (and even if it did work, it would be very slow) - sorry about that. For remote viewing I recommend WebRTC, RTP, or RTSP. Also, the model-downloader tool has been removed in favor of an on-demand downloader that automatically pulls each model the first time it is loaded.
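For the remote-viewing route, a hedged sketch using the video-viewer tool that ships with jetson-inference, run inside the container (the camera device, port, and stream name here are placeholders; check the project’s streaming docs for the exact URI syntax on your version):

```shell
# Stream the camera over WebRTC instead of opening a local GL window,
# then browse to http://<jetson-ip>:8554 from the remote machine:
video-viewer /dev/video0 webrtc://@:8554/output

# Or send RTP directly to a specific client machine:
# video-viewer /dev/video0 rtp://<remote-ip>:1234
```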

