reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.7.2
cannot find l4t-ml docker container for L4T R32.7.2
please upgrade your version of JetPack.
Hi,
Could you please let us know which Jetson platform you are using?
Thank you.
I am using Jetson Nano 2GB Developer Kit.
R32 (release), REVISION: 7.2,
GCID: 30192233,
BOARD: t210ref
EABI: aarch64
Hi,
We are moving this post to the Jetson Nano forum to get better help.
Thank you.
Hi @user148523, for L4T R32.7.2, you can use the same container as for R32.7.1 - nvcr.io/nvidia/l4t-ml:r32.7.1-py3
The R32.7.1 containers are compatible with R32.7.2, as there were only minor changes in the L4T BSP between these releases.
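The version-to-tag mapping described above can be sketched roughly like this (a hypothetical illustration, not NVIDIA's actual script; the tag names for releases other than r32.7.1 are assumptions):

```shell
# Hypothetical sketch: map a detected L4T version to a known l4t-ml
# container tag, treating all R32.7.x releases as compatible with the
# r32.7.1 container (since the BSP changes between them were minor).
L4T_VERSION="R32.7.2"

case "$L4T_VERSION" in
  R32.7.*) TAG="r32.7.1-py3" ;;   # r32.7.1 container covers all R32.7.x
  R32.6.*) TAG="r32.6.1-py3" ;;   # assumed tag, for illustration only
  *)       TAG="" ;;              # unknown release: no container found
esac

echo "nvcr.io/nvidia/l4t-ml:$TAG"
```

With `L4T_VERSION="R32.7.2"` this resolves to `nvcr.io/nvidia/l4t-ml:r32.7.1-py3`, which is why the r32.7.1 image is the right one to pull here.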
Thank you so much for the response. I used l4t-ml:r32.7.1-py3 but am still getting the error:
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.7.2
cannot find l4t-ml docker container for L4T R32.7.2
please upgrade your version of JetPack.
How are you trying to start the container? Are you trying to use the jetson-inference project?
If not, you can start the l4t-ml container directly, as shown on this page:
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.7.1-py3
I have tried both ways, but neither works. I am following this link for my JupyterLab setup.
In JupyterLab I am trying to implement this repo:
https://github.com/dusty-nv/pytorch-timeseries and every time I run docker/run.sh, it gives the same error:
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.7.2
cannot find l4t-ml docker container for L4T R32.7.2
Oh I see, I hadn’t updated the docker/run.sh scripts in that repo in a while - sorry about that. Instead, just try running the container manually like this:
sudo docker run --runtime nvidia -it --rm --network host --volume $PWD:/pytorch-timeseries nvcr.io/nvidia/l4t-ml:r32.7.1-py3
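For context, the failing version check in a run.sh-style script can be sketched like this (a hypothetical reconstruction, not the actual docker/run.sh; the parsing below only assumes the /etc/nv_tegra_release line format pasted earlier in the thread). A script like this would build "R32.7.2" and then fail to find a matching tag if its lookup table stops at R32.7.1:

```shell
# Sketch of parsing /etc/nv_tegra_release to build an L4T version string.
# Uses a sample file in /tmp so it can run on any machine; on a Jetson,
# the script would read /etc/nv_tegra_release directly.
cat > /tmp/nv_tegra_release <<'EOF'
# R32 (release), REVISION: 7.2, GCID: 30192233, BOARD: t210ref, EABI: aarch64
EOF

# Extract the release number ("32") and revision ("7.2") from the header line.
L4T_RELEASE=$(sed -n 's/^# R\([0-9]*\).*/\1/p' /tmp/nv_tegra_release)
L4T_REVISION=$(sed -n 's/.*REVISION: \([0-9.]*\),.*/\1/p' /tmp/nv_tegra_release)
L4T_VERSION="R${L4T_RELEASE}.${L4T_REVISION}"

echo "L4T BSP Version: L4T $L4T_VERSION"
```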
Thanks a lot… it worked!
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.