Error when "import torch" is executed after python3

Please let me know how to resolve the error below, which occurs when I execute:
./run.sh dustynv/l4t-pytorch:r35.4.1

Is it because the NVIDIA Container runtime is not installed?
If yes, please provide me the correct steps to install the NVIDIA container runtime. Thanks.

Ah yes, you are missing the NVIDIA container runtime (normally SDK Manager would have installed this for you)

Can you try running sudo apt-get install nvidia-container, then reboot your system. Then if you run sudo docker info | grep runtime you should see nvidia listed:

$ sudo docker info | grep nvidia
 Runtimes: nvidia runc io.containerd.runc.v2 io.containerd.runtime.v1.linux

Thanks a lot for the updates. I will try this out today and hope all works fine without any errors.

I have a few more queries in the meantime:

  1. I have already installed the CUDA toolkit, but it is not able to run the sample programs and gives an insufficient driver or runtime error.
    Will this cause any issue for this container package installation?

Because even if I uninstall it with the purge command as per the documentation, I can still see some CUDA folders inside the /usr/local folder.

  2. How much time does it usually take to install the container?

I was able to run the command, but during the process I got a "no space left on device" error, as shown below:

root@linux:/home/trident/Downloads# ./run.sh dustynv/l4t-pytorch:r35.4.1
localuser:root being added to access control list
xauth: file /tmp/.docker.xauth does not exist

    sudo docker run --runtime nvidia -it --rm --network host --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /home/trident/Downloads/data:/data --device /dev/snd --device /dev/bus/usb -e DISPLAY=:0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --device /dev/video0 dustynv/l4t-pytorch:r35.4.1
    Unable to find image ‘dustynv/l4t-pytorch:r35.4.1’ locally
    r35.4.1: Pulling from dustynv/l4t-pytorch
    2378679266ac: Pulling fs layer
    fbe64e031ca2: Pulling fs layer
    bc84636d073b: Pulling fs layer
    4c8a78053e02: Pulling fs layer
    c98a245e07e2: Pulling fs layer
    54e58b7fa9fd: Pulling fs layer
    bd70e595c704: Pulling fs layer
    70a7c3288501: Pulling fs layer
    0cdb0dfb035f: Pulling fs layer
    a4c3570418d1: Pulling fs layer
    019c19b28a48: Pulling fs layer
    ea8790418f30: Pulling fs layer
    a9dca93572f1: Pulling fs layer
    1a4f4ffe916b: Pulling fs layer
    b1285e2f3ec3: Pulling fs layer
    2bd4b945552e: Pulling fs layer
    a5c388f5daf7: Pulling fs layer
    d9fa505958b2: Pulling fs layer
    5f5b8fb8b6a8: Pulling fs layer
    1b798dc9c4ba: Pulling fs layer
    b48c074786ef: Pulling fs layer
    30eefdd3bb15: Pulling fs layer
    3743e22deed9: Pulling fs layer
    1a4f4ffe916b: Downloading [=============> ] 660.1MB/2.378GB
    3d741bf72ce3: Pulling fs layer
    86e08f6b76c7: Pulling fs layer
    d9fa505958b2: Download complete
    8c4636dbec74: Download complete
    081f2dd0b262: Download complete
    905014c674e1: Download complete
    81ab418a92cd: Download complete
    f318c5cd7844: Download complete
    945789d2c6d1: Download complete
    f50a3a918e35: Download complete
    b04ffda492a3: Download complete
    9c04ed74a3a6: Download complete
    068c434c811d: Download complete
    f97fc1845290: Download complete
    42a84e476949: Download complete
    9fecda837dbb: Download complete
    6cfc1607869c: Download complete
    b04af95af949: Download complete
    275ba0fa9861: Download complete
    fe635d687758: Download complete
    256d9521fb01: Download complete
    2e7b0f14af93: Download complete
    5daa6c19fdeb: Download complete
    b87b67b6d270: Download complete
    6d898cb740b3: Download complete
    3d2817cd70c7: Download complete
    f1a9b38efebe: Download complete
    6b30ca875c4d: Download complete
    c33a213e479a: Download complete
    767750d8b0f8: Download complete
    68a9d1c655f5: Download complete
    7c930cf15d0d: Download complete
    bf41de624434: Download complete
    e6a03f1fd8bd: Download complete
    6e4e21fbeca3: Download complete
    e3b4f10efdfa: Download complete
    docker: failed to register layer: write /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcusolver.so.11.2.0.297: no space left on device.
    See ‘docker run --help’.

Let me know how to proceed further.

I checked the disk space and I observe that there is still 9.3 GB free on my internal eMMC memory. Please see the disk analyser screenshot below:

@dusty_nv

Actually, this time I mounted a new external SD card which has 64 GB of free space and ran the command "./run.sh dustynv/l4t-pytorch:r35.4.1" inside that folder, but I am still getting the "no space left…" error as shown below:

Please let me know how to resolve this issue.

My understanding is that it is looking for space in the /usr/local folder. Can we expand the space for the "/" partition, as we have 33 GB of additional space left on the eMMC memory, which has a 64 GB capacity?
Of that, 30 GB is mounted as /dev/mmcblk0p1.

In continuation …

I somehow cleared about 14 GB on /dev/mmcblk0p1 and installed the PyTorch container with the command "./run.sh dustynv/l4t-pytorch:r35.4.1", and later observed some peculiar things, as shown below:

Based on this, I have a few queries:

  1. Was my installation successful?

  2. I could see that it had installed CUDA 11.4 in the /usr/local folder, PyTorch etc. in the /usr/local/bin folder, but suddenly they all disappeared after some time. A memory-shortage error message was also popping up at the same time.
    Was there any memory swapping or realignment happening on the root file system which caused my data to be lost?

  3. I tried building one sample CUDA deviceQuery program; it compiled successfully, but when I executed it, it threw an insufficient driver error message. Any idea? CUDA 11.4 is compatible with my JetPack 5.1.2 / L4T 35.4.1, so it should work fine, right?

  4. Please provide a solution for all of this, as my hardware is boxed up and I am not able to reflash the unit again.

  5. What is the total size of the entire PyTorch package in your container packages?
    Is it in compressed format, and will it need more space to unzip it later?

  6. What is the total space required for this complete PyTorch package?

  7. Also, as my root directory doesn't have much space, what can be done now?

Note that I have another separate 64 GB SD card and a 240 GB NVMe M.2 memory card available. Can you modify your run.sh script so that all the PyTorch/CUDA installation happens there? Is it possible?

Please help me; we need to finish this CUDA installation very soon. Thanks.

Yes, it downloaded and pulled the pytorch container successfully. I'm not sure what strange behavior you mean from your screenshot; there is no "overlay" folder in the root directory of the container.

Are you sure you were still working inside the container's terminal? Even if you were to exit and restart the container, they would still be there, because the container filesystem gets reset to its original state every time it's restarted.

It should be, yes, but then again you had already installed a different version of CUDA before, so it's unclear whether it's still functioning on your system or not. Will the deviceQuery sample run outside of the container for you? When in doubt, I would just reflash your device and start clean. I understand your system doesn't easily permit this, but flashing is something you should figure out how to do on your system, because it will probably come up again in the future at some point.

The actual PyTorch itself shows that it's 593MB when installed inside the container:

du -s -h /usr/local/lib/python3.8/dist-packages/torch
593M    /usr/local/lib/python3.8/dist-packages/torch

See here to change your docker data root directory (where it stores the container images) to NVME: https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#relocating-docker-data-root
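As a rough sketch of what that doc walks through (the /mnt/nvme/docker path here is just an example; follow the linked steps for the details), you stop the daemon, add a "data-root" entry to /etc/docker/daemon.json, copy the existing data over, and restart:

sudo systemctl stop docker
# add "data-root": "/mnt/nvme/docker" to /etc/docker/daemon.json
sudo rsync -a /var/lib/docker/ /mnt/nvme/docker/
sudo systemctl start docker
sudo docker info | grep Root   # should now report the new location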

If the PyTorch container was installed successfully, then why can I still see the status of some of the packages as not finished in the terminal window?

Then how come I am seeing an "overlay" directory on my target after installing the l4t-pytorch container image package?

Also, I see a "data" directory getting created in my "/" directory, but I don't see any data inside this "data" folder. Please explain in detail, as I am new to this container concept.

As I am not aware of the container concept, I did not understand what "container terminal" means here. How do I restart a container terminal once again? Please explain.

Do users who want to work on CUDA application programming have to enter this container terminal each time they want to work on CUDA?

I had uninstalled the earlier version, CUDA 12.2, and also manually deleted the folders and cleaned up, so I expect it to work fine for CUDA 11.4 now.
Or should I install a compatible upgraded version above 11.4, for example 11.8, to make this work?

Maybe I was working inside the container terminal when I compiled and executed the program, and later, when I exited the container terminal, I could not see any CUDA packages or any other installed tools/software under the /usr/local/bin directory, which I remember seeing after the container image installation.

You mean there might be issues with different library versions in CUDA programming by the end user, and they may also need to reflash at some point in time, so it's better to have a reflash option?

I will probably discuss this with my hardware team and put this request to them.

What I meant was the total image I downloaded by running the command:
./run.sh dustynv/l4t-pytorch:r35.4.1 — it says 5.5 GB on the website, but what I observed was: there was around 14 GB of free space before I ran this, and at the end of the command I could see only 1 GB of free space. So my understanding is that it takes around 13-14 GB of space to download the image and extract it on the target.
Please explain.

It looks like a lot of steps. Anyhow, I will try that and will ask in case of any doubts.

What do you mean by package status not being finished?

The docker daemon stores the container image in a different format (not human readable/browsable), and the docker data dir has several subdirectories that look like this:

sudo ls -l data
total 452
drwx--x--x    4 root root   4096 Jul 22  2023 buildkit
drwx--x---    7 root root  16384 Feb 15 10:00 containers
drwx------    3 root root   4096 Jul 22  2023 image
drwxr-x---    3 root root   4096 Jul 22  2023 network
drwx--x--- 3537 root root 409600 Feb 15 10:00 overlay2
drwx------    4 root root   4096 Jul 22  2023 plugins
drwx------    2 root root   4096 Jan  9 18:10 runtimes
drwx------    2 root root   4096 Jul 22  2023 swarm
drwx------    2 root root   4096 Jul 22  2023 trust
drwx-----x    2 root root   4096 Jan  9 18:10 volumes

Located within them are various layers of the container images (which may or may not be shared with other containers). These folders shouldn’t be used/modified directly by you. They are just what provide the contents inside the containers.

After you start the container with my run.sh script (or docker run) command, it will give you the root@host:/# prompt, and that terminal is running inside the container and all the files are in the container image. It is a different filesystem than you would see outside the container.

If you started the container in persistent/headless mode, you can re-attach a terminal to it using the docker attach command. By default though, my run.sh script will have the container shut down after that initial terminal is exited (this is the -it --rm flag that it provides to the docker run command)
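For example, a sketch of the headless workflow (the my_app container name is just an example):

sudo docker run --runtime nvidia -dit --name my_app dustynv/l4t-pytorch:r35.4.1   # start detached, keeping a tty open
sudo docker attach my_app   # re-attach a terminal to it
# detach again without stopping the container with Ctrl+P then Ctrl+Q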

You should test some samples from CUDA Toolkit (like deviceQuery, bandwidthTest, etc.) to make sure CUDA 11.4 is still working. Just stay on the standard CUDA 11.4; that is also what my containers for JetPack 5 are based on. Different versions of CUDA and Python would need PyTorch rebuilt for those specific versions.
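For example, assuming the CUDA samples were installed along with the toolkit (the path may differ on your install):

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery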

It could be anything that comes up hypothetically, like what if the user inadvertently installs some packages or upgrades on the native OS that “bricks” it from booting, or it could be anything. I would recommend exposing the flashing USB port and recovery button in some way, even if it is normally hidden from the user somehow.

DockerHub reports the compressed size (as noted here) whereas docker images reports the uncompressed size. For me, it says that dustynv/l4t-pytorch:r35.4.1 is 11.4GB (uncompressed)
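You can check this on your own device with:

sudo docker images dustynv/l4t-pytorch

The SIZE column it prints is the uncompressed size.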

Note that containers that share the same base image don’t make extra copies of all those base layers though. That’s why the docker data root and overlay folder don’t look like normal files. Each container layer has a unique SHA checksum and the container image is assimilated from all those steps.

Since most of my containers share the same CUDA+cuDNN+TensorRT base, much of the data is shared, so you may notice that if you download other containers it doesn’t consume as much disk space (because it is sharing layers that you already downloaded)

Thanks a lot for the detailed info, though it may take some time for me to really get used to this and understand it better. This looks a bit complicated to me at this point in time, and I need to provide a demo to the customer about the same; I am not sure if they will get confused by so much new stuff (in case they don't know about these containers/Docker).

Keeping this container installation method aside, I think the other two methods I have are:

SDK Manager installation method + install PyTorch separately

Or

Installation of the individual CUDA toolkit / PyTorch pip wheel / OpenCV etc.

I am in a dilemma… not sure what to do or how to go about this…

If you intend for this to go into production and get deployed into the field (which would seem the case given the design of your system enclosure), then yes I would recommend the container way, because once you have your application’s container image built it’s easy to redistribute across other Jetson units (and also push/pull updates to your application containers in the field). Also you may have multiple containerized applications (or different versions of those applications) and this will help to keep all their dependencies straight in the event of conflicts.

But yes, you could also just make a script that runs the commands to install CUDA/PyTorch/etc. on each Jetson individually (without using a container), but IMO over time it is more error-prone to changes that may occur in Python packages on pip/PyPI outside of your control, just for example (whereas once the container image is built and tested, it's all baked in)

In the below image you can notice that some of the items are still in the Downloading [====>] and Extracting [=============>] stage

I could not see any data inside the Docker-created "data" folder even when logged in as the "root" user, so how are you able to see the folder details using the "sudo ls -l data" command?
Should we prepend "sudo" even if we are logged in as "root" to see the hidden contents inside the data folder? Please clarify.

Ok. So these container images inside the "data" folder are not visible to anyone, right? Unless and until we use the "sudo ls -l data" command to explicitly see the hidden container images?

OK. I thought the "./run.sh" or "docker run" commands were used only once, while downloading/installing the container images, which would then work normally straight away like any other driver installation.

Any idea where to keep this "run.sh" file? In which folder should we keep it so that no user deletes it by mistake… please suggest.

Can you please explain what headless/persistent mode means here? Can you explain with the help of the "run.sh", "docker attach …" and "docker run …" commands, so that it will be easy for me to try out?

Ok. One more thing I observed was that I was not able to see the "trident" directory, which was present inside my "home" directory, when I exited out of the container terminal. Any idea why? I found it missing.

I will try, but I keep getting the same insufficient driver error messages. Maybe I should uninstall some NVIDIA drivers that may still be residing from the CUDA 12.2 I had installed, using the commands shown below.

But I am afraid they may remove some other drivers too, which might affect the system, and it might end up not booting at all.

I just want to know the difference between the PyTorch pip wheel and the PyTorch containers.

Does the PyTorch container contain everything which we are currently trying to install?
And will the PyTorch wheel install only the Python 3 package? Please clarify.

I agree with the flashing USB port, but regarding the recovery button, you had provided the command "sudo reboot --force forced-recovery" with which we can simulate the recovery button. Won't that command be sufficient to force the unit into recovery boot mode?

So it means we need some 12-15 GB of free space (keeping some buffer) for installing this container image. So can I run the docker rmi command to uninstall previously installed container images? Should we run it in a normal terminal or in a container terminal after running "run.sh"?

Could you explain what this overlay folder is all about? What does it contain inside…

Understood the logic, but it may take some more time for me to fully understand how this works.

Can I know the difference between the Jetson containers in the NGC catalog and the ones you are sharing on your Docker Hub?

Is it still running, or frozen? Did it say ‘pull complete’ or something like that at the end? It could just be an aberration of the curses-based terminal program they use. Or just try re-pulling the container again.

This is not run from inside the container. It is run on your device outside the container, looking at your docker data root. If you don't see any subfolders, make sure the docker daemon is indeed using your desired data-root by checking sudo docker info | grep Root
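On a default install it should report the standard location, something like:

$ sudo docker info | grep Root
 Docker Root Dir: /var/lib/docker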

Correct, you would need sudo privileges just to read those files, and even then they are in a strange format/layout specific to docker

The first time you run a new container image, it will pull it for you automatically (or you can do this manually with docker pull <image> when you want to update it). But then you actually need to start the container instance, which is what the docker run command does (run.sh is just a helper script that invokes docker run with common options, detected devices, and some mounted directories for caching the models)

You can run a container in persistent mode so that it restarts after reboots (in which case you would only really need to start it once), or when you exit it you need to start it again (which is the default behavior that run.sh invokes)

Given that you were tinkering around with different CUDA versions, I think you just need to re-flash your device fresh, then use SDK Manager to install CUDA/ect, and test CUDA before proceeding further.

The l4t-pytorch container page lists the major components that are included (these include CUDA, cuDNN, TensorRT, PyTorch, torchvision, torchaudio, and OpenCV). PyTorch is installed in the containers using the same wheels that you can install individually outside of container (these wheels are in turn built using another ‘builder’ container)

Yes, you can try that as long as the system is already in a bootable state. It's been a while since I used it though, and I'm not sure whether it still works or not.

The ones on dockerhub get updated more frequently and are automatically pushed from here whenever I make changes. The ones on NGC have not been updated past L4T R35.2.1.

Thanks a lot for the detailed updates. It is slowly helping me understand things better.

Kindly reply to all the queries with as much information as possible from your side, so that I can get clarity about the unknown areas and learn things soon.

At the end it said "downloaded newer image for … ". Maybe the odd display of Extracting ===> / Downloading =====> is just a refresh problem of the terminal window. Please confirm?

Ok. What if I observe that my docker daemon is not using my data-root; how do I make it use it?
Should I execute "docker run" to make it do so? Please confirm.

In case I download/install 2-3 different versions of container images on my Jetson, will I have different data-root folders for each, or will there be a single common "data" root for all of them?

ok fine.

Sorry if there was any confusion. What I meant was that I manually downloaded the "run.sh" script from your Docker Hub website and then executed "./run.sh dustynv/l4t-pytorch:r35.4.1… ".
My question is: in which folder outside the container file system should we keep the "run.sh" file? Which is the better folder path, for example /home or /etc or /usr or something like this…

Also, if I repeat the same "docker pull" command twice for the same image, will the image get overwritten automatically on my Jetson, or will it create another copy of the same image?

ok.understood.

I still have some doubt about the parameters passed here, for example the "--network host" parameter. Here "host" literally means the file system outside my container file system within this target Jetson, right?
And nothing to do with the outside host PC which we use for external flashing? Please clarify.

If the answer is yes, then what is the need to mount some of the folders from that file system into the container file system? Please elaborate.

Please let me know the command to run a container in persistent mode, so that it automatically runs each time the system reboots.

OK. I will try that. After installing whatever stuff related to CUDA from SDK Manager, I think I need not install your PyTorch container, which again has the same stuff inside that SDK Manager installed. Please clarify?
Is there any specific package/component related to CUDA that SDK Manager is missing, which I need to install? Please confirm again.

Also, are you aware of commands like "jtop" etc. which can show me the stuff installed on my Jetson and help in complete diagnosis?

Ok, understood. Thank you.

Ok. I will try this and check. However, to configure one USB 3.0 port as the flashing port, the hardware team needs to remove the top side of the packed box.

Ok. You mean you are pushing the updates from your side, and users like us are pulling them from your Docker Hub?

Just a few more queries:

  1. Now that I currently have memory constraints on my internal eMMC, should I uninstall the downloaded docker image using the "docker rmi" command and then install it freshly again on the NVMe drive?

Or can I move the "data" root to the NVMe drive based on the link you shared earlier in this thread?

  2. In case some other users install other versions of CUDA or Python etc., will it affect your container image's functionality, or will it remain isolated and stable regardless of whichever drivers/tools the user installs? Please confirm.

It says “downloaded newer image” at the end, so yea it was a success - ignore those weird aberrations that happen from the curses-based terminal output that it does.

Follow this: https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#relocating-docker-data-root

The different container versions should have different tags, and they can all exist in the same data-root. Docker can only work off one data-root at a time (without changing its config and restarting the daemon). If you want to update a container and it has the same tag and you wanna save the old one, re-tag the old one first with the docker tag command.
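For example, a sketch of that re-tagging (the -backup tag name is arbitrary):

sudo docker tag dustynv/l4t-pytorch:r35.4.1 dustynv/l4t-pytorch:r35.4.1-backup
sudo docker pull dustynv/l4t-pytorch:r35.4.1   # updated layers replace the old tag; the backup tag keeps the old image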

You can clone the jetson-containers repo anywhere on your device, but if you have NVME drive, I would recommend putting it on NVME. Because run.sh automatically mounts jetson-containers/data into the containers (under /data inside the containers), and this is where all the models get cached to (so the container isn’t constantly re-downloading them every time you restart container). So that jetson-containers/data directory can get quite large depending on how many models you download, and you would prefer it to be on fast storage like NVME.

You should also clone the entire jetson-containers repo and not just run.sh, because it relies on that jetson-containers/data directory (unless you are literally just using PyTorch and not using any models from jetson-containers). And actually if that’s the case, you don’t need run.sh at all - you can just use the docker run command that it generates (which it prints out before it runs it for you)

docker pull is smart so it will not download extra copies of the container unnecessarily, unless some layers on dockerhub have changed (then it will update those parts of the container)

It means that the container has access to all of the network ports on your Jetson’s native network adapters. In production, you would typically not use this and only open up the certain ports that you needed. Please begin to consult the Docker documentation because there are a lot of options available for everything.
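For example (8888 is just an arbitrary port here):

sudo docker run --runtime nvidia -it --rm --network host dustynv/l4t-pytorch:r35.4.1   # shares all of the Jetson's network interfaces
sudo docker run --runtime nvidia -it --rm -p 8888:8888 dustynv/l4t-pytorch:r35.4.1    # publishes only the port you choose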

See the docker run --restart flag. You can pass any of these options to run.sh too before the container name. Basically run.sh takes the same syntax as docker run (it’s just a wrapper script that populates some default options to docker run)
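A sketch with plain docker run (the my_app name is just an example; note that --restart can't be combined with the --rm flag that run.sh adds by default):

sudo docker run --runtime nvidia -dit --restart unless-stopped --name my_app dustynv/l4t-pytorch:r35.4.1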

That’s true, if you had SDK Manager install all the JetPack components for you, you wouldn’t need container unless you wanted to use them. SDK Manager can install CUDA, cuDNN, TensorRT, OpenCV, VPI, ect but not PyTorch or torchvision.

Yes, you can just move the folder, or docker rmi them before and re-download them after you make your new data-root (it’s up to you)

It will remain isolated unless another user changes the lower-level CUDA driver on the device (for example, if you have a container needing CUDA 12, but CUDA driver on the device has been downgraded to CUDA 11 thereby not meeting the minimum driver version needed for the container based on CUDA 12)

ok. Thanks for confirming.

ok

Ok.
Could you please point me to webpage links for docker command usage, with some examples, so I can understand all the docker commands with their different parameters?

Please explain a bit more about what these models mean here?

We analysed in our earlier posts that the PyTorch container image is about 11.4 GB. What would be the total size of the entire jetson-containers? And what other additional software/drivers/tools do they include?

You mean this "docker run" command is user-interactive and asks the user to provide input as to what he wants to install?
Our customer was saying he wanted GStreamer also to be installed as part of the complete CUDA-related package. Does your PyTorch or jetson-containers image have GStreamer already built in?

Ok. I just want to know: should the CUDA developers who use all these tools installed by the PyTorch container or jetson-containers develop their CUDA applications (writing source code, compiling, etc.) within the docker terminal all the time?

Or can they work on the outer file system outside the Docker terminal?
How does this work? Please explain.

Sure. Please point me to the docker documentation link. Thanks.

Ok. Understood.

Ok. Again, I have to be careful here to install the correct versions of PyTorch and torchvision so that they match CUDA version 11.4 for my JetPack 5.1.2.

ok.

ok fine.

Below is my plan moving forward; please let me know your thoughts or suggestions on it:

  1. I will get a hardware setup which has the recovery mode and flashing-through-USB-3.0-port option. I shall use SDK Manager and try to install all the JetPack components except flashing, and later install PyTorch, torchvision, and GStreamer separately and see if things work fine.

If not, then plan B is to:

  2. Download jetson-containers from your GitHub link and use that.
    Do I need to uninstall all the JetPack components I installed through the SDK Manager method before installing jetson-containers from your GitHub?
    Or can I have both at the same time, since a Docker container is isolated and has a separate, independent file system that is not affected by the other file system? Please explain.

If plan B fails as above, then I have the last and final option, Plan C, as below:

  3. Here I try to download CUDA toolkit 11.4 after freshly flashing the unit, and later try to install PyTorch, torchvision, OpenCV, and GStreamer separately.
    Could you please help by providing a script with all the commands in sequence, which installs all this CUDA-related stuff one by one for my JetPack 5.1.2, just in case Plan A and B fail (which I hope they do not, and Plan C won't happen!!). Thanks.

Yes, here are the options for docker run - docker run | Docker Docs

In the readme for each of the containers, it includes an example docker run command to start the container (for example, PyTorch RUN CONTAINER section)

Other projects like Jetson AI Lab and dusty-nv/jetson-inference use jetson-containers, and they use a lot of large ML models.

The jetson-containers repo itself is 46MB, so it only takes that much to use run.sh. The containers it downloads or builds get stored in the docker data-root, not the jetson-containers directory.

If you pass the -it option to docker run, then once the container starts it will give you an interactive bash terminal that is running inside the container. At that point you can run linux commands.
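For example, a quick check you can run at that prompt (relevant to the original import torch issue):

sudo docker run --runtime nvidia -it --rm dustynv/l4t-pytorch:r35.4.1
# then at the root@...:/# prompt inside the container:
python3 -c "import torch; print(torch.__version__)"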

You mount a directory on your host device into the container, then just use your desired code editing tools in that mounted location. Meanwhile, in the container's interactive shell, you can run the commands to build the code, etc. (or you can do that part in a Dockerfile to automate it)
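For example, a sketch of that workflow (/home/trident/my_project and my_app.cu are hypothetical names):

sudo docker run --runtime nvidia -it --rm -v /home/trident/my_project:/my_project dustynv/l4t-pytorch:r35.4.1
# inside the container:
cd /my_project
nvcc -o my_app my_app.cu   # edits made outside the container show up here, and build outputs land in the mounted folder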

You can use any of the PyTorch wheels for JetPack 5.1 listed here: https://elinux.org/Jetson_Zoo#PyTorch_.28Caffe2.29
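The exact wheel filename depends on the JetPack/PyTorch version you pick from that page, but the install itself is just a pip command, along the lines of:

# after downloading the .whl for your JetPack version:
python3 -m pip install torch-*.whl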

On JetPack 5, yes the pytorch container has gstreamer. On JetPack 6 you would just build a container that had pytorch+gstreamer, like this: ./build.sh --name=my_container pytorch gstreamer (this combines different packages from jetson-containers)

No, you don’t need to uninstall any JetPack components to use the containers. The nice thing about them is like you said, their filesystems are independent and this also helps to keep your device’s native environment clean and not constantly getting messed up.

OK. Thanks for the info.

ok fine.

Ok. After downloading the entire 46 MB jetson-containers repo, all the container images it downloads will still take some space, even though they get stored inside the docker data-root. My worry is whether it will take more memory space, as the PyTorch image took 11 GB. The others may take some more space, but not a huge amount, as the base image is already downloaded with the PyTorch container. Please confirm.

Ok. I will get a more thorough understanding only after getting hands-on with all these commands.

It sounds a bit odd to me here regarding mounting host PC folders mapped onto the target container's mounted folders.
Basically, once we deploy the software and ship the unit to the customer, they would be working directly on the target, I assume.
I did not understand why the host PC has to be connected and its folders have to be mounted onto the target's container folders?
If at all they are, in that case, will whatever changes we make inside the container terminal in those folders be automatically reflected on the host PC?
Is that how development work happens? Please explain, in case. Thanks.

ok.

Ok. So we can build our own images as per our requirements in case required, right?
But we need to do some system settings before we do a container build on our target, right, as per the website documentation?

Ok. But I doubt I will get enough space to install/deploy both.
Can we uninstall the JetPack components installed via SDK Manager from the host? Just in case.

That is generally the case for containers that share the same base image, however you may have containers that don’t share the same base image. If you are trying to budget what disk space you need to specify for your system, I would err on the side of caution and go bigger than you think you need for future-proofing. 2TB NVME drives are only ~$100 these days, and then you wouldn’t need to worry…

Ahh, when I said ‘host device’, I meant your Jetson’s native filesystem outside of docker - not your host PC. The system will be standalone after you flash it from your PC, then no PC needed.

Yes, and those System Setup steps are mostly just a collection of best practices for building large containers onboard your device (for example, changing the docker data-root to NVME so you don’t fill up your smaller eMMC or SD card)

Yes, they are just apt packages (for example, cuda-toolkit-11-4) and you can uninstall them via apt.
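For example (assuming that package name on your system):

sudo apt-get purge cuda-toolkit-11-4
sudo apt-get autoremove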