JetPack 4.5 nvdli-data course

Is there a new Docker container for the new image? The 4.4 container doesn't work on 4.5 anymore. When will the DLI course be updated for the new SD image 4.5? I cannot find the SD image code. This sure is a difficult system to master. I haven't been able to run ROS and many other Docker containers. This is surely the most confusing system to install, when it all could be done from one course and one container.

Loving the blank pages when you click on Contact Us. lol

Hi @lifeforce.knowledge, the JetPack 4.5 (L4T R32.5.0) containers should be up in the next few days, sorry about that! Once the JetPack 4.5 versions of the containers come out, you should be able to run them again.


Thank you Dusty, I will be patient and do my best to help the situation. I am honored by your reply. I know you very well; I have been watching your videos over and over again, so you are kind of in my daily programming. Lol, nice that you personally replied. I do have very innovative ideas related to overlay layering to create an easy new interface using inference geometry and universal sign icons, automatic screen analyzers, and a knowledge-based universe… I also built an open-source giant robotic arm. I really need to be able to launch and run a total environment. If you can assist, we could help automate for the people to truly create abundance; everything starts with the will to do the most ideal future. Aka, can't help being a friend… peace

It would be amazingly helpful if the DLI course (nvdli-data) included inference and ROS examples, with subfiles for tweaking, GPU web acceleration, and direct code examples. Ah bro, that would make the Jetson Nano truly amazing and accessible. Jupyter is an easy way to access it remotely and make that 2G worthwhile… thank you… one day my dreams will come true. Thank you for your great videos.

Hi @lifeforce.knowledge , the DLI container for JetPack 4.5 was published here:

https://ngc.nvidia.com/catalog/containers/nvidia:dli:dli-nano-ai

The tag is: nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

I have also updated the jetson-inference container. Good luck on your projects and keep in touch!
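For reference, launching the DLI container typically looks something like the sketch below; the data-folder path and camera device node here are assumptions about a typical Nano setup, so adjust them to yours:

```shell
# Launch the DLI container on the Nano (sketch; adjust paths/devices to your setup).
# --volume maps a host folder so notebook data persists across runs;
# --device passes a USB camera through to the container.
sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0
```

Once it is running, JupyterLab is served over the network (hence `--network host`) and you can connect to it from your PC's browser.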

Wonderful, thank you. Would you have any tips on adding the inference folder to that container, matplotlib, inference x, z / camera axes x, y, and access to the inference container from Jupyter? Pretty please, mayday. lol, for the people, by Dusty. lol. Thanks, any tips or links are appreciated and will help many more that pass by this link. Peace and thank you.

Sorry, I don’t quite understand what you are asking. You can install matplotlib into the container by running apt-get update && apt-get install python3-matplotlib (or pip3 install matplotlib). You can open a terminal from JupyterLab to do this, or run it from the SSH terminal that you originally launched the container from.

If you want the change to be persistent after you shut down the container, you could build your own Dockerfile using the dli-nano-ai container as a base, or use the docker commit command to save your changes to a new container tag.

I am using the nvdli-data JupyterLab to stream the content to my PC; I am not able to with GStreamer or VLC, so I want to use the DLI Jupyter container because I can't do it any other way. I would just like to add all the apps possible to my nvdli-data container. I tried all the other ways without success.

It is truly not well thought out; the nvdli-data container should already be able to use matplotlib and your inference folder… and Isaac. Why so many containers, when we want them all in one? My question is: can I add them in an easier way? Building my own container is not exactly simple, nor is adding Docker to any other container. I am sure most people will need simplicity in these containers; they serve nothing considering we mostly want it all in one. Thank you… I have been trying to stream inference for months… it is making me agitated to the max… I could literally freak out learning these confusing ways to install, launch, and more.

Hi @lifeforce.knowledge, it is a common best practice when building containers to keep their size down by not installing unnecessary packages that not everyone may need.

There are two ways to install additional packages into the container:

  1. From the container’s interactive terminal (i.e. the terminal that shows up after you start the container via SSH), you can run apt-get update && apt-get install <your-package>. To save the changes, you can then use docker commit and save it as a new tag.

  2. You can create your own Dockerfile and specify the DLI container on the FROM line at the top, i.e.

FROM nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
            <your-package-here> \
    && rm -rf /var/lib/apt/lists/*

This will base your image off the DLI container and install these packages on top of it. You can build your Dockerfile with this command:

$ sudo docker build -t my-container:latest -f Dockerfile .
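Once built, you start the new image the same way you started the original DLI container, just swapping in your new tag. A minimal sketch (the volume path is an assumption about your setup):

```shell
# Run the customized image instead of the stock DLI tag (sketch).
sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    my-container:latest
```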

This is too complicated; I learn by watching. I don't understand this gibberish, sorry. Too complicated for such a simple action. We are talking about adding files here, not a freaking space launch… make it simpler, this is too freaking confusing.

The simplest way would be the method I outlined in option #1 above:

From the container’s interactive terminal (i.e. the terminal that shows up after you start the container via SSH), you can run apt-get update && apt-get install <your-package>. To save the changes, you can then use docker commit and save it as a new tag.

After you start the DLI container, in the container’s terminal, run the apt-get install and/or pip3 install commands to install the packages you want (like you would have done to install them outside of container).

After you are finished making changes, use docker commit to save the changes to a new container tag. Then the next time you start the container, run the new tag instead of the old one.
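Put concretely, that workflow might look like the sketch below; the package, container ID placeholder, and new tag name are made-up examples:

```shell
# 1. Inside the running DLI container's terminal, install what you need:
apt-get update && apt-get install -y python3-matplotlib

# 2. From a second SSH session on the host, find the running container's ID:
sudo docker ps

# 3. Save the modified container as a new image tag (ID is a placeholder):
sudo docker commit <container-id> dli-nano-ai:my-custom

# 4. Next time, launch dli-nano-ai:my-custom instead of the original tag.
```

Note that docker commit captures filesystem changes (installed packages, added files) but not data written to mounted volumes, which already persist on the host.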

After upgrading my Nano to JetPack 4.5 and pulling the new Docker image v2.0.1-r32.5.0, I'm experiencing really terrible latency on the video stream from my USB camera using usb_camera.ipynb.
Video is streamed extremely slowly from the camera; it's impossible to even focus it. Then after a while the stream freezes completely.
I did not have this problem on JetPack 4.4 using exactly the same hardware setup (I'm running the Nano from a USB SSD). I have exactly the same behaviour when running the Nano from the SD card.
What could be the issue?

Hi @giorgio_ne, if you try viewing the camera from outside the container, is there also the latency there? You can also try using the video-viewer tool from jetson-inference to confirm it:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#v4l2-cameras
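For a V4L2 USB camera, the check from outside the container might look like this (assuming the camera enumerates as /dev/video0):

```shell
# View the USB camera directly with jetson-inference's video-viewer tool
# (run from the jetson-inference container or a built-from-source install):
video-viewer /dev/video0

# Optionally sanity-check the raw V4L2 device and its supported formats first:
v4l2-ctl --device=/dev/video0 --list-formats-ext
```

If video-viewer shows a smooth stream, the latency is likely coming from the notebook/browser path rather than the camera itself.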

If it continues to be an issue, please follow up by creating a new topic, since your USB camera issue is different from this one.


Hi Dusty, thank you very much for your reply. As I proceeded to clone jetson-inference, I realized I had an issue with one of the switches between the Jetson and my Mac. That was the cause of the latency problem in Jupyter.

The silver lining of this is discovering the amazing jetson-inference container and your excellent YouTube tutorials!

Thank you,
Giorgio