jetson-inference Dockerfile

Hi,
I'm looking for the complete Dockerfile for dustynv/jetson-inference. This would be of great and timely help.

Thanks

Hi @orion.jag, here is the Dockerfile for jetson-inference:

https://github.com/dusty-nv/jetson-inference/blob/master/Dockerfile

And you can find the docker build/run scripts under the jetson-inference/docker directory of the project.

If you clone the repo, these are all included. Note that there are data/model paths that are mounted from the host by the docker/run.sh script. These are used so you don't have to re-download the models and re-optimize the TensorRT engines each time you run the container (in addition to preserving your models/datasets if you are doing PyTorch training in the container).
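For reference, here is a rough sketch of the kind of command that docker/run.sh ends up running (the image tag and exact mount list here are illustrative - check the script itself for the real ones):

sudo docker run --runtime nvidia -it --rm --network host \
    -v /home/user/jetson-inference/data:/jetson-inference/data \
    -v /home/user/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models \
    dustynv/jetson-inference:r32.5.0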

Dustin
As always, thanks for the quick response.

Given that I have just started dabbling in Docker, I'm wondering how I would add these to an Azure IoT project.

As of now, the image "dustynv/jetson-inference" is installed on JetPack 4.5, but the IoT Edge module fails to find the libraries at run time.

If I add the contents of the Dockerfile from "https://github.com/dusty-nv/jetson-inference/blob/master/Dockerfile" to my solution, will that address the issue?

Apologies for my ignorance.

Thanks

In order to utilize GPU acceleration, the base image needs to derive from the l4t-base container. jetson-inference derives from l4t-pytorch, which in turn derives from l4t-base, so it can use GPU acceleration.

Is the IoT Edge project in a container? If so, it would need to use l4t-base as well; otherwise it will be unable to start with --runtime nvidia and won't be able to use the GPU.
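For illustration, a hypothetical module Dockerfile might start like this (the tag is just an example and must match your L4T release):

# hypothetical IoT Edge module Dockerfile - tag shown is an example
FROM nvcr.io/nvidia/l4t-base:r32.5.0
# ...install your module and its dependencies here...

The container then needs to be launched with --runtime nvidia.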

Perhaps it would be easier for you to add Azure IoT Edge to the jetson-inference container.
Or, if the IoT Edge project is not in a container, you can just build jetson-inference from source outside the container too.
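Building from source follows the usual steps from the project docs (a sketch - see the README for the current procedure):

git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build && cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig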

Dustin
Sorry for the belated response.

As recommended, and taking baby steps, I included a standalone Python solution to run in the
jetson-inference Docker container, and it works very well with a Logitech webcam. But I'm having trouble accessing the RTSP stream of a Foscam IP camera from the container. Despite installing the nanocamera library, the script reports that the camera is not ready.

The same script works well when jetson-inference is installed from source and the script runs outside the container.

Any thoughts would be highly appreciated.

Thanks

Hi @orion.jag, are you able to ping the remote camera from inside the container?
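If ping isn't available in the container, it can usually be installed first - a quick sketch, assuming the Ubuntu-based image:

apt-get update && apt-get install -y iputils-ping
ping -c 3 <camera-ip>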

Is the nanocamera library a separate library for working with Foscam cameras? Or are you using the RTSP interface from jetson-inference?

Are you still running the container with the jetson-inference/docker/run.sh script? If not, are you using --network host? That is what my docker/run.sh script uses.

Hi Dustin,
I'm unable to use the ping command.
Yes, the nanocamera and libpyfoscam libraries work as standalone Python scripts outside the container, no issues.
Yes, I am running jetson-inference/docker/run.sh to kick off the container.

Thanks

OK, gotcha. I'm not familiar with these libraries, but it may be that they use some device or file that needs to be mounted into the container. Do you have any examples of these libraries being used inside another container, and if so, was it run with additional flags?
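For example, extra flags along these lines (hypothetical - the actual device or file depends on what the library opens):

sudo docker run --runtime nvidia -it --rm --network host \
    --device /dev/video0 \
    -v /path/needed/by/library:/path/needed/by/library \
    dustynv/jetson-inference:r32.5.0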

Dustin

Thanks. I decided to include these libraries in the jetson-inference Dockerfile and build them, but then I was hit with this message: "cannot build jetson-inference docker container for L4T R32.5.1". I am currently using JetPack with L4T R32.5.1.

Maybe I need to use JetPack 4.5 instead - correct?

Dustin
I installed an additional utility and am able to ping the IP camera, no issues. When I run the script below using the Foscam
(libpyfoscam) library:

import nanocamera as nano

# Camera credentials
camera_stream1 = "userid:mypwd123@ipAddress/videoMain"

# Create the Camera instance
camera1 = nano.Camera(camera_type=2, source=camera_stream1, width=1280, height=960, fps=30)

# Confirm the camera is ready
print("Camera Status: ", camera1.isReady())

I get "False" as the camera status.

Output:

[TRT] device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet -- maximum bounding boxes: 100
[TRT] detectNet -- loaded 91 class info entries
[TRT] detectNet -- number of object classes: 91
Send Foscam command: http://ipAddress/cgi-bin/CGIProxy.fcgi?usr=userid&pwd=mypwd123&cmd=getDevName
Received Foscam response: 0, OrderedDict([('devName', 'C01')])
Send Foscam command: http://ipAddress/cgi-bin/CGIProxy.fcgi?usr=userid&pwd=mypwd123&cmd=setPTZSpeed&speed=0
Received Foscam response: 0, OrderedDict()
Camera Status: False

Ah sorry - if you pull the latest from the repo, this should be fixed now. Updated this in commit 35f896

Can you enable debugging in the nanocamera library? See https://github.com/thehapyone/NanoCamera (a simple-to-use camera interface for the Jetson Nano for working with USB and CSI cameras in Python).
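If I recall the NanoCamera README correctly, the constructor takes a debug flag - something along these lines (check the README for the exact parameter name):

# sketch: assumes nanocamera exposes a debug flag as its README describes
import nanocamera as nano

camera_stream1 = "userid:mypwd123@ipAddress/videoMain"  # from your script
camera1 = nano.Camera(camera_type=2, source=camera_stream1,
                      width=1280, height=960, fps=30, debug=True)
print("Camera Status: ", camera1.isReady())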

I believe that nanocamera needs the version of OpenCV with GStreamer enabled (the one that came with JetPack). The default OpenCV installed from the Ubuntu repository does not have GStreamer enabled. JetPack's OpenCV was installed into the l4t-ml container this way, so perhaps you could do something similar.
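One way to check which OpenCV build your script is picking up - a small snippet that prints the GStreamer line from OpenCV's build info (JetPack's build should say YES, the stock Ubuntu/pip build says NO):

import cv2

# Print OpenCV's GStreamer flag from the build configuration.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())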

Also, you could just try to see if jetson-inference can display the video by running: video-viewer rtsp://userid:mypwd123@ipAddress/videoMain

Dustin

Thanks for the quick response and for the update. I was able to download jetson-inference from GitHub, but when I try to build it using docker/build.sh, I get the following message.

I also attempted this on the latest JetPack as a fresh install. I must be missing something or doing something incorrectly - need help, please.

Error message:

-- Python 2.7 wasn't found
-- detecting Python 3.6...
-- found Python version: 3.6 (3.6.9)
-- found Python include: /usr/include/python3.6m
-- found Python library: /usr/lib/aarch64-linux-gnu/libpython3.6m.so
-- CMake module path: /jetson-inference/utils/cuda;/jetson-inference/python/bindings;/jetson-inference/python/bindings/../../utils/python/bindings
-- NumPy ver. 1.19.4 found (include: /usr/local/lib/python3.6/dist-packages/numpy/core/include)
-- found NumPy version: 1.19.4
-- found NumPy include: /usr/local/lib/python3.6/dist-packages/numpy/core/include
-- detecting Python 3.7...
-- Python 3.7 wasn't found
-- Copying /jetson-inference/python/examples/detectnet.py
-- Copying /jetson-inference/python/examples/imagenet.py
-- Copying /jetson-inference/python/examples/my-detection.py
-- Copying /jetson-inference/python/examples/my-recognition.py
-- Copying /jetson-inference/python/examples/segnet.py
-- Copying /jetson-inference/python/examples/segnet_utils.py
-- Copying examples/imagenet.py -> imagenet-console.py
-- Copying examples/imagenet.py -> imagenet-camera.py
-- Copying examples/detectnet.py -> detectnet-console.py
-- Copying examples/detectnet.py -> detectnet-camera.py
-- Copying examples/segnet.py -> segnet-console.py
-- Copying examples/segnet.py -> segnet-camera.py
-- Configuring incomplete, errors occurred!
See also "/jetson-inference/build/CMakeFiles/CMakeOutput.log".
See also "/jetson-inference/build/CMakeFiles/CMakeError.log".
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppicc_LIBRARY (ADVANCED)
linked by target "jetson-utils" in directory /jetson-inference/utils

The command '/bin/sh -c mkdir docs && touch docs/CMakeLists.txt && sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt && mkdir build && cd build && cmake ../ && make -j$(nproc) && make install && /bin/bash -O extglob -c "cd /jetson-inference/build; rm -rf -v !(aarch64|download-models.*)" && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1

Hi @orion.jag, did you set your docker daemon’s default-runtime to nvidia and reboot first? See here:

https://github.com/dusty-nv/jetson-containers#docker-default-runtime
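For reference, after that change your /etc/docker/daemon.json should look something like this (then restart the Docker daemon or reboot):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}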