Jetson-inference from source

Hello,
I'm trying to build jetson-inference from source, following jetson-inference/docs/building-repo-2.md at master · dusty-nv/jetson-inference · GitHub, so that I can use DetectNet outside the container. I have updated my Python 3 to 3.7 on the Jetson Nano, and I was wondering whether everything will still work OK on the jetson-inference side. I have to use Python 3.7.5 because of the FeatherWing H-bridge I use to control some motors.

Thank you

@castej10 I don’t believe I’ve personally built jetson-inference against Python 3.7 specifically (Python 3.6 and 3.8 I have), but if you have the libpython3.7-dev packages installed, it should detect it.

If it has trouble selecting it, I would change those lines to set(PYTHON_BINDING_VERSIONS 3.7).
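For illustration, pinning the version would look like the snippet below. The exact location of the PYTHON_BINDING_VERSIONS line in the jetson-inference CMake scripts is an assumption here; search the repo for that variable name to find it:

```cmake
# replace the existing PYTHON_BINDING_VERSIONS line so that only
# the Python 3.7 bindings get built (hypothetical edit location)
set(PYTHON_BINDING_VERSIONS 3.7)
```

After changing it, re-run cmake and make from the build directory so the bindings get regenerated.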


If I wanted to instead use the container and communicate with the host through a RabbitMQ server, using the pika library for Python, could I change the Dockerfile from:

...

#
# install development packages
#
RUN add-apt-repository --remove "deb https://apt.kitware.com/ubuntu/ $(lsb_release --codename --short) main" && \
    apt-get update && \
    apt-get purge -y '*opencv*' || echo "existing OpenCV installation not found" && \
    apt-get install -y --no-install-recommends \
            cmake \
...
...
# 
# install python packages
#
COPY python/training/detection/ssd/requirements.txt /tmp/pytorch_ssd_requirements.txt
COPY python/www/flask/requirements.txt /tmp/flask_requirements.txt
COPY python/www/dash/requirements.txt /tmp/dash_requirements.txt

RUN pip3 install --no-cache-dir --verbose --upgrade Cython && \
    pip3 install --no-cache-dir --verbose -r /tmp/pytorch_ssd_requirements.txt && \
    pip3 install --no-cache-dir --verbose -r /tmp/flask_requirements.txt && \
    pip3 install --no-cache-dir --verbose -r /tmp/dash_requirements.txt && \

...


to:

...

#
# install development packages
#
RUN add-apt-repository --remove "deb https://apt.kitware.com/ubuntu/ $(lsb_release --codename --short) main" && \
    apt-get update && \
    apt-get install -y libpython3.7-dev && \
    apt-get purge -y '*opencv*' || echo "existing OpenCV installation not found" && \
    apt-get install -y --no-install-recommends \
            cmake \
...

...
# 
# install python packages
#
COPY python/training/detection/ssd/requirements.txt /tmp/pytorch_ssd_requirements.txt
COPY python/www/flask/requirements.txt /tmp/flask_requirements.txt
COPY python/www/dash/requirements.txt /tmp/dash_requirements.txt

RUN pip3 install --no-cache-dir --verbose --upgrade Cython && \
    pip3 install --no-cache-dir --verbose -r /tmp/pytorch_ssd_requirements.txt && \
    pip3 install --no-cache-dir --verbose -r /tmp/flask_requirements.txt && \
    pip3 install --no-cache-dir --verbose -r /tmp/dash_requirements.txt && \
    pip3 install pika numpy
...

and then use the docker/build.sh command to rebuild the image?

Sure, you could do that. I would actually put your commands in their own RUN statement, so that when you change them it doesn’t have to re-run all of those other pip3 installations that I have. It may take a while to build the container at first, but once you do, you’ll be able to add your own commands to the end of the Dockerfile and it will quickly rebuild just those commands using the build cache.

Note that the typical approach with Docker would be to create your own container: make your own Dockerfile containing the commands you added, with FROM jetson-inference:rXX.X at the top, and build it (look up tutorials on creating your own containers). That said, your approach may be easier to start with.
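For reference, a minimal derived Dockerfile along those lines might look like the sketch below. The r32.7.1 tag is taken from the log later in this thread; adjust it to whatever tag docker/run.sh reports on your system, and the package list mirrors the pip3 line from the question:

```dockerfile
# hypothetical derived container - set the tag to match your L4T version
FROM jetson-inference:r32.7.1

# add only the extra packages needed for the RabbitMQ client
RUN pip3 install --no-cache-dir pika numpy
```

You would then build it with something like docker build -t my-detectnet . from the directory containing this Dockerfile, and your extra layer rebuilds quickly because the jetson-inference base image is cached.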

Also, when building the jetson-inference container, make sure that your default docker runtime is set to nvidia: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
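For reference, setting the default runtime means adding "default-runtime": "nvidia" to /etc/docker/daemon.json, along the lines of the snippet from the jetson-containers instructions linked above (restart the Docker daemon or reboot afterwards for it to take effect):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```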


Also, when I edit the Dockerfile and run docker/build.sh, after quite some time of building the image I get this error:

eppl@eppl-desktop:~/jetson-inference$ docker/run.sh
ARCH:  aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.7.1
[sudo] password for eppl: 
localuser:root being added to access control list
CONTAINER_IMAGE:  jetson-inference:r32.7.1
DATA_VOLUME:      --volume /home/eppl/jetson-inference/data:/jetson-inference/data --volume /home/eppl/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/eppl/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/eppl/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/eppl/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models --volume /home/eppl/jetson-inference/python/www/recognizer/data:/jetson-inference/python/www/recognizer/data 
V4L2_DEVICES:     --device /dev/video0 
DISPLAY_DEVICE:   -e DISPLAY=:1 -v /tmp/.X11-unix/:/tmp/.X11-unix 
ERROR: ld.so: object '/lib/aarch64-linux-gnu/libGLdispatch.so.0' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/lib/aarch64-linux-gnu/libGLdispatch.so.0' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/lib/aarch64-linux-gnu/libGLdispatch.so.0' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

I also get this error in the mounted directories:

root@eppl-desktop:/jetson-inference/python/training/detection/ssd# ls models
ERROR: ld.so: object '/lib/aarch64-linux-gnu/libGLdispatch.so.0' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

These directories get mounted by the docker run command when you start the container:

https://github.com/dusty-nv/jetson-inference/blob/cbd9c0610143beb289ae77b42c0b40d69bc4c801/docker/run.sh#L67

Perhaps that behavior is different on JetPack 4 - you could remove it from the jetson-inference Dockerfile:

https://github.com/dusty-nv/jetson-inference/blob/cbd9c0610143beb289ae77b42c0b40d69bc4c801/Dockerfile#L135

or change the LD_PRELOAD ENV variable in your own Dockerfile.
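As a sketch of the second option, a derived Dockerfile could override the variable as below. The base image tag is an assumption (match it to your L4T version), and note that clearing LD_PRELOAD disables that preload workaround entirely, which may have side effects depending on your JetPack version:

```dockerfile
# hypothetical override of the LD_PRELOAD set by the base image
FROM jetson-inference:r32.7.1
ENV LD_PRELOAD=""
```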

Sorry, I'm a bit confused - would I have to have Python 3.7 installed on the host so that the container can also have Python 3.7?

Mmm, good point - I forgot you were building this in the container. In that case, you would want to make sure those libpython3.7-dev packages are installed in the Dockerfile before jetson-inference gets built.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.