Hello.
First, I’d like to express my gratitude for your continued support and interest in solving this issue. Hopefully our work will be useful to others as well.
I’ll answer your question about how I created the virtual environment in several parts.
1: SD Card Image Installation
I referred to the website linked at the end of this post to obtain the SD card image and followed the setup instructions given there.
2: Commands Run Before Creating Virtual Environment
sudo nvpmodel -m 0
sudo jetson_clocks
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install git cmake
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install libhdf5-serial-dev hdf5-tools
sudo apt-get install python3-dev
sudo apt-get install nano locate
sudo apt-get install libfreetype6-dev python3-setuptools
sudo apt-get install protobuf-compiler libprotobuf-dev openssl
sudo apt-get install libssl-dev libcurl4-openssl-dev
sudo apt-get install cython3
sudo apt-get install libxml2-dev libxslt1-dev
wget http://www.cmake.org/files/v3.13/cmake-3.13.0.tar.gz
tar xpvf cmake-3.13.0.tar.gz cmake-3.13.0/
cd cmake-3.13.0/
./bootstrap --system-curl
make -j4
echo 'export PATH=/home/nvidia/cmake-3.13.0/bin/:$PATH' >> ~/.bashrc
source ~/.bashrc
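A quick sanity check at this point (my addition, not part of the original tutorial) confirms which cmake the shell now resolves; if the PATH edit took effect, it should be the freshly built copy under /home/nvidia/cmake-3.13.0/bin:

```shell
# Sanity check (my addition): show which cmake is first on PATH and its version.
# If the self-built copy is not found first, re-check the PATH line in ~/.bashrc.
(command -v cmake && cmake --version | head -n 1) || echo "cmake not found on PATH"
```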
sudo apt-get install build-essential pkg-config
sudo apt-get install libtbb2 libtbb-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libxvidcore-dev libavresample-dev
sudo apt-get install libtiff-dev libjpeg-dev libpng-dev
sudo apt-get install python-tk libgtk-3-dev
sudo apt-get install libcanberra-gtk-module libcanberra-gtk3-module
sudo apt-get install libv4l-dev libdc1394-22-dev
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
rm get-pip.py
sudo pip install virtualenv virtualenvwrapper
editing the bashrc file
nano ~/.bashrc
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
loading the bash profile to finish virtualenvwrapper installation
source ~/.bashrc
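To confirm virtualenvwrapper actually loaded (my addition, a sketch rather than a tutorial step): mkvirtualenv is defined as a shell function by virtualenvwrapper.sh, so `type` finds it where `which` would not.

```shell
# Sanity check (my addition): mkvirtualenv is a shell function, not a binary,
# so use `type` to see whether virtualenvwrapper.sh was sourced successfully.
if type mkvirtualenv >/dev/null 2>&1; then
    echo "virtualenvwrapper loaded"
else
    echo "virtualenvwrapper not loaded"
fi
```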
3: Commands for Creating py3cv4 Virtual Environment
mkvirtualenv py3cv4 -p python3
workon py3cv4
4: Commands Run Successfully After Creating py3cv4 Virtual Environment
wget https://raw.githubusercontent.com/jkjung-avt/jetson_nano/master/install_protobuf-3.6.1.sh
sudo chmod +x install_protobuf-3.6.1.sh
./install_protobuf-3.6.1.sh
workon py3cv4 # if you aren’t inside the environment
cd ~
cp -r ~/src/protobuf-3.6.1/python/ .
cd python
python setup.py install --cpp_implementation
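To verify the --cpp_implementation build took effect (my addition), protobuf exposes which backend Python picked up; 'cpp' means the fast C++ implementation, while 'python' means the slower pure-Python fallback is in use:

```shell
# Diagnostic (my addition): ask protobuf which backend Python is using.
python3 - <<'EOF'
try:
    from google.protobuf.internal import api_implementation
    print("protobuf backend:", api_implementation.Type())
except ImportError:
    print("protobuf backend: not installed")
EOF
```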
successful installation of numpy and cython
pip install numpy cython
below is the method given for installing scipy 1.3.3:
wget https://github.com/scipy/scipy/releases/download/v1.3.3/scipy-1.3.3.tar.gz
tar -xzvf scipy-1.3.3.tar.gz scipy-1.3.3
cd scipy-1.3.3/
python setup.py install
however, the above doesn’t work for me, so I do the following instead:
pip install scipy==1.3.3
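To confirm which SciPy actually ended up inside the environment (my addition, a sketch rather than a tutorial step):

```shell
# Diagnostic (my addition): print the SciPy version visible to this Python.
python3 - <<'EOF'
try:
    import scipy
    print("scipy version:", scipy.__version__)
except ImportError:
    print("scipy version: not installed")
EOF
```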
5: Commands Not Running/Partially Successful
here’s where the problem arises…
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3
the below works successfully, but of course Keras needs TF to work
pip install keras
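A small diagnostic I would suggest running here (my addition): check whether TensorFlow imports at all and whether it can see the GPU. tf.test.is_gpu_available() is the TF 1.x API and returns True only when a usable CUDA device is visible to TensorFlow:

```shell
# Diagnostic (my addition): does TF import, and does it see a CUDA device?
python3 - <<'EOF'
try:
    import tensorflow as tf
    # TF 1.x API; True only when a usable CUDA device is visible
    print("TF GPU available:", tf.test.is_gpu_available())
except Exception as e:
    print("TF GPU available: tensorflow not usable:", e)
EOF
```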
the below is the TFOD (TensorFlow Object Detection) installation; it installs on its own, but I wonder how effective it would be without TF
cd ~
workon py3cv4
git clone https://github.com/tensorflow/models.git
cd models && git checkout -q b00783d
cd ~
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py install
cd ~/models/research/
protoc object_detection/protos/*.proto --python_out=.
editing setup.sh file
nano ~/setup.sh
#!/bin/sh
export PYTHONPATH=$PYTHONPATH:/home/`whoami`/models/research:/home/`whoami`/models/research/slim
workon py3cv4
cd ~
git clone --recursive https://github.com/NVIDIA-AI-IOT/tf_trt_models.git
cd tf_trt_models
./install.sh
the below is the installation of OpenCV, which only installs partially; import cv2 returns an error
cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.2.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.2.zip
unzip opencv.zip
unzip opencv_contrib.zip
mv opencv-4.1.2 opencv
mv opencv_contrib-4.1.2 opencv_contrib
workon py3cv4
cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D WITH_CUDA=ON \
    -D CUDA_ARCH_PTX="" \
    -D CUDA_ARCH_BIN="5.3,6.2,7.2" \
    -D WITH_CUBLAS=ON \
    -D WITH_LIBV4L=ON \
    -D BUILD_opencv_python3=ON \
    -D BUILD_opencv_python2=OFF \
    -D BUILD_opencv_java=OFF \
    -D WITH_GSTREAMER=ON \
    -D WITH_GTK=ON \
    -D BUILD_TESTS=OFF \
    -D BUILD_PERF_TESTS=OFF \
    -D BUILD_EXAMPLES=OFF \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D OPENCV_EXTRA_MODULES_PATH=/home/`whoami`/opencv_contrib/modules \
    ..
make -j4
sudo make install
cd ~/.virtualenvs/py3cv4/lib/python3.6/site-packages/
ln -s /usr/local/lib/python3.6/site-packages/cv2/python3.6/cv2.cpython-36m-aarch64-linux-gnu.so cv2.so
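Since import cv2 is the step that fails for me, here is a diagnostic (my addition) that prints the full import error on one line instead of a bare traceback, which makes it easier to tell a missing .so symlink apart from a missing shared-library dependency:

```shell
# Diagnostic (my addition): attempt the failing import and report the exact error.
python3 - <<'EOF'
try:
    import cv2
    print("cv2 import OK, version:", cv2.__version__)
except Exception as e:
    print("cv2 import failed:", e)
EOF
```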
6: Additional Commands which Run Successfully
workon py3cv4
pip install matplotlib scikit-learn
pip install pillow imutils scikit-image
pip install dlib
pip install flask jupyter
pip install lxml progressbar2
That’s it; this is everything I have done. The problematic areas are TensorFlow and OpenCV. As you are an expert, you may be able to tell where in the installation process I have gone wrong, what I have not installed, or any other reason why TF and OpenCV do not get installed and imported properly.
This is the website I referred to: https://pyimagesearch.com/2020/03/25/how-to-configure-your-nvidia-jetson-nano-for-computer-vision-and-deep-learning/ (“How to configure your NVIDIA Jetson Nano for Computer Vision and Deep Learning” on PyImageSearch)
Maybe the fact that the tutorial is from 25 March 2020 is the reason some commands don’t work today.
Why am I doing all this?
Why am I following such a long and arduous process to install certain libraries on my Jetson Nano? Let’s get straight to the point: I am building an autonomous wheelchair that uses a Jetson Nano as its embedded computer. The input video stream is fed to the Jetson Nano, which runs a Hugging Face model for depth estimation, the lightest DE model I could find (Intel’s DPT-Hybrid MiDaS). The model processes the input video and outputs a depth-estimation video, which the Jetson Nano uses for decision-making tasks such as which direction to turn. On the CPU alone the process is far too slow: one frame of video takes about two minutes to process. We need a rate of at least one frame per second to have reasonable accuracy, and for that we need GPU access on the Jetson Nano.
Here are the libraries we require:
import os
import torch
from transformers import DPTImageProcessor, DPTForDepthEstimation
import cv2
import numpy as np
from PIL import Image
So I guess GPU support is needed for Torch, Transformers and CV2.
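To summarize the current state of the environment, a single diagnostic (my addition) can check every library the project needs and whether PyTorch was built with CUDA support for the Nano’s GPU:

```shell
# Diagnostic (my addition): try each required library, then check CUDA in torch.
python3 - <<'EOF'
for name in ("torch", "transformers", "cv2", "numpy", "PIL"):
    try:
        __import__(name)
        print(name, "OK")
    except Exception as e:
        print(name, "FAILED:", e)
try:
    import torch
    # True only if this torch build has CUDA and the GPU is visible
    print("torch CUDA available:", torch.cuda.is_available())
except Exception:
    pass
EOF
```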
What I need
What I need is a proper step-by-step explanation of what to do so that I can run the code with GPU acceleration and make the project successful.
I hope I have explained myself properly. I thank you a lot for taking the time to read through the message.
Thanks.