Running Meshroom on 2GB Jetson Nano

I’ve been fiddling around with 3D printing and 3D scanning using the OpenScan project. It systematically takes a series of photos that can be stitched together into a 3D model using photogrammetry. This is a great overview and how-to: https://blog.prusaprinters.org/photogrammetry-2-3d-scanning-simpler-better-than-ever_29393/

The main tool in the toolchain is AliceVision Meshroom. However, it requires CUDA. I didn’t have an NVIDIA card locally, but I did have a Jetson Nano 2GB, so I was curious whether you could get Meshroom running on a 2GB Nano.

I haven’t fully learned how to use Meshroom, so I can’t say whether it’s going to be able to produce a result, but I have been able to get the UI running and it looks promising, so I thought I’d share how I got there in case it’s interesting or helpful to someone else.

First you have to “build” Meshroom yourself. It’s a Python app, so that mostly means getting a bunch of dependencies to install correctly. Start by checking out the Meshroom repository (GitHub: alicevision/meshroom). The instructions in INSTALL.md are only marginally useful. Basically you need to run

pip install -r requirements.txt

BUT… that’s going to fail until you’ve successfully gotten PySide2 installed, which requires Qt, and you’re going to have to build both of these from scratch.

I wanted to make sure that I had a recent version of python with venv and an environment set up so I did this:

sudo apt-get install python3 python3-venv
python3 -m venv ~/meshroom-env
. ~/meshroom-env/bin/activate

Then I mostly followed the instructions from this post: PySide2 (Qt for python) installation on Jetson Xavier - #5 by Muscle_Oliver, with a couple of caveats: 1. you need the full version of Qt, and 2. there are a few other dependencies that you have to install.

mkdir ~/deps
cd ~/deps
wget http://master.qt.io/archive/qt/5.15/5.15.2/single/qt-everywhere-src-5.15.2.tar.xz
tar xpf qt-everywhere-src-5.15.2.tar.xz
cd qt-everywhere-src-5.15.2

# Now you have to install a few dependencies (not mentioned in the post above)
sudo apt-get install xcb libxkbcommon-x11-0

./configure -xcb
make -j4
sudo make install

Hopefully all of that goes well… If not, you more than likely ran into a missing dependency that I forgot to document. Once you’ve successfully built and installed Qt, you will also need to install some fonts. If you don’t do this, you will discover it later when you try to run Meshroom and the UI has no text. NOTE that it may be possible to configure Qt to use fontconfig, but I didn’t try to make that happen.

cd ~/deps
# Download the font pack from here: https://dejavu-fonts.github.io/
unzip ~/Downloads/dejavu-fonts-ttf-2.37.zip
sudo mkdir /usr/local/Qt-5.15.2/lib/fonts
sudo cp ~/deps/dejavu-fonts-ttf-2.37/ttf/*.ttf /usr/local/Qt-5.15.2/lib/fonts

Now we need to install PySide2 per the post above… NOTE that the version needed by Meshroom as of this post is 5.14.1 (if it’s a different version in the future, you’ll be able to tell when you try to pip install the requirements).

cd ~/deps
git clone http://code.qt.io/pyside/pyside-setup.git
cd pyside-setup
git checkout 5.14.1
sudo python setup.py install --qmake=/usr/local/Qt-5.15.2/bin/qmake

If that all went smoothly, then you’re ready to pip install Meshroom’s requirements.

cd ~/meshroom
pip install -r requirements.txt

If that goes smoothly then you’re done! To run the UI simply do this

cd ~/meshroom
PYTHONPATH=$PWD python meshroom/ui

Voilà!


Cool! Thanks for sharing with the community!


Well done!
Did you test some data?
Did you try COLMAP, which should be less resource-intensive?
With a Jetson Nano B01 (JetPack 4.5.1 and L4T 32.5.1), Qt compilation produced a lot of warnings, and so far the Meshroom installation has failed. Bad luck!

So far in my journey I know almost nothing about Meshroom other than that it can be used with photogrammetry to make 3D models :) Step one was to see if I could even get it running… I’ll see if I can figure out how to use a batch of photos to make a model sometime this week and will report back.

If you could instruct me on how to “test some data” (e.g. with COLMAP), I’d be happy to try that before I attempt something more ambitious.

http://download.agisoft.com/datasets/monument.zip (32 images)
Download Regard3D from SourceForge.net (11 images)
or just take three photos with your smartphone of any facade.

Well… that was (and may still be) a can of worms. You can run Meshroom without AliceVision; it just fails when you try to do anything. That said, I chased the compile process to get Meshroom + AliceVision mostly working. I documented the process here: RunningMeshroomOnJetsonNano.md · GitHub

I’m trying a run with your monument.zip set, but I failed to generate an entire mesh with a small set of images I took of an apple that I carved from wood. You can see in the images from the gist that it completed 8 steps successfully but then failed on the “Meshing” step. All of the logging (including “warning” messages) seemed pretty benign, but it ended with a simple

No valid mesh was generated

I don’t know enough about Meshroom to know if that’s a function of my images or the binaries that I built or something else.

I’m probably going to keep fiddling with this and will report back if the monument set works.

Bravo!
If you got a dense cloud, you can use MeshLab to generate and filter your mesh; MeshLab can filter the dense cloud. I will try your apple!

I started a Dockerfile, so we can make this a little more reproducible.

I tried to run it earlier, before a few changes, and it died while trying to build AliceVision. I was not using your branch, however, so maybe that was my issue. It is running again now. I suspect it will take most of the evening, so I will report back any issues in the morning.

I am curious how it builds Geogram, which apparently doesn’t have a build profile for aarch64. I believe that is where my previous attempt died. linux aarch64 support? · Issue #19 · alicevision/geogram · GitHub

Check out my fork. I created a “Jetson” platform and did a little hackery.

  1. Specified -march=armv8-a
  2. Disabled SSE (which seemed to cause issues)
  3. Forced it to use a non-native spin lock.

TBH I didn’t dig into the ramifications of that… but the pipeline seems to work, so maybe it just impacts performance.

What version of cmake did you use? I am using 3.20.

I’ve updated to both your forks. I am stuck now building cctag within AliceVision. The error concerns CUDA.

CMake Error at /usr/local/share/cmake-3.20/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find CUDA (missing: CUDA_CUDART_LIBRARY) (found suitable version
  "10.0", minimum required is "7.0")

My cmake is 3.20.1. CUDA is installed in /usr/local/cuda and is version 10.2.89.

I didn’t install that, but rather it came on the image that I flashed when getting started: https://developer.nvidia.com/jetson-nano-2gb-sd-card-image

Googling around, it seems like specifying the CUDA paths in the call to cmake has worked in a variety of cases. I’m not familiar with ROS, but this seems to be in a similar ballpark: Could NOT find CUDA (missing: CUDA_CUDART_LIBRARY) (found version "10.2") [BUG] · Issue #18 · stereolabs/zed-ros2-wrapper · GitHub

Perhaps you need

-DCUDA_CUDART_LIBRARY=/.....

Or

-DCUDA_TOOLKIT_ROOT_DIR=/....

I see you put CUDA into your PATH and LD_LIBRARY_PATH in your docker setup so not sure really what’s wrong.

I updated my base to the latest version and specified all the environment variables similar to AliceVision/Dockerfile_ubuntu at develop · alicevision/AliceVision · GitHub

I’m still building, so fingers crossed. I’ll update the dockerfile if it succeeds. Nay, when it succeeds ;)

Thanks for checking in.

I did not succeed in compiling Qt5 with the parameters ‘./configure -xcb && make -j4 && sudo make install’. I do not know why.
But I managed to compile Qt5 by following the instructions in embedded linux - Qt-default version issue on migration from RPi4 to NVIDIA Jetson Nano - Stack Overflow. Then I built PySide2 using PySide2 (Qt for python) installation on Jetson Xavier - #4 by Muscle_Oliver.
Compiling AliceVision failed… .
With JetPack version 4.5.1 in headless mode, I still have an issue with X11 over ssh:
Xlib: extension “NV-GLX” missing on display “localhost:11.0”.
Xlib: extension “NV-GLX” missing on display “localhost:11.0”.
Xlib: extension “NV-GLX” missing on display “localhost:11.0”.
Btw, your dataset is very good; I got a good result with other software on my PC.

Are you able to run with a “head?” I’ve been running LXDE (the default X11 on nano 2G) while doing my work. My first question would be whether somehow you’re missing an X11 dependency. My second question would be whether you have to be running X11 (or at least have a $DISPLAY) in order to compile AliceVision.

There are a variety of forum posts referencing that NV-GLX error. The answer that marked this question as “solved” seems like it might be useful in your case: SSH -X problems!

Also thanks for confirming my dataset! I ran the monument example and a few other tests. I repeatedly get to the “Meshing” step and it fails for a variety of reasons that I don’t understand yet.

  • The monument one fails at a “flann kdtree” step and simply reports “Killed”
  • My apple just reports “No mesh found”
  • Another example I tried reported “Failed to estimate space from SfM: The space bounding box is too small”

At some point I’m going to try your idea to see if I can take one of the successful artifacts (e.g. the Dense Scene) and get a mesh using MeshLab. But for now… I don’t understand enough about AliceVision to know whether it’s failing because of resource constraints or something else.

In the Meshroom graph, you could plug the sparse point cloud into the mesh calculation instead of the dense point cloud, just to check the meshing. Meshroom needs a big amount of memory for meshing (they say 8 GB).
With the apple sparse point cloud, using MeshLab, I got a correct mesh after some cleaning to delete false correlation points in the sparse cloud.