building-repo-2 tutorial - 'make' command fails

I have set up my Jetson Nano and all seems fine. I have mounted an SSD (path: Extreme SSD) to download the tutorials onto.
I am following the “Hello AI World” steps in building-repo-2.md (master branch of dusty-nv/jetson-inference on GitHub).

At the “Compiling the Project” step, the ‘make’ command fails with the error:
/…/endian.h error: identifier “uint64_t” is undefined

Hi,

Which JetPack version do you use?
Updating to the corresponding jetson-inference branch may fix this issue.

Thanks.

  1. I’m new to this ML / NVIDIA Jetson domain.
  2. The JetPack version I use is based on L4T R32.2.1, flashed from an SD card image.
  3. I have tried to re-run the procedure using the L4T-R32.2 branch. What I don’t understand is: when executing $ git clone https://github.com/dusty-nv/jetson-inference, how does it know which branch to clone from?
  4. While running $ cmake ../, the Model Downloader fails to download any model and refers me to the mirror on the Releases page of dusty-nv/jetson-inference on GitHub.
  5. I have downloaded and unpacked GoogleNet.tar.gz.
  6. Then, upon proceeding to run the ‘make’ command, I receive the error: “…no makefile found…”

L4T R32.2.1 is the latest release, so you should be fine with using the master branch.
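On the branch question: git clone does not guess a branch — it checks out the repository’s default branch unless you pass -b/--branch. A minimal local sketch of this (throwaway temp repo, no network needed; the branch name mirrors L4T-R32.2 from this thread):

```shell
# Local demo of how 'git clone' decides which branch you get.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C "$tmp/src" branch L4T-R32.2

# Without -b, clone checks out the source repo's default branch
# (master or main, depending on your git configuration):
git clone -q "$tmp/src" "$tmp/default"
git -C "$tmp/default" rev-parse --abbrev-ref HEAD

# With -b, clone pins the branch you name:
git clone -q -b L4T-R32.2 "$tmp/src" "$tmp/pinned"
git -C "$tmp/pinned" rev-parse --abbrev-ref HEAD   # L4T-R32.2
```

The same -b flag works against the GitHub URL, e.g. `git clone -b L4T-R32.2 --recursive https://github.com/dusty-nv/jetson-inference`.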

Are these the commands you are running?

$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig

Even if the model downloader fails, it should still allow CMake to continue and create the makefile (however, you can comment it out in CMakePreBuild.sh:25). Perhaps you were trying to run make from the wrong directory after you unpacked GoogleNet.tar.gz? You should run make from your jetson-inference/build directory.
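To catch the “no makefile found” case before running make, here is a small sketch you can paste into your shell (check_build_dir is a hypothetical helper, not part of the repo): make only works in a directory where CMake has already generated a Makefile.

```shell
# Hypothetical helper: report whether the current directory is ready
# for 'make' (i.e. CMake has generated a Makefile here).
check_build_dir() {
    if [ -f Makefile ]; then
        echo "Makefile found - safe to run make here."
    else
        echo "No Makefile in $(pwd) - run 'cmake ../' from the build directory first."
    fi
}
check_build_dir
```

For this project, the directory that should contain the Makefile is jetson-inference/build.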

If you encounter further build errors, please post the full build log here so we can take a look. Thanks.

these are the commands:

$ sudo apt-get install git cmake
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ sudo apt-get install libpython3-dev python3-numpy
$ mkdir build
$ cd build
$ cmake ../
The Model Downloader fails, so I use the mirror and follow the instructions on the Releases page of dusty-nv/jetson-inference on GitHub.

$ make
Fails…

Please post the build log with the make errors.

You can disable the Model Downloader by commenting it out in CMakePreBuild.sh:25. Then run this:

$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make

And then if there are build errors, please post the log from your terminal.
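If you prefer to comment the downloader line out non-interactively, a sed one-liner works. It is demonstrated here on a throwaway file, since the line number 25 comes from the post above and may differ in other versions of CMakePreBuild.sh (GNU sed syntax; on non-GNU platforms, -i takes an argument):

```shell
# Demo on a throwaway file: prefix line 25 with '#' to comment it out.
# Apply the same sed command to jetson-inference/CMakePreBuild.sh.
printf 'line %d\n' $(seq 1 30) > demo.sh   # 30 numbered lines
sed -i '25 s/^/#/' demo.sh                 # GNU sed in-place edit
sed -n '25p' demo.sh                       # prints: #line 25
```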

I have re-run the process twice, from scratch each time.

1st: before commenting out the Model Downloader, just to generate all the logs with the original sequence. In the Model Downloader dialog I selected Quit.

2nd: after commenting out the Model Downloader.

I get the same results. Eventually make fails with errors - see the attachments.

after commenting downloaded - cmake and make logs.zip (6.12 KB)

before commenting downloaded - cmake and make erros.zip (5.95 KB)

I’m sorry; unfortunately this make error confuses me, because there is no file jetson-utils/endian.h, so I am not sure where the error is coming from or why it happens on your system. Is it possible you are using an older branch of the repo? I would try removing the repo and re-cloning:

$ rm -r -f jetson-inference
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig

Don’t worry about manually downloading the models until you get past the make step, and just exit the Model Downloader tool during cmake.

If it still doesn’t work, is it possible you upgraded packages on your system with ‘sudo apt-get upgrade’ or otherwise changed the compiler toolchain? I am unsure why you would still be having issues at this point, so if you have a spare SD card, you might want to try a fresh SD card image to see if you can build it then.

I have retried the entire process with a new image, and I have re-cloned jetson-inference countless times already, as well as re-run the sequence. Even when I skip the Model Downloader, sometimes the PyTorch installation screen that shows the user the options goes wild: any key I press writes gibberish to the screen and there is no way to exit.
In any case, one of two things happens: either I can’t complete cmake properly (as mentioned above), or the make command fails with the error I specified.

A point to note is that cmake needs to be installed prior to executing the sequence, and the instructions on the site say to run an update before installing cmake. In any case, I have also tried without running the update first.

A question: is running these steps a prerequisite for the other examples?
My goal is to execute some example in order to test the environment, using the GPU and some model for inference.

I am curious: are you using an English keyboard layout, or another language? I ask also because you mention you cannot download the models. Regardless, you can comment these out in CMakePreBuild.sh (you probably understand that by now).

For sanity, I tried a new image and was able to build the repo again, so I am unsure why you are facing this issue - sorry for the trouble.

Only for the examples in the jetson-inference repo. You could run the TensorRT samples, the deep learning inference benchmarks (see the sticky), other samples from the NVIDIA-AI-IOT GitHub, etc.

For anyone else reading this topic, here is a similar post that solved the error by building from the primary partition (under $HOME) instead of from a mounted drive:

https://devtalk.nvidia.com/default/topic/1067998/jetson-agx-xavier/-lt-solved-gt-jetson-inference-install-error-on-jetson-agx-xavier-lt-solved-gt-/post/5409484/#5409484
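For reference, a hedged sketch of that workaround. First check which filesystem your build tree actually lives on (the “Extreme SSD” path comes from the original post and may differ on your system; `df -T` is the Linux form of the command), then redo the build under $HOME:

```shell
# Show the filesystem type the home directory lives on; the root
# partition is typically ext4, while an external drive may be
# exFAT/NTFS, which is the kind of setup the linked post worked around.
df -T "$HOME"
# For comparison, check the mounted drive (path from the original post):
# df -T "/media/$USER/Extreme SSD"
#
# Then rebuild from the primary partition (needs network access, so
# shown commented out here):
# cd "$HOME"
# git clone --recursive https://github.com/dusty-nv/jetson-inference
# cd jetson-inference && mkdir build && cd build && cmake ../ && make
```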

Thank you. The added suggestion to build from the primary partition resolved the issue.