I’ve tried building TensorFlow from source (following https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html) and cv2 (following https://pythops.com/post/compile-deeplearning-libraries-for-jetson-nano) inside a Python virtual environment, since the stock installs don’t work in a venv on the Nano. My builds work. Using hhk7734’s tensorflow-yolov4 (I’m at my 3-link limit as a new poster), I got OOM errors for tf, tf-tiny, and tflite, even after ditching Unity for LXDE (per https://www.zaferarican.com/post/how-to-save-1gb-memory-on-jetson-nano-by-installing-lubuntu-desktop) and trying all the memory-limiting tricks. My tflite-tiny build with hhk’s code does run, but at 2000 ms/frame – slower than my RPi 4 at 1500 ms (!). Extremely disappointing.
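For anyone following along, the usual memory mitigation on the Nano (beyond switching to LXDE) is a swap file; I may well have tried a variant of this already, so treat it as a sketch of the standard recipe rather than a fix:

```shell
# Add a 4 GB swap file to ease OOM kills during model loading/inference.
# Assumes stock JetPack/Ubuntu; the Nano already has zram, this adds on top.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify:
free -h
```

Swap on the SD card is slow, so it helps models *load* without OOM but won’t speed up inference.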
So now I’m trying your TensorRT stuff. When I run the requirements.txt install for /usr/src/tensorrt/samples/python/yolov3_onnx (which I copied into a venv rather than working under /usr/src outside one), it fails while building the wheel for onnx with “fatal: not a git repository (or any of the parent directories): .git”. I assume that means the build has to run inside a cloned git repo that includes yolov3_onnx, but the samples tree ships without a .git directory, and I can’t find the yolov3_onnx sample on GitHub.
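In case it helps frame the question: that “fatal: not a git repository” message usually comes from onnx’s own source build invoking git (for submodules/version metadata), not from the TensorRT sample needing a repo. Two workarounds I’m considering – both assumptions on my part, not verified on this JetPack version – would be installing a pinned onnx release from PyPI instead of building in-tree, or building onnx from a proper clone:

```shell
# Option 1: skip the in-tree source build; grab a pinned onnx release.
# (Version is an assumption – whatever matches the sample's requirements.)
pip install onnx==1.9.0

# Option 2: build onnx from a real clone so .git and submodules exist.
# Requires cmake and protobuf dev packages to be installed first.
git clone --recursive https://github.com/onnx/onnx.git
cd onnx
pip install .
```

Is one of these the right approach, or does the sample really need a checkout of some repo I’m missing?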
Frustrated? Yes. Way, way harder than getting things working on an RPi? Yes. But I’m sure that if I build and run the right thing, the Nano will actually beat an RPi, given all the benchmark numbers.
So, can you help me get yolov3_onnx running?