NanoOWL Pre-Built Engine Download

I’m using the documentation at Tutorial - NanoOWL to learn about NanoOWL through a self-paced exercise. I’ve:

  • pulled/built the container on my Orin Nano SDK
  • ensured that a camera is connected
  • installed aiohttp from PyPI (because the NVIDIA link for the package is broken)

After that I am stuck because the following command (as mentioned in the article) outputs error messages:

python3 -m nanoowl.build_image_encoder_engine data/owl_image_encoder_patch32.engine

The tail output is:

```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/opt/nanoowl/nanoowl/build_image_encoder_engine.py", line 34, in <module>
    predictor.build_image_encoder_engine(
  File "/opt/nanoowl/nanoowl/owl_predictor.py", line 458, in build_image_encoder_engine
    return self.load_image_encoder_engine(engine_path, max_batch_size)
  File "/opt/nanoowl/nanoowl/owl_predictor.py", line 389, in load_image_encoder_engine
    with open(engine_path, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/owl_image_encoder_patch32.engine'
root@jetson2:/opt/nanoowl/examples/tree_demo#
```

What kind of exercises can I perform on the Orin Nano SDK to learn some very basic concepts with NanoOWL? Is there another engine that I can download? Thanks.

Regards.

P.S.

Running JetPack 6.2.

Hi,

You will need to build the TensorRT engine before running the demo script.
Please find the detailed steps below:
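As a quick first check (a sketch of mine, not one of the official steps): the shell prompt in your traceback shows the build command was run from /opt/nanoowl/examples/tree_demo, so the relative path data/owl_image_encoder_patch32.engine resolves under that directory rather than the repository root. This prints exactly where Python will look:

```python
from pathlib import Path

# The build command takes a relative output path, which resolves against
# the current working directory. Printing the absolute path makes it
# obvious whether you are in the repository root (/opt/nanoowl in the
# container) or somewhere else like examples/tree_demo.
engine_path = Path("data/owl_image_encoder_patch32.engine")
print("Engine will be looked up at:", engine_path.resolve())
print("Already exists:", engine_path.exists())
```

Running the build from /opt/nanoowl should make the relative path match what the tree_demo example later expects.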

Thanks.

Here are the commands that I ran:

```
jetson-containers run --workdir /opt/nanoowl $(autotag nanoowl)
pip install --index-url https://pypi.jetson-ai-lab.io/jp6/cu126 aiohttp
```

```
root@jetson2:/opt/nanoowl/examples/tree_demo# python3
Python 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt as trt
>>> print(trt.__version__)
10.4.0
>>> print(trt.Builder(trt.Logger()))
<tensorrt.tensorrt.Builder object at 0xffff7fe7ba30>
>>> import aiohttp
>>> exit()
```

My newbie assumption is that TensorRT was installed when the Docker container was built in my environment. Please correct me if my assumption is incorrect.
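The interactive session above could be collapsed into a short check script (my own sketch; which modules the demo actually imports beyond tensorrt and aiohttp is my assumption):

```python
import importlib.util

# Confirm the packages the demo needs are present in the container,
# without actually loading the heavyweight ones at import time.
for name in ("tensorrt", "torch", "transformers", "aiohttp"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'MISSING'}")
```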

Unfortunately, launching the demo didn’t work:

```
# cd examples/tree_demo
# python3 tree_demo.py --camera 0 --resolution 640x480 ../../data/owl_image_encoder_patch32.engine
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1326, in convert
    return t.to(
RuntimeError: NVML_SUCCESS == r INTERNAL ASSERT FAILED at "/opt/pytorch/c10/cuda/CUDACachingAllocator.cpp":995, please report a bug to PyTorch.
```
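My guess (unverified) is that this NVML assert fires when PyTorch cannot reach the GPU from inside the container. A quick check I plan to try — with the caveat that the exact Jetson device-node names are an assumption on my part and vary by JetPack release:

```python
from pathlib import Path

# On Jetson the GPU is exposed through /dev/nvhost-* and /dev/nvgpu*
# device nodes (assumed names). Listing them from inside the container
# shows whether jetson-containers passed the GPU devices through.
for pattern in ("nvhost-*", "nvgpu*"):
    nodes = sorted(p.name for p in Path("/dev").glob(pattern))
    print(f"/dev/{pattern}: {nodes if nodes else 'none found'}")
```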

I’ll re-read your link and record every step I perform; perhaps I missed something. Thanks.

Regards.