Advice on using numba & tensorrt on Jetson nano


Hi everyone,
I have to use a package called numba on Jetson nano. Under arm, the package is only available via conda but unfortunately, tensorrt is not available there. I tried building numba from source but to do so it requires python3.7. I tried installing python3.7 but tensorrt is not available there. Could someone support me on what I can do? is it possible to build tensorrt from source under conda4aarch64? or is there a way to get conda4aarch64 to use the already available tensorrt?

Best Regards


TensorRT Version:
GPU Type: Jetson Nano
Jetpack: 4.6.2
Python Version: 3.6
PyTorch Version: 1.10


We are moving this post to the Jetson Nano forum to get better help.

Thank you.


Hi @mohamedA95, others may be able to share their feedback about building numba or using conda; however, the l4t-ml container comes with numba pre-installed (along with PyTorch and TensorRT). In the Dockerfile, numba is built for Python 3.6 on JetPack 4.x.
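As a rough sketch, running the l4t-ml container and verifying that both numba and TensorRT import might look like the following. The exact image tag is an assumption here; pick the tag on NGC that matches your L4T/JetPack release (JetPack 4.6.x corresponds to L4T r32.7.x):

```shell
# Pull and start the l4t-ml container with GPU access.
# Tag r32.7.1-py3 is an assumption -- match it to your L4T release.
sudo docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-ml:r32.7.1-py3

# Inside the container, confirm that numba and TensorRT are both usable:
python3 -c "import numba, tensorrt; print(numba.__version__, tensorrt.__version__)"
```

Mounting your project directory with `-v /path/on/host:/path/in/container` lets you work on your own code while using the container's Python environment.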


Hi @dusty_nv, thanks a lot for your response. I did not know that I could use Docker containers on the Nano. This is helpful and saves time.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.