from fastai import * - ModuleNotFoundError: No module named 'fastai'

Hi,

I need to run my deep learning application on a Jetson Nano (4 GB memory).
I successfully installed PyTorch 1.6.0 and torchvision 0.7.0 using the link below.

PyTorch 1.6.0

Commands followed:
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.6.0-cp36-cp36m-linux_aarch64.whl

torchvision:

$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev

$ git clone --branch v0.7.0 https://github.com/pytorch/vision torchvision

$ cd torchvision

$ export BUILD_VERSION=0.7.0 # where 0.x.0 is the torchvision version

$ python3 setup.py install --user

Both of them installed successfully.
Output:
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> torch.__version__
'1.6.0'

>>> import torchvision
>>> torchvision.__version__
'0.7.0a0+78ed10c'

>>> print('CUDA available: ' + str(torch.cuda.is_available()))
CUDA available: True

>>> print('cuDNN version: ' + str(torch.backends.cudnn.version()))
cuDNN version: 8000

When I run the application, it gives this error:

from fastai import *

ModuleNotFoundError: No module named 'fastai'

My application runs fine on the Ubuntu host machine, but I am not able to run it on the Jetson Nano.
Can anybody please help me with this issue?

I found one link related to this:

It says that we need a 64 GB SD card, but I am using a 32 GB SD card. Is that the right link for my problem? I am confused.
If I already installed PyTorch and torchvision successfully, why am I getting the error that there is no module named fastai?
I really appreciate any help.

Hi,

You can install it by typing

pip install fastai

or

pip3 install fastai

Hi,

I am getting the error below while installing fastai:
Collecting fastai
Downloading https://files.pythonhosted.org/packages/5b/53/edf39e15b7ec5e805a0b6f72adbe48497ebcfa009a245eca7044ae9ee1c6/fastai-2.3.0-py3-none-any.whl (193kB)
100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 194kB 1.9MB/s
Requirement already satisfied: pip in /usr/lib/python3/dist-packages (from fastai)
Requirement already satisfied: matplotlib in /usr/lib/python3/dist-packages (from fastai)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from fastai)
Requirement already satisfied: pandas in /usr/lib/python3/dist-packages (from fastai)
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from fastai)
Requirement already satisfied: scikit-learn in ./.local/lib/python3.6/site-packages (from fastai)
Requirement already satisfied: scipy in ./.local/lib/python3.6/site-packages (from fastai)
Collecting spacy<3 (from fastai)
Downloading https://files.pythonhosted.org/packages/45/71/507b8dbbe3ee6f93c0356c3e5e902e0f598c02d919ad3116e16559eb011f/spacy-2.3.5.tar.gz (5.8MB)
100% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.8MB 92kB/s
Complete output from command python setup.py egg_info:
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-_aqnyxje/blis/
Traceback (most recent call last):
File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/installer.py", line 75, in fetch_build_egg
subprocess.check_call(cmd)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpmwycjb1u', '--quiet', 'blis<0.8.0,>=0.4.0']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-sdsnn3su/spacy/setup.py", line 252, in <module>
    setup_package()
  File "/tmp/pip-build-sdsnn3su/spacy/setup.py", line 247, in setup_package
    cmdclass={"build_ext": build_ext_subclass},
  File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/__init__.py", line 152, in setup
    _install_setup_requires(attrs)
  File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/__init__.py", line 147, in _install_setup_requires
    dist.fetch_build_eggs(dist.setup_requires)
  File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/dist.py", line 724, in fetch_build_eggs
    replace_conflicting=True,
  File "/home/jetson/.local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 768, in resolve
    replace_conflicting=replace_conflicting
  File "/home/jetson/.local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1051, in best_match
    return self.obtain(req, installer)
  File "/home/jetson/.local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1063, in obtain
    return installer(requirement)
  File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/dist.py", line 780, in fetch_build_egg
    return fetch_build_egg(self, req)
  File "/home/jetson/.local/lib/python3.6/site-packages/setuptools/installer.py", line 77, in fetch_build_egg
    raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['/usr/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpmwycjb1u', '--quiet', 'blis<0.8.0,>=0.4.0']' returned non-zero exit status 1.

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sdsnn3su/spacy/

Any help on this?

Hi,

Can you try the following commands?

sudo python3 -m pip install -U pip
sudo python3 -m pip install -U setuptools

Thanks for your suggestion.
I tried the given commands:
sudo python3 -m pip install -U pip
sudo python3 -m pip install -U setuptools

Now I am getting a different error:
ERROR: Could not find a version that satisfies the requirement thinc<8.1.0,>=8.0.2
ERROR: No matching distribution found for thinc<8.1.0,>=8.0.2

My application requires PyTorch 1.6 and torchvision 0.7.0,
but as you can see below, pip3 install fastai is downloading PyTorch 1.8.1 and torchvision v0.9.1,
even though I have already installed PyTorch 1.6.0 and torchvision v0.7.0.

Please find the detailed error below:
pip3 install fastai:
Requirement already satisfied: scipy in ./.local/lib/python3.6/site-packages (from fastai) (1.5.4)
Requirement already satisfied: matplotlib in /usr/lib/python3/dist-packages (from fastai) (2.1.1)
Collecting fastai
Downloading fastai-2.2.7-py3-none-any.whl (193 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 193 kB 4.5 MB/s
Downloading fastai-2.2.6-py3-none-any.whl (193 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 193 kB 4.4 MB/s
Downloading fastai-2.2.5-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 4.1 MB/s
Downloading fastai-2.2.4-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 4.3 MB/s
Downloading fastai-2.2.3-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 5.5 MB/s
Downloading fastai-2.2.2-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 4.5 MB/s
Downloading fastai-2.2.1-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 4.5 MB/s
Downloading fastai-2.2.0-py3-none-any.whl (191 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 191 kB 4.7 MB/s
Downloading fastai-2.1.10-py3-none-any.whl (190 kB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 190 kB 4.8 MB/s
Requirement already satisfied: packaging in ./.local/lib/python3.6/site-packages (from fastai) (20.9)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from fastai) (2.18.4)
Collecting torch>=1.7.0
Downloading torch-1.8.1-cp36-cp36m-manylinux2014_aarch64.whl (45.3 MB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 45.3 MB 12.1 MB/s
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from fastai) (3.12)
Requirement already satisfied: pandas in /usr/lib/python3/dist-packages (from fastai) (0.22.0)
Collecting fastprogress>=0.2.4
Downloading fastprogress-1.0.0-py3-none-any.whl (12 kB)
Collecting pillow>6.0.0
Downloading Pillow-8.2.0-cp36-cp36m-manylinux2014_aarch64.whl (2.8 MB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.8 MB 19.6 MB/s
Requirement already satisfied: scikit-learn in ./.local/lib/python3.6/site-packages (from fastai) (0.24.1)
Requirement already satisfied: pip in /usr/local/lib/python3.6/dist-packages (from fastai) (21.0.1)
Collecting torchvision>=0.8
Downloading torchvision-0.9.1-cp36-cp36m-manylinux2014_aarch64.whl (11.8 MB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 11.8 MB 9.7 MB/s
Collecting spacy
Downloading spacy-3.0.5.tar.gz (7.0 MB)
|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.0 MB 19.9 MB/s
Installing build dependencies … \

      self.run_command(cmd_name)
    File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "/usr/lib/python3.6/distutils/command/build_ext.py", line 339, in run
      self.build_extensions()
    File "setup.py", line 106, in build_extensions
      objects = self.compile_objects(platform_name, arch, OBJ_DIR)
    File "setup.py", line 174, in compile_objects
      objects.append(self.build_object(env=env, **spec))
    File "setup.py", line 188, in build_object
      subprocess.check_call(command, cwd=BLIS_DIR)
    File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
      raise CalledProcessError(retcode, cmd)
  subprocess.CalledProcessError: Command '['gcc', '-c', '/tmp/pip-install-disbaktv/blis_1f41cf9cb0054b59a542d38eeee273b7/blis/_src/kernels/skx/3/bli_dgemm_skx_asm_16x12_l2.c', '-o', '/tmp/tmpjmtw8z67/bli_dgemm_skx_asm_16x12_l2.o', '-O3', '-mavx512f', '-mavx512dq', '-mavx512bw', '-mavx512vl', '-mfpmath=sse', '-march=skylake-avx512', '-fPIC', '-std=c99', '-fvisibility=hidden', '-D_POSIX_C_SOURCE=200112L', '-DBLIS_VERSION_STRING="0.7.0"', '-DBLIS_IS_BUILDING_LIBRARY', '-Iinclude/linux-x86_64', '-I./frame/3/', '-I./frame/ind/ukernels/', '-I./frame/3/', '-I./frame/1m/', '-I./frame/1f/', '-I./frame/1/', '-I./frame/include', '-I/tmp/pip-install-disbaktv/blis_1f41cf9cb0054b59a542d38eeee273b7/blis/_src/include/linux-x86_64']' returned non-zero exit status 1.
  ----------------------------------------
  ERROR: Failed building wheel for blis
  Building wheel for cymem (PEP 517): started
  Building wheel for cymem (PEP 517): finished with status 'done'
  Created wheel for cymem: filename=cymem-2.0.5-cp36-cp36m-linux_aarch64.whl size=121631 sha256=821390b3d39b524b50b8da8bc005f4570c0d50631a8e43d6c3a608e8f745d7a6
  Stored in directory: /tmp/pip-ephem-wheel-cache-1a76l_o7/wheels/37/bd/75/6ceef4faff4ea1802f4aca749d86134af066e09a40c1b119ba
  Building wheel for murmurhash (PEP 517): started
  Building wheel for murmurhash (PEP 517): finished with status 'done'
  Created wheel for murmurhash: filename=murmurhash-1.0.5-cp36-cp36m-linux_aarch64.whl size=66886 sha256=b13478f8ac97c0db2541e1d6ebd215878cb0fd99f5218913b7d0732642eaa37c
  Stored in directory: /tmp/pip-ephem-wheel-cache-1a76l_o7/wheels/06/40/76/bd1dbf725e932fd89ad09aa600197904fdc1cdeed4d33667ab
  Building wheel for preshed (PEP 517): started
  Building wheel for preshed (PEP 517): finished with status 'done'
  Created wheel for preshed: filename=preshed-3.0.5-cp36-cp36m-linux_aarch64.whl size=478493 sha256=cd2302b5d7af8bbcfa9d96cfdd8a216604e3504444db53d80602b9f7a2f1040a
  Stored in directory: /tmp/pip-ephem-wheel-cache-1a76l_o7/wheels/d2/d5/a7/53eb0e38cce491b028f0f572eb22a23a328cf0713526e1a4f7
Successfully built cymem murmurhash preshed
Failed to build blis
ERROR: Could not build wheels for blis which use PEP 517 and cannot be installed directly
----------------------------------------

WARNING: Discarding https://files.pythonhosted.org/packages/62/0f/7142d6fd3282e7f9002c4d31547168c15696ac091024cb5a2f7bd32dc673/thinc-8.0.2.tar.gz#sha256=20f033b3d9fbd02389d8f828cebcd3a42aee3e17ed4c2d56c6d5163af83a9cee (from Links for thinc) (requires-python:>=3.6). Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-r1g0o_ym/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'cython>=0.25' 'murmurhash>=0.28.0,<1.1.0' 'cymem>=2.0.2,<2.1.0' 'preshed>=3.0.2,<3.1.0' 'blis>=0.4.0,<0.8.0' 'numpy>=1.15.0' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement thinc<8.1.0,>=8.0.2
ERROR: No matching distribution found for thinc<8.1.0,>=8.0.2

I couldn't understand how to rectify the problem. If I install a different version of PyTorch and torchvision, my application may not work. Any suggestions on how to solve this issue?

Thanks in Advance
Regards,
Swagatika

Hi,

Have you tried the following command?

pip install blis

I recommend using the L4T Docker containers to avoid these kinds of installation problems.

https://ngc.nvidia.com/catalog/containers?orderBy=modifiedDESC&pageNumber=1&query=&quickFilter=&filters=
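For example, something along these lines (just a sketch; the exact image tag depends on your JetPack/L4T version, so please pick the matching one from the catalog page above):

sudo docker pull nvcr.io/nvidia/l4t-pytorch:<tag-matching-your-l4t-version>
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-pytorch:<tag-matching-your-l4t-version>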


Thanks for your suggestion.
Nothing is working; I have installed blis too,
but while installing fastai, the dependency spacy is not getting installed and I get the error below. I am struggling a lot, please help me.
File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python3.6/distutils/command/install.py", line 589, in run
self.run_command('build')
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/setup.py", line 109, in build_extensions
objects = self.compile_objects(platform_name, arch, OBJ_DIR)
File "/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/setup.py", line 177, in compile_objects
objects.append(self.build_object(env=env, **spec))
File "/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/setup.py", line 191, in build_object
subprocess.check_call(command, cwd=BLIS_DIR)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['gcc', '-c', '/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/blis/_src/kernels/skx/3/bli_dgemm_skx_asm_16x12_l2.c', '-o', '/tmp/tmprl8jres/bli_dgemm_skx_asm_16x12_l2.o', '-O3', '-mavx512f', '-mavx512dq', '-mavx512bw', '-mavx512vl', '-mfpmath=sse', '-march=skylake-avx512', '-fPIC', '-std=c99', '-fvisibility=hidden', '-D_POSIX_C_SOURCE=200112L', '-DBLIS_VERSION_STRING="0.7.0"', '-DBLIS_IS_BUILDING_LIBRARY', '-Iinclude/linux-x86_64', '-I./frame/3/', '-I./frame/ind/ukernels/', '-I./frame/3/', '-I./frame/1m/', '-I./frame/1f/', '-I./frame/1/', '-I./frame/include', '-I/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/blis/_src/include/linux-x86_64']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u584kgou/blis_d66cd1104e0c4770bcd8151dca1b629c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-f2ghbqnu/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-ond0atub/overlay --compile --install-headers /tmp/pip-build-env-ond0atub/overlay/include/python3.6m/blis Check the logs for full command output.
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/62/0f/7142d6fd3282e7f9002c4d31547168c15696ac091024cb5a2f7bd32dc673/thinc-8.0.2.tar.gz#sha256=20f033b3d9fbd02389d8f828cebcd3a42aee3e17ed4c2d56c6d5163af83a9cee (from Links for thinc) (requires-python:>=3.6). Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-ond0atub/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'cython>=0.25' 'murmurhash>=0.28.0,<1.1.0' 'cymem>=2.0.2,<2.1.0' 'preshed>=3.0.2,<3.1.0' 'blis>=0.4.0,<0.8.0' 'numpy>=1.15.0' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement thinc<8.1.0,>=8.0.2
ERROR: No matching distribution found for thinc<8.1.0,>=8.0.2

WARNING: Discarding https://files.pythonhosted.org/packages/65/01/fd65769520d4b146d92920170fd00e01e826cda39a366bde82a87ca249db/spacy-3.0.5.tar.gz#sha256=9f7a09fbad53aac2a3cb7696a902de62b94575a15d249dd5e26a98049328060e (from Links for spacy) (requires-python:>=3.6). Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-15bc7fw2/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'cython>=0.25' 'cymem>=2.0.2,<2.1.0' 'preshed>=3.0.2,<3.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=8.0.2,<8.1.0' 'blis>=0.4.0,<0.8.0' pathy 'numpy>=1.15.0' Check the logs for full command output.
Using cached spacy-3.0.4.tar.gz (7.0 MB)
Installing build dependencies …

Hi,

I also tried using a Docker container but got an error.
The steps I followed are listed below:
Step 1: git clone https://github.com/dusty-nv/jetson-containers
Step 2: cd jetson-containers
Step 3: As my application needs torch 1.6 and torchvision 0.7.0, I made changes in scripts/docker_build_ml.sh:
vi docker_build_ml.sh
Then I uncommented the lines below:
# PyTorch v1.6.0
build_pytorch "https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbjja2yrmj6pbarc.whl" \
              "torch-1.6.0-cp36-cp36m-linux_aarch64.whl" \
              "l4t-pytorch:r$L4T_VERSION-pth1.6-py3" \
              "v0.7.0" \
              "pillow" \
              "v0.6.0"
Then I saved it.
Step 4: jetson@ubuntu:~/jetson-containers$ ./scripts/docker_build_ml.sh pytorch
The error I got is:
Note: checking out '78ed10cc51067f1a6bac9352831ef37a3f842784'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

Traceback (most recent call last):
File "setup.py", line 13, in <module>
import torch
File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 188, in <module>
_load_global_deps()
File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 141, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libcurand.so.10: cannot open shared object file: No such file or directory
The command '/bin/sh -c git clone -b ${TORCHVISION_VERSION} https://github.com/pytorch/vision torchvision && cd torchvision && python3 setup.py install && cd ../ && rm -rf torchvision && pip3 install "${PILLOW_VERSION}"' returned a non-zero code: 1

Can you please help me with how to rectify this issue?
Thanks

Hi,

"fastai" might not be compatible with the NVIDIA Jetson libraries. Do you have a chance to check this kind of compatibility?

Hi,

I have followed the link below to install PyTorch from a Docker container.

I am getting the error below:
/usr/local/lib/python3.6/dist-packages/torch/include/torch/library.h:91:64: required from 'torch::CppFunction::CppFunction(Func*, std::enable_if_t<c10::guts::is_function_type<FuncType_>::value, std::nullptr_t>) [with Func = long int(); std::enable_if_t<c10::guts::is_function_type<FuncType_>::value, std::nullptr_t> = std::nullptr_t]'
/usr/local/lib/python3.6/dist-packages/torch/include/torch/library.h:414:17: required from 'torch::Library& torch::Library::def(NameOrSchema&&, Func&&) & [with NameOrSchema = const char (&)[14]; Func = long int (*)()]'
/torchvision/torchvision/csrc/vision.cpp:56:40: required from here
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:239:22: warning: variable 'num_ivalue_args' set but not used [-Wunused-but-set-variable]
constexpr size_t num_ivalue_args = sizeof...(ivalue_arg_indices);
^~~~~~~~~~~~~~~
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/torchvision/torchvision/csrc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/usr/include/python3.6m -c /torchvision/torchvision/csrc/cpu/ROIAlign_cpu.cpp -o build/temp.linux-aarch64-3.6/torchvision/torchvision/csrc/cpu/ROIAlign_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Parallel.h:149:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from /torchvision/torchvision/csrc/cpu/vision_cpu.h:2,
from /torchvision/torchvision/csrc/cpu/ROIAlign_cpu.cpp:2:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

Can anyone help me solve this, please?
Thanks

Hi,

Are you saying the fastai library is not compatible with the Jetson Nano device?

Hi @swagatika.som123, please set your default docker-runtime to nvidia and reboot:

https://github.com/dusty-nv/jetson-containers#docker-default-runtime

This will make it so that CUDA/cuDNN/etc. can be used while you are building containers.
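For reference, this is roughly what the linked instructions do (a sketch only - please follow the link above for the authoritative steps): /etc/docker/daemon.json should end up looking something like

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

and then Docker needs to be restarted (or the board rebooted):

sudo systemctl restart docker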

Thanks a lot. I missed that step. I successfully installed PyTorch.
I tested it using the command ./scripts/docker_test_ml.sh pytorch, and it ran successfully.
But when I import torch, it says there is no module named torch. Why is that? Please see the error below:
numpy OK

done testing container l4t-pytorch:r32.5.1-pth1.6-py3 => numpy

jetson@ubuntu:~/jetson-containers$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'

Thanks a lot!
After running the container, it is now working.
Sorry! Earlier I didn't run the container.
Commands I typed:
jetson@ubuntu:~$ sudo docker run -i -t a1cd3deb253d /bin/bash
root@81054d5a70cc:/# ls
bin dev etc lib mnt proc run srv tmp var
boot dst home media opt root sbin sys usr
root@81054d5a70cc:/# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> torch.__version__

Thank you very much.

Hi,

I have one more small doubt regarding the installation of the fastai library.
I want to install fastai as part of the same Docker container.
How do I change the ./scripts/docker_build_ml.sh file for that?
Let us say, for fastai:
pip3 install fastai
When I wrote the same command inside the ./scripts/docker_build_ml.sh file, it executed successfully and installed fastai, but it is not part of the Docker image.
When I run the same Docker image, it says no fastai found.
jetson@ubuntu:~$ sudo docker run -i -t a1cd3deb253d /bin/bash

How do I make any library and application part of the same Docker image?
Please give your valuable suggestions, or any link I can go through.

Thanks

Hi @swagatika.som123, sorry for the delay - to install something inside the container, you need to edit its Dockerfile:

https://github.com/dusty-nv/jetson-containers/blob/master/Dockerfile.pytorch

The build script invokes these Dockerfiles. Add this to the end of Dockerfile.pytorch:

RUN pip3 install fastai --verbose

And then run scripts/docker_build_ml.sh pytorch

Hi @dusty_nv ,

Thanks a lot for the reply.

To summarise everything: if we want to install requirements.txt as part of the container, can we add the lines below at the end of Dockerfile.pytorch?

RUN pip3 install -r requirements.txt --verbose

CMD ["python3", "app.py"]

Will this work?

Hi @dusty_nv ,

I added the mentioned command:

RUN pip3 install fastai --verbose

and then ran scripts/docker_build_ml.sh pytorch,
and got the error below:

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-ak6wodlq/spacy/
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 353, in run
wb.build(autobuilding=True)
File "/usr/lib/python3/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 634, in _prepare_file
abstract_dist.prep_for_dist()
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 129, in prep_for_dist
self.req_to_install.run_egg_info()
File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 439, in run_egg_info
command_desc='python setup.py egg_info')
File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 725, in call_subprocess
% (command_desc, proc.returncode, cwd))
pip.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-ak6wodlq/spacy/
The command '/bin/sh -c pip3 install fastai --verbose' returned a non-zero code: 1

Hi @swagatika.som123, can you post the rest of the log? This doesn't contain the error that caused it.

I personally haven't installed fastai before, so I'm not sure what the issue is - you may want to try searching the forum to see if other people have gotten it working before.

Is this requirements.txt from a git repo? If so, first you need to clone the git repo inside the container:

RUN git clone https://github.com/myuser/myrepo && \
    cd myrepo && \
    pip3 install -r requirements.txt

Likewise, you would need to make sure app.py is installed into the container.
The other way to do this is with a COPY command in your Dockerfile.
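For example, a rough sketch (assuming app.py and requirements.txt sit next to the Dockerfile in your build context - adjust the paths to your project):

# copy the application and its requirements into the image
COPY requirements.txt /app/requirements.txt
COPY app.py /app/app.py
# install the Python dependencies at build time
RUN pip3 install -r /app/requirements.txt
WORKDIR /app
# run the application when the container starts
CMD ["python3", "app.py"]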
