Hi all.
I’m trying to run the MatchboxNet model provided in the NVIDIA Catalog (NeMo Speech Models | NVIDIA NGC) on the Jetson Nano, but I’m having some issues with the preprocessing step.
The demo (https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/04_Online_Offline_Speech_Commands_Demo.ipynb) uses the preprocessor directly, but I don’t know how to implement this with TensorRT.
The preprocessor uses torchaudio, which I cannot install on the Jetson Nano. Is the preprocessor code available somewhere, so that I can use another library to extract the audio features?
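For reference, what I want to replicate is roughly a magnitude-spectrogram front end. Here is a minimal NumPy-only sketch of the kind of thing I have in mind (my own illustration, not the actual NeMo preprocessor — the parameter values `n_fft=512` and `hop_length=160` are assumptions):

```python
import numpy as np

def magnitude_spectrogram(waveform, n_fft=512, hop_length=160):
    """Framewise magnitude STFT using only NumPy (no torchaudio needed)."""
    # Centre-pad so frame centres line up with torch.stft(center=True)
    pad = n_fft // 2
    x = np.pad(waveform, pad, mode="reflect")
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop_length
    frames = np.stack([x[i * hop_length : i * hop_length + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-redundant half of the spectrum
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (n_fft//2 + 1, n_frames)

# 1 second of fake 16 kHz audio
wave = np.random.randn(16000).astype(np.float32)
spec = magnitude_spectrogram(wave)
print(spec.shape)  # (257, 101)
```

Is something along these lines equivalent to what NeMo’s preprocessor computes, or is its feature extraction more involved than that?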
Thank you!
Best regards.
Hi,
You can install PyTorch following the instructions shared in this topic.
For example,
PyTorch v1.7:
$ wget https://nvidia.box.com/shared/static/cs3xn3td6sfgtene6jdvsxlr366m2dhq.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl
$ sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
$ pip3 install Cython
$ pip3 install numpy torch-1.7.0-cp36-cp36m-linux_aarch64.whl
Torchaudio v0.7.0:
$ git clone --branch release/0.7 https://github.com/pytorch/audio torchaudio
$ cd torchaudio
$ export BUILD_VERSION=0.7.0
$ sudo python3 setup.py install
Thanks.
I tried to do that, but when I ran “sudo python3 setup.py install” I got the error “Illegal instruction”.
I also get that error when importing torch, which was working previously.
Thank you for your quick response!
Hi,
Are you using JetPack 4.4.1?
Please note that the JetPack version and the package have a dependency on CUDA.
So you will need to use a package built against the same environment for compatibility.
You can find the dependency details in topic 72048.
Thanks.
When doing “dpkg-query --show nvidia-l4t-core” I get “nvidia-l4t-core 32.4.3-20200625213809”.
The info of my Jetson Nano is:
NVIDIA Jetson Nano (Developer Kit Version)
L4T 32.4.3 [ JetPack 4.4 ]
Ubuntu 18.04.4 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
CUDA Architecture: 5.3
The PyTorch version I installed was 1.7.0.
I also realized that torch is not the only module giving the “Illegal instruction” error; other modules such as numpy, librosa and onnxruntime fail the same way.
I managed to make it work, but now I’m hitting another problem when trying to use TorchAudio: “RuntimeError: fft: ATen not compiled with MKL support”.
It seems that the MKL library is not supported on ARM architectures.
So, in the end, since MatchboxNet’s audio preprocessing is done with TorchAudio, is it possible to use it on the Jetson Nano?
I don’t know what to do with that MKL error.
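Meanwhile, I’ve been sketching a NumPy replacement for the mel filterbank part, so the whole log-mel front end could run on the CPU without touching torch.stft at all. This is just my own sketch using the HTK mel formula; `n_mels=64` and `n_fft=512` are guesses, not necessarily NeMo’s exact settings:

```python
import numpy as np

def mel_filterbank(sr=16000, n_fft=512, n_mels=64, fmin=0.0, fmax=None):
    """Triangular mel filterbank of shape (n_mels, n_fft//2 + 1), HTK mel scale."""
    fmax = fmax if fmax is not None else sr / 2.0
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # n_mels + 2 equally spaced points on the mel scale -> triangle edges
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, centre, right = bin_pts[m - 1], bin_pts[m], bin_pts[m + 1]
        for k in range(left, centre):           # rising edge of the triangle
            fb[m - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):          # falling edge of the triangle
            fb[m - 1, k] = (right - k) / max(right - centre, 1)
    return fb

fb = mel_filterbank()
print(fb.shape)  # (64, 257)
# Applied to a power spectrogram `power_spec` of shape (257, n_frames):
#   log_mel = np.log(fb @ power_spec + 1e-6)
```

Would feeding features computed like this into the TensorRT engine be a reasonable workaround, or does the deployed model expect the exact torchaudio output?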
Hi,
Sorry, we don’t have much experience with torchaudio.
But based on the discussion below, it seems that torchaudio can work on ARM systems with some manual updates:
GitHub issue (pytorch/audio): opened 17 Jul 2020, closed 30 Jun 2021. Labels: module: build, triaged, module: POWER, module: arm, module: fft, function request.
## 🐛 Bug
`fft: ATen not compiled with MKL support` RuntimeError thrown when trying to compute Spectrogram on Jetson Nano that uses ARM64 processor.
## To Reproduce
Code sample:
```
import torchaudio
waveform, sample_rate = torchaudio.load('test.wav')
spectrogram = torchaudio.transforms.Spectrogram(sample_rate)(waveform)
```
Stack trace:
```
Traceback (most recent call last):
File "spectrogram_test.py", line 4, in <module>
spectrogram = torchaudio.transforms.Spectrogram(sample_rate)(waveform)
File "/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torchaudio-0.7.0a0+102174e-py3.6-linux-aarch64.egg/torchaudio/transforms.py", line 84, in forward
self.win_length, self.power, self.normalized)
File "/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torchaudio-0.7.0a0+102174e-py3.6-linux-aarch64.egg/torchaudio/functional.py", line 162, in spectrogram
waveform, n_fft, hop_length, win_length, window, True, "reflect", False, True
File "/home/witty/ai-benchmark-2/lib/python3.6/site-packages/torch/functional.py", line 465, in stft
return _VF.stft(input, n_fft, hop_length, win_length, window, normalized, onesided)
RuntimeError: fft: ATen not compiled with MKL support
```
## Expected behavior
Spectrogram from waveform created
## Environment
Commands used to install PyTorch:
```
wget https://nvidia.box.com/shared/static/yr6sjswn25z7oankw8zy1roow9cy5ur1.whl -O torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python-pip libopenblas-base libopenmpi-dev
pip install Cython
pip install numpy torch-1.6.0rc2-cp36-cp36m-linux_aarch64.whl
```
Commands used to install torchaudio:
sox:
```
sudo apt-get update -y
sudo apt-get install -y libsox-dev
pip install sox
```
torchaudio:
```
git clone https://github.com/pytorch/audio.git audio
cd audio && python setup.py install
```
`torchaudio.__version__` output:
`0.7.0a0+102174e`
`collect_env.py` output:
```
PyTorch version: 1.6.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_etc.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0
Versions of relevant libraries:
[pip3] numpy==1.16.1
[pip3] pytorch-ignite==0.3.0
[pip3] torch==1.6.0
[pip3] torchaudio==0.7.0a0+102174e
[conda] Could not collect
```
Other relevant information:
MKL is not installed, because it is not supported on ARM processors; oneDNN installed
## Additional context
I did not install MKL because it is not supported on ARM processors, so building PyTorch from source with MKL support is not possible. Is there any workaround to this problem?
cc @malfet @seemethere @walterddr @mruberry @peterbell10 @ezyang
Thanks.