Error when trying to run Voice Demo for Jetson/L4T

I get the following error when I run the Voice Demo for Jetson/L4T, which uses Docker containers for BERT and ASR: Voice Demo for Jetson/L4T | NVIDIA NGC

I followed all of the instructions in order. If I run it without --runtime nvidia (as in the instructions), I get this error:

error running hook: exit status 1, stdout: , stderr: Auto-detected mode as ‘csv’
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime instead.
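(For context, this error typically appears when the container is launched with --gpus but the NVIDIA runtime isn't registered as Docker's default. On JetPack, /etc/docker/daemon.json usually looks like the following; this is a sketch of the common config, assuming nvidia-container-runtime is installed:)

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

With "default-runtime": "nvidia" set and the Docker daemon restarted, the --runtime nvidia flag becomes unnecessary.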

If I include --runtime nvidia:
sudo docker run --runtime nvidia --gpus all -it --rm --shm-size=8g -p 8888:8888 -p 6007:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo_bert_text_classification:20.07

trtserver: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.7: file too short

I am running JetPack 5.1 on a Jetson Xavier NX:
CUDA 11.4
TensorRT 8.5.2
cuDNN 8.6.0
VPI 2.2
Ubuntu 20.04.6 LTS
512 GB NVMe drive

Any help would be appreciated.

Hi @rick.minicucci, that demo dates back to JetPack 4.4, from the time of the Xavier NX release, and its container isn’t compatible with JetPack 5. I also believe that nvcr.io/nvidia/nemo_bert_text_classification:20.07 was built for x86 or SBSA rather than Jetson. Instead, I would recommend checking out the newer tutorials from jetson-ai-lab.com

If you are specifically looking for BERT, I have a Nemo container built for JetPack 5, and have tested that BERT QA works in it:

For Riva ASR/TTS, you can utilize the riva_quickstart_arm64 on NGC:

Thanks for the quick response. I will start on these ASAP

@dusty_nv
I tried the BERT QA, but ultimately I got this error: ValueError: Please use a device with more CPU ram or a smaller dataset
I have an Xavier NX.
The Jetson AI Lab appears to only support Orin
I have a Windows 11 PC with an NVIDIA 4060 Ti and CUDA support, with plenty of horsepower. Is there a version that will run on it?

Hmm okay, my guess is it’s because the Xavier NX has 8GB RAM, but it needs more to run that. You can try mounting additional swap and disabling the desktop UI to save memory, as shown here: https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#mounting-swap
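For anyone following along, the swap setup from that link boils down to a few commands. Here is a sketch that just prints them rather than running them (the 16G size and the /mnt/16GB.swap path are assumptions taken from the linked doc; adjust them for your NVMe mount point):

```shell
#!/bin/sh
# Sketch: print the commands for adding a swap file on a Jetson,
# following the jetson-containers setup doc. Size and path are
# assumptions; adjust for your own drive layout.
SWAPFILE=/mnt/16GB.swap
SIZE=16G
cmds="sudo systemctl disable nvzramconfig
sudo fallocate -l $SIZE $SWAPFILE
sudo mkswap $SWAPFILE
sudo swapon $SWAPFILE"
echo "$cmds"
```

The first line disables the default zram swap (which lives in RAM and doesn't help here); the rest allocate, format, and enable a swap file on disk. To make it persist across reboots, the linked doc also adds an /etc/fstab entry.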

The containers for Jetson AI Lab are built for the ARM64 architecture and JetPack, but if you install WSL2 and Docker on your Windows machine, you should be able to run the original x86 NeMo container that you had tried (although I’m not sure about the VRAM requirements and what your discrete GPU card has)
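In case it helps anyone trying the WSL2 route, the x86 invocation would look roughly like the Jetson one, minus the Jetson-specific --runtime nvidia flag. This sketch just assembles and prints the command (the image tag is the one from earlier in the thread; whether that 20.07 tag is still pullable from NGC is an assumption, not verified):

```shell
#!/bin/sh
# Sketch: assemble the docker run command for the x86 NeMo container
# under WSL2. Ports, ulimits, and shm size mirror the original command
# in this thread; the image tag's availability is an assumption.
IMAGE=nvcr.io/nvidia/nemo_bert_text_classification:20.07
cmd="docker run --gpus all -it --rm --shm-size=8g \
 -p 8888:8888 -p 6007:6006 \
 --ulimit memlock=-1 --ulimit stack=67108864 $IMAGE"
echo "$cmd"
```

This assumes the NVIDIA Container Toolkit is set up inside WSL2 so that --gpus all can reach the discrete GPU.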

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.