NGC RMIRs Error in downloading models

Please provide the following information when requesting support.

Workstation
Hardware - GPU (GeForce RTX 2070)
Hardware - CPU
Operating System - Win 10
Riva Version 2.14.0
TLT Version (if relevant)
How to reproduce the issue ? (This is for errors. Please share the command and the detailed log here)

  • Go to the directory for the Riva server
  • Git Bash Here
  • bash riva_init.sh

Response

$ bash riva_init.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.

Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
the input device is not a TTY.  If you are using mintty, try prefixing the command with 'winpty'
Error in downloading models.

  • Bash riva_start.sh
$ bash riva_start.sh
the input device is not a TTY.  If you are using mintty, try prefixing the command with 'winpty'
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
OCI runtime exec failed: exec failed: unable to start container process: exec: "C:/Program Files/Git/usr/bin/grpc_health_probe": stat C:/Program Files/Git/usr/bin/grpc_health_probe: no such file or directory: unknown
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeats 30 times in total)
Health ready check failed.
Check Riva logs with: docker logs riva-speech

  • I'm thinking that I need the RMIRs and that this might be an underlying cause. So solving the download of the RMIRs and getting riva_init.sh to work is my primary concern.

Running: docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
Produces this:

docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
Unable to find image 'nvidia/cuda:11.8.0-base-ubuntu22.04' locally
11.8.0-base-ubuntu22.04: Pulling from nvidia/cuda
aece8493d397: Pull complete
5e3b7ee77381: Pull complete
5bd037f007fd: Pull complete
4cda774ad2ec: Pull complete
775f22adee62: Pull complete
Digest: sha256:f895871972c1c91eb6a896eee68468f40289395a1e58c492e1be7929d0f8703b
Status: Downloaded newer image for nvidia/cuda:11.8.0-base-ubuntu22.04
Fri Feb  9 09:47:09 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.33.01              Driver Version: 546.29       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2070        On  | 00000000:02:00.0  On |                  N/A |
|  0%   45C    P8              36W / 185W |   1955MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Running: wsl -l -v
Produces this:

wsl -l -v
  NAME                   STATE           VERSION
* docker-desktop-data    Running         2
  Ubuntu                 Stopped         2
  docker-desktop         Running         2

I have added my API key and I'm able to pull other models from NGC using the CLI.
I can confirm and see the Riva volumes, images, and containers in Docker:

  • riva speech
  • cuda

Riva Skills Quick Start install confirmed

ngc registry resource download-version "nvidia/riva/riva_quickstart:2.14.0"
{
    "download_end": "2024-02-09 10:51:23",
    "download_start": "2024-02-09 10:51:20",
    "download_time": "3s",
    "files_downloaded": 26,
    "local_path": "C:\\Windows\\System32\\riva_quickstart_v2.14.0\\riva_quickstart_v2.14.0",
    "size_downloaded": "147.85 KB",
    "status": "COMPLETED"
}

Need to figure out what models this person has downloaded locally

When running:

bash riva_init.sh config.sh

Error in Git Bash:

Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.

Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
the input device is not a TTY.  If you are using mintty, try prefixing the command with 'winpty'
Error in downloading models.

Error in VS Code

<3>WSL (26) ERROR: CreateProcessParseCommon:754: getpwuid(0) failed 2
Processing fstab with mount -a failed.

<3>WSL (26) ERROR: CreateProcessEntryCommon:331: getpwuid(0) failed 2
<3>WSL (26) ERROR: CreateProcessEntryCommon:502: execvpe /bin/bash failed 2
<3>WSL (26) ERROR: CreateProcessEntryCommon:505: Create process not expected to return

Progress. Forgot to set the flag to use local RMIRs.
In Git Bash. VS Code doesn't work for me right now.
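For reference, a sketch of the config.sh settings involved (variable names as they appear in the quickstart's config.sh; double-check against your copy):

```shell
# Relevant config.sh excerpt for reusing local RMIRs.
# (Variable names from the Riva quickstart's config.sh; verify yours.)
riva_model_loc="riva-model-repo"   # Docker volume or absolute path holding rmir/ and models/
use_existing_rmirs=true            # true = skip downloading RMIRs from NGC
```

As I understand the script, with use_existing_rmirs=true riva_init.sh skips the NGC download step and deploys whatever RMIRs already sit under the rmir/ folder of riva_model_loc.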

$ bash riva_init.sh config.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
  > Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.

+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models
the input device is not a TTY.  If you are using mintty, try prefixing the command with 'winpty'
+ '[' 1 -ne 0 ']'
+ echo 'Error in deploying RMIR models.'
Error in deploying RMIR models.
+ exit 1

Progress:

In Git Bash I used this command:

alias docker="winpty docker"
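That alias only lasts for the current session. A sketch for persisting it, assuming Git Bash sources ~/.bashrc (its default behavior):

```shell
# Add the winpty workaround once, idempotently, then reload the shell config.
grep -qxF 'alias docker="winpty docker"' ~/.bashrc 2>/dev/null ||
    echo 'alias docker="winpty docker"' >> ~/.bashrc
source ~/.bashrc
```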

Then I ran this command:

winpty bash riva_init.sh config.sh

That generated this output

$ winpty bash riva_init.sh config.sh
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0. This may take some time...
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker. This may take some time...

+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models

==========================
=== Riva Speech Skills ===
==========================

NVIDIA Release  (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

This takes care of three of my errors:

  • Input device is not a TTY
  • RMIRs not downloading (I can now set it to local)
  • Deploying RMIRs and starting the server

Errors left to solve

  • Health ready check failed. When running riva_start.sh

Installing OpenSSL

  • Not using a third-party (insecure) build, but rather the OpenSSL found in the Git installation
  • Open Git Bash as admin from the start menu and run:
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
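To sanity-check the result, the same openssl binary can print the certificate back. A sketch; the -subj value is a placeholder I added so the command runs without interactive prompts:

```shell
# Generate the self-signed pair non-interactively, then inspect it.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 \
    -out certificate.pem -subj "/CN=localhost"
openssl x509 -in certificate.pem -noout -subject -enddate
```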

I feel like I'm getting closer to a resolution. Left to solve:

  • Server health check fails
  • Error: riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
  • Running winpty docker logs riva-speech doesn't produce any output

This is the output from running: winpty bash riva_start.sh

$ winpty bash riva_start.sh
riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Riva Speech already running. Skipping...
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeats 30 times in total)
Health ready check failed.
Check Riva logs with: docker logs riva-speech

In relation to the error, I ran an echo before the if statement to see what is happening. This is the output:

maj_ver: failed to get console mode for stdout: The handle is invalid
min_ver:  25

So I can clearly get the minor version out, but not the major.
I can see that I am in fact using a Docker version later than 19.03, as specified in the code.
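If one did want to patch that check: docker can report the server version directly, instead of the script scraping `docker version` output that mintty pollutes with the console-mode warning. A sketch, assuming the warning goes to stderr (the function name is mine):

```shell
# Ask the daemon for just the version string (e.g. "25.0.2") and split it
# ourselves; stderr, where the console-mode warning lands, is discarded.
get_docker_server_version() {
    local ver
    ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null) || return 1
    echo "maj_ver: ${ver%%.*}"
    echo "min_ver: $(echo "$ver" | cut -d. -f2)"
}
```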

I don't think this is the error I need to solve. I think it's not generating and deploying the models correctly.

New assumption
When doing riva_init.sh I get this:

==========================
=== Riva Speech Skills ===
==========================

NVIDIA Release  (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

This specifically is interesting

find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory

Not sure how to change that.
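This looks like MSYS path conversion: Git Bash rewrites POSIX-style arguments such as /data/rmir into paths under its own install directory before docker sees them, and the space in "Program Files" then splits the result into two find arguments. A sketch of a workaround using Git for Windows' MSYS_NO_PATHCONV switch (the wrapper function name is mine; the docker command is copied from the riva_init.sh trace above):

```shell
# Hypothetical wrapper: disable MSYS path mangling for this one docker call
# so /data/rmir reaches the container untouched.
run_servicemaker() {
    MSYS_NO_PATHCONV=1 docker run --init -it --rm --gpus '"device=0"' \
        -v 'D:\Nvidia\RivaServer\models:/data' \
        -e MODEL_DEPLOY_KEY=tlt_encode \
        --name riva-service-maker \
        nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker \
        deploy_all_models /data/rmir /data/models
}
```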

Tried running this:

docker volume rm riva-model-repo
and
bash riva_clean.sh

Now the container won't start. I can see it popping up but then disappearing.
I am also missing the Docker volume.
I have my images, though.

  • Restored images and volume
  • Had to re-create the API key
  • Can confirm that both the CLI and Docker are logged in with the API key
  • Still can't see the riva-speech container
  • Back to health check failed and trying to load all models

I'm guessing I will need all of these models locally for ASR to work:

  • en-US, de-DE, ja-JP, zh-CN, es-en-US, zh-en-CN, ja-en-JP
  • Downloaded all models. Health check still fails and the container doesn't show up.

Re-installing WSL Ubuntu 22.04

I'm at a loss here.

  • Toolkit 2.14.0 installed
  • CUDA drivers and cuDNN installed
  • Riva ASR and more models downloaded
  • config.sh modified to point to local models
  • grpc (ssh and protobuf) installed
  • Docker installed
  • Container visible (riva-speech:2.14.0)
  • Images visible (both 2.14.0 and 2.14.0-servicemaker)
  • WSL installed with Ubuntu and Ubuntu 22.04
  • Kubernetes installed

Error in container:

2024-02-13 10:17:33 ==========================
2024-02-13 10:17:33 === Riva Speech Skills ===
2024-02-13 10:17:33 ==========================
2024-02-13 10:17:33 
2024-02-13 10:17:33 NVIDIA Release 23.12 (build 77214108)
2024-02-13 10:17:33 
2024-02-13 10:17:33 Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2024-02-13 10:17:33 
2024-02-13 10:17:33 Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
2024-02-13 10:17:33 
2024-02-13 10:17:33 This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2024-02-13 10:17:33 By pulling and using the container, you accept the terms and conditions of this license:
2024-02-13 10:17:33 https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2024-02-13 10:17:33 
2024-02-13 10:17:33 WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
2024-02-13 10:17:33    Use the NVIDIA Container Toolkit to start this container with GPU support; see
2024-02-13 10:17:33    https://docs.nvidia.com/datacenter/cloud-native/ .
2024-02-13 10:17:33

When running riva_init.sh config.sh

winpty bash riva_init.sh config.sh
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0. This may take some time...
  > Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker. This may take some time...

+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models

==========================
=== Riva Speech Skills ===
==========================

NVIDIA Release  (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo

+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.

When running riva_start.sh

winpty bash riva_start.sh
maj_ver: failed to get console mode for stdout: The handle is invalid
min_ver:  25
riva_start.sh: line 20: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_start.sh: line 20: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Riva Speech already running. Skipping...
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeats 30 times in total)
Health ready check failed.
Check Riva logs with: docker logs riva-speech

When running docker logs:

$ docker logs riva-speech
Error response from daemon: No such container: riva-speech
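`docker logs riva-speech` fails because no container with that exact name exists at that moment. A couple of hypothetical helper functions for poking at whatever does exist (all flags are standard docker CLI):

```shell
# List every container whose name mentions riva, including exited ones,
# with its status and image.
list_riva_containers() {
    docker ps -a --filter "name=riva" \
        --format '{{.Names}}\t{{.Status}}\t{{.Image}}'
}

# Tail the logs of the first matching riva container, whatever it is named.
riva_tail_logs() {
    docker logs --tail 50 "$(docker ps -aq --filter name=riva | head -n 1)"
}
```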

Running docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi:

docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
Tue Feb 13 09:35:30 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.33.01              Driver Version: 546.29       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2070        On  | 00000000:02:00.0  On |                  N/A |
|  0%   50C    P8              35W / 185W |   2181MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        20      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        26      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

Running wsl -l -v:

wsl -l -v
  NAME                   STATE           VERSION
* docker-desktop-data    Running         2
  docker-desktop         Running         2
  Ubuntu                 Running         2
  Ubuntu-22.04           Stopped         2

Is it the TensorRT installation that fails?
Is it also that it cannot find the RMIRs because of this error?

find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory

Somehow I have a Riva client up and running...
It popped up AFTER I closed down Docker and restarted it... both times it was opened with admin rights.

Getting this error when running commands in the Riva container.

I can confirm that gRPC is installed.

I have specified the SSL server, key, and client certs in the config.sh file.
The client is running and I can access it.
The server won't run.

I would like to enroll in this course, but given the problems and errors I'm having setting this up, I'm not sure that I would be able to complete it, since I can't run the required software.

https://courses.nvidia.com/courses/course-v1:DLI+S-FX-04+V2/

Complete dead end for me.
I'm lacking a logical flow to follow. I feel like I'm either lacking some knowledge or insights, because I keep ending up in the same place with seemingly no progress.

I am able to meet the requirements (confirmed and working)

Except for this one:

  • Triton Inference Server 2.27.0

Still getting:

  • gRPC error, not yet solved

Triton Inference Server up and running in Docker:

curl -v localhost:8000/v2/health/ready
....
< HTTP/1.1 200 OK
< Content-Length: 0
< Content-Type: text/plain
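The manual curl probe above can be scripted into the same kind of retry loop that riva_start.sh runs. A sketch (wait_ready is my name; port 8000 and the endpoint path are taken from the curl check above):

```shell
# Poll the readiness endpoint until it answers 200 or we give up.
wait_ready() {
    local tries=${1:-30}
    local i
    for ((i = 0; i < tries; i++)); do
        if curl -sf localhost:8000/v2/health/ready >/dev/null 2>&1; then
            echo "server ready"
            return 0
        fi
        echo "Waiting for server...retrying in 10 seconds"
        sleep 10
    done
    echo "health ready check failed" >&2
    return 1
}
```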

Riva client container up and running.
Still getting the gRPC error.

I think I figured out what the error was, or at least how to solve it.

Make sure you have at least 100+ GB free on your HDD.
Create an NGC account and generate an API key on the NGC site.

Install prerequisites (check the Support Matrix)

ngc registry resource download-version nvidia/riva/riva_quickstart:2.14.0
  • Visual Studio Code with dev docker plugin
  • Set Nvidia NGC API-key environment variable
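For the API-key step, a sketch (NGC_CLI_API_KEY is the variable the NGC CLI reads; the login helper function and the placeholder value are my additions):

```shell
# Placeholder value: paste your real key from the NGC site.
export NGC_CLI_API_KEY="<your-ngc-api-key>"

# NGC's docker registry uses the literal username $oauthtoken with the
# API key as the password.
ngc_docker_login() {
    printf '%s' "$NGC_CLI_API_KEY" |
        docker login nvcr.io --username '$oauthtoken' --password-stdin
}
```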

In Visual Studio Code

  • Open the riva_quickstart_v2.14.0 folder
  • Run in the terminal: bash riva_init.sh or bash riva_init.sh config.sh
  • Run in the terminal: bash riva_start.sh
  • Run in the terminal: bash riva_start_client.sh
  • Run in the riva-speech docker container (Quick Start Guide reference):

Transcribe

riva_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav
riva_streaming_asr_client --audio_file=/opt/riva/wav/en-US_sample.wav

Synthesize

riva_tts_client --voice_name=English-US.Female-1 \
                --text="Hello, this is a speech synthesizer." \
                --audio_file=/opt/riva/wav/output.wav

T2T

riva_nmt_t2t_client --source_language_code="en-US" --target_language_code="de-DE" --text="This will become German words."

S2T

riva_nmt_streaming_s2t_client --audio_file=/opt/riva/wav/en-US_sample.wav --source_language_code="en-US" --target_language_code="de-DE"

S2S

riva_nmt_streaming_s2s_client --audio_file=/opt/riva/wav/es-US_sample.wav --source_language_code="es-US" --target_language_code="en-US"

Not needed

  • Triton Inference Server 2.27.0

Errors and solutions:

  • I think the gRPC error was because I was using Git Bash to launch riva init, start and start_client.
    Solved: by running the start server in VS Code.
  • The script searching in git/data was probably because I was using Git Bash: the install lives in Program Files, so it didn't find "Program" as a dir, nor "Files", and it failed to locate the RMIRs.
    Solved: by running through VS Code.
  • Still missing some RMIRs, but I think that's just a matter of downloading them.
    Possible solve: edit config.sh to point to the local dir where I have all of the models. Will test soon.

From here I think I will start looking into the tutorials here: How do I use Riva ASR APIs with out-of-the-box models? - NVIDIA Riva
