Please provide the following information when requesting support.
Workstation
Hardware - GPU (GeForce RTX 2070)
Hardware - CPU
Operating System - Win 10
Riva Version 2.14.0
TLT Version (if relevant)
How to reproduce the issue? (This is for errors. Please share the command and the detailed log here.)
Go to the directory for the Riva server
Git Bash Here
bash riva_init.sh
Response
$ bash riva_init.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.
Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
Error in downloading models.
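As it turns out further down in these notes, the "input device is not a TTY" part can be worked around by prefixing the scripts with winpty, exactly as the message suggests:
winpty bash riva_init.sh
winpty bash riva_start.sh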
bash riva_start.sh
$ bash riva_start.sh
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
Starting Riva Speech Services. This may take several minutes depending on the number of models deployed.
OCI runtime exec failed: exec failed: unable to start container process: exec: "C:/Program Files/Git/usr/bin/grpc_health_probe": stat C:/Program Files/Git/usr/bin/grpc_health_probe: no such file or directory: unknown
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeated ~30 times)
Health ready check failed.
Check Riva logs with: docker logs riva-speech
I'm thinking that I need the RMIRs and that this might be an underlying cause, so solving the RMIR download and getting riva_init.sh to work is my primary concern.
I need to figure out what models have been downloaded locally.
When running:
bash riva_init.sh config.sh
Error in Git Bash:
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.
Downloading models (RMIRs) from NGC...
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing RMIRs set the location and corresponding flag in config.sh.
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
Error in downloading models.
Error in VS Code
<3>WSL (26) ERROR: CreateProcessParseCommon:754: getpwuid(0) failed 2
Processing fstab with mount -a failed.
<3>WSL (26) ERROR: CreateProcessEntryCommon:331: getpwuid(0) failed 2
<3>WSL (26) ERROR: CreateProcessEntryCommon:502: execvpe /bin/bash failed 2
<3>WSL (26) ERROR: CreateProcessEntryCommon:505: Create process not expected to return
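These WSL errors suggest the distro VS Code attaches to is not starting cleanly. A common first step (an assumption on my part, I have not verified it fixes this particular case) is to restart the WSL VM and reopen the terminal:
wsl --shutdown
wsl -l -v   # then check the distro states before reopening VS Code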
Progress. I forgot to set the flag to use local RMIRs.
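For reference, what I changed in config.sh is roughly the following (flag names as they appear in my 2.14.0 quickstart copy — verify against your own config.sh):
riva_model_loc="D:\Nvidia\RivaServer\models"   # local location for the RMIRs and the deployed model repository
use_existing_rmirs=true                        # skip the NGC RMIR download and use what is already under $riva_model_loc/rmir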
In Git Bash (VS Code doesn't work for me right now):
$ bash riva_init.sh config.sh
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0 exists. Skipping.
> Image nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker exists. Skipping.
+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
+ '[' 1 -ne 0 ']'
+ echo 'Error in deploying RMIR models.'
Error in deploying RMIR models.
+ exit 1
$ winpty bash riva_init.sh config.sh
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
> Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0. This may take some time...
> Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker. This may take some time...
+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models
==========================
=== Riva Speech Skills ===
==========================
NVIDIA Release (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
https://developer.nvidia.com/tensorrt
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.
find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo
+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.
This takes care of three of my errors:
- Input device is not a TTY
- RMIRs not downloading (I can now set it to local)
- Deploying RMIRs and starting the server
Errors left to solve:
- Health ready check failed when running riva_start.sh
Installing OpenSSL
Not using a third-party (insecure) build, but rather the OpenSSL bundled with the Git installation.
Open Git Bash as admin from the Start menu and run:
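The command itself did not make it into my notes here; as a minimal sanity check (my assumption, not the exact command I ran), Git Bash should already see its bundled OpenSSL:
which openssl     # resolves inside the Git installation, e.g. /usr/bin/openssl
openssl version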
I feel like I'm getting closer to a resolution. Left to solve:
- Server health check fails
- Error: riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
- Running winpty docker logs riva-speech doesn't produce any output
This is the output from running: winpty bash riva_start.sh
$ winpty bash riva_start.sh
riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_start.sh: line 18: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Riva Speech already running. Skipping...
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeated ~30 times)
Health ready check failed.
Check Riva logs with: docker logs riva-speech
In relation to the error, I ran an echo before the if statement to see what is happening. This is the output:
maj_ver: failed to get console mode for stdout: The handle is invalid
min_ver: 25
So I can clearly get the minor version out, but not the major.
I can see that I am in fact using a Docker version later than 19.03, as the check in the code requires.
I don't think this is the error I need to solve. I think it's not generating and deploying the models correctly.
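For reference, the debug lines I added before the if statement were simply something like (placement depends on your copy of riva_start.sh):
echo "maj_ver: $maj_ver"
echo "min_ver: $min_ver"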
New assumption
When doing riva_init.sh I get this:
==========================
=== Riva Speech Skills ===
==========================
NVIDIA Release (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
https://developer.nvidia.com/tensorrt
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.
find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo
+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.
This specifically is interesting
find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
Images visible (both 2.14.0 and 2.14.0-servicemaker)
WSL installed with Ubuntu and Ubuntu 22.04
Kubernetes installed
Error in container:
2024-02-13 10:17:33 ==========================
2024-02-13 10:17:33 === Riva Speech Skills ===
2024-02-13 10:17:33 ==========================
2024-02-13 10:17:33
2024-02-13 10:17:33 NVIDIA Release 23.12 (build 77214108)
2024-02-13 10:17:33
2024-02-13 10:17:33 Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2024-02-13 10:17:33
2024-02-13 10:17:33 Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2024-02-13 10:17:33
2024-02-13 10:17:33 This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2024-02-13 10:17:33 By pulling and using the container, you accept the terms and conditions of this license:
2024-02-13 10:17:33 https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2024-02-13 10:17:33
2024-02-13 10:17:33 WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
2024-02-13 10:17:33 Use the NVIDIA Container Toolkit to start this container with GPU support; see
2024-02-13 10:17:33 https://docs.nvidia.com/datacenter/cloud-native/ .
2024-02-13 10:17:33
When running riva_init.sh config.sh
winpty bash riva_init.sh config.sh
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_init.sh: line 57: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Logging into NGC docker registry if necessary...
Pulling required docker images if necessary...
Note: This may take some time, depending on the speed of your Internet connection.
> Pulling Riva Speech Server images.
> Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0. This may take some time...
> Pulling nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker. This may take some time...
+ [[ non-tegra != \t\e\g\r\a ]]
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo 'Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.'
Converting RMIRs at D:\Nvidia\RivaServer\models/rmir to Riva Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v 'D:\Nvidia\RivaServer\models:/data' -e MODEL_DEPLOY_KEY=tlt_encode --name riva-service-maker nvcr.io/nvidia/riva/riva-speech:2.14.0-servicemaker deploy_all_models /data/rmir /data/models
==========================
=== Riva Speech Skills ===
==========================
NVIDIA Release (build 77214116)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
https://developer.nvidia.com/tensorrt
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.
find: 'C:/Program': No such file or directory
find: 'Files/Git/data/rmir': No such file or directory
+ '[' 0 -ne 0 ']'
+ [[ non-tegra == \t\e\g\r\a ]]
+ echo
+ echo 'Riva initialization complete. Run ./riva_start.sh to launch services.'
Riva initialization complete. Run ./riva_start.sh to launch services.
When running riva_start.sh
winpty bash riva_start.sh
maj_ver: failed to get console mode for stdout: The handle is invalid
min_ver: 25
riva_start.sh: line 20: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
riva_start.sh: line 20: [: failed to get console mode for stdout: The handle is invalid: integer expression expected
Riva Speech already running. Skipping...
Waiting for Riva server to load all models...retrying in 10 seconds
(the line above repeated ~30 times)
Health ready check failed.
Check Riva logs with: docker logs riva-speech
When running docker logs:
$ docker logs riva-speech
Error response from daemon: No such container: riva-speech
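Since the container never stays up, it has probably exited (or was started under a different name) before I could query it. Listing all containers, including stopped ones, shows what is actually there:
docker ps -a                    # includes exited containers and their names
docker logs <container-name>    # then pull the logs using the name or ID shown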
Running docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi:
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
Tue Feb 13 09:35:30 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.33.01 Driver Version: 546.29 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2070 On | 00000000:02:00.0 On | N/A |
| 0% 50C P8 35W / 185W | 2181MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 20 G /Xwayland N/A |
| 0 N/A N/A 26 G /Xwayland N/A |
+---------------------------------------------------------------------------------------+
Running wsl -l -v:
wsl -l -v
NAME STATE VERSION
* docker-desktop-data Running 2
docker-desktop Running 2
Ubuntu Running 2
Ubuntu-22.04 Stopped 2
I would like to enroll in this course, but given the problems and errors I'm having setting this up, I'm not sure that I would be able to complete it, since I wouldn't be able to run the required software.
Complete dead end for me.
I'm lacking a logical flow to follow. I feel like I'm missing some knowledge or insight, because I keep ending up in the same place with seemingly no progress.
I am able to meet the requirements (confirmed and working).
I think the gRPC error was because I was using Git Bash to launch riva_init, riva_start and riva_start_client. Solved by running the start server in VS Code.
The fact that the script was searching under Git/data was probably because I was using Git Bash: it rewrote the path to sit under its own install location, and since that is C:/Program Files, the space split the path so neither 'Program' nor 'Files' was found as a directory, and it failed to locate the RMIRs. Solved by running through VS Code.
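If I did want to stay in Git Bash, a workaround I have seen for exactly this path mangling (an assumption, I have not verified it here) is to disable MSYS path conversion so arguments like /data/rmir and /bin/grpc_health_probe are passed to docker untouched:
export MSYS_NO_PATHCONV=1   # Git for Windows: stop rewriting POSIX-style paths into C:/Program Files/Git/...
winpty bash riva_start.sh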
Still missing some of the RMIRs needed, but I think that's just a matter of downloading them. Possible solve: edit config.sh to point to the local directory where I have all of the models. Will test soon.
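Based on the deploy command in the logs (-v D:\Nvidia\RivaServer\models:/data ... deploy_all_models /data/rmir /data/models), the local directory layout it expects is roughly:
D:\Nvidia\RivaServer\models\rmir     <- put the downloaded *.rmir files here (mounted as /data/rmir)
D:\Nvidia\RivaServer\models\models   <- deploy_all_models writes the deployed model repository here (/data/models)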