Hi NVIDIA! I have a question.
I have cloned and built the project, and it's running beautifully, but now I want to integrate ASR and TTS.
I am attempting to follow Sven Chilton's blog post, Enabling Voice Interaction with a RAG Pipeline Using NVIDIA Riva NIM Microservices, but when I attempt to run the following in WSL:
```bash
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export CONTAINER_NAME=parakeet-ctc-1.1b-asr

docker run -it --rm --name=$CONTAINER_NAME \
  --runtime=nvidia \
  --gpus '"device=0"' \
  --shm-size=8GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_MANIFEST_PROFILE=9136dd64-4777-11ef-9f27-37cfd56fa6ee \
  -e NIM_HTTP_API_PORT=9000 \
  -e NIM_GRPC_API_PORT=50051 \
  -p 9000:9000 \
  -p 50051:50051 \
  nvcr.io/nim/nvidia/parakeet-ctc-1.1b-asr:1.0.0
```
I am met with:

```
docker: Error response from daemon: unknown or invalid runtime name: nvidia.
```
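From what I can tell, this error means the Docker daemon inside WSL has no runtime named `nvidia` registered. My working assumption (untested, based on the NVIDIA Container Toolkit docs) is that the toolkit needs to be installed in the WSL distro and then registered with Docker, roughly like this:

```bash
# Assumes the NVIDIA Container Toolkit is already installed in the WSL distro;
# this registers its runtime with the Docker daemon and restarts it.
sudo nvidia-ctk runtime configure --runtime=docker
sudo service docker restart   # or: sudo systemctl restart docker, if systemd is enabled
```

I have also read that with Docker Desktop's WSL2 backend it may be enough to drop the `--runtime=nvidia` flag and keep only `--gpus`, but I have not been able to confirm that.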
So instead I am attempting to add the API reference code directly into the hybrid-RAG project, but I am unsure precisely where or how to do this.
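For context, this is roughly the shape of what I imagine wiring in, based on the nvidia-riva-client Python examples; the endpoints, sample rate, voice name, and file name are placeholders I have not verified against the NIM containers:

```python
import riva.client

# Hypothetical local NIM endpoint; the gRPC port matches the docker run command above.
auth = riva.client.Auth(uri="localhost:50051")

# ASR: transcribe a recorded user question before handing it to the RAG chain.
asr = riva.client.ASRService(auth)
asr_config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,            # assumed recording rate
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)
with open("question.wav", "rb") as f:   # placeholder file name
    audio_bytes = f.read()
response = asr.offline_recognize(audio_bytes, asr_config)
transcript = response.results[0].alternatives[0].transcript

# TTS: speak the RAG answer back. The TTS NIM would run in its own
# container on its own port; the port and voice name here are guesses.
tts_auth = riva.client.Auth(uri="localhost:50052")
tts = riva.client.SpeechSynthesisService(tts_auth)
tts_response = tts.synthesize(
    "answer text from the RAG pipeline",
    voice_name="English-US.Female-1",   # assumed voice
    language_code="en-US",
    sample_rate_hz=44100,
)
audio_out = tts_response.audio  # raw PCM bytes to play back or feed to A2F
```

Is something along these lines the right approach, and if so, where in the hybrid-RAG code should the ASR call sit relative to the retrieval step?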
My goal is simply to integrate ASR and TTS into the hybrid-RAG project. Ultimately I intend to connect my Metahuman and NVIDIA A2F to create a 'meta-actualized' AI agent, not unlike the digital humans project, that anyone can interface with on my website.
Basically, you go to my website and you can talk with my all-knowing Metahuman.
Thank you, and I apologize for my imprecision; I am a longtime 3D artist and not yet fluent in the languages you developers speak.