I fine-tune custom models with NVIDIA NeMo, but the checkpoints are saved in .ckpt format.
Is it possible to deploy a .ckpt model with Jarvis?
Hi @An-BN
Please refer to below link
https://docs.nvidia.com/deeplearning/jarvis/user-guide/docs/model-overview.html#model-development-with-nemo
Thanks
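As the linked page describes, a Lightning .ckpt is not deployable directly; it first has to be exported to a .nemo archive with NeMo's own API. A minimal sketch, assuming the NeMo container and a classification model class (the image tag, paths, and model class are placeholders; use the class you actually trained with):

```shell
# Hypothetical sketch: export a PyTorch Lightning .ckpt to a .nemo archive
# inside the NeMo container. The image tag, checkpoint path, and model
# class are assumptions, not the exact values from the docs.
docker run --rm --gpus all -v "$(pwd)":/workspace nvcr.io/nvidia/nemo:1.0.0b1 \
  python -c "from nemo.collections.asr.models import EncDecClassificationModel; \
m = EncDecClassificationModel.load_from_checkpoint('/workspace/model.ckpt'); \
m.save_to('/workspace/model.nemo')"
```

The resulting .nemo file is what the conversion step described in the linked docs consumes.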
Hi @SunilJB ,
I see that the TLT launcher can be used to convert a .nemo model to .ejrvs format for building and deploying with Jarvis ServiceMaker.
The problem is that my GPU server is in an offline environment, and I still cannot install the TLT launcher. Any idea or solution for installing the TLT launcher in an offline environment, or another way to use a .nemo model with Jarvis?
Thank you
You can use the script at the bottom of the page:
https://docs.nvidia.com/deeplearning/jarvis/user-guide/docs/model-overview.html#nemo-to-jarvis
That’s how I deploy my NeMo models with Jarvis currently.
Hi @P.Biese ,
I am working on a VAD model that uses MarbleNet, which is based on QuartzNet. I tried this script and it works. You saved my life, thank you.
One more thing: is there any reference to help me deploy my VAD model and build a client for it?
Thank you!
In the Jarvis build documentation, can you explain how I can use .enemo and ONNX models? What are <artifact_dir> and <jarvis_repo_dir>?
Many thanks!
<artifact_dir> is the folder containing your .enemo file. <jarvis_repo_dir> is where the Jarvis model will be generated. In general, wherever the docs say .ejrvs or ejrvs-filename, it should also work with .enemo.
Example for <artifact_dir> argument:
/home/ubuntu/jarvis/enemo:/opt/jarvis/servicemaker
Example for <jarvis_repo_dir> argument:
/home/ubuntu/jarvis/models:/opt/jarvis/models
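Concretely, those two arguments become Docker volume mounts when you run the ServiceMaker container. A sketch of how the pieces fit together, with heavy caveats: the image tag, file names, output extension, and jarvis-build argument order below are my assumptions, so check them against the ServiceMaker docs for your release:

```shell
# Hypothetical sketch: mount <artifact_dir> and <jarvis_repo_dir> into the
# ServiceMaker container, then run jarvis-build inside it. Image tag,
# file names, and the output extension are assumptions.
docker run --gpus all -it --rm \
  -v /home/ubuntu/jarvis/enemo:/opt/jarvis/servicemaker \
  -v /home/ubuntu/jarvis/models:/opt/jarvis/models \
  nvcr.io/nvidia/jarvis/jarvis-speech:servicemaker bash

# Inside the container: build the pipeline from the mounted .enemo file
# into the mounted model repository.
jarvis-build speech_recognition \
  /opt/jarvis/models/my_pipeline.jmir \
  /opt/jarvis/servicemaker/my_model.enemo
```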
Hi @P.Biese, @SunilJB,
In Jarvis build,
I don’t see any pipeline for Voice Activity Detection (VAD).
Does Jarvis have a VAD model integrated?
I am using MarbleNet, which is based on QuartzNet, so I tried the speech_recognition pipeline,
but got an error because the model’s config params do not have a vocabulary key.
This is the error log:
Traceback (most recent call last):
File "/opt/conda/bin/jarvis-build", line 8, in <module>
sys.exit(build())
File "/opt/conda/lib/python3.8/site-packages/servicemaker/cli/build.py", line 96, in build
pipeline_config.init_from(nm.get_config())
File "/opt/conda/lib/python3.8/site-packages/servicemaker/pipelines/asr.py", line 210, in init_from
nemo_vocabulary = dparams['vocabulary']
KeyError: 'vocabulary'
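For what it's worth, the traceback itself shows why this pipeline can't accept a VAD model: the failing line indexes the decoder params with dparams['vocabulary'], and a classification config like MarbleNet's carries class labels rather than a vocabulary. A minimal standalone sketch of the same failure (the config dicts here are illustrative, not the real model configs):

```python
# Illustrative configs: an ASR decoder carries a character vocabulary,
# while a VAD (classification) head carries class labels instead.
asr_params = {"vocabulary": [" ", "a", "b", "c"]}
vad_params = {"labels": ["background", "speech"]}

def init_asr_pipeline(dparams):
    # Mirrors the failing line in servicemaker/pipelines/asr.py:
    # plain indexing raises KeyError when the key is absent.
    return dparams["vocabulary"]

init_asr_pipeline(asr_params)  # works for an ASR config
try:
    init_asr_pipeline(vad_params)
except KeyError as e:
    print(f"KeyError: {e}")  # same failure mode as the traceback above
```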
Thank you!
Hi @An-BN
As per my understanding, the current beta release doesn’t support Voice Activity Detection (VAD).
Thanks