Listing all models available for use and specifying one in the RecognitionConfig

First of all, great framework. Thank you Nvidia team!
I trained a custom model using NeMo on my own dataset and was able to successfully build and deploy it to a local Jarvis server with Docker.

However, when I run the example transcribe_file.py client, I cannot specify the model name in the RecognitionConfig.

I keep getting the following error:

grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "Error: Model QN15x5-TL-0.1 is not available on server"
	debug_error_string = "{"created":"@1625238474.234893929","description":"Error received from peer ipv6:[::1]:50051","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"Error: Model QN15x5-TL-0.1 is not available on server","grpc_status":3}"
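
For reference, here is roughly how I am setting the model field, adapted from transcribe_file.py. This is a minimal sketch: the stub and field names follow the jarvis_api proto bindings shipped with my client version, and the audio file and language code are just placeholders from my setup.

import grpc
import jarvis_api.audio_pb2 as ja
import jarvis_api.jarvis_asr_pb2 as jasr
import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv

# Connect to the local Jarvis server (default gRPC port from the quickstart).
channel = grpc.insecure_channel("localhost:50051")
client = jasr_srv.JarvisASRStub(channel)

# Read the raw audio payload (placeholder file name).
with open("sample.wav", "rb") as fh:
    audio_bytes = fh.read()

config = jasr.RecognitionConfig(
    encoding=ja.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",        # placeholder; my model is Arabic
    max_alternatives=1,
    model="QN15x5-TL-0.1",        # the --name value; this triggers INVALID_ARGUMENT
)

response = client.Recognize(jasr.RecognizeRequest(config=config, audio=audio_bytes))
print(response.results)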

My model is built using the following script:

jarvis-build speech_recognition /servicemaker-dev/QuartzNet15x5-TlEa_IU-SpecCutAug-last.jmir \
/servicemaker-dev/QuartzNet15x5-TlEa_IU-SpecCutAug-last.enemo \
--chunk_size=0.8 \
--padding_factor=4 \
--padding_size=1.6 \
--ms_per_timestep=80 \
--lm_decoder_cpu.asr_model_delay=-1 \
--featurizer.use_utterance_norm_params=False \
--featurizer.precalc_norm_time_steps=0 \
--featurizer.precalc_norm_params=False \
--lm_decoder_cpu.decoder_type=greedy \
--acoustic_model_name=qn15x5-arabic \
--name=QN15x5-TL-0.1

I tried the name I set with --acoustic_model_name, the one set with --name, the file name (QuartzNet15x5-TlEa_IU-SpecCutAug-last), and the name of the model directory in /data/models/.

Nothing works. Is there more documentation on how to properly use the model field in the RecognitionConfig and the extra parameters?
And is there a way to list all model names available to the client?

It looks like I had to do two things:

  1. Make sure to pass /data/models/ as the output directory to jarvis-deploy with the -f flag, since other models were already deployed there (see the command after this list).
  2. Restart the Jarvis server.
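
For reference, the deploy invocation that worked for me looked roughly like this (the jmir path matches the build command above; -f forces overwriting any existing model of the same name in the output directory):

jarvis-deploy -f /servicemaker-dev/QuartzNet15x5-TlEa_IU-SpecCutAug-last.jmir /data/models/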

Then I could select the model by the name set with the --name argument.
It would be helpful to make this more explicit in the docs.

One question still remains: listing all available models.
Is it possible to do so via some API/gRPC call?

Hi @pineapple9011
There’s no such provision yet. For now, you need access to the server and its logs to check which models are deployed.
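
If it helps, one way to check from the host is to grep the server logs for the models Triton reports as loaded. This assumes the quickstart container name jarvis-speech; adjust it to match your deployment:

docker logs jarvis-speech 2>&1 | grep -i "successfully loaded"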

Thanks