Unable to deploy Riva model trained with TAO 4.0.0

Please provide the following information when requesting support.

Hardware - GPU = Titan RTX
Hardware - CPU
Operating System = Ubuntu
Riva Version = 2.8.1
TLT Version (if relevant) = 4.0.0
How to reproduce the issue? (This is for errors. Please share the command and the detailed log here.)

I had been successfully training and deploying TAO models with Riva until I upgraded to TAO 4.0.0. My previous models had to be deployed with FP32 to work, and my aim in upgrading is to try FP16.

I am using a `tao speech_to_text_conformer export` command to create a `.riva` file, then using the `riva-speech:2.8.1-servicemaker` container to run a `riva-build speech_recognition` command that builds the RMIR file with flashlight decoding and a language model. Previously I included `--nn.use_trt_fp32` in this command, but I have now removed it, as I understand that bug was fixed in the latest TAO release. The `riva-build` command completes successfully.
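For reference, the export and build steps look roughly like this. This is a sketch, not my exact invocation: the paths, model names, encryption key, and language-model file are placeholders, and the flags for your setup may differ.

```shell
# Inside the TAO container: export the trained checkpoint to a .riva file.
# (-e experiment spec, -m trained checkpoint, -k encryption key; all paths
# are placeholders.)
tao speech_to_text_conformer export \
    -e /specs/export_spec.yaml \
    -m /results/checkpoints/trained-model.tlt \
    -k $KEY \
    export_format=RIVA

# Inside the riva-speech:2.8.1-servicemaker container: build the RMIR with
# flashlight decoding and a language model. --nn.use_trt_fp32 is now omitted
# so the default FP16 TensorRT engine is built.
riva-build speech_recognition \
    /servicemaker-dev/conformer-fp16.rmir:$KEY \
    /servicemaker-dev/trained-model.riva:$KEY \
    --name=conformer-en-US-asr \
    --decoder_type=flashlight \
    --decoding_language_model_binary=/servicemaker-dev/lm.binary
```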

However, when I run the quickstart script with my config.sh (`bash riva_init.sh config.sh`), it fails with the following error:

2023-01-31 21:04:57,842 [ERROR] Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/cli/deploy.py", line 91, in deploy_from_rmir
    generator = get_generator(pipeline_config, args)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/triton.py", line 455, in get_generator
    generator = gen_class(pipeline_config)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/asr.py", line 1022, in __init__
    super().__init__(self, model_config, f"{model_config.name}", step_types)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/triton.py", line 440, in __init__
    self._nodes[step] = gen(self, config.pipeline_configs[cfg], pipeline_step=step)
KeyError: 'endpointing'

+ '[' 1 -ne 0 ']'
+ echo 'Error in deploying RMIR models.'
Error in deploying RMIR models.
+ exit 1
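For completeness, the ASR-related portion of my quickstart config.sh is essentially the defaults. A rough excerpt follows; the variable names are those in the standard quickstart script, and the model location is a placeholder.

```shell
# config.sh excerpt: enable only the ASR service and point the quickstart
# at the directory holding the locally built RMIR.
service_enabled_asr=true
service_enabled_nlp=false
service_enabled_tts=false

# Docker volume / directory where RMIRs and generated model repos live.
riva_model_loc="riva-model-repo"

# false so that riva_init.sh (re)deploys models from the RMIRs.
use_existing_rmirs=false
```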

Any advice on successfully compressing with TensorRT and deploying my new FP16 model would be appreciated.

Hi @david.kaleko

Thanks for your interest in Riva

Can you provide the following details:

  1. Does the model run inference successfully with `tao speech_to_text_conformer infer`?
  2. The complete `riva-build` command used.