Deploying Whisper model

Hardware - GPU: RTX 2080 Ti
Hardware - CPU: E5-2697 v2 x2
Operating System: Ubuntu 24.10
Riva Version: 2.17.0

In the Riva documentation (ASR Overview — NVIDIA Riva), I saw that it is possible to use the Whisper model provided in .riva format (Multilingual Whisper Large-v3 | NVIDIA NGC). However, I do not fully understand how to convert this model to a .rmir file, since the additional files it requires are listed in the table as n/a. Are there any instructions for this particular model? I would be glad to receive any help with this.

With the command riva-build speech_recognition output_path /data/whisper.rmir source_path /servicemaker_dev/whisper.riva --decoder_type pass_through, I get the following error:

[TensorRT-LLM] TensorRT-LLM version: 0.10.0
Traceback (most recent call last):
  File "/usr/local/bin/riva-build", line 8, in <module>
    sys.exit(build())
  File "/usr/local/lib/python3.10/dist-packages/servicemaker/cli/build.py", line 94, in build
    raise Exception("bad input model")
Exception: bad input model
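For what it's worth, the "bad input model" error may come from the command line itself: riva-build takes the output .rmir path and the source .riva path as positional arguments, so the literal words output_path and source_path in the command above would themselves be parsed as model paths. A hedged sketch of what the corrected invocation might look like (the paths, model name, and the optional :key suffix are placeholders; check the Riva Skills documentation for any Whisper-specific flags):

```
# Run inside the riva-speech servicemaker container.
# Output .rmir and input .riva are positional arguments, not
# "output_path"/"source_path" keywords. Append ":<key>" only if the
# .riva archive was exported with an encryption key.
riva-build speech_recognition \
    /data/whisper.rmir \
    /servicemaker_dev/whisper.riva \
    --name=whisper-multilingual \
    --decoder_type=pass_through
```

If the archive is encrypted, both paths would carry the same key suffix, e.g. /data/whisper.rmir:tlt_encode.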

Were you able to move past this? What is your current Whisper setup? I'd like to help.

No, I have not been able to solve this problem.

Please use the correct build command from here, or deploy directly from the config.sh file.