• Hardware (T4)
• Network Type (speech_to_text)
• TLT Version (tao: 3.21.08 | docker_tag: v3.21.08-py3)
I am following this to build and deploy Jasper with a KenLM model, but I cannot find the configuration to use for best latency and best throughput, as is documented for Citrinet. Also, for Citrinet, what does the `--vocab_filename` parameter imply? Training a KenLM model on a custom corpus generates only the kenlm_model binary; no vocab_file is produced. Please help here. Thanks!
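As a workaround I tried building a vocabulary file from the same corpus I trained KenLM on (simple whitespace tokenization, one token per line), but I am not sure whether this is the format `--vocab_filename` expects. The sketch below shows what I mean; the file layout is my assumption, not something from the docs:

```python
# Hypothetical workaround: derive a vocab file from the KenLM training corpus.
# Assumes one sentence per line and whitespace tokenization; whether
# --vocab_filename expects this exact format is an open question.
from collections import Counter


def build_vocab(corpus_path: str, vocab_path: str) -> int:
    """Write unique corpus tokens to vocab_path, most frequent first.

    Returns the number of unique tokens.
    """
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    with open(vocab_path, "w", encoding="utf-8") as f:
        for token, _ in counts.most_common():
            f.write(token + "\n")
    return len(counts)


if __name__ == "__main__":
    import os
    import tempfile

    # Tiny demo corpus to show the output shape.
    with tempfile.TemporaryDirectory() as d:
        corpus = os.path.join(d, "corpus.txt")
        vocab = os.path.join(d, "vocab.txt")
        with open(corpus, "w", encoding="utf-8") as f:
            f.write("hello world\nhello speech\n")
        print(build_vocab(corpus, vocab))  # → 3 unique tokens
```

If this is roughly right, is sorting by frequency required, or is any ordering acceptable?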