Riva build error -> EOFError

Please provide the following information when requesting support.

Hardware - GPU (A100/A30/T4/V100)
Hardware - CPU
Operating System - Linux
Riva Version 2.12.1
TLT Version (if relevant)
How to reproduce the issue ? (This is for errors. Please share the command and the detailed log here)
Complete Error

==========================
=== Riva Speech Skills ===

NVIDIA Release (build 64517161)
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b
See https://github.com/NVIDIA/TensorRT for more information.

Traceback (most recent call last):
  File "/usr/local/bin/riva-build", line 8, in <module>
    sys.exit(build())
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/cli/build.py", line 96, in build
    nm = model(input_filename, pipeline_config=pipeline_config, encryption_key=input_key)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/readers/tlt.py", line 76, in __init__
    read_eff_manifest(self, self.source_file)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/readers/tlt.py", line 65, in read_eff_manifest
    manifest = Archive.restore_manifest(restore_path=source_file)
  File "", line 657, in restore_manifest
  File "", line 768, in extract
  File "/usr/lib/python3.8/tarfile.py", line 2060, in extract
    tarinfo = self.getmember(member)
  File "/usr/lib/python3.8/tarfile.py", line 1780, in getmember
    tarinfo = self._getmember(name)
  File "/usr/lib/python3.8/tarfile.py", line 2356, in _getmember
    members = self.getmembers()
  File "/usr/lib/python3.8/tarfile.py", line 1791, in getmembers
    self._load()  # all members, we first have to
  File "/usr/lib/python3.8/tarfile.py", line 2379, in _load
    tarinfo = self.next()
  File "/usr/lib/python3.8/tarfile.py", line 2310, in next
    self.fileobj.seek(self.offset - 1)
  File "/usr/lib/python3.8/gzip.py", line 384, in seek
    return self._buffer.seek(offset, whence)
  File "/usr/lib/python3.8/_compression.py", line 143, in seek
    data = self.read(min(io.DEFAULT_BUFFER_SIZE, offset))
  File "/usr/lib/python3.8/gzip.py", line 498, in read
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
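For context, the final EOFError is Python's standard signal that a gzip stream was cut off mid-file. A minimal sketch (with made-up file names, not the actual Riva archive) that reproduces the same error by truncating a tar.gz archive, the way an interrupted download would:

```python
import os
import tarfile
import tempfile


def truncation_error(num_bytes: int = 100_000) -> str:
    """Build a tar.gz archive, truncate it, and return the error message
    raised when tarfile tries to read it to the end."""
    with tempfile.TemporaryDirectory() as d:
        payload = os.path.join(d, "model.bin")
        with open(payload, "wb") as f:
            f.write(os.urandom(num_bytes))  # incompressible dummy "model"

        archive = os.path.join(d, "model.riva")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(payload, arcname="model.bin")

        # Drop the second half of the file, as a partial download would.
        data = open(archive, "rb").read()
        with open(archive, "wb") as f:
            f.write(data[: len(data) // 2])

        try:
            with tarfile.open(archive, "r:gz") as tar:
                tar.getmembers()  # forces a read through the whole archive
        except EOFError as e:
            return str(e)
        return ""


print(truncation_error())
```

The message returned is exactly the one in the traceback above, which is why a truncated or corrupted `.riva` download is the usual cause.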

Command for RIVA Build
!sudo docker run --rm --gpus 0 -v /home/pragya_chaturvedi_quantiphi_com/riva_quickstart_v2.12.1/speechtotext_en_us_lm_vdeployable_v4.1:/servicemaker-dev $RIVA_SM_CONTAINER -- \
riva-build speech_recognition -f \
/servicemaker-dev/rmir/$rmir_name \
/servicemaker-dev/$ACOUSTIC_MODEL_NAME \
--name=$name \
--chunk_size=$chunk_size \
--left_padding_size=1.92 \
--right_padding_size=1.92 \
--ms_per_timestep=$ms_per_timestep \
--max_batch_size=$max_batch_size \
--nn.fp16_needs_obey_precision_pass \
--endpointing.residue_blanks_at_start=-2 \
--decoder_type=$decoder_type \
--flashlight_decoder.asr_model_delay=$asr_model_delay \
--decoding_language_model_binary=/servicemaker-dev/$LANGUAGE_MODEL_NAME \
--decoding_lexicon=/servicemaker-dev/$DECODING_LEXICON \
--flashlight_decoder.lm_weight=0.8 \
--flashlight_decoder.word_insertion_score=1.0 \
--flashlight_decoder.beam_size=32 \
--flashlight_decoder.beam_threshold=20. \
--flashlight_decoder.num_tokenization=1 \
--language_code=$LANGUAGE_CODE \
--featurizer.use_utterance_norm_params=False \
--featurizer.precalc_norm_time_steps=0 \
--featurizer.precalc_norm_params=False \
--endpointing.start_history=$start_history \
--endpointing.stop_history=$stop_history \
--endpointing.start_th=$start_th \
--endpointing.stop_th=$stop_th \
--wfst_tokenizer_model=/servicemaker-dev/$WFST_TOKENIZER \
--wfst_verbalizer_model=/servicemaker-dev/$WFST_VERBALIZER \
--speech_hints_model=/servicemaker-dev/$speech_hints_model

Config
[riva_config]
language_code=en-US
riva_sm_container=nvcr.io/nvidia/riva/riva-speech:2.12.1-servicemaker
model_loc=/home/pragya_chaturvedi_quantiphi_com/riva_quickstart_v2.12.1/speechtotext_en_us_lm_vdeployable_v4.1
acoustic_model_name=stt_en_conformer_ctc_large.riva
language_model=4gram-pruned-0_2_7_9-en-lm-set-1.0.bin
decoding_lexicon=lexicon.txt
wfst_tokenizer=tokenize_and_classify.far
wfst_verbalizer=verbalize.far
speech_hints_model=speech_class.far

[riva_parameters]
chunk_size=0.16
padding_size=1.92
left_padding_size=1.6
right_padding_size=1.6
ms_per_timestep=80
max_batch_size=16
lattice_beam=false
max_sequence_idle_microseconds=false
start_history=200
stop_history=500
start_th=0.2
stop_th=0.98

[decoder_config]
decoder_type=flashlight
flashlight_decoder.beam_size=20
flashlight_decoder.beam_size_token=25
flashlight_decoder.asr_model_delay=-1
flashlight_decoder.lm_weight=0.2
flashlight_decoder.word_insertion_score=0.2
flashlight_decoder.beam_threshold=20.

Hi @pragya.chaturvedi

Thanks for your interest in Riva.

This error indicates that the model file was not downloaded completely, or that the file is corrupted: the gzip stream ends before riva-build can read the whole archive.

Please download the model file again from the NGC repository and run the setup again from scratch.
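Before re-running the setup, you can verify that the new download is complete. Judging by the tarfile/gzip frames in the traceback, the `.riva` file is read as a gzip-compressed archive, so reading the stream through to its end detects truncation. A minimal sketch (the function name and example path are illustrative, not part of Riva):

```python
import gzip
import zlib


def gzip_stream_complete(path: str) -> bool:
    """Return True if the gzip stream in `path` can be read to its
    end-of-stream marker, i.e. the file was not truncated."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):  # read through in 1 MiB chunks
                pass
        return True
    except (EOFError, zlib.error, gzip.BadGzipFile):
        return False


# e.g. gzip_stream_complete("stt_en_conformer_ctc_large.riva")
```

If this returns False, the file on disk is truncated or damaged and must be re-downloaded before riva-build can succeed.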

Thanks