Jarvis Health Ready check failed - Need Help

Hi,

I have looked at an earlier topic on the same issue, but I could not find any clue on how to solve the problem. I hope you can give me some suggestions.

  1. My system
    Laptop with a dGPU: NVIDIA GeForce RTX 2080 with Max-Q Design (8 GB), per the nvidia-smi output below
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 465.27       Driver Version: 465.27       CUDA Version: 11.3     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
    | N/A   43C    P8     5W /  N/A |    180MiB /  7982MiB |      4%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A      1434      G   /usr/lib/xorg/Xorg                108MiB |
    |    0   N/A  N/A      1686      G   /usr/bin/gnome-shell               27MiB |
    |    0   N/A  N/A      2297      G   ...AAAAAAAAA= --shared-files       41MiB |
    +-----------------------------------------------------------------------------+
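If a more compact summary is easier to read, here is one way to capture the same GPU state (these are standard nvidia-smi query options):

    # one-line CSV summary of the GPU, driver, and memory headroom
    nvidia-smi --query-gpu=name,driver_version,memory.total,memory.used --format=csv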

  2. I ran bash jarvis_clean.sh first, then bash jarvis_init.sh. Initialization seemed to succeed, but bash jarvis_start.sh gives me the error below. I did not change config.sh. (The exact command sequence is sketched below.)
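For reference, a minimal sketch of the exact sequence I ran from the quickstart root (all with the default config.sh; sudo matches how I invoked the scripts):

    # clean any previous deployment, re-initialize, then start
    cd ~/jarvis_quickstart_v1.2.1-beta
    sudo bash jarvis_clean.sh    # removes the previous deployment (containers/volumes)
    sudo bash jarvis_init.sh     # pulls images, downloads JMIRs, builds TRT engines
    sudo bash jarvis_start.sh    # starts the server; this is the step that fails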

Please give me suggestions on how to solve this problem.

On-screen log:

jiande@jiande-zephyrus-s-gx531gxr-lapto:~/jarvis_quickstart_v1.2.1-beta$ sudo bash jarvis_init.sh
Logging into NGC docker registry if necessary…
Pulling required docker images if necessary…
Note: This may take some time, depending on the speed of your Internet connection.

Pulling Jarvis Speech Server images.
Image nvcr.io/nvidia/jarvis/jarvis-speech:1.2.1-beta-server exists. Skipping.
Image nvcr.io/nvidia/jarvis/jarvis-speech-client:1.2.1-beta exists. Skipping.
Image nvcr.io/nvidia/jarvis/jarvis-speech:1.2.1-beta-servicemaker exists. Skipping.

Downloading models (JMIRs) from NGC…
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing JMIRs set the location and corresponding flag in config.sh.

==========================

== Jarvis Speech Skills ==

NVIDIA Release devel (build 22382700)

Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for the inference server. NVIDIA recommends the use of the following flags:
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 …

/data/artifacts /opt/jarvis

Downloading nvidia/jarvis/jmir_punctuation:1.2.0-beta…
Downloaded 418.11 MB in 1m 40s, Download speed: 4.17 MB/s

Transfer id: jmir_punctuation_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_punctuation_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 418.11 MB
Started at: 2021-06-26 22:26:31.005329
Completed at: 2021-06-26 22:28:11.154480
Duration taken: 1m 40s

Downloading nvidia/jarvis/jmir_jarvis_asr_citrinet_1024_asrset1p7_streaming:1.2.0-beta…
Downloaded 579.01 MB in 1m 54s, Download speed: 5.07 MB/s

Transfer id: jmir_jarvis_asr_citrinet_1024_asrset1p7_streaming_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_jarvis_asr_citrinet_1024_asrset1p7_streaming_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 579.01 MB
Started at: 2021-06-26 22:28:16.062877
Completed at: 2021-06-26 22:30:10.219833
Duration taken: 1m 54s

Downloading nvidia/jarvis/jmir_jarvis_asr_citrinet_1024_asrset1p7_offline:1.2.0-beta…
Downloaded 579.01 MB in 2m 11s, Download speed: 4.41 MB/s

Transfer id: jmir_jarvis_asr_citrinet_1024_asrset1p7_offline_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_jarvis_asr_citrinet_1024_asrset1p7_offline_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 579.01 MB
Started at: 2021-06-26 22:30:15.367606
Completed at: 2021-06-26 22:32:26.549186
Duration taken: 2m 11s

Directory jmir_punctuation_v1.2.0-beta already exists, skipping. Use '--force' option to override.

Downloading nvidia/jarvis/jmir_named_entity_recognition:1.2.0-beta…
Downloaded 420.38 MB in 1m 26s, Download speed: 4.88 MB/s

Transfer id: jmir_named_entity_recognition_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_named_entity_recognition_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 420.38 MB
Started at: 2021-06-26 22:32:31.587646
Completed at: 2021-06-26 22:33:57.712751
Duration taken: 1m 26s

Downloading nvidia/jarvis/jmir_intent_slot:1.2.0-beta…
Downloaded 422.71 MB in 1m 26s, Download speed: 4.91 MB/s

Transfer id: jmir_intent_slot_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_intent_slot_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 422.71 MB
Started at: 2021-06-26 22:34:02.789730
Completed at: 2021-06-26 22:35:28.918386
Duration taken: 1m 26s

Downloading nvidia/jarvis/jmir_question_answering:1.2.0-beta…
Downloaded 418.06 MB in 1m 27s, Download speed: 4.8 MB/s

Transfer id: jmir_question_answering_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_question_answering_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 418.06 MB
Started at: 2021-06-26 22:35:43.580179
Completed at: 2021-06-26 22:37:10.717473
Duration taken: 1m 27s

Downloading nvidia/jarvis/jmir_text_classification:1.2.0-beta…
Downloaded 420.27 MB in 1m 28s, Download speed: 4.77 MB/s

Transfer id: jmir_text_classification_v1.2.0-beta Download status: Completed.

Downloaded local path: /data/artifacts/jmir_text_classification_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 420.27 MB
Started at: 2021-06-26 22:37:15.584963
Completed at: 2021-06-26 22:38:43.720101
Duration taken: 1m 28s

Downloading nvidia/jarvis/jmir_jarvis_tts_ljspeech:1.2.0-beta…
Downloaded 527.36 MB in 1m 47s, Download speed: 4.92 MB/s

Transfer id: jmir_jarvis_tts_ljspeech_v1.2.0-beta Download status: Completed.
Downloaded local path: /data/artifacts/jmir_jarvis_tts_ljspeech_v1.2.0-beta
Total files downloaded: 1
Total downloaded size: 527.36 MB
Started at: 2021-06-26 22:38:49.001186
Completed at: 2021-06-26 22:40:36.160131
Duration taken: 1m 47s
/opt/jarvis

Converting JMIRs at jarvis-model-repo/jmir to Jarvis Model repository.

==========================

== Jarvis Speech Skills ==

NVIDIA Release devel (build 22382700)

Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for the inference server. NVIDIA recommends the use of the following flags:
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 …
2021-06-26 22:40:40,698 [INFO] Writing Jarvis model repository to ‘/data/models’…
2021-06-26 22:40:40,698 [INFO] The jarvis model repo target directory is /data/models
2021-06-26 22:40:41,791 [INFO] Extract_binaries for tokenizer → /data/models/jarvis_qa_preprocessor/1
2021-06-26 22:40:41,793 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_qa-nn-bert-base-uncased/1
2021-06-26 22:40:45,587 [INFO] Building TRT engine from PyTorch Checkpoint
2021-06-26 22:42:34,543 [INFO] QA dimensions:(-1, 384, 2, 1, 1)
2021-06-26 22:42:34,544 [INFO] Extract_binaries for token_classifier → /data/models/jarvis_qa_postprocessor/1
2021-06-26 22:42:34,545 [INFO] Extract_binaries for self → /data/models/jarvis_qa/1
2021-06-26 22:42:36,456 [INFO] Extract_binaries for featurizer → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-feature-extractor-streaming-offline/1
2021-06-26 22:42:36,458 [INFO] Extract_binaries for nn → /data/models/jarvis-trt-citrinet-1024/1
2021-06-26 22:42:42,541 [INFO] Building TRT engine from ONNX file
[libprotobuf WARNING /workspace/TensorRT/t/oss-cicd/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING /workspace/TensorRT/t/oss-cicd/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 564429124
[TensorRT] WARNING: /workspace/TensorRT/t/oss-cicd/oss/parsers/onnx/onnx2trt_utils.cpp:227: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2021-06-26 22:50:15,914 [INFO] Extract_binaries for vad → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-voice-activity-detector-ctc-streaming-offline/1
2021-06-26 22:50:15,915 [INFO] Extract_binaries for lm_decoder → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1
2021-06-26 22:50:15,950 [INFO] {‘vocab_file’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/vocab.txt’, ‘decoding_language_model_binary’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/jarvis_asr_train_datasets_noSpgi_noLS_gt_3gram.binary’, ‘decoding_vocab’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/dict_vocab.txt’, ‘tokenizer_model’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/tokenizer.model’}
2021-06-26 22:50:15,950 [INFO] Model config has vocab file and tokenizer specified. Will create lexicon file from vocab_file /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/dict_vocab.txt and tokenizer model /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline-ctc-decoder-cpu-streaming-offline/1/tokenizer.model
2021-06-26 22:50:16,091 [INFO] processed 10000 lines
2021-06-26 22:50:16,229 [INFO] processed 20000 lines
2021-06-26 22:50:16,370 [INFO] processed 30000 lines
2021-06-26 22:50:16,511 [INFO] processed 40000 lines
2021-06-26 22:50:16,651 [INFO] processed 50000 lines
2021-06-26 22:50:16,791 [INFO] processed 60000 lines
2021-06-26 22:50:16,931 [INFO] processed 70000 lines
2021-06-26 22:50:17,073 [INFO] processed 80000 lines
2021-06-26 22:50:17,215 [INFO] processed 90000 lines
2021-06-26 22:50:17,358 [INFO] processed 100000 lines
2021-06-26 22:50:17,499 [INFO] processed 110000 lines
2021-06-26 22:50:17,640 [INFO] processed 120000 lines
2021-06-26 22:50:17,783 [INFO] processed 130000 lines
2021-06-26 22:50:17,925 [INFO] processed 140000 lines
2021-06-26 22:50:18,069 [INFO] processed 150000 lines
2021-06-26 22:50:18,212 [INFO] processed 160000 lines
2021-06-26 22:50:18,354 [INFO] processed 170000 lines
2021-06-26 22:50:18,495 [INFO] processed 180000 lines
2021-06-26 22:50:18,637 [INFO] processed 190000 lines
2021-06-26 22:50:18,779 [INFO] processed 200000 lines
2021-06-26 22:50:18,922 [INFO] processed 210000 lines
2021-06-26 22:50:19,064 [INFO] processed 220000 lines
2021-06-26 22:50:19,207 [INFO] processed 230000 lines
2021-06-26 22:50:19,243 [INFO] skipped 0 empty lines
2021-06-26 22:50:19,246 [INFO] Extract_binaries for self → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-offline/1
2021-06-26 22:50:20,467 [INFO] Extract_binaries for tokenizer → /data/models/jarvis_tokenizer/1
2021-06-26 22:50:20,469 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_punctuation-nn-bert-base-uncased/1
2021-06-26 22:50:24,333 [INFO] Building TRT engine from PyTorch Checkpoint
2021-06-26 22:51:31,439 [INFO] Capit dimensions:2
2021-06-26 22:51:31,439 [INFO] Punct dimensions:4
2021-06-26 22:51:31,440 [INFO] Extract_binaries for label_tokens_punct → /data/models/jarvis_punctuation_label_tokens_punct/1
2021-06-26 22:51:31,441 [INFO] Extract_binaries for label_tokens_cap → /data/models/jarvis_punctuation_label_tokens_cap/1
2021-06-26 22:51:31,444 [INFO] Extract_binaries for self → /data/models/jarvis_punctuation/1
2021-06-26 22:51:35,732 [INFO] Extract_binaries for preprocessor → /data/models/tts_preprocessor/1
2021-06-26 22:51:35,734 [INFO] Extract_binaries for encoder → /data/models/jarvis-trt-tacotron2_encoder/1
Available devices:
Device: 0 : ‘NVIDIA GeForce RTX 2080 with Max-Q Design’, 46 SMs, support Co-op Launch ← [ ACTIVE ]
Conv selected stable alg 104
Conv selected stable alg 104
Conv selected stable alg 104
Selected stable alg 0
FC selected stable alg 35
2021-06-26 22:53:02,485 [INFO] Extract_binaries for decoder → /data/models/tacotron2_decoder_postnet/1
Available devices:
Device: 0 : ‘NVIDIA GeForce RTX 2080 with Max-Q Design’, 46 SMs, support Co-op Launch ← [ ACTIVE ]
Conv selected stable alg 104
Conv selected stable alg 104
Conv selected stable alg 104
Selected stable alg 0
FC selected stable alg 35
2021-06-26 22:55:01,064 [INFO] Extract_binaries for waveglow → /data/models/jarvis-trt-waveglow/1
Available devices:
Device: 0 : ‘NVIDIA GeForce RTX 2080 with Max-Q Design’, 46 SMs, support Co-op Launch ← [ ACTIVE ]
Tensor ‘spect’ ={1 1 80 80 }
Tensor ‘spect’ ={8 1 80 80 }
Tensor ‘z’ ={1 8 2656 1 }
Tensor ‘z’ ={8 8 2656 1 }
2021-06-26 23:02:32,244 [INFO] Extract_binaries for denoiser → /data/models/waveglow_denoiser/1
Available devices:
Device: 0 : ‘NVIDIA GeForce RTX 2080 with Max-Q Design’, 46 SMs, support Co-op Launch ← [ ACTIVE ]
2021-06-26 23:03:03,809 [INFO] Extract_binaries for self → /data/models/tacotron2_ensemble/1
2021-06-26 23:03:05,807 [INFO] Extract_binaries for featurizer → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-feature-extractor-streaming/1
2021-06-26 23:03:05,809 [WARNING] /data/models/jarvis-trt-citrinet-1024 already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis-trt-citrinet-1024
2021-06-26 23:03:05,809 [INFO] Extract_binaries for vad → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-voice-activity-detector-ctc-streaming/1
2021-06-26 23:03:05,811 [INFO] Extract_binaries for lm_decoder → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1
2021-06-26 23:03:05,846 [INFO] {‘vocab_file’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/vocab.txt’, ‘decoding_language_model_binary’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/jarvis_asr_train_datasets_noSpgi_noLS_gt_3gram.binary’, ‘decoding_vocab’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/dict_vocab.txt’, ‘tokenizer_model’: ‘/data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/tokenizer.model’}
2021-06-26 23:03:05,846 [INFO] Model config has vocab file and tokenizer specified. Will create lexicon file from vocab_file /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/dict_vocab.txt and tokenizer model /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming-ctc-decoder-cpu-streaming/1/tokenizer.model
2021-06-26 23:03:05,976 [INFO] processed 10000 lines
2021-06-26 23:03:06,109 [INFO] processed 20000 lines
2021-06-26 23:03:06,243 [INFO] processed 30000 lines
2021-06-26 23:03:06,379 [INFO] processed 40000 lines
2021-06-26 23:03:06,514 [INFO] processed 50000 lines
2021-06-26 23:03:06,649 [INFO] processed 60000 lines
2021-06-26 23:03:06,783 [INFO] processed 70000 lines
2021-06-26 23:03:06,919 [INFO] processed 80000 lines
2021-06-26 23:03:07,056 [INFO] processed 90000 lines
2021-06-26 23:03:07,194 [INFO] processed 100000 lines
2021-06-26 23:03:07,330 [INFO] processed 110000 lines
2021-06-26 23:03:07,467 [INFO] processed 120000 lines
2021-06-26 23:03:07,605 [INFO] processed 130000 lines
2021-06-26 23:03:07,743 [INFO] processed 140000 lines
2021-06-26 23:03:07,880 [INFO] processed 150000 lines
2021-06-26 23:03:08,019 [INFO] processed 160000 lines
2021-06-26 23:03:08,157 [INFO] processed 170000 lines
2021-06-26 23:03:08,292 [INFO] processed 180000 lines
2021-06-26 23:03:08,430 [INFO] processed 190000 lines
2021-06-26 23:03:08,568 [INFO] processed 200000 lines
2021-06-26 23:03:08,705 [INFO] processed 210000 lines
2021-06-26 23:03:08,842 [INFO] processed 220000 lines
2021-06-26 23:03:08,979 [INFO] processed 230000 lines
2021-06-26 23:03:09,015 [INFO] skipped 0 empty lines
2021-06-26 23:03:09,018 [INFO] Extract_binaries for self → /data/models/citrinet-1024-asr-trt-ensemble-vad-streaming/1
2021-06-26 23:03:10,268 [WARNING] /data/models/jarvis_tokenizer already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis_tokenizer
2021-06-26 23:03:10,268 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_text_classification_domain-nn-bert-base-uncased/1
2021-06-26 23:03:15,141 [INFO] Building TRT engine from PyTorch Checkpoint
2021-06-26 23:04:23,900 [INFO] Text Classification classes:4
2021-06-26 23:04:23,900 [INFO] Extract_binaries for self → /data/models/jarvis_text_classification_domain/1
2021-06-26 23:04:25,181 [WARNING] /data/models/jarvis_tokenizer already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis_tokenizer
2021-06-26 23:04:25,181 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_ner-nn-bert-base-uncased/1
2021-06-26 23:04:30,137 [INFO] Building TRT engine from PyTorch Checkpoint
2021-06-26 23:05:38,116 [INFO] NER classes: 13
2021-06-26 23:05:38,117 [INFO] Extract_binaries for label_tokens → /data/models/jarvis_ner_label_tokens/1
2021-06-26 23:05:38,118 [WARNING] /data/models/jarvis_detokenize already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis_detokenize
2021-06-26 23:05:38,118 [INFO] Extract_binaries for self → /data/models/jarvis_ner/1
2021-06-26 23:05:39,335 [WARNING] /data/models/jarvis_tokenizer already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis_tokenizer
2021-06-26 23:05:39,335 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_intent_weather-nn-bert-base-uncased/1
2021-06-26 23:05:44,116 [INFO] Building TRT engine from PyTorch Checkpoint
2021-06-26 23:06:56,706 [INFO] Intent classes: 18
2021-06-26 23:06:56,706 [INFO] Entity classes: 31
2021-06-26 23:06:56,708 [INFO] Extract_binaries for label_tokens → /data/models/jarvis_label_tokens_weather/1
2021-06-26 23:06:56,712 [WARNING] /data/models/jarvis_detokenize already exists, skipping deployment. To force deployment rerun with -f or remove the /data/models/jarvis_detokenize
2021-06-26 23:06:56,712 [INFO] Extract_binaries for self → /data/models/jarvis_intent_weather/1

[INFO] filtered 0 lines
Jarvis initialization complete. Run ./jarvis_start.sh to launch services.
jiande@jiande-zephyrus-s-gx531gxr-lapto:~/jarvis_quickstart_v1.2.1-beta$ ls
asr_lm_tools                         jarvis_start_client.sh
config.sh                            jarvis_start.sh
examples                             jarvis_stop.sh
jarvis_api-1.2.1b0-py3-none-any.whl  nb_demo_speech_api.ipynb
jarvis_clean.sh                      protos
jarvis_init.sh
jiande@jiande-zephyrus-s-gx531gxr-lapto:~/jarvis_quickstart_v1.2.1-beta$ sudo ./jarvis_start.sh
[sudo] password for jiande:
Starting Jarvis Speech Services. This may take several minutes depending on the number of models deployed.
Waiting for Jarvis server to load all models…retrying in 10 seconds
(the line above repeated 30 times in total)
Health ready check failed.
Check Jarvis logs with: docker logs jarvis-speech
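Following that hint, these are the commands I can run to collect more diagnostics (the container name jarvis-speech comes from the message above; the log file name is just my choice):

    # save the full server log so I can attach it here
    docker logs jarvis-speech > jarvis-speech.log 2>&1
    # check whether the jarvis-speech container is still running or has exited
    docker ps -a --filter name=jarvis-speech
    # watch GPU memory while the server tries to load the models
    watch -n 5 nvidia-smi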
