Output of nvidia-smi:
Thu Jul  8 12:02:16 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04    Driver Version: 460.27.04    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060    On   | 00000000:01:00.0 Off |                  N/A |
| N/A   70C    P0    64W /  N/A |    753MiB /  5934MiB |     42%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1196      G   /usr/lib/xorg/Xorg                416MiB |
|    0   N/A  N/A      2276      G   /usr/bin/gnome-shell              198MiB |
|    0   N/A  N/A      8689      G   …/debug.log --shared-files         32MiB |
|    0   N/A  N/A     20080      G   …_18131.log --shared-files          2MiB |
|    0   N/A  N/A     24661      G   glmark2                             4MiB |
|    0   N/A  N/A     30884      G   …AAAAAAAAA= --shared-files         61MiB |
|    0   N/A  N/A     30964      G   …AAAAAAAAA= --shared-files         34MiB |
+-----------------------------------------------------------------------------+
I have been trying to set up Jarvis, but while running jarvis_init.sh I am getting a conversion error. The full output of the script is below.
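For reference, this is how I am invoking the script; the quick-start directory name is my assumption based on the 1.1.0-beta images shown in the log, so adjust it if yours differs:

    cd jarvis_quickstart_v1.1.0-beta   # quick-start folder downloaded from NGC (directory name assumed)
    bash jarvis_init.sh                # this is the step that fails during the JMIR conversion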
Logging into NGC docker registry if necessary…
Pulling required docker images if necessary…
Note: This may take some time, depending on the speed of your Internet connection.
Pulling Jarvis Speech Server images.
Image nvcr.io/nvidia/jarvis/jarvis-speech:1.1.0-beta-server exists. Skipping.
Image nvcr.io/nvidia/jarvis/jarvis-speech-client:1.1.0-beta exists. Skipping.
Image nvcr.io/nvidia/jarvis/jarvis-speech:1.1.0-beta-servicemaker exists. Skipping.
Downloading models (JMIRs) from NGC…
Note: this may take some time, depending on the speed of your Internet connection.
To skip this process and use existing JMIRs set the location and corresponding flag in config.sh.
==========================
== Jarvis Speech Skills ==
NVIDIA Release (build 21060478)
Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for the inference server. NVIDIA recommends the use of the following flags:
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 …
/data/artifacts /opt/jarvis
Downloading nvidia/jarvis/jmir_punctuation:1.0.0-b.1…
Downloaded 418.11 MB in 4m 11s, Download speed: 1.66 MB/s
Transfer id: jmir_punctuation_v1.0.0-b.1 Download status: Completed.
Downloaded local path: /data/artifacts/jmir_punctuation_v1.0.0-b.1
Total files downloaded: 1
Total downloaded size: 418.11 MB
Started at: 2021-07-08 06:18:40.041490
Completed at: 2021-07-08 06:22:51.382292
Duration taken: 4m 11s
/opt/jarvis
Converting JMIRs at jarvis-model-repo/jmir to Jarvis Model repository.
+ docker run --init -it --rm --gpus '"device=0"' -v jarvis-model-repo:/data -e MODEL_DEPLOY_KEY=tlt_encode --name jarvis-service-maker nvcr.io/nvidia/jarvis/jarvis-speech:1.1.0-beta-servicemaker deploy_all_models /data/jmir /data/models
==========================
== Jarvis Speech Skills ==
NVIDIA Release (build 21060478)
Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for the inference server. NVIDIA recommends the use of the following flags:
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 …
2021-07-08 06:22:56,101 [INFO] Writing Jarvis model repository to '/data/models'…
2021-07-08 06:22:56,101 [INFO] The jarvis model repo target directory is /data/models
2021-07-08 06:22:57,073 [INFO] Extract_binaries for tokenizer → /data/models/jarvis_tokenizer/1
2021-07-08 06:22:58,068 [INFO] Extract_binaries for language_model → /data/models/jarvis-trt-jarvis_punctuation-nn-bert-base-uncased/1
2021-07-08 06:23:01,484 [INFO] Building TRT engine from PyTorch Checkpoint
[TensorRT] ERROR: …/rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
[TensorRT] ERROR: …/rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/export_bert_pytorch_to_trt.py", line 1200, in <module>
    pytorch_to_trt()
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/export_bert_pytorch_to_trt.py", line 1159, in pytorch_to_trt
    return convert_pytorch_to_trt(
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/export_bert_pytorch_to_trt.py", line 963, in convert_pytorch_to_trt
    with build_engine(
AttributeError: __enter__
2021-07-08 06:23:15,549 [ERROR] Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/cli/deploy.py", line 88, in deploy_from_jmir
    generator.serialize_to_disk(
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/triton.py", line 341, in serialize_to_disk
    module.serialize_to_disk(repo_dir, jmir, config_only, verbose, overwrite)
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/triton.py", line 232, in serialize_to_disk
    self.update_binary(version_dir, jmir, verbose)
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/triton.py", line 489, in update_binary
    bindings = self.build_trt_engine_from_pytorch_bert(
  File "/opt/conda/lib/python3.8/site-packages/servicemaker/triton/triton.py", line 455, in build_trt_engine_from_pytorch_bert
    raise Exception("convert_pytorch_to_trt failed.")
Exception: convert_pytorch_to_trt failed.
+ echo
+ echo 'Jarvis initialization complete. Run ./jarvis_start.sh to launch services.'
Jarvis initialization complete. Run ./jarvis_start.sh to launch services.
Please help me solve this issue.
Also, I would like to know whether Jarvis is compatible with my system.
Thank you.
Gentle reminder: I am still looking forward to a solution!