RTX A6000 - Crashing on Training of Textual Inversion

Hello, I am trying to run textual inversion training for a Stable Diffusion model locally in the Visions of Chaos program. The problem is probably a bug that appears specifically with the RTX A6000 card. Can someone advise me whether this bug can be fixed or worked around in some way? Thank you very much for your help, Tomas Krejzek.

report:

Textual Inversion Training
GitHub - rinongal/textual_inversion
Executing command line;
python --version&"c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\scripts\activate.bat"&python -B -W ignore "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py" --base configs/stable-diffusion/v1-finetune_lowmemory.yaml -t --no-test --actual_resume "models\ldm\stable-diffusion-v1\sd-v1-4-full-ema.ckpt" --gpus 0, --data_root "F:\!VOC\Train\Silvialow" --logdir "F:\!VOC\Train" --init_word "figure" --name "Krejzkova"&"c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\scripts\deactivate.bat"

Python 3.10.6
Using device: cuda:0
_CudaDeviceProperties(name='NVIDIA RTX A6000', major=8, minor=6, total_memory=49139MB, multi_processor_count=84)
Global seed set to 23
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.la


Global seed set to 23
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
[W C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\socket.cpp:558] [c10d] The client socket has failed to connect to [vfx001.global.upp.cz]:56301 (system error: 10049 - The requested address is not valid in its context.).
[W C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\socket.cpp:558] [c10d] The client socket has failed to connect to [vfx001.global.upp.cz]:56301 (system error: 10049 - The requested address is not valid in its context.).

distributed_backend=gloo
All distributed processes registered. Starting with 1 processes

c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\core\datamodule.py:469: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

| Name | Type | Params

0 | model | DiffusionWrapper | 859 M
1 | first_stage_model | AutoencoderKL | 83.7 M
2 | cond_stage_model | FrozenCLIPEmbedder | 123 M
3 | embedding_manager | EmbeddingManager | 1.5 K

768 Trainable params
1.1 B Non-trainable params
1.1 B Total params
4,264.947 Total estimated model params size (MB)

Monitoring val/loss_simple_ema as checkpoint metric.
Merged modelckpt-cfg:
{'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'F:\\!VOC\\Train\\Silvialow2022-09-14T20-26-58_Krejzkova\\checkpoints', 'filename': 'gs-{global_step:06}_e-{epoch:06}', 'every_n_train_steps': 10, 'verbose': True, 'save_last': True, 'monitor': 'val/loss_simple_ema', 'save_top_k': 1}}

Data

train, PersonalizedBase, 500
validation, PersonalizedBase, 5
accumulate_grad_batches = 1
Setting learning rate to 5.00e-03 = 1 (accumulate_grad_batches) * 1 (num_gpus) * 1 (batchsize) * 5.00e-03 (base_lr)
Project config
model:
  base_learning_rate: 0.005
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.012
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: false
    embedding_reg_weight: 0.0
    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings:
        - '*'
        initializer_words:
        - figure
        per_image_tokens: false
        num_vectors_per_token: 1
        progressive_words: false
        embedding_manager_ckpt: ''
        placeholder_tokens:
        - '*'
    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions:
        - 4
        - 2
        - 1
        num_res_blocks: 2
        channel_mult:
        - 1
        - 2
        - 4
        - 4
        num_heads: 8
        use_spatial_transformer: true
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: true
        legacy: false
    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions:
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity
    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
    ckpt_path: models\ldm\stable-diffusion-v1\sd-v1-4-full-ema.ckpt
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 48
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 256
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 256
        set: val
        per_image_tokens: false
        repeats: 10

Lightning config
modelcheckpoint:
  params:
    every_n_train_steps: 10
callbacks:
  image_logger:
    target: main.ImageLogger
    params:
      batch_frequency: 5
      max_images: 1
      increase_log_steps: false
trainer:
  benchmark: false
  max_steps: 10
  find_unused_parameters: false
  accelerator: ddp
  gpus: 0,

Validation sanity check: 0it [00:00, ?it/s]
Validation sanity check: 0%| | 0/2 [00:00<?, ?it/s]

Global seed set to 23

Training: 0it [00:00, ?it/s]
Training: 0%| | 0/505 [00:00<?, ?it/s]

Epoch 0: 0%| | 1/505 [00:39<5:29:39, 39.24s/it]

Epoch 0: 0%| | 2/505 [00:39<2:47:13, 19.95s/it, loss=0.124, v_num=0, train/loss_simple_step=0.00778, train/loss_vlb_step=3.9e-5, train/loss_step=0.00778, global_step=1.000]

Epoch 0: 1%| | 4/505 [00:41<1:25:55, 10.29s/it, loss=0.103, v_num=0, train/loss_simple_step=0.0604, train/loss_vlb_step=0.000201, train/loss_step=0.0604, global_step=2.000]

Epoch 0: 1%| | 5/505 [00:41<1:09:43, 8.37s/it, loss=0.0678, v_num=0, train/loss_simple_step=0.00441, train/loss_vlb_step=2.53e-5, train/loss_step=0.00441, global_step=4.000]

DDIM Sampler: 0%| | 1/200 [00:02<08:30, 2.56s/it] Iteration 1

DDIM Sampler: 1%|1 | 2/200 [00:03<05:14, 1.59s/it] Iteration 2

DDIM Sampler: 2%|1 | 3/200 [00:04<04:08, 1.26s/it] Iteration 3

DDIM Sampler: 2%|2 | 4/200 [00:05<03:41, 1.13s/it] Iteration 4


DDIM Sampler: 0%| | 0/200 [00:00<?, ?it/s] Iteration 0

Summoning checkpoint.
Traceback (most recent call last):
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py", line 840, in <module>
trainer.fit(model, data)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1199, in _run
self._dispatch()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1279, in _dispatch
self.training_type_plugin.start_training(self)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 202, in start_training
self._results = trainer.run_stage()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1289, in run_stage
return self._run_train()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1319, in _run_train
self.fit_loop.run()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
self.advance(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 234, in advance
self.epoch_loop.run(data_fetcher)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
self.advance(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 216, in advance
self.trainer.call_hook("on_train_batch_end", batch_end_outputs, batch, batch_idx, **extra_kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1495, in call_hook
callback_fx(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\callback_hook.py", line 179, in on_train_batch_end
callback.on_train_batch_end(self, self.lightning_module, outputs, batch, batch_idx, 0)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py", line 452, in on_train_batch_end
self.log_img(pl_module, batch, batch_idx, split="train")
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py", line 420, in log_img
images = pl_module.log_images(batch, split=split, **self.log_images_kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddpm.py", line 1361, in log_images
sample_scaled, _ = self.sample_log(cond=c,
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddpm.py", line 1289, in sample_log
samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size,
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddim.py", line 96, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddim.py", line 154, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddim.py", line 182, in p_sample_ddim
e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddpm.py", line 1028, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddpm.py", line 1513, in forward
out = self.diffusion_model(x, t, context=cc)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 737, in forward
h = module(h, emb, context)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 83, in forward
x = layer(x, emb)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 250, in forward
return checkpoint(
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\diffusionmodules\util.py", line 116, in checkpoint
return func(*inputs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 263, in _forward
h = self.in_layers(x)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward
return self._conv_forward(input, self.weight, self.bias)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([2, 1920, 32, 32], dtype=torch.float, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(1920, 640, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

ConvolutionParams
data_type = CUDNN_DATA_FLOAT
padding = [1, 1, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0000018F5513D620
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 2, 1920, 32, 32,
strideA = 1966080, 1024, 32, 1,
output: TensorDescriptor 0000018F5513D9A0
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 2, 640, 32, 32,
strideA = 655360, 1024, 32, 1,
weight: FilterDescriptor 0000018E3451CAC0
type = CUDNN_DATA_FLOAT
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 640, 1920, 3, 3,
Pointer addresses:
input: 00000004B5C00000
output: 00000004B2000000
weight: 00000005A1000000

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py", line 842, in <module>
melk()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\main.py", line 823, in melk
trainer.save_checkpoint(ckpt_path)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1913, in save_checkpoint
self.checkpoint_connector.save_checkpoint(filepath, weights_only)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py", line 471, in save_checkpoint
_checkpoint = self.dump_checkpoint(weights_only)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py", line 410, in dump_checkpoint
model.on_save_checkpoint(checkpoint)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\pytorch_lightning\utilities\distributed.py", line 50, in wrapped_fn
return fn(*args, **kwargs)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\models\diffusion\ddpm.py", line 1493, in on_save_checkpoint
self.embedding_manager.save(os.path.join(self.trainer.checkpoint_callback.dirpath, "embeddings.pt"))
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Stable Diffusion\ldm\modules\embedding_manager.py", line 132, in save
torch.save({"string_to_token": self.string_to_token_dict,
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\serialization.py", line 380, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\serialization.py", line 601, in _save
storage = storage.cpu()
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\storage.py", line 83, in cpu
return _type(self, getattr(torch, self.__class__.__name__))
File "c:\Users\tomas.krejzek\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\venv\voc_sd\lib\site-packages\torch\_utils.py", line 45, in _type
return dtype(self.size()).copy_(self, non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

===========================================================================
Finished in 05m47s

Hi,

Could you please let us know the environment you're using?
cuDNN version, nvidia-smi output, PyTorch version (and, if you are using an NGC container, its version).

Thank you.
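For reference, here is a minimal sketch of how you could collect most of that from Python inside the same voc_sd environment shown in the log (this is just an illustration, not specific to Visions of Chaos); the driver version and current memory usage would still come from running nvidia-smi in a command prompt:

import torch

# PyTorch build plus the CUDA / cuDNN versions it was compiled against
print("torch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())

# GPU as seen by this process
print("device:", torch.cuda.get_device_name(0))
print("capability:", torch.cuda.get_device_capability(0))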