Isaac Gym task=Ant pipeline=cpu: CUDA error: out of memory

I run python train.py task=Ant pipeline=cpu, but I get the following error:

num envs 4096 env spacing 5
Box(-1.0, 1.0, (8,), float32) Box(-inf, inf, (60,), float32)
Env info:
{'action_space': Box(-1.0, 1.0, (8,), float32), 'observation_space': Box(-inf, inf, (60,), float32)}
Error executing job with overrides: ['task=Ant', 'pipeline=cpu']
Traceback (most recent call last):
  File "train.py", line 127, in launch_rlg_hydra
    'play': cfg.test,
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 139, in run
    self.run_train()
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 122, in run_train
    agent = self.algo_factory.create(self.algo_name, base_name='run', config=self.config)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/object_factory.py", line 15, in create
    return builder(**kwargs)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 23, in <lambda>
    self.algo_factory.register_builder('a2c_continuous', lambda **kwargs : a2c_continuous.A2CAgent(**kwargs))
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/algos_torch/a2c_continuous.py", line 18, in __init__
    a2c_common.ContinuousA2CBase.__init__(self, base_name, config)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/a2c_common.py", line 980, in __init__
    A2CBase.__init__(self, base_name, config)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/a2c_common.py", line 163, in __init__
    self.game_rewards = torch_ext.AverageMeter(self.value_size, self.games_to_track).to(self.ppo_device)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 673, in to
    return self._apply(convert)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 430, in _apply
    self._buffers[key] = fn(buf)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 671, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/dmitriy/miniconda3/envs/rlgpu/lib/python3.7/site-packages/torch/cuda/__init__.py", line 170, in _lazy_init
    torch._C._cuda_init()
RuntimeError: CUDA error: out of memory

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
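In case it helps narrow things down: the traceback ends inside torch._C._cuda_init(), so the failure seems to come from PyTorch's CUDA initialization itself (rl_games moves its reward meter to ppo_device), even though pipeline=cpu was requested. A minimal check, independent of Isaac Gym and assuming the same rlgpu conda environment, would be something like:

# Minimal sanity check: if CUDA initialization itself fails in this
# environment, this should raise the same "CUDA error: out of memory"
# without involving the Ant task or rl_games at all.
import torch

print(torch.__version__, torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
torch.cuda.init()  # roughly the same _lazy_init path that fails in the traceback
print("device count:", torch.cuda.device_count())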