OOM when running cnn/resnet under WSL2 with an RTX 3050 Ti

I followed CUDA on WSL :: CUDA Toolkit Documentation and everything appears to work, except for cnn/resnet from Docker. My laptop has a 16-core CPU (i7, 64 GB RAM) and an RTX 3050 Ti (4 GB VRAM). Is this configuration still underpowered for any meaningful TensorFlow workload — in this case, the ResNet example from the wsl-user-guide page?

Limit:                  2966867150
InUse:                  2659413504
MaxInUse:               2693054464
NumAllocs:                    2420
MaxAllocSize:            427819008

2022-04-05 00:11:23.765107: W tensorflow/core/common_runtime/bfc_allocator.cc:429] ****************xx****************xx***********************************x**********************______
2022-04-05 00:11:23.765136: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at conv_ops.cc:539 : Resource exhausted: OOM when allocating tensor with shape[256,56,56,256] and type half on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "cnn/resnet.py", line 50, in <module>
    nvutils.train(resnet50, args)
  File "/workspace/nvidia-examples/cnn/nvutils/runner.py", line 216, in train
    initial_epoch=initial_epoch, **valid_params)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 694, in fit
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py", line 265, in model_iteration
    batch_outs = batch_function(*batch_data)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 1123, in train_on_batch
    outputs = self.train_function(ins)  # pylint: disable=not-callable
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/backend.py", line 3727, in __call__
    outputs = self._graph_fn(*converted_inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1551, in __call__
    return self._call_impl(args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1591, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1692, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 545, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError:  OOM when allocating tensor with shape[256,56,56,256] and type half on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node bn2a_branch1/cond/then/_80/FusedBatchNormV3}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference_keras_scratch_graph_20352]

Function call stack:
keras_scratch_graph
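For context, the failing allocation alone is large relative to the allocator limit reported above (Limit: 2966867150 bytes, roughly 2.77 GiB — less than the card's 4 GB, presumably due to driver/WSL reservations). A quick back-of-envelope calculation on the tensor shape from the error:

```python
# Size of the single failing activation tensor from the traceback:
# shape [256, 56, 56, 256], dtype half (float16 = 2 bytes per element).
batch, h, w, channels = 256, 56, 56, 256
bytes_per_elem = 2  # float16

tensor_bytes = batch * h * w * channels * bytes_per_elem
print(f"one activation: {tensor_bytes / 2**20:.0f} MiB")   # 392 MiB

# Allocator limit reported by the BFC allocator in the log above:
limit_bytes = 2966867150
print(f"allocator limit: {limit_bytes / 2**30:.2f} GiB")   # 2.76 GiB
```

So a single intermediate activation at batch size 256 consumes roughly 14% of the usable GPU memory, and ResNet-50 keeps many such activations alive for the backward pass. If the example script accepts a batch-size option (check its `--help`), a much smaller batch might fit — though I'd still like to know what hardware the guide assumes.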

If possible, could that guide page add a 'system requirements' section for running TensorFlow?