Training ResNet50 with TensorFlow 1.5.0 on an RTX 3070 fails with "Blas SGEMM launch failed"

I am trying to run my code in a Docker container created as follows:

docker run --gpus=all -it -p "8888:8888" -v "/home/miguel/ml-resnet-50/:/notebooks/" --name ml-resnet-50 tensorflow/tensorflow:1.5.0-gpu-py3 jupyter notebook --ip 0.0.0.0 --no-browser --allow-root
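
Inside the container, a quick way to check whether TensorFlow sees the card at all is something like the following (a minimal sketch using the TF 1.x device_lib API; whether the 3070 actually shows up as a GPU device is exactly what I am unsure about):

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)  # should print 1.5.0 inside this image
# Lists the CPU/GPU devices visible to TensorFlow; the RTX 3070 should
# appear as /device:GPU:0 if CUDA initialised correctly.
print(device_lib.list_local_devices())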

On a Linux PC (Ubuntu 20.04) with an NVIDIA RTX 3070, I run the following code:

model.fit(
    x=imgs_train,
    y=clss_train,
    batch_size=16,
    epochs=2,
    verbose=1,
    validation_data=(imgs_val, clss_val)
    )
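
For context, get_model() builds the network roughly like this (reconstructed from the traceback below, assuming the tf.keras alias that TF 1.5 ships; the pipeline dict values are my illustration, though 224x224 is consistent with m=48400 = 16 * 55 * 55 in the error):

import tensorflow as tf

# Example values, assumed for illustration; a 224x224 input yields a
# 55x55 feature map at res2a_branch2a, and 16 * 55 * 55 = 48400 = m.
pipeline = {'img_height': 224, 'img_width': 224}

def get_model():
    # The line the traceback points at (line 4 of get_model):
    model = tf.keras.applications.ResNet50(
        include_top=False,
        input_shape=(pipeline['img_height'], pipeline['img_width'], 3))
    # (compile step etc. omitted here)
    return model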

And I get the following error:

InternalError: Blas SGEMM launch failed : m=48400, n=64, k=64
[[Node: res2a_branch2a/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC",
dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1],
use_cudnn_on_gpu=true,
_device="/job:localhost/replica:0/task:0/device:GPU:0"](max_pooling2d/MaxPool,
res2a_branch2a/kernel/read)]]
[[Node: loss/mul/_2859 = _Recv[client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device_incarnation=1, tensor_name="edge_15435_loss/mul",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Caused by op 'res2a_branch2a/Conv2D', defined at:
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelapp.py", line 478, in start
    self.io_loop.start()
  File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/usr/local/lib/python3.5/dist-packages/tornado/ioloop.py", line 888, in start
    handler_func(fd_obj, events)
  File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
    handler(stream, idents, msg)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2728, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2850, in run_ast_nodes
    if self.run_code(code, result):
  File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2910, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input>", line 2, in <module>
    model = get_model()
  File "<ipython-input>", line 4, in get_model
    model = ResNet50(include_top=False, input_shape=(pipeline['img_height'], pipeline['img_width'], 3))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/applications/resnet50.py", line 235, in ResNet50
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/applications/resnet50.py", line 122, in conv_block
    name=conv_name_base + '2a')(input_tensor)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/topology.py", line 258, in __call__
    output = super(Layer, self).__call__(inputs, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/base.py", line 652, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/convolutional.py", line 167, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 838, in __call__
    return self.conv_op(inp, filter)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 502, in __call__
    return self.call(inp, filter)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 190, in __call__
    name=self.name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 639, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InternalError (see above for traceback): Blas SGEMM launch failed :
m=48400, n=64, k=64
[[Node: res2a_branch2a/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC",
dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1],
use_cudnn_on_gpu=true,
_device="/job:localhost/replica:0/task:0/device:GPU:0"](max_pooling2d/MaxPool,
res2a_branch2a/kernel/read)]]
[[Node: loss/mul/_2859 = _Recv[client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device_incarnation=1, tensor_name="edge_15435_loss/mul",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Any idea why this happens?