GTX 1650 is not much faster than CPU (Ryzen 3 3200 with Vega 8)

I ran TensorFlow machine learning (an RNN) on the GTX 1650, but it is not as fast as I expected.

Before installing the GTX 1650, one epoch took about 75 seconds using only the CPU; with the GTX 1650 GPU it dropped to about 45 seconds.

That is a reduction of only 30 seconds, from 75 down to 45.

I thought the epoch time would be much shorter.

Is this normal?

Graphics card: GTX 1650 with 4 GB GDDR6
CPU: AMD Ryzen 3 3200 with Vega graphics

TensorFlow 2.3.1
CUDA 10.1.243_426.00
cuDNN 7.6.5.32

The OS is Windows 10, and I ran everything in Jupyter.

There were no errors during the GPU computation.

The GPU was recognized as device 0, and GPU utilization rose to around 25%.
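For reference, this is roughly how I checked that TensorFlow sees the GPU (a minimal sketch using the standard TF 2.x device APIs; the device index 0 matches what I observed):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; on my machine this shows one
# device, '/physical_device:GPU:0' (the GTX 1650).
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# Optionally log which device each op is placed on, to confirm
# the RNN kernels actually run on the GPU rather than the CPU.
tf.debugging.set_log_device_placement(True)
```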

So the GPU is certainly being used, but the computation speed is not what I expected.