RTX 2080 is slower than GTX 1070M

Hello, I have bought a new computer with an i9-8950 and an RTX 2080 for machine learning purposes. I am surprised that the Flair NLP (PyTorch-based) machine learning program (i9-8950, Windows 10 Pro, CUDA 11.1, driver 456.71) runs more than 2 times slower compared to the same one running on a laptop with a GTX 1070M (i7-6700, Ubuntu 20.04, CUDA 11.0, driver 450.80.02). I tried playing with some settings in the control panel, but it does not help. Has somebody experienced the same issue and been able to figure out what is going on?
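For reference, here is a minimal sanity check I can run on both machines (a rough sketch, not my actual Flair training script) to confirm which device PyTorch is using and to compare raw GPU throughput; the matrix size and iteration count are arbitrary:

```python
import time
import torch

# Use the GPU if PyTorch can see one, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device:", device)
if device.type == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

# Time a batch of large matrix multiplications as a rough throughput check
x = torch.randn(1024, 1024, device=device)
start = time.perf_counter()
for _ in range(100):
    x = x @ x
    x = x / x.norm()          # keep values bounded across iterations
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for all queued GPU kernels to finish
elapsed = time.perf_counter() - start
print(f"100 matmuls took {elapsed:.3f} s")
```

If the RTX 2080 is still slower on this trivial benchmark, the problem is the driver/CUDA setup rather than Flair itself; if it is faster here, the bottleneck is more likely data loading or CPU-side preprocessing in the training run.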
Thank you in advance