Torch CUDA issue with ASUS RTX 3060 Ti

Description


Environment

TensorRT Version:
GPU Type: ASUS RTX 3060 Ti (not the OC one)
Nvidia Driver Version: 710
CUDA Version: 11.6
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable): current nightly
Baremetal or Container (if container which image + tag):

Hello everyone,
I have been using Torch + CUDA for almost a year now and just upgraded my GPU from a 1050 Ti to a 3060 Ti.
I am having difficulties transferring tensors and models to the GPU with torch.device(0) or similar methods.

I have noticed that when I type

>>> import torch
>>> torch.cuda.is_available()
True

>>> torch.cuda.get_device_name(0)
u'NVIDIA GeForce RTX 3060 Ti'

there is a u prefix here (does this mean incompatible?). Also, device count returns this:

>>> torch.cuda.device_count()
1L
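For the transfer difficulty described above, here is a minimal sketch of the usual pattern for moving tensors and models to the GPU, assuming a CUDA-enabled PyTorch build (the Linear model is only a placeholder for illustration; it falls back to CPU when CUDA is unavailable):

```python
import torch

# Pick the first CUDA device if one is visible, otherwise fall back to CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Create a tensor directly on the chosen device.
x = torch.randn(4, 8, device=device)

# Move a model's parameters to the same device before running it.
model = torch.nn.Linear(8, 2).to(device)

y = model(x)
print(y.device)  # matches the chosen device
```

As an aside, the u prefix and the L suffix in the outputs above are Python 2 repr conventions (a unicode string and a long integer), not compatibility flags.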

Hi,

This issue doesn't look TensorRT-related. We will move this post to a CUDA-related forum so you can get better help.

Thank you.