Why can't Chat with RTX run on GTX GPUs?

I own an RTX 3070, but the whole rig draws around 500 W at the wall under full load, and even at idle it sips some 60-80 W. I also own a GTX 1650 Ti gaming laptop that barely consumes any power and would be great for running Chat with RTX. TensorFlow runs fine on it; I've trained image recognition models on that GTX-based laptop with a good amount of RAM. So why can't Chat with RTX run on a Turing-based GTX GPU as long as it's fed enough RAM/VRAM? What's stopping it? Other than NVIDIA wanting to sell more RTX 30 and 40 series cards and new GPUs, is there a technical limitation that stops it from running on GTX GPUs?
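For context, the installer seems to gate on the reported GPU model and VRAM rather than on whether the card can physically run the math. Here is a minimal sketch of that kind of gate; the function name is hypothetical, and the specific requirement (an RTX 30/40 series card with at least 8 GB of VRAM) is my reading of NVIDIA's published system requirements at launch, not the actual installer logic:

```python
# Hypothetical sketch of a name/VRAM gate like the one the Chat with RTX
# installer appears to apply. The RTX 30/40 + 8 GB threshold reflects
# NVIDIA's published system requirements; the check itself is illustrative.

def meets_chat_with_rtx_requirements(gpu_name: str, vram_gb: float) -> bool:
    """Return True if the GPU name/VRAM pass a simple model gate."""
    is_rtx_30_or_40 = any(gen in gpu_name for gen in ("RTX 30", "RTX 40"))
    return is_rtx_30_or_40 and vram_gb >= 8

print(meets_chat_with_rtx_requirements("GeForce RTX 3070", 8))     # passes the gate
print(meets_chat_with_rtx_requirements("GeForce GTX 1650 Ti", 4))  # fails: GTX name and under 8 GB
```

A gate like this would reject a GTX card on its name alone, regardless of how much memory it has, which is exactly why the question of a genuine technical limitation versus product segmentation is worth asking.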