Hello everyone,
I have an RTX 4090 in my Windows laptop, and I want to run the Jupyter notebooks from the seminar "Fundamentals of Deep Learning" locally. I have installed torch 2.7.1+cu118, but the Triton kernel seems to be missing from my environment. So I'm thinking of pulling a container with Triton from NGC, using this tag: nvcr.io/nvidia/tritonserver:24.01-pyt-python-py3. If I pull that container, will the version of torch it ships with be compatible for running the labs locally? Am I on the right track?
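For reference, here is the minimal check I used to confirm that the Triton compiler package is not importable in my current environment (this is just a sketch using a standard `importlib` lookup; the package name `triton` is the one PyTorch's compiler stack imports):

```python
import importlib.util

def triton_available() -> bool:
    # Returns True if a package named "triton" can be found on the
    # import path, False otherwise (without actually importing it).
    return importlib.util.find_spec("triton") is not None

print(triton_available())
```

On my Windows install this prints False, which is why I started looking at the NGC containers.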
Thank you!