Problem running NVIDIA GeForce RTX 2070 with Max-Q Design

Please help me set up my local laptop to run deep learning code (TensorFlow) in Jupyter Notebook, and cross-validation code for classic ML, on my GPU (and ideally the CPU as well). I tried following the setup guidelines, but the GPU is not getting detected or used for training models, and I'm not sure what is going wrong. Please guide me through a step-by-step setup procedure, with screenshots if possible, as I'm new to this. The basic check I'm running to confirm detection is included below the system details.

System details are as follows:

  • OMEN Laptop - 15-ek0022tx
  • CPU - Intel(R) Core™ i7-10750H @ 2.60 GHz
  • RAM - 16.0 GB
  • GPU 0 - Intel(R) UHD Graphics (8.0 GB)
  • GPU 1 - NVIDIA GeForce RTX 2070 with Max-Q Design (8.0 GB)
  • OS - Windows 10, version 20H2
  • CUDA 10.1
  • NVIDIA GeForce driver 460.89
  • MS Visual Studio 2019 - 16.8.3
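
For reference, this is roughly the check I run in the notebook to confirm detection (assuming TensorFlow 2.x is installed in the environment), and it does not list the NVIDIA GPU:

```python
import tensorflow as tf

# Rough check of whether the installed TensorFlow build can see the NVIDIA GPU.
print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```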

Hi @arockialiborious,
We recommend using NVIDIA NGC containers to avoid system-dependency-related issues.

Thanks!

Hi @AakankshaS - Thank you. I'm new to the NVIDIA ecosystem, I'm not familiar with NGC, and I could not find out how to use it. Can you please suggest some documentation on what NVIDIA NGC is, how to use it, and what the prerequisites are? Most documents on NVIDIA Docker are for Ubuntu, not Windows 10.

Hi @arockialiborious,
You can use NGC containers like regular Docker containers.
A pull command is given for each container on its NGC catalog page.
Reference image: [NGC catalog page showing the container's pull command]
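
For example, once a TensorFlow container is pulled and started with GPU access, a quick sanity check from Python inside it (a rough sketch, assuming a TensorFlow 2.x image) would look like this:

```python
import tensorflow as tf

# Inside the NGC TensorFlow container, CUDA and cuDNN are bundled with the
# framework, so the GPU should be listed here as long as the container was
# started with GPU access.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)

if gpus:
    # Run a small matrix multiply to confirm work is actually placed on the GPU.
    with tf.device("/GPU:0"):
        x = tf.random.normal((1024, 1024))
        y = tf.matmul(x, x)
    print("Matmul executed on:", y.device)
else:
    print("No GPU visible - check that the container was started with GPU support.")
```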

Thanks!