Hi
I am reading about parallelization techniques used in GPUs and CPUs.
I would like to know if the GPU inside the TX2 employs SIMT.
Also, am I using cuBLAS and cuDNN when running inference?
Hi @Aizzaac, yes, I believe so. For more info, please see this blog post: Using CUDA Warp-Level Primitives | NVIDIA Technical Blog
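For illustration: the TX2's Pascal-family GPU executes threads in groups of 32 (warps) under the SIMT model, so all lanes of a warp run the same instruction stream. A minimal sketch of a warp-level reduction using the primitives covered in that blog post might look like this (assumes CUDA 9 or later for the `_sync` shuffle variants; requires a CUDA-capable device to run):

```cuda
#include <cstdio>

// SIMT in action: one warp (32 threads) executes this kernel in lockstep.
// __shfl_down_sync exchanges register values between lanes of the same
// warp directly, with no shared memory needed.
__global__ void warpSum(int *out)
{
    int val = threadIdx.x;  // each lane starts with its lane id, 0..31

    // Tree reduction across the warp: halve the active distance each step
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);

    if (threadIdx.x == 0)
        *out = val;  // lane 0 now holds the warp-wide sum
}

int main()
{
    int *out, host;
    cudaMalloc(&out, sizeof(int));
    warpSum<<<1, 32>>>(out);  // launch exactly one warp
    cudaMemcpy(&host, out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d\n", host);     // 0 + 1 + ... + 31 = 496
    cudaFree(out);
    return 0;
}
```

Because the lanes execute in SIMT lockstep, the shuffle needs no explicit synchronization beyond the membership mask passed as the first argument.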
Yes, most machine learning frameworks (for example PyTorch, TensorFlow, Caffe, MXNet, etc.) will use cuDNN when you run models in them with the GPU enabled.