Does the GPU inside the TX2 employ SIMT?

Hi

I am reading about parallelization techniques used in GPUs and CPUs.
I would like to know if the GPU inside the TX2 employs SIMT.
Also am I using cuBLAS and cuDNN when running inference?

Hi @Aizzaac, yes I believe so. For more info, please see this blog post: https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/
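To make the SIMT idea concrete, here is a minimal sketch (the kernel name and launch configuration are just illustrative, not from any particular library): all 32 threads of a warp execute the same instruction stream in lockstep, and warp-level primitives like `__shfl_down_sync` let lanes exchange register values directly, which is what the blog post above covers.

```cuda
#include <cstdio>

// Illustrative kernel: the 32 threads (lanes) of one warp run this same
// code in lockstep (SIMT). __shfl_down_sync moves a register value from
// a higher lane to a lower lane, with no shared memory needed.
__global__ void warpReduceSum(int *out) {
    int val = threadIdx.x;  // each lane contributes its lane index
    // Tree reduction across the warp: offsets 16, 8, 4, 2, 1
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    if (threadIdx.x == 0)
        *out = val;  // lane 0 now holds the sum 0 + 1 + ... + 31
}

int main() {
    int *d_out = nullptr, h_out = 0;
    cudaMalloc(&d_out, sizeof(int));
    warpReduceSum<<<1, 32>>>(d_out);  // one block of exactly one warp
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("warp sum = %d\n", h_out);
    cudaFree(d_out);
    return 0;
}
```

On the TX2 this compiles with `nvcc` targeting the integrated Pascal GPU (e.g. `-arch=sm_62`); the same SIMT execution model applies there as on discrete NVIDIA GPUs.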

Yes, most machine learning frameworks (for example PyTorch, TensorFlow, Caffe, MXNet, etc.) will use cuDNN (and cuBLAS for the underlying dense linear algebra) when you run models in them with the GPU enabled.
