Using Torch with nvidia-docker

Hi,

I have two boxes, both running Ubuntu 14.04. One has a GeForce GTX TITAN X, the other a TITAN X (Pascal). Both boxes have the CUDA 8.0 driver and cuDNN 5.1. The issue is that when I try to train a neural network using Torch inside Docker, the box with the TITAN X (Pascal) does not start using the GPU immediately; there is roughly a 5-minute delay. The other box is fine, and running the same code without Docker works fine too. Has anybody run into this issue? How should I debug it?

Sounds like a JIT compile issue.

Make sure you are compiling all CUDA code for the correct GPU architectures you are using. The TITAN X (Pascal) is compute capability 6.1; if the Torch CUDA libraries inside your Docker image were built only for older architectures (e.g. sm_52 for the Maxwell TITAN X), the driver has to JIT-compile the PTX for Pascal the first time a kernel runs, which can easily take several minutes.
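
As a quick check, you can confirm what compute capability the Pascal card reports and compare it with the architectures your image was built for. A minimal standalone sketch (plain CUDA, not Torch-specific; the file name is just an example):

```
// check_arch.cu -- print each GPU's compute capability so you can verify
// which -gencode / arch entries your build needs.
// Compile with: nvcc check_arch.cu -o check_arch
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A TITAN X (Pascal) should report 6.1; the Maxwell TITAN X reports 5.2.
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

If the Pascal box reports 6.1 but the Torch libraries in the image were only built for 5.2, rebuilding cutorch/cunn inside the container with 6.1 included in the architecture list should make the delay go away. Another thing to keep in mind is that the driver normally caches JIT-compiled kernels on disk, but that cache may not persist across container runs, which would explain why the delay comes back every time.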