nvidia-docker OpenGL clash

My goal: use PyTorch with OpenAI Gym under nvidia-docker, and be able to see the Gym environment over VNC
My system: Core i7, GTX 1060
My environment: Ubuntu; NVIDIA driver installed with the runfile and --no-opengl-files

The problem: OpenAI Gym renders with glitches; see here:
https://askubuntu.com/questions/942768/how-can-i-properly-run-openai-gym-with-nvidia-docker-and-see-the-environments

From everything I've seen here on the NVIDIA forums, in order to use the GPU for compute while keeping the integrated graphics for the desktop/VNC etc., I need to install the NVIDIA driver via the runfile with no OpenGL files, and NOT with apt-get.
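
For reference, the host install was roughly the following (the runfile version below is just an example, not necessarily the exact one I used):

```bash
# Install the NVIDIA driver from the runfile, skipping its OpenGL libraries
# so the integrated graphics keep serving the desktop/VNC.
# (The version number here is only an example.)
sudo sh NVIDIA-Linux-x86_64-384.90.run --no-opengl-files
```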

However, it seems the official CUDA Docker images install the driver and CUDA with apt-get. As you can see in the link above, OpenAI Gym is not rendering properly, so I am wondering if that's because the Docker images use apt-get to install CUDA.
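
If I read the nvidia/cuda Dockerfiles correctly, the image build does something along these lines (my paraphrase and an assumption on my part, not the exact Dockerfile contents):

```bash
# Roughly what the official CUDA image layers appear to run at build time
# (paraphrased; the package name is an assumption, not the exact Dockerfile):
apt-get update
apt-get install -y --no-install-recommends cuda-toolkit-8-0
```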

Specifically, I am just running nvidia-docker run -it floydhub/pytorch bash. Inside the container, nvidia-smi shows the GPU, and torch.cuda.is_available() returns True. However, when I run a VNC server and connect, the OpenAI Gym environment runs, but its rendered image is completely distorted (see link above). The exact steps are sketched below.
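
For reproducibility, this is the whole sequence:

```bash
# Launch the FloydHub PyTorch container with GPU access:
nvidia-docker run -it floydhub/pytorch bash

# Inside the container, confirm the GPU is visible for compute:
nvidia-smi                                                   # shows the GTX 1060
python -c "import torch; print(torch.cuda.is_available())"   # prints True
```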

So, are there NVIDIA containers built with no OpenGL? Or what should I do next?