Cannot set breakpoint in Nsight for Visual Studio for a simple OpenGL program

I am using:

  • Windows 10 Education
  • Nsight 5.5 for Visual Studio
  • Visual Studio 2017 Enterprise, version 15.5.6
  • nVidia GeForce GTX 1050 Ti
  • driver 391.01 installed
  • GLFW 3.2.1 Windows pre-compiled binaries, downloaded from the GLFW website
  • gl3w, downloaded and built a few days ago

I have read the documentation under Graphics Debugger > Shader Debugging > OpenGL. I don’t have VS2015, and this is my first time using Nsight. I also read a thread in this forum here: but it didn’t help.

The program is built in VS2017 in Debug mode for the x64 architecture. It runs correctly in VS2017 without Nsight, and also in Frame Debugging mode within Nsight.

The screenshot for Shader tab:

I clicked the link for the vertex shader in the Shader tab above and the source popped up. I then tried to set a breakpoint at the line “gl_Position = vPosition;” in the source tab, but it failed.

The source is the first example in the Red Book, “OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 4.5 with SPIR-V (9th Edition)”, with some small, irrelevant modifications. It is attached below (does this forum support file attachments?). The other parts of the source can be downloaded from the book’s companion website:

So, why can’t I set a breakpoint? Did I fail to add the necessary debugging information to the GLSL (and if so, how do I do that)? How can I solve this issue? Thanks a lot.

(1) 01-triangles.cpp

//  Triangles.cpp

#include <cstdlib>
#include <cstdio>
#include <iostream>
using namespace std;
#include <glm/fwd.hpp>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
//#include <glad/glad.h>
#include "vgl.h"
#include "LoadShaders.h"

enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };

GLuint  VAOs[NumVAOs];
GLuint  Buffers[NumBuffers];

const GLuint  NumVertices = 6;

void glm_test() {
	glm::vec4 vec(1.0f, 0.0f, 0.0f, 1.0f);
//	glm::vec3 swizzle = vec.xzy;
	glm::mat4 trans;
	trans = glm::translate(trans, glm::vec3(1.0f, 1.0f, 0.0f));
	vec = trans * vec;
	std::cout << vec.x << vec.y << vec.z << std::endl;
}

// init

void
init( void )
{
	GLfloat  vertices[NumVertices][2] = {
		{ -0.90f, -0.90f },{ 0.85f, -0.90f },{ -0.90f,  0.85f },  // Triangle 1
		{ 0.90f, -0.85f },{ 0.90f,  0.90f },{ -0.85f,  0.90f }   // Triangle 2
	};

	//glGenVertexArrays(NumVAOs, VAOs);
	//glGenBuffers(NumBuffers, Buffers);
	//glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
	//glBufferStorage(GL_ARRAY_BUFFER, sizeof(vertices), vertices, 0);
	glCreateVertexArrays(NumVAOs, VAOs);
	glCreateBuffers(NumBuffers, Buffers);
	glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
	glNamedBufferStorage(Buffers[ArrayBuffer], sizeof(vertices), vertices, 0);

	ShaderInfo  shaders[] = {
		{ GL_VERTEX_SHADER, "E:\\path\\to\\the\\vertex\\shader\\triangles.vert" },
		{ GL_FRAGMENT_SHADER, "E:\\path\\to\\the\\frag\\shader\\triangles.frag" },
		{ GL_NONE, NULL }
	};
	GLuint program = LoadShaders(shaders);
	glUseProgram(program);

	glBindVertexArray(VAOs[Triangles]);
	glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
	glEnableVertexAttribArray(vPosition);
}


// display

void
display( void )
{
    static const float black[] = { 0.2f, 0.2f, 0.2f, 1.0f };

    glClearBufferfv(GL_COLOR, 0, black);

    //glBindVertexArray( VAOs[Triangles] );
    glDrawArrays( GL_TRIANGLES, 0, NumVertices );
}

void error_callback(int error, const char* description)
{
	fprintf(stderr, "Error: %s\n", description);
}

// main

int
main( int argc, char** argv )
{
	int major, minor, rev;
	glfwGetVersion(&major, &minor, &rev);
	printf("%s\n", glfwGetVersionString());

	glfwSetErrorCallback(error_callback);
	if (!glfwInit())
		exit(EXIT_FAILURE);

	GLFWwindow* window = glfwCreateWindow(800, 600, "Triangles", NULL, NULL);
	glfwMakeContextCurrent(window);
	gl3wInit();

	if (gl3wIsSupported(4, 5))
		printf("OpenGL 4.5 supported\n");

	glViewport(150, 100, 600, 400);
	int fbw, fbh;
	glfwGetFramebufferSize(window, &fbw, &fbh);

	int nrAttributes;
	glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &nrAttributes);
	std::cout << "Maximum # of vertex attributes supported: " << nrAttributes << std::endl;

	init();

	while (!glfwWindowShouldClose(window))
	{
		display();
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
}


(2) triangles.vert

#version 450 core

layout( location = 0 ) in vec4 vPosition;

void
main()
{
    gl_Position = vPosition;
}

(3) triangles.frag

#version 450 core

out vec4 fColor;

void
main()
{
    fColor = vec4(0.5, 0.4, 0.8, 1.0);
}

I installed Visual Studio 2015 and went through all the steps, but then I got:

Is it true that the Pascal-based GTX 1050 Ti is not supported by Nsight 5.5? I couldn’t find anything in the documentation that explicitly states this restriction. But why would Nsight 5.5 exclude the GTX 1050 Ti? It isn’t that old.

I tried remote debugging just now but got the same error (see the screenshot in the previous post). My host machine has no NVIDIA graphics card installed; it uses only an integrated Intel graphics chip. So does “this GPU” refer to the integrated Intel chip in the host, or to the GeForce GTX 1050 Ti in the target? I hope anyone who has successfully gone through Nsight’s shader debugging with a GeForce GTX card can lend me a hand. Thanks a lot!

Hi hzhou3,

Thanks for your detailed post about the issue.
Unfortunately, shader debugging is currently supported only on Kepler-architecture GPUs.

Thank you for the clear reply. I am now considering purchasing a cheap Kepler card. Could you please take a look at this thread: Thanks again.