I have this code:
static void use(Mesh const *mesh, Program const *program) {
    glBindBuffer(GL_ARRAY_BUFFER, mesh->vertexbuf);
    check();
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, mesh->vertexsize, (GLvoid *)0);
    if (!check()) {
        fprintf(stderr, "glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, %d, 0) mesh->vertexbuf %d failed\n", mesh->vertexsize, mesh->vertexbuf);
    }
    glEnableVertexAttribArray(0);
    check();
}
This code prints this error (please use your imagination for the implementation of the check() function):
GL version: NVIDIA Tegra X1 (nvgpu)/integrated, 4.2.0 NVIDIA 32.1.0, 4.20 NVIDIA via Cg compiler
src/glctx.cpp:715: void use(const Mesh*, const Program*): GL_INVALID_OPERATION (0x502)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 16, 0) mesh->vertexbuf 7 failed
Line 715 is the line right after the glVertexAttribPointer() call.
It fails with GL_INVALID_OPERATION on the glVertexAttribPointer() call.
The ARRAY_BUFFER binding is a valid buffer (id 7), the stride is valid (16), and the pointer offset is 0 (so, a NULL pointer).
According to my reading of the specification, this should be totally valid: glVertexAttribPointer - OpenGL 4 Reference Pages
What should I look for to debug or work around this error?
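For reference, check() is a wrapper around glGetError() that logs the file, line, and function on failure; a minimal sketch of the kind of macro that would produce the log above (the real implementation is left to your imagination, as noted):

    // Returns true if no GL error is pending; otherwise logs it and returns false.
    static bool check_gl_error(char const *file, int line, char const *func) {
        GLenum err = glGetError();
        if (err == GL_NO_ERROR) {
            return true;
        }
        fprintf(stderr, "%s:%d: %s: GL error 0x%x\n", file, line, func, err);
        return false;
    }

    // Expanding at the call site captures the caller's file/line/function.
    #define check() check_gl_error(__FILE__, __LINE__, __PRETTY_FUNCTION__)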
glVertexAttribPointer works for me
Where is your glBufferData call?
glBufferData comes before (this is a streaming buffer, and it will have various data in it).
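The upload is roughly this shape (a sketch of the pattern; the real code streams varying amounts of data):

    // Typical streaming update: orphan the old storage, then upload fresh data.
    glBindBuffer(GL_ARRAY_BUFFER, mesh->vertexbuf);
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, vertices);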
This code works on another system, I’m porting it here. But, of course, that doesn’t mean the other system is correct :-)
Does Jetson OpenGL have a debug runtime like the Direct3D debug runtime, with more helpful error messages / text?
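The closest standard-GL equivalent I know of is debug output via KHR_debug (core in GL 4.3, commonly exposed as an extension); a sketch, assuming the driver supports it:

    // Print every GL debug message as the driver emits it.
    static void GLAPIENTRY gl_debug_callback(GLenum source, GLenum type, GLuint id,
            GLenum severity, GLsizei length, GLchar const *message, void const *user) {
        fprintf(stderr, "GL debug: %s\n", message);
    }

    static void enable_gl_debug_output() {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report errors on the offending call
        glDebugMessageCallback(gl_debug_callback, NULL);
    }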
I tried putting some scratch data into the buffer right there, but that doesn’t help.
static unsigned char zeros[256];

static void use(Mesh const *mesh, Program const *program) {
    glBindBuffer(GL_ARRAY_BUFFER, mesh->vertexbuf);
    check();
    glBufferData(GL_ARRAY_BUFFER, 256, zeros, GL_DYNAMIC_DRAW);
    check();
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, mesh->vertexsize, (GLvoid *)0);
    if (!check()) {
        fprintf(stderr, "glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, %d, 0) mesh->vertexbuf %d failed\n", mesh->vertexsize, mesh->vertexbuf);
    }
    glEnableVertexAttribArray(0);
}
Same problem.
I ended up hooking up nsight to the target, and it tells me:
"ID,Origin,Source,Message
34,Target,NVIDIA Nsight Graphics,"The following incompatibilities were seen during capture: glDisableVertexAttribArray (No VAO is bound as required in GL Core Profile), glEnableVertexAttribArray (No VAO is bound as required in GL Core Profile), glVertexAttribPointer (No VAO is bound as required in GL Core Profile) "
"
Which is slightly more useful, but not a lot – I have a buffer bound.
I had hoped to be able to walk backwards and look at binding and enable state, but no such luck it seems.
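The binding state can at least be queried directly from code, though; a quick sketch:

    // A VAO binding of 0 means no vertex array object is bound,
    // which is exactly what Nsight is complaining about.
    GLint vao = 0, vbo = 0;
    glGetIntegerv(GL_VERTEX_ARRAY_BINDING, &vao);
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &vbo);
    fprintf(stderr, "VAO binding: %d, ARRAY_BUFFER binding: %d\n", vao, vbo);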
Also: The documentation didn’t tell me where the JetPack installer put the graphics debugger on my Ubuntu host system.
I ended up starting it with a script I found in
/opt/nvidia/nsight-graphics-for-l4t/nsight-graphics-for-l4t-2018.7/host/linux-desktop-nomad-x64/nv-nsight-gfx
This was really hard to find, because the new spiffy 4.2 JetPack installer talked about the Tegra_Graphics_Debugger, which is not installed under that name.
And the answer is that the code was using the Compatibility profile on the previous target, but the Core profile on the Nano. In the Core profile, the default VAO (0) is not valid; a specific VAO needs to be generated and bound, which I had totally forgotten to do.
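The fix is a couple of lines at context setup (a sketch; the variable name is mine):

    // Core profile requires a non-zero VAO; create one and bind it once at setup.
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);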
Still don’t know why the driver dislikes GL_ALPHA format texture data, though.
Hi snarky,
For this error, could you share a test code so that we can build and run to reproduce it?
@DaneLLL I think this program shows the problem:
// Show that glTexImage2D() doesn't like GL_ALPHA internal format
//
// Build with:
//
// sudo apt install libglfw3-dev libglew-dev
//
// g++ -o bug -g main.cpp -lGL -lGLEW -lglfw
//
// Run with:
//
// ./bug
//
#include <stdio.h>
#include <GL/glew.h> // Initialize with glewInit()
#include <GLFW/glfw3.h>
static void show_the_bug() {
    // no errors pending
    fprintf(stderr, "glGetError before: 0x%x\n", glGetError());

    // try to allocate a texture with GL_ALPHA internal format
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 256, 256, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);

    // see error generated
    fprintf(stderr, "glGetError after: 0x%x\n", glGetError());

    // try with GL_RGB
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);

    // see that it works
    fprintf(stderr, "glGetError after: 0x%x\n", glGetError());
}
static GLFWwindow *window;

static void glfw_error_callback(int error, const char* description)
{
    fprintf(stderr, "GLFW error: %d: %s\n", error, description);
}
static int initialize_gl() {
    glfwSetErrorCallback(glfw_error_callback);
    if (!glfwInit()) {
        fprintf(stderr, "GLFW init failed\n");
        return 1;
    }

    // Request a GL 4.2 context
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);

    int display_w, display_h;
    auto const *vm = glfwGetVideoMode(glfwGetPrimaryMonitor());
    display_w = vm->width - 128;
    display_h = vm->height - 64;
    window = glfwCreateWindow(display_w, display_h, "App", NULL, NULL);
    if (window == NULL) {
        fprintf(stderr, "GLFW create window failed\n");
        return 1;
    }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1); // Enable vsync

    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initialize OpenGL loader!\n");
        return 1;
    }
    return 0;
}
static void terminate_gl() {
    glfwDestroyWindow(window);
    glfwTerminate();
}

int main(int, char**)
{
    if (initialize_gl() != 0) {
        return 1;
    }
    show_the_bug();
    terminate_gl();
    return 0;
}
OK, so AAACTUALLY, GL_ALPHA was dropped from the core profile (the alpha/luminance internal formats were deprecated in GL 3.0 and removed from core after that), so the driver is correct.
But annoying :-D
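For anyone hitting the same thing: the usual core-profile substitute (a sketch, not something from this thread) is a single-channel GL_R8 texture plus a texture swizzle, so shaders that sample .a keep working:

    // Allocate a one-channel texture...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 256, 256, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
    // ...and swizzle it so sampling returns (0, 0, 0, r),
    // matching what a GL_ALPHA texture used to produce.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ZERO);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);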