Cooking a triangle mesh with 32-bit indices, but the cooked mesh is detected as 16-bit

Hello,

I cook a triangle mesh with 32-bit indices, but I can only use PxU16 to read back the mesh triangles. If I use PxU32, it crashes.

My code is as follows. The crash location is marked.

// These vertices and indices come from reading a mesh in the graphics engine
std::vector<physx::PxVec3> pxVertices;
std::vector<unsigned int> indices;

physx::PxTriangleMeshDesc meshDesc;
meshDesc.points.count = static_cast<physx::PxU32>(pxVertices.size());
meshDesc.triangles.count = static_cast<physx::PxU32>(indices.size() / 3);
meshDesc.points.stride = sizeof(physx::PxVec3);
meshDesc.triangles.stride = sizeof(unsigned int) * 3;
meshDesc.points.data = &pxVertices[0];
meshDesc.triangles.data = &indices[0];
meshDesc.flags = physx::PxMeshFlags(0);
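// Note: flags stays 0, i.e. e16_BIT_INDICES is NOT set, so the
// descriptor declares plain 32-bit (unsigned int) indices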
if (meshDesc.flags & physx::PxMeshFlag::e16_BIT_INDICES)
{
	CustomPrint("A: these are 16 bit indices");
}
else
{
	CustomPrint("A: these are 32 bit indices");
}

physx::PxDefaultMemoryOutputStream stream;
bool ok = m_pCooking->cookTriangleMesh(meshDesc, stream);
if (!ok)
{
	CustomPrint("ERROR: Cannot cook triangle mesh");
	return nullptr;
}

physx::PxDefaultMemoryInputData rb(stream.getData(), stream.getSize());

physx::PxTriangleMesh* pTriangleMesh = m_pPhysics->createTriangleMesh(rb);
if (pTriangleMesh)
{
	physx::PxTriangleMeshGeometry triGeom;
	triGeom.triangleMesh = pTriangleMesh;
	physx::PxTransform transform(0.0f, 0.0f, 0.0f);
	physx::PxRigidStatic* pActor = m_pPhysics->createRigidStatic(transform);
	physx::PxShape* shape = pActor->createShape(triGeom, *m_pMaterial, transform);
	PX_UNUSED(shape);
	m_pScene->addActor(*pActor);
	
	// Iterating through the triangles
	const physx::PxU32 nbVerts = pTriangleMesh->getNbVertices();
	const physx::PxVec3* verts = pTriangleMesh->getVertices();
	const physx::PxU32 nbTris = pTriangleMesh->getNbTriangles();
	const void* tris = pTriangleMesh->getTriangles();
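	// getTriangles() returns a const void*; my understanding is that the
	// actual index width (PxU16 vs PxU32) depends on the flag queried below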
	const physx::PxTriangleMeshFlags theseflags = pTriangleMesh->getTriangleMeshFlags();
	if (theseflags & physx::PxTriangleMeshFlag::eHAS_16BIT_TRIANGLE_INDICES)
	{
		CustomPrint("B: these are 16 bit indices");
	}
	else
	{
		CustomPrint("B: these are 32 bit indices");
	}
	const physx::PxU32* triIndices = static_cast<const physx::PxU32*>(tris); // no crash if I cast to const PxU16* instead

	for (physx::PxU32 i = 0; i < nbTris; ++i)
	{
		for (physx::PxU32 j = 0; j < 3; ++j)
		{
			const physx::PxU32 index = triIndices[3 * i + j]; // fine if triIndices is const PxU16*
			const physx::PxVec3& vertex = verts[index]; // CRASH here (index is presumably out of range)
			// AddVisualizationVertex(vertex);
		}
	}
}

This prints:

A: these are 32 bit indices
B: these are 16 bit indices

If I read the indices as PxU16, everything is fine, and the mesh looks correct in the PhysX Visual Debugger too. So it seems the cooker stored my 32-bit input as 16-bit indices, perhaps because all of them fit into 16 bits, which would also explain the crash: reading PxU16 data as PxU32 produces garbage indices that point outside the vertex array. However, I want to use PxU32 to accommodate larger meshes, since 16-bit indices can only address 65536 vertices.
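For now my workaround is to branch on the cooked mesh's flag when reading the data back. This is just a sketch based on my assumption that eHAS_16BIT_TRIANGLE_INDICES reliably tells me how the cooked indices are stored:

// Workaround sketch: choose the index width from the cooked mesh's flags
const physx::PxU32 nbTris = pTriangleMesh->getNbTriangles();
const physx::PxVec3* verts = pTriangleMesh->getVertices();
const void* tris = pTriangleMesh->getTriangles();
const bool has16Bit = (pTriangleMesh->getTriangleMeshFlags() &
	physx::PxTriangleMeshFlag::eHAS_16BIT_TRIANGLE_INDICES) ? true : false;

for (physx::PxU32 i = 0; i < nbTris; ++i)
{
	for (physx::PxU32 j = 0; j < 3; ++j)
	{
		// read each index as PxU16 or PxU32, depending on how the mesh was cooked
		const physx::PxU32 index = has16Bit
			? physx::PxU32(static_cast<const physx::PxU16*>(tris)[3 * i + j])
			: static_cast<const physx::PxU32*>(tris)[3 * i + j];
		const physx::PxVec3& vertex = verts[index];
		// AddVisualizationVertex(vertex);
	}
}

This works for my current meshes, but it feels like I am fighting the cooker rather than using it as intended.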

I am using PhysX 3.3.4 with Visual Studio 2013 on Windows 8.1.

So does anybody know what I am doing wrong, or how I can make the cooked mesh keep its 32-bit indices?

Thank you!