Hi! I’m creating an image with the following information:
VkImageCreateInfo info =
{
    VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,                // sType
    NULL,                                               // pNext
    0,                                                  // flags
    height > 1 ? VK_IMAGE_TYPE_2D : VK_IMAGE_TYPE_1D,   // imageType
    vkformat,                                           // format
    extents,                                            // extent
    mips,                                               // mipLevels
    cube ? numFaces : numImages,                        // arrayLayers
    VK_SAMPLE_COUNT_1_BIT,                              // samples
    VK_IMAGE_TILING_OPTIMAL,                            // tiling
    VK_IMAGE_USAGE_SAMPLED_BIT,                         // usage
    VK_SHARING_MODE_CONCURRENT,                         // sharingMode
    2,                                                  // queueFamilyIndexCount
    queues,                                             // pQueueFamilyIndices
    VK_IMAGE_LAYOUT_PREINITIALIZED                      // initialLayout
};
In my case, mips is 9, cube is false, numImages is 1, height is 256, and extents is { 256, 256, 1 }. The memory is allocated with VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT only. However, when I run the following:
VkImageSubresource subres;
subres.arrayLayer = 0;
subres.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
subres.mipLevel = mip;
VkSubresourceLayout layout;
vkGetImageSubresourceLayout(VkRenderDevice::dev, this->img, &subres, &layout);
‘layout’ becomes:
offset: 0
size: 65536
rowPitch: 0
arrayPitch: 89600
depthPitch: 89600
Basically, I get no bytes per row, but I do get bytes per array layer, and that value is even bigger than the total size of my image. Is vkGetImageSubresourceLayout only valid for certain kinds of images? The memoryTypeBits I get from vkGetImageMemoryRequirements is 2, and the first two memory types have property flags 0 and VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT respectively, so the only property flags I can request when allocating the image memory are either 0 or VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT.
Anyway, I tested this on an AMD card and there it works fine: I get a valid row, array, and depth pitch for every mip level.