cuDNN reduce tensor doesn't respect input data formats

Hi,
I’m using cudnnReduceTensor to calculate channel-wise average of the input. To do that I set alpha to 1/n and cudnn reduce descriptor to ADD. My input and output tensors are set to NHWC format. Everything is in float. But cudnnReduceTensor treats the input array in NCHW format anyway. I’m using CuDNN (v8201). Is this a bug or something else?


Hi @dhananjt
Can you please share an API log and the error output with us?
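In case it helps, a cuDNN API log can be captured without any code changes via cuDNN’s logging environment variables (cuDNN 8.x names; a sketch, and the log filename here is just an example):

```shell
# Enable cuDNN API logging before launching the application
export CUDNN_LOGINFO_DBG=1              # turn on informational API logging
export CUDNN_LOGDEST_DBG=cudnn_api.log  # destination: stdout, stderr, or a file
# ...then run the program that calls cudnnReduceTensor
```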

Thanks

Has this issue been resolved? I’m having an issue that looks like the one described above. I’m just doing a simple average along the batch (images) dimension to calculate the average error over all the images in a batch.

The resulting image doesn’t appear to come out as NHWC.

EDIT: I set the output tensor’s format to CUDNN_TENSOR_NCHW and it worked… Could you at least update the docs on how reductions format their output? Although this doesn’t really help, since I’d like to then pass that average error back through my network, and it’s all in NHWC. Sheesh.

EDIT: And just to be more clear: even though I set the output tensor’s format to NCHW, I can still save the image, because the data in the tensor is actually still laid out as NHWC. Maybe I’m confused about how the format flag works? I’d have thought it informs any function of how the data in the tensor is actually stored. Very strange.

I couldn’t solve my issue. I had tried everything (including double-checking the tensors’ formats).
I ended up switching to PyTorch, since it provides a wrapper around CUDA.


I still have the same problem with version 9.1.1. I reported the bug, with an API log, on the NVIDIA developer site.

This issue is tracked in NVBUG ID 4694955

We are glad to let you know that the issue has been root-caused and fixed in-house. The fix will be part of the next cuDNN release after 9.2.1, coming soon. Thanks again for reporting bugs to us.

Best,
Yuki

