TensorRT assertion failed when trying to buildCudaEngine

I have an inception module whose width is a hyperparameter. I’ve found that TensorRT can handle my model as long as the inception module is not too wide: with 8 branches in the module the engine builds fine, but I get errors once the number of branches reaches 12.
The error message I get with 12 branches is:
cudnnBuilder2.cpp:1006: nvinfer1::cudnn::Engine* nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, const nvinfer1::cudnn::HardwareContext&, const nvinfer1::Network&): Assertion `it != tensorScales.end()' failed.

Any feedback would be great, as the branch count is the only difference between the models, and it will be a few days before I can finish training a new model to test a different number of branches.

[edited to remove incorrect statement]
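For context, here is roughly what my engine-build path looks like. This is a minimal sketch, assuming the TensorRT 3.x C++ API with the Caffe parser; the file names, the output blob name, and the commented-out INT8 lines are placeholders rather than my exact code. Since the assertion mentions tensorScales, my guess is that the builder fails to find a scale for some intermediate tensor, possibly the concatenation output of the wide inception module.

#include <cstdio>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Simple logger required by createInferBuilder().
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            printf("[TRT] %s\n", msg);
    }
} gLogger;

ICudaEngine* buildEngine()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the trained model (placeholder file names).
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));  // placeholder output blob

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);

    // If INT8 mode is enabled, the builder looks up a scale for every
    // tensor; the `it != tensorScales.end()' assertion is consistent with
    // that lookup failing for some intermediate tensor (assumption).
    // builder->setInt8Mode(true);
    // builder->setInt8Calibrator(&calibrator);

    // This is the call that hits the assertion with 12 branches.
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}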


Please file a bug here: https://developer.nvidia.com/nvidia-developer-program
Please include the steps/files used to reproduce the problem along with the output of infer_device.