RX and TX ring sizes

What is the downside to increasing RX and TX ring sizes?

We have a very simple test environment that is showing very high latency, and we wonder if we need to increase the RX and TX ring sizes to help reduce pause frames with our 40Gb NICs.

What is the harm in maxing out the ring sizes? The VMware driver defaults to 512.
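
For reference, this is roughly how we check the preset maximums against the current ring sizes and then raise them from a Linux guest (or a bare-metal Linux host). It is only a sketch; the interface name and the 4096 values below are examples, not our actual settings.

    # Rough sketch, not specific to any one driver: show the preset maximums vs. the
    # current RX/TX ring sizes with ethtool, then raise them (needs root).
    # The interface name and the 4096 values are examples only.
    import subprocess

    IFACE = "ens1f0"  # example interface name -- substitute your own

    # "Maxing out" the rings means setting the current values to the preset
    # maximums that this command reports.
    print(subprocess.run(["ethtool", "-g", IFACE],
                         capture_output=True, text=True).stdout)

    # Raise the RX/TX rings; values must not exceed the preset maximums above.
    subprocess.run(["ethtool", "-G", IFACE, "rx", "4096", "tx", "4096"], check=True)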

This forum seems like such a joke compared to other vendors in this space. It’s as if no one is working at the place or moderating it. I asked a similar question when looking into tx_queue_stopped errors and got no reply.

So, to answer your question even though you’ve probably moved past this (but mostly to stimulate SOME sort of action from the MLX people): I have not seen a downside to increasing the ring buffers. Some would argue not to touch them unless you are specifically addressing drops from overrunning the rings, but I’ve also seen cases where increasing them helps even though the “dropped” counters in the ethtool stats aren’t technically incrementing. It’s easy to increase them and then test/monitor.
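
If you do bump them, it helps to watch the counters while your test runs so you can tell whether the change actually mattered. Something like the following is what I mean by test/monitor; it is an untested sketch, and the interface and counter names are just examples (check what ethtool -S actually reports on your cards).

    # Rough sketch: poll a few ethtool -S counters every few seconds so you can see
    # whether the bigger rings changed anything during a test run.
    # Interface and counter names are examples -- check ethtool -S on your NICs.
    import subprocess
    import time

    IFACE = "ens1f0"
    WATCH = ("rx_out_of_buffer", "rx_dropped", "tx_queue_stopped")

    def read_counters(iface):
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True, check=True).stdout
        stats = {}
        for line in out.splitlines():
            name, sep, value = line.partition(":")
            if sep and value.strip().isdigit():
                stats[name.strip()] = int(value.strip())
        return stats

    prev = read_counters(IFACE)
    while True:
        time.sleep(5)
        cur = read_counters(IFACE)
        for name in WATCH:
            if name in cur and cur[name] != prev.get(name, 0):
                print(f"{name}: +{cur[name] - prev.get(name, 0)} (total {cur[name]})")
        prev = cur

Pair that with a before/after look at ethtool -g to confirm the new ring sizes actually took.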