From the Mellanox documentation link, I was able to find the description of the rx[i]_cache_full counters.
However, it is not clear under what conditions this counter increases, or what the side effects are.
Could packet drops be a side effect?
How does one recover from this condition?
I am attaching a screenshot from one of our Mellanox NICs that shows this counter.
The card in use is an MT27710 Family ConnectX-4 Lx.
“Internal page cache” is the RX page buffer management mechanism of the mlx5 Ethernet driver. The driver maintains its own pool of pages for RX.
In the RX datapath, when the lifetime of an skb ends, the driver recycles the used pages and puts them back into the pool instead of releasing and unmapping them. Conversely, when the driver needs to prepare pages for RX, it first takes pages from the pool instead of allocating new pages from the kernel and mapping them.
Each page cache stat counter is maintained per ring:
rx[i]_cache_reuse: The number of times the driver successfully got pages from the pool.
rx[i]_cache_full: The number of times the driver could not return pages to the pool because the pool was full.
rx[i]_cache_empty: The number of times the driver could not get any pages from the pool because the pool was empty.
rx[i]_cache_busy: The number of times the first available page in the pool was still occupied when the driver tried to get pages from the pool.
You can also open a support case for further analysis and/or troubleshooting if you have any issues related to this inquiry.