Tesla P100 decoder capacity issue

I am evaluating Tesla cards for an IP camera network upgrade, and I am running into an issue where decoding significantly underperforms encoding. I have a number of 1080p H.264 30 fps streams at about 4.3 Mbps each; my decode ceiling appears to be around 10 streams, while encoding handles north of 20. I am trying to isolate whether this is an issue with my camera stream management platform or a capacity limit of the NVIDIA card, the Tesla P100.
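One way to take the stream management platform out of the equation is to probe the card's raw NVDEC ceiling with parallel ffmpeg decode jobs against a local clip. This is only a sketch: `sample_1080p.mp4` and the stream count are placeholders for your own test material, and the script just prints the commands unless `RUN=1` is set.

```shell
# Hypothetical raw-NVDEC stress test: decode N copies of a local 1080p
# H.264 clip in parallel with ffmpeg's cuvid decoder, bypassing Wowza.
# Prints the commands by default; set RUN=1 to actually execute them.
STREAMS=${STREAMS:-10}
SAMPLE=${SAMPLE:-sample_1080p.mp4}   # placeholder clip, supply your own
i=1
while [ "$i" -le "$STREAMS" ]; do
    CMD="ffmpeg -loglevel error -c:v h264_cuvid -i $SAMPLE -f null -"
    echo "$CMD"
    if [ "${RUN:-0}" = 1 ]; then
        $CMD &
    fi
    i=$((i + 1))
done
if [ "${RUN:-0}" = 1 ]; then
    wait
fi
```

Raise STREAMS until decodes start dropping below real time; if the ceiling lands around the same 10 streams, it points at the card rather than the software.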

Here are my driver details:

Tue Oct 10 12:22:22 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE…    Off  | 0000:3B:00.0     Off |                    0 |
| N/A   31C    P0    41W / 250W |   2944MiB / 12193MiB |     10%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0    262349    C   …/local/WowzaStreamingEngine/java/bin/java    2942MiB |
+-----------------------------------------------------------------------------+

I have a plugin that records and charts the decoder capacity, if that would help. It basically shows the encoder at about 40% and the decoder at 100% with 15 streams.
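For a second opinion on the plugin's numbers, nvidia-smi can report the encoder and decoder engine utilization directly. A minimal sketch (the guard is only there so it degrades gracefully on a machine without the driver):

```shell
# Query NVENC/NVDEC engine utilization once; append "-l 1" to poll per second.
CMD="nvidia-smi --query-gpu=utilization.encoder,utilization.decoder --format=csv"
if command -v nvidia-smi >/dev/null 2>&1; then
    $CMD
else
    echo "nvidia-smi not found; would run: $CMD"
fi
```

`nvidia-smi dmon -s u` gives the same enc/dec columns as a rolling display, which is handy while ramping stream counts.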

I would appreciate any help.

See this:
Looks like the P100 has 3 NVENC units but only one NVDEC unit. That might explain it.

I looked at that and thought it was an odd design decision that you can't decode as many streams as you can encode, since decoding should be less intensive than encoding. Still, there aren't any details on decode capacity.

Stumbled upon this:
Points towards 1 NVENC >= 1 NVDEC in throughput, going by the numbers for 4K@30 YUV 4:2:2.
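As a back-of-envelope check (my own arithmetic, not a figure from the spec sheet): a 4K@30 stream carries exactly four times the pixel rate of a 1080p@30 stream, so whatever 4K@30 count the single NVDEC is rated for should translate to roughly four times that many 1080p@30 streams, before codec and session overhead.

```shell
# Rough pixel-rate comparison: 4K@30 vs 1080p@30 (ignores bitrate/chroma).
PX_4K30=$((3840 * 2160 * 30))
PX_1080P30=$((1920 * 1080 * 30))
RATIO=$((PX_4K30 / PX_1080P30))
echo "one 4K@30 stream ~= $RATIO 1080p@30 streams"   # prints 4
```

On that basis a ~10-stream 1080p30 ceiling on one NVDEC unit is at least plausible, while three NVENC units comfortably cover 20+ encodes.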