We are just now getting to migrating our network from the 4.x series over to the 5.x series. In doing so, we're setting up demo environments to make sure that after the transition we won't experience issues, but we have noticed that NCLU is gone, replaced by NVUE.
After doing some basics with NVUE, we're a bit blown away at how incomplete it seems. NVUE is missing all kinds of commands we would need to configure these devices, and if we hand-edit the files, those edits just get overwritten the next time an NVUE command is used.
For all intents and purposes, NVUE feels like a no-go in production until it's more mature, but then when we are ready, if we use it, it will overwrite nearly all of our configs.
I am hoping that others have blazed this trail before us and have some advice on migrating to NVUE. It almost feels like we would be better off migrating to Ansible at this point, but for a while there Cumulus and Ansible were not playing well together. The entire transition to NVIDIA really feels like Cumulus is getting back-burnered.
NVUE uses its own data to build the config files, which is independent of the content of those files. Once you have used NVUE to apply a configuration, it is advisable not to switch between NVUE and editing the flat files. Stick to one method of configuration; otherwise, as you have noted, the config file will get overwritten the next time an NVUE config is applied.
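To make that concrete, the "stick to NVUE" workflow looks roughly like this (a sketch, not official guidance; the interface and address are just examples):

```shell
# Make every change through NVUE instead of hand-editing the flat files.
nv set interface swp1 ip address 10.0.0.1/24   # stage a change in the pending revision
nv config diff                                  # review what would change before committing
nv config apply                                 # apply; NVUE regenerates the flat files itself
nv config save                                  # persist the applied config across reboots
```

The key point is that `nv config apply` rewrites the underlying files from NVUE's own data, which is exactly why manual edits to those files don't survive.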
Also see warning from: https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-53/System-Configuration/NVIDIA-User-Experience-NVUE/NVUE-CLI/
There are also various tools to assist with migrating from NCLU to NVUE, e.g.
Here’s the command I was trying to run until I saw @erichan and their reply:
nv set interface swp2 link speed 1000
and this was the result of that command:
Error at speed: '1000' is not one of ['auto']
I used "1G" instead and that appears to work, but outside of NVUE there are further issues.
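For anyone else hitting the same error, this is the form that worked for me (hedged: the exact set of accepted speed tokens may vary by release and platform):

```shell
# NVUE expects speed tokens like 'auto' or '1G' rather than a raw Mb/s value
nv set interface swp2 link speed 1G
nv config apply
```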
According to the 5.3 documentation, on the Spectrum chipset it is no longer accepted to specify link speed; everything is essentially auto-negotiated and detected. Unfortunately, I have had ports that will not report the link as up unless a port speed is specified.
This is making more sense now in context. Thank you for your quick response. It sounds like we still have a documentation bug though as the link speed setting is still very much supported. What is not allowed in the future is setting link speed and breakout together in ports.conf file.
In the future, ports.conf will only be used for breakout config, and /etc/network/interfaces will only be used for speed config. To NVUE users this distinction will be invisible, since the config will always go to the right files, but for folks doing manual file edits, at some point ports.conf will no longer accept a speed specification alongside a breakout. (There were race conditions where the ports.conf speed setting would fight the interface speed setting, and the system would not know which one should be respected if they did not match.)
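To illustrate the split described above, the two files would end up looking something like this (illustrative fragments only, not copied from any specific release; port numbers and speeds are examples):

```shell
# /etc/cumulus/ports.conf -- breakout specification only
#   1=4x25G

# /etc/network/interfaces -- speed set per interface via ifupdown2
#   auto swp1s0
#   iface swp1s0
#       link-speed 25000
```

So the breakout stays in ports.conf and the per-interface speed lives with the interface definition, which removes the ambiguity when the two used to disagree.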
If you can point me to the documentation location that is tripping the understanding up, I can seek to make sure it is cleaned up.
For NVIDIA Spectrum ASICs, MTU is the only port attribute you can directly configure. The Spectrum firmware configures FEC, link speed, duplex mode and auto-negotiation automatically, following a predefined list of parameter settings until the link comes up. However, you can disable FEC if necessary, which forces the firmware to not try any FEC options.
If you read this the way I did, then since I am on a Spectrum chipset, I cannot set link speed because it's set automatically. But as you can see, I can set it, and without setting it the port won't even register as up.
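In case it helps anyone debugging the same symptom, this is the sequence I would use to force the speed and then verify the link (a sketch; I'm assuming NVUE's `nv show` output here, and `swp2` is just my example port):

```shell
nv set interface swp2 link speed 1G   # force the speed despite the docs saying it's automatic
nv config apply
nv show interface swp2 link            # confirm carrier state and negotiated speed afterwards
```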