In the uploaded image you can see the console output timestamp of the last step and the system time. The training has been stuck for ~20 hours. This is the second time it has happened with this training config. I have run 4 other trainings with different configs and did not face this issue; the previous training also used the same network input size and the same batch size. Usually, if it were a GPU memory issue, TAO would exit with an out-of-memory error, but no error messages are appearing. GPU memory utilization is 74% and I can see GPU activity in nvtop.
The difference between the configs is in the lr_scheduler's 'decay_steps' for 'cosine_decay': the value was changed from 500 to 650800. I don't see how this change could cause this behaviour.
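For reference, here is a minimal sketch of how a standard cosine-decay schedule uses decay_steps, assuming TAO follows the usual TensorFlow formulation (tf.compat.v1.train.cosine_decay); I have not checked TAO's internal implementation. Raising decay_steps only stretches the annealing horizon, it does not change how many optimizer steps actually run:

```python
import math

def cosine_decay_lr(base_lr, global_step, decay_steps, alpha=0.0):
    """Standard cosine decay, following TensorFlow's formulation.

    global_step is clipped to decay_steps, so the learning rate simply stays
    at its floor (alpha * base_lr) once decay_steps has been reached.
    """
    step = min(global_step, decay_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))
    return base_lr * ((1.0 - alpha) * cosine + alpha)

# With decay_steps=500 the LR bottoms out after 500 steps;
# with decay_steps=650800 it keeps annealing until the last step.
for decay_steps in (500, 650800):
    lrs = [cosine_decay_lr(1e-3, s, decay_steps) for s in (0, 500, 650800)]
    print(decay_steps, [round(lr, 8) for lr in lrs])
```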
Please docker pull the TAO 5.0 docker (nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5) and retry, because it is the latest version for Unet. Also, you can find the source code inside the docker to help with debugging.
There is an ambiguity in the visualisation: do 'global_step' and 'decay_steps' represent epochs or steps? The reason decay_steps is such a high value is that I need the learning rate to anneal until the very last epoch, and 650800 is the last step of the final epoch.
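For clarity, this is how the value was computed, assuming 'global_step' counts optimizer steps (one per batch) rather than epochs; the dataset size, batch size, and epoch count below are placeholders, not the actual experiment values, which work out to 650800:

```python
import math

# Placeholder values for illustration only; substitute the real dataset
# size, batch size, and epoch count from the experiment spec.
num_train_samples = 10000
batch_size = 4
num_epochs = 100

# One optimizer step per batch, so the final global_step is:
steps_per_epoch = math.ceil(num_train_samples / batch_size)
last_global_step = steps_per_epoch * num_epochs
print(steps_per_epoch, last_global_step)  # decay_steps is set to last_global_step
```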