UNet training on TAO Toolkit is getting stuck

• Hardware : NVIDIA GeForce RTX 2060
• Network Type : UNet
• TLT Version : nvidia/tao/tao-toolkit-tf:v3.22.05-tf1.15.5-py3
• Training spec file : tao_unet_05_08_24_train_v5.txt (1.7 KB)

In the uploaded image you can see the console output timestamp for the last step and the system time. The training has been stuck for ~20 hours. This is the second time it has happened with this training config. I have run 4 other trainings with different training configs and did not face this issue; a previous training also had the same input size for the network as well as the same batch size. Usually, if it's a GPU memory issue, TAO exits with an out-of-memory error, but no error messages are popping up here. GPU memory utilization is 74% and I can see GPU activity in nvtop.

Also attaching a screenshot of htop. You can see that RAM utilization is also low.

Please check which training configs did not face this issue, and find the difference between the configs. Thanks.

The difference between the configs is in the lr_scheduler's decay_steps for cosine_decay; the value was updated from 500 to 650800. I don't see how this change could cause this behaviour.
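
For reference, the relevant block of the spec looks roughly like this (assuming the lr_scheduler sits inside training_config as in the standard UNet spec; the alpha value is just a placeholder, and only decay_steps differs between the two configs):

training_config {
  # ... other training_config fields unchanged ...
  lr_scheduler {
    cosine_decay {
      alpha: 0.0            # placeholder; same in both configs
      decay_steps: 650800   # was 500 in the earlier config
    }
  }
}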

Please docker pull the TAO 5.0 docker (nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5) and retry, since it is the latest version for UNet. Also, you can find the source code inside the docker to help debug.

The source code is in tao_tensorflow1_backend/nvidia_tao_tf1/cv/unet/utils/model_fn.py at 2ec95cbbe0d74d6a180ea6e989f64d2d97d97712 · NVIDIA/tao_tensorflow1_backend · GitHub.
It seems that you set a large value for decay_steps. You can check the effect of decay_steps: 650800 with the code below.
An example:

import matplotlib.pyplot as plt
import tensorflow as tf

y = []
z = []
EPOCH = 200

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluate the cosine-decay schedule at each step for two decay_steps values
    for global_step in range(EPOCH):
        learning_rate1 = tf.train.cosine_decay(
            learning_rate=0.1, global_step=global_step, decay_steps=50)
        learning_rate2 = tf.train.cosine_decay(
            learning_rate=0.1, global_step=global_step, decay_steps=100)

        lr1 = sess.run(learning_rate1)
        lr2 = sess.run(learning_rate2)
        y.append(lr1)
        z.append(lr2)

# Plot both schedules over the 200 steps
x = range(EPOCH)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(x, y, 'r-', linewidth=2)
plt.plot(x, z, 'b-', linewidth=2)
plt.title('cosine_decay')
ax.set_xlabel('step')
ax.set_ylabel('learning rate')
plt.legend(labels=['decay_steps=50', 'decay_steps=100'], loc='upper right')
plt.show()

Result of the above example:
[plot of the two cosine_decay learning-rate curves]

BTW, there is another issue for TAO 5.0 as well. You need to change the code according to UNet training progress counter frozen after ~18.000 steps - #19 by Morganh.

There is an ambiguity in the visualisation: do 'global_step' and 'decay_steps' represent epochs or steps? The reason decay_steps is such a high value is that I need the learning rate to anneal until the very last epoch, and 650800 is the last step in the final epoch.

The calculation is:
A = min(global_step, decay_steps)
cosine_decay = 0.5 * (1 + cos(pi * A / decay_steps))
decayed = (1 - alpha) * cosine_decay + alpha
decayed_learning_rate = learning_rate * decayed

The global_step is in the range of epochs. In the above example, it is 0 to 199.
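
As a quick standalone check of the formula above (plain Python, not TAO code; base_lr = 0.0001 and alpha = 0.0 are placeholder values), you can compare how fast the learning rate decays for decay_steps=500 versus decay_steps=650800:

import math

def decayed_lr(base_lr, global_step, decay_steps, alpha=0.0):
    # Same calculation as quoted above (tf.train.cosine_decay)
    step = min(global_step, decay_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * step / decay_steps))
    decayed = (1 - alpha) * cosine + alpha
    return base_lr * decayed

base_lr = 0.0001  # placeholder base learning rate
for step in (0, 250, 500, 10000, 325400, 650800):
    print(step,
          decayed_lr(base_lr, step, decay_steps=500),
          decayed_lr(base_lr, step, decay_steps=650800))

With decay_steps=500 the learning rate reaches its minimum at step 500 and stays there, while with decay_steps=650800 it is still close to the base value after tens of thousands of steps.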

The training didn't get stuck when I updated decay_steps to 80.
