I tried to run the turbulent channel example, testing both the Launder-Spalding wall-function cases, k_ep and k_om.
However, I didn’t get the results given in the user guide. My case didn’t converge. The loss drops to ~1e-3 at around 3000 steps and then increases, oscillating between 1e-1 and 1e1 until the end.
Moreover, my plot of TKE is simply a horizontal line, unlike the sloping curve shown in the guide.
I didn’t change any parameters. May I know why this is happening?
Is there a bug or something wrong with the python code?
I rechecked and realised that I can reproduce the TKE plot from the guide at the end of training, which is 800,000 steps. However, the loss at that point is high, around 1e-1.
But if I stop at 3000 steps, where the loss is ~1e-3, I get a horizontal line in the TKE plot.
I wonder why this is so. Shouldn’t lower loss give better results?
No, it shouldn’t. I have discussed the same behaviour in my paper with proper reasoning. This is a disadvantage of Modulus: it doesn’t store the model with the lowest training loss. Sometimes the SiReN architecture decides to go crazy after some iterations.
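If you want to keep the weights from the lowest-loss step yourself, you can track them in the training loop. This is a generic PyTorch sketch, not a Modulus API; the `BestLossCheckpoint` helper and its method names are my own invention for illustration:

```python
import torch
import torch.nn as nn


class BestLossCheckpoint:
    """Keep a copy of the model weights at the lowest training loss seen so far."""

    def __init__(self):
        self.best_loss = float("inf")
        self.best_state = None

    def update(self, model: nn.Module, loss: float) -> bool:
        """Record the current weights if `loss` is a new minimum. Returns True on improvement."""
        if loss < self.best_loss:
            self.best_loss = loss
            # clone so later optimizer steps don't mutate the saved copy
            self.best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
            return True
        return False

    def restore(self, model: nn.Module) -> None:
        """Load the best-so-far weights back into the model."""
        if self.best_state is not None:
            model.load_state_dict(self.best_state)


# Usage sketch with a toy model and a made-up loss history that
# improves and then diverges, like the oscillation described above.
model = nn.Linear(2, 1)
ckpt = BestLossCheckpoint()
for loss in [0.5, 1e-3, 0.3, 10.0]:
    ckpt.update(model, loss)

ckpt.restore(model)        # model now holds the weights from the 1e-3 step
print(ckpt.best_loss)      # 0.001
```

You would call `ckpt.update(model, aggregated_loss)` once per training step (or per validation interval) and `ckpt.restore(model)` before evaluating or saving, so a late-training divergence can't overwrite your best checkpoint.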