I followed the three_fin_3d example and created my own heat transfer problem. I found that the chip temperature goes up, then drops back down again, and I don't know why.

Is this because the training ran for too long? Should I terminate it when the temperature reaches its maximum value?

Hi @cxi1

Is a higher temperature expected? This is very problem dependent, but typically when I see something like this it indicates that the model was unstable, jumped out of the local minimum (i.e. almost diverged), and then re-converged to the solution it was originally optimizing toward.

Check the output fields during this period to see if there are any unphysical fluctuations going on. Ideally these monitors should converge smoothly over the training iterations (although this doesn't always happen for tricky problems).
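One way to screen monitor histories for the kind of spike described above is a simple outlier check. This is a hypothetical helper written from scratch, not part of the Modulus API; the window size and threshold are illustrative:

```python
import numpy as np

def convergence_is_smooth(values, window=5, spike_ratio=5.0):
    # Flags a monitor series as non-smooth when any point deviates from
    # its trailing median by more than `spike_ratio` times the series'
    # median absolute deviation (a robust scale estimate).
    v = np.asarray(values, dtype=float)
    mad = np.median(np.abs(v - np.median(v))) or 1e-12
    for i in range(len(v)):
        trailing = v[max(0, i - window):i + 1]
        if abs(v[i] - np.median(trailing)) > spike_ratio * mad:
            return False
    return True

# A smoothly converging series passes; one with a large excursion does not.
convergence_is_smooth([70, 68, 66.5, 65.8, 65.5])   # smooth decay
convergence_is_smooth([70, 68, 300, 65.8, 65.5])    # spike mid-training
```

Running a check like this over each monitor CSV that the training run writes out can help separate "almost diverged then recovered" from genuinely smooth convergence.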

I expected the chip temperature to be 70 degrees, compared with the fluid inlet temperature of 25 degrees. But no matter how much I increase the source term, the heat sink temperature jumps higher at first but always converges back to the fluid temperature of 25 degrees. And the fluid temperature never increases; it always stays at 25 degrees, which is strange.

Later I switched to the Modified Fourier Network, so it seems the large conductivity difference between the fluid and the heat sink is probably the cause. I then tried different heat sink design parameters and fluid parameters and ran a parameterized simulation, and I found that the predicted temperature variation is small. I think there may be a non-dimensionalization issue with the geometry in my code: my inlet is a circle of radius 0.017 mm, so I think I need to scale the geometry somehow and then retrain the flow solver followed by the heat solver.
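A minimal sketch of the non-dimensionalization being described: pick the inlet radius as the characteristic length so coordinates become O(1), and normalize temperature across the expected range. The 0.017 radius and the 25/70 degree temperatures come from this thread; everything else (names, the choice of scales) is an assumption for illustration:

```python
import numpy as np

R_INLET = 0.017      # inlet radius in the original units (from the post)
T_INLET = 25.0       # fluid inlet temperature, deg C
T_CHIP = 70.0        # expected chip temperature, deg C

length_scale = R_INLET          # inlet radius maps to 1
temp_scale = T_CHIP - T_INLET   # expected temperature range maps to 1

def nondim_point(xyz):
    # Map physical coordinates to O(1) network inputs.
    return np.asarray(xyz, dtype=float) / length_scale

def nondim_temp(T):
    # Map temperature so the inlet is 0 and the expected chip temp is 1.
    return (T - T_INLET) / temp_scale

def redim_temp(theta):
    # Invert the mapping to recover physical temperature.
    return theta * temp_scale + T_INLET
```

With scales like these, the network trains on inputs and outputs near unity, which tends to be much better conditioned than feeding raw millimetre-scale coordinates and degree-scale temperatures directly.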

I also find it a headache to follow the documentation for the limerock, aneurysm, and chip2D examples, because I don't understand the logic behind what to set for the scale and translate values.
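One common recipe for choosing scale and translate values, offered here as a generic sketch and not as the exact logic of the limerock or chip2D example code: translate the geometry so its bounding-box center sits at the origin, then scale so its longest side has unit length, applying the transform as x' = (x + translate) * scale.

```python
import numpy as np

def scale_and_translate(bbox_min, bbox_max):
    # Given the geometry's axis-aligned bounding box, return a scale and
    # translate that center it at the origin and make its longest side
    # unit length. Convention assumed: x' = (x + translate) * scale.
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    translate = -(bbox_min + bbox_max) / 2.0
    scale = 1.0 / np.max(bbox_max - bbox_min)
    return scale, translate

# Example: a 2 x 1 x 1 box with one corner at the origin.
s, t = scale_and_translate([0, 0, 0], [2, 1, 1])
```

Double-check which order the example you are following applies the two operations in (scale-then-translate vs. translate-then-scale), since the numerical values of the translate vector differ between the two conventions.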