Pressure gradient boundary condition

The lid-driven cavity flow example does not use a pressure boundary condition, but in conventional CFD a zero-pressure-gradient boundary condition is imposed on wall surfaces.
So I used GradNormal as shown in the attached file and added a boundary condition setting "normal_gradient_p" to zero on the wall surface, but the model does not seem to be learning well. Could you give me any ideas about the cause or what I should fix?

Hi @user106225

With physics-informed learning, methods similar to those used in numerical solvers can be applied, but they do not always work best. The optimization of a solver and of a neural network are very different, so different strategies are sometimes needed to train an AI surrogate. For all practical purposes, it largely comes down to empirical testing. Each PDE is different and may require different strategies to achieve good convergence.

Does this mean a pressure-gradient boundary won't work? No, I'm sure there is a way to weight the losses or adjust the training appropriately to get something that learns better, but that is up to the user to figure out. I would suggest looking through the different examples we have, as well as the literature in the field, for guidance based on methods others have found successful. We also have a recommended-practices section in our user guide with some typical strategies we have found to help.
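To make the loss-weighting idea concrete, here is a minimal framework-free sketch (plain NumPy, not the Modulus API; the residual values, constraint names, and weights are made-up illustrations). The point is simply that a constraint whose residual dominates the total loss can be down-weighted so the optimizer does not neglect the other terms:

```python
import numpy as np

# Hypothetical per-constraint residuals at some training step
# (e.g. interior PDE residual, no-slip wall, pressure-gradient BC).
residuals = {
    "continuity": np.array([0.02, -0.01]),
    "no_slip": np.array([0.1, 0.05]),
    "normal_gradient_p": np.array([0.5, -0.4]),
}

# Down-weight the term that dominates early in training; the values
# here are illustrative and would be tuned empirically in practice.
weights = {"continuity": 1.0, "no_slip": 1.0, "normal_gradient_p": 0.1}

# Weighted sum of per-constraint mean-squared residuals.
total_loss = sum(
    w * np.mean(residuals[name] ** 2) for name, w in weights.items()
)
print(f"weighted total loss: {total_loss:.6f}")
```

In Modulus this kind of weighting is typically controlled through per-constraint weights rather than computed by hand, but the effect on the total loss is the same.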


I tried to understand user106225's ldc code and the three_fin_3d code to adapt them to my problem. I have some questions:

  1. To impose a Neumann BC, these lines of code are required:

gn_p = GradNormal("p", dim=2, time=False)

nodes = … + gn_p.make_nodes()

no_slip = PointwiseBoundaryConstraint(
    outvar={"u": 0, "v": 0, "normal_gradient_p": 0},
    criteria=y < height / 2,
)
ldc_domain.add_constraint(no_slip, "no_slip")

Is that so? It seems that I can give any name (e.g. gn_p) to the GradNormal node, but "normal_gradient_p" cannot be changed.

  2. If instead of the above, I use only:

outlet = PointwiseBoundaryConstraint(
    outvar={"p__x": 0},
)
ldc_domain.add_constraint(outlet, "outlet")

Is this valid as well? Thanks.

Hi, @tsltaywb
If the Neumann boundary is parallel to an axis, then imposing a constraint on "normal_gradient_p" is equivalent to imposing one on "p__x", so either method gives the same result.
If the Neumann boundary is not parallel to an axis, I think "normal_gradient_p" should be used.
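To see why the two constraints coincide only on an axis-aligned boundary, here is a small standalone sketch (plain NumPy, independent of Modulus; the function name is illustrative). The normal gradient is n · ∇p, which reduces to ∂p/∂x exactly when the boundary normal points along the x-axis:

```python
import numpy as np

def normal_gradient(grad_p, normal):
    """Normal gradient n . grad(p) at a boundary point."""
    return np.dot(normal, grad_p)

# Some gradient of p at a boundary point: (p__x, p__y).
grad_p = np.array([0.3, -0.7])

# Wall parallel to the y-axis: outward normal along x.
n_axis = np.array([1.0, 0.0])
# Here n . grad(p) reduces to p__x, so the two constraints match.
assert np.isclose(normal_gradient(grad_p, n_axis), grad_p[0])

# Tilted wall: normal at 45 degrees.
n_tilt = np.array([1.0, 1.0]) / np.sqrt(2.0)
# Now n . grad(p) mixes p__x and p__y, so constraining p__x alone
# is not the same as constraining normal_gradient_p.
assert not np.isclose(normal_gradient(grad_p, n_tilt), grad_p[0])
```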


Hi user106225,

Thanks for the explanations!

I'm working on a deep learning project using NVIDIA GPUs, and I'm encountering a perplexing problem with my model training. I'm attempting to train a complex neural network for image classification, but I keep running into an error that is proving quite challenging to resolve. The error message reads: "CUDA out of memory". I've checked the memory requirements of my model and tried various batch sizes, but the error persists, even though I have a high-end NVIDIA GPU with substantial memory capacity.