Help for the use of SupervisedGridConstraint

I am using Modulus to solve a 3D PDE. Just a PINN, not a DeepONet.
The boundary condition of my PDE is not just a constant value, and it is not easy to write as a SymPy function.
So I have generated a dataset of 10,000 boundary coordinate points (x, y, z) with the corresponding true values of the PDE solution (u).

I looked at Constraints, and it seems that I should use SupervisedGridConstraint to add those boundary points as a boundary condition.

So I wrote the code as:

classical_method_boundar_r_1 = np.loadtxt('../classical_method_results/grid_r_is_1.txt', unpack=True)
x_bc, y_bc, z_bc, u_bc = classical_method_boundar_r_1
from modulus.domain.constraint.discrete import DictGridDataset, SupervisedGridConstraint
bc = PointwiseBoundaryConstraint(
    outvar={"u": 0},
    lambda_weighting={"u": 1},
    quasirandom=True,
    fixed_dataset=False,
)

domain.add_constraint(bc, "bc")

where grid_r_is_1.txt contains rows of (x, y, z, u):

-0.140437 -0.0724004 0.987439 0.0087098
-0.135563 -0.0811617 0.987439 0.00870911
-0.130142 -0.0895963 0.987439 0.00870838
-0.124197 -0.09767 0.987439 0.00870761

Unfortunately, it crashes and says:

Cell In [20], line 15
     12 slv = Solver(cfg, domain)
     14 # start solver
---> 15 slv.solve(callback=print)

File /modulus/modulus/solver/, in Solver.solve(self, sigterm_handler, callback, if_save)
    157 def solve(self, sigterm_handler=None, callback=None, if_save=True):
    158     if self.cfg.run_mode == "train":
--> 159         self._train_loop(sigterm_handler, callback=callback, if_save=if_save)
    160     elif self.cfg.run_mode == "eval":
    161         self._eval()

File /modulus/modulus/, in Trainer._train_loop(self, sigterm_handler, callback, if_save)
    520 if self.cfg.cuda_graphs:
    521     # If cuda graphs statically load it into defined allocations
    522     self.load_data(static=True)
--> 524     loss, losses = self._cuda_graph_training_step(step)
    525 else:
    526     # Load all data for constraints
    527     self.load_data()

File /modulus/modulus/, in Trainer._cuda_graph_training_step(self, step)
    715     self.global_optimizer_model.zero_grad(set_to_none=True)
     27 norm = self.weight.norm(dim=1, p=2, keepdim=True)
     28 weight = self.weight_g * self.weight / norm
---> 29 return F.linear(input, weight, self.bias)

RuntimeError: size mismatch, got 10, 10x3,300

I have no idea how to debug this.
Could anyone give me some suggestions?

Thanks in advance!

Hi @Zhao-ZC

It's a little hard to tell what you're doing, since the constraint you posted doesn't actually use x_bc, y_bc, z_bc, or u_bc. PointwiseBoundaryConstraint is a CSG constraint that samples points from a geometry object.

The error you have means the input to one of your network's layers is not the expected size. This could come from declaring the network incorrectly, adding the data incorrectly, or defining your variables with the wrong dimensions.
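To illustrate what that error is telling you, here is a toy numpy analogue (not Modulus code, just an assumption about what `F.linear` is complaining about): a linear layer's weight matrix has shape (out_features, in_features), and the matmul fails when the batch's feature width doesn't match in_features.

```python
import numpy as np

# Toy reproduction of the shape mismatch: a linear layer's weight
# expects `in_features` columns, and fails if the batch has a different width.
batch = np.random.rand(10, 3)          # 10 boundary points with (x, y, z)
weight_ok = np.random.rand(300, 3)     # layer declared with 3 input features
weight_bad = np.random.rand(300, 300)  # layer declared with 300 input features

out = batch @ weight_ok.T              # works: (10, 3) @ (3, 300) -> (10, 300)
try:
    batch @ weight_bad.T               # fails, analogous to F.linear's size mismatch
except ValueError as e:
    print("mismatch:", e)
```

So the first thing to check is that the number of input keys your network was built with matches the number of input variables your constraint feeds it.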

Without knowing too much about the nature of your problem, for adding boundary points I would suggest just using dictionaries and the PointwiseConstraint.from_numpy() function. SupervisedGridConstraint is meant for image-like (grid) data. These forum threads may provide some insight:
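For example, a minimal sketch of the dictionary approach, assuming your file layout of (x, y, z, u) per row (the inline data below is hypothetical, and the commented Modulus call assumes you already have `nodes` and `domain` defined as in a standard Modulus script):

```python
import numpy as np

# Hypothetical sample rows in the same (x, y, z, u) layout as grid_r_is_1.txt.
data = np.array([
    [-0.140437, -0.0724004, 0.987439, 0.0087098],
    [-0.135563, -0.0811617, 0.987439, 0.00870911],
])
x_bc, y_bc, z_bc, u_bc = data.T  # in practice: np.loadtxt(..., unpack=True)

# Modulus expects dictionaries mapping variable names to (N, 1) arrays.
invar = {
    "x": x_bc.reshape(-1, 1),
    "y": y_bc.reshape(-1, 1),
    "z": z_bc.reshape(-1, 1),
}
outvar = {"u": u_bc.reshape(-1, 1)}

# Sketch of the constraint (requires a Modulus environment and `nodes`):
# from modulus.domain.constraint import PointwiseConstraint
# bc = PointwiseConstraint.from_numpy(
#     nodes=nodes, invar=invar, outvar=outvar, batch_size=1000,
# )
# domain.add_constraint(bc, "bc")
print(invar["x"].shape, outvar["u"].shape)
```

The key detail is the (N, 1) column shape for each variable; passing flat (N,) arrays is a common cause of dimension errors like the one you are seeing.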