NVIDIA Modulus: How to implement residual-based adaptive sampling in PINN?

Hello everyone!

I am using the NVIDIA Modulus framework to build physics-informed neural network (PINN) applications, and I would like to implement residual-based adaptive sampling to improve training efficiency and accuracy. I'm seeking the community's advice on how to achieve this.

My Goals:

  1. Add a mechanism to dynamically adjust sampling points based on the magnitude of local residuals during training.
  2. Improve the model’s performance using this adaptive sampling approach.

Key Questions:

  1. Has anyone successfully implemented a residual-based adaptive sampling method in the NVIDIA Modulus framework?
  2. Does Modulus provide any tools or interfaces to support dynamic sampling point updates during training?

Challenges:

  1. I couldn’t find official examples or documentation specifically covering residual-based adaptive sampling in Modulus; the closest thing I’ve spotted is the importance-sampling example mentioned below.
  2. I’m unsure how to dynamically update sampling points during the training loop (my rough guess is sketched after this list).
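
The closest mechanism I’ve spotted so far is the importance_measure argument that Modulus Sym’s pointwise constraints accept (it appears in the LDC importance-sampling example, ldc_2d_importance_sampling.py, if I’m reading the examples repo right). As far as I can tell, it reweights which points are drawn from a fixed pool rather than adding new collocation points, but weighting by the PDE residual magnitude seems close in spirit to goal 1. Below is my rough, untested sketch of how I imagine wiring it up. To be clear about my assumptions: nodes, geo, and the residual key "diffusion_u" are placeholders from my own setup, and the import paths assume Modulus Sym >= 1.0 (older releases used modulus.* instead of modulus.sym.*). Is this the intended use of importance_measure?

import torch
from modulus.sym.graph import Graph
from modulus.sym.key import Key
from modulus.sym.domain.constraint import Constraint, PointwiseInteriorConstraint

device = "cuda" if torch.cuda.is_available() else "cpu"

# Graph mapping (x, t) to the PDE residual "diffusion_u".
# `nodes` is the usual list of Modulus nodes (network + PDE); placeholder here.
residual_graph = Graph(
    nodes, invar=[Key("x"), Key("t")], req_names=[Key("diffusion_u")]
).to(device)

def importance_measure(invar):
    # Evaluate the residual at the candidate points and use its magnitude,
    # plus a small floor so no region gets zero sampling probability.
    outvar = residual_graph(
        Constraint._set_device(invar, device=device, requires_grad=True)
    )
    importance = torch.abs(outvar["diffusion_u"]) + 1e-2
    return importance.cpu().detach().numpy()

interior = PointwiseInteriorConstraint(
    nodes=nodes,
    geometry=geo,
    outvar={"diffusion_u": 0},
    batch_size=1024,
    importance_measure=importance_measure,  # residual-weighted point sampling
)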

If anyone has experience with similar functionality or has sample code to share, I would greatly appreciate it. Below is a working RAR-G (greedy residual-based adaptive refinement) implementation in DeepXDE that illustrates the idea, followed by my rough attempt at a Modulus translation. Any suggestions on how to adapt this properly for NVIDIA Modulus would be very helpful!

Thank you in advance for your help!

import numpy as np
import deepxde as dde
from deepxde.backend import tf


def main():
    # Diffusion equation residual for u_t - u_xx = -e^{-t}(sin(pi x) - pi^2 sin(pi x)),
    # whose exact solution is u(x, t) = sin(pi x) e^{-t}.
    def pde(x, y):
        dy_t = dde.grad.jacobian(y, x, j=1)
        dy_xx = dde.grad.hessian(y, x, j=0)
        return (
            dy_t
            - dy_xx
            + tf.exp(-x[:, 1:])
            * (tf.sin(np.pi * x[:, 0:1]) - np.pi ** 2 * tf.sin(np.pi * x[:, 0:1]))
        )

    # Exact solution, used for the L2 relative error metric.
    def func(x):
        return np.sin(np.pi * x[:, 0:1]) * np.exp(-x[:, 1:])

    geom = dde.geometry.Interval(-1, 1)
    timedomain = dde.geometry.TimeDomain(0, 1)
    geomtime = dde.geometry.GeometryXTime(geom, timedomain)
    # Start with only 10 random collocation points; the RAR loop adds more below.
    data = dde.data.TimePDE(
        geomtime,
        pde,
        [],
        num_domain=10,
        train_distribution="pseudo",
        solution=func,
        num_test=10000,
    )

    layer_size = [2] + [32] * 3 + [1]
    net = dde.nn.FNN(layer_size, "tanh", "Glorot uniform")

    # Hard-constrain the initial and boundary conditions so no IC/BC loss terms are needed.
    def output_transform(x, y):
        return tf.sin(np.pi * x[:, 0:1]) + (1 - x[:, 0:1] ** 2) * x[:, 1:] * y

    net.apply_output_transform(output_transform)

    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3, metrics=["l2 relative error"])
    losshistory, train_state = model.train(iterations=10000)

    error = [losshistory.metrics_test[-1]]

    # RAR-G loop: repeatedly add the candidate point with the largest
    # PDE residual as a new training anchor, then retrain.
    for i in range(40):
        X = geomtime.random_points(10000)  # candidate points
        Y = np.abs(model.predict(X, operator=pde))[:, 0]  # |residual| at candidates
        X_ids = np.argsort(Y)[-1:]  # index of the worst point (top-1)
        data.add_anchors(X[X_ids])

        losshistory, train_state = model.train(iterations=1000)
        error.append(losshistory.metrics_test[-1])

    dde.saveplot(losshistory, train_state, issave=True, isplot=True)
    np.savetxt("error_RAR-G.txt", error)
    return error


if __name__ == "__main__":
    main()
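
For the RAR-G loop itself, the best translation I can come up with for Modulus Sym is below: since I haven’t found an equivalent of data.add_anchors, the idea is to sample candidates from the geometry, rank them by residual magnitude, and register the worst ones as an extra pointwise constraint between solver runs. This is untested, and the names nodes, geo, domain, solver, and the residual key "diffusion_u" are assumptions from my setup; I also don’t know whether re-running solver.solve() after adding a constraint behaves sensibly, or whether there is a supported hook for updating a constraint’s dataset in place.

import numpy as np
import torch
from sympy import Symbol
from modulus.sym.graph import Graph
from modulus.sym.key import Key
from modulus.sym.geometry.parameterization import Parameterization
from modulus.sym.domain.constraint import Constraint, PointwiseConstraint

device = "cuda" if torch.cuda.is_available() else "cpu"

# Same residual-evaluation graph as in the sketch above.
residual_graph = Graph(
    nodes, invar=[Key("x"), Key("t")], req_names=[Key("diffusion_u")]
).to(device)

# Time enters through a parameterization in my setup; adjust as needed.
time_range = Parameterization({Symbol("t"): (0.0, 1.0)})

for i in range(40):
    # Sample candidate collocation points from the geometry.
    candidates = geo.sample_interior(10000, parameterization=time_range)
    invar = {k: candidates[k] for k in ("x", "t")}

    # Rank candidates by absolute PDE residual.
    out = residual_graph(
        Constraint._set_device(invar, device=device, requires_grad=True)
    )
    res = torch.abs(out["diffusion_u"]).flatten().cpu().detach().numpy()
    worst = np.argsort(res)[-10:]  # keep the 10 highest-residual points

    # Register the worst points as extra "anchor" collocation points.
    anchors = PointwiseConstraint.from_numpy(
        nodes=nodes,
        invar={k: v[worst] for k, v in invar.items()},
        outvar={"diffusion_u": np.zeros((len(worst), 1))},
        batch_size=len(worst),
    )
    domain.add_constraint(anchors, f"rar_anchors_{i}")

    solver.solve()  # continue training with the augmented point set

Does this look like a reasonable pattern, or is there a cleaner way to refresh a constraint’s points during training?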

Looking forward to your suggestions! 😊