How to save derivatives to vti and npz

Hello,

We would like to save derivatives of the velocity (u__x, u__y, …) to .vti and .npz files during inference, similar to taylor_green.py.
Exporting u__x, u__y, … through PointVTKInferencer fails.

Can you please let us know how to do it?

The code we tried is as follows:

# add inference data for time slices
ngridx, ngridy, ngridz = 128, 64, 16
infer_time = 100.0
fill_time_mf_actual = 1.25
time_steps_mf_actual = np.array([0.0063,0.0479,0.1041,0.2036,0.3036,0.4031,0.5033,0.598,0.6989,0.7987,0.8979,1.004,1.098,1.198,1.248])
time_steps_infer = infer_time * time_steps_mf_actual/fill_time_mf_actual
for j, specific_inlet_vel in enumerate(np.linspace(inlet_vel_min, inlet_vel_max, 5)):
    vel = inlet_vel_min + j*(inlet_vel_max-inlet_vel_min)/(nd_inlet_vel_infer-1)
    for i, specific_time in enumerate(time_steps_infer):
        vtk_obj = VTKUniformGrid(
            bounds=[channel_length, channel_width, channel_height],
            npoints=[ngridx, ngridy, ngridz],
            export_map={
                "u": ["u", "v", "w"],
                "p": ["p"],
                "u__y": ["u__y"],
                "u__z": ["u__z"],
                "v__x": ["v__x"],
                "v__z": ["v__z"],
                "w__x": ["w__x"],
                "w__y": ["w__y"],
            },
        )
        grid_inference = PointVTKInferencer(
            vtk_obj=vtk_obj,
            nodes=nodes,
            input_vtk_map={"x": "x", "y": "y", "z": "z"},
            output_names=["u", "v", "w", "p", "u__y", "u__z", "v__x", "v__z", "w__x", "w__y"],
            requires_grad=False,
            invar={"pred_inlet_vel": np.full([ngridx*ngridy*ngridz, 1], specific_inlet_vel), "t": np.full([ngridx*ngridy*ngridz, 1], specific_time)},
            batch_size=10000,
        )
        domain.add_inferencer(grid_inference, name="inlet_vel_slice_" + str(j).zfill(4) + "_time_slice_" + str(i).zfill(4))

error messages:
Traceback (most recent call last):
  File "myinfer.py", line 240, in run
    slv.solve()
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/continuous/solvers/solver.py", line 133, in solve
    self._eval()
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/trainer.py", line 617, in _eval
    self._record_inferencers(step)
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/trainer.py", line 280, in _record_inferencers
    self.record_inferencers(step)
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/continuous/solvers/solver.py", line 111, in record_inferencers
    self.domain.rec_inferencers(
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/continuous/domain/domain.py", line 77, in rec_inferencers
    inferencer.save_results(
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/continuous/inferencer/inferencer.py", line 217, in save_results
    pred_outvar = self.forward(invar)
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/continuous/inferencer/inferencer.py", line 90, in forward_nograd
    pred_outvar = self.model(invar)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/graph.py", line 147, in forward
    outvar.update(e(outvar))
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/derivatives.py", line 74, in forward
    grad = gradient(var, grad_var)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/modulus-22.3-py3.8.egg/modulus/derivatives.py", line 17, in gradient
    """
    grad_outputs: List[Optional[torch.Tensor]] = [torch.ones_like(y, device=y.device)]
    grad = torch.autograd.grad(
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
        [
            y,
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
inference is finished
copy moldflow result data
cp -r /examples/isid/utils/template/fillrate_csv_mf_b1 ./
python /examples/isid/utils/plot_results_mold_r5.2.py myinfer 1.0 n3 176.063877 normal 0.002000607 0.007099696 22.40110954 1.0 0000
model output directory : ./outputs/myinfer
Traceback (most recent call last):
  File "/examples/isid/utils/plot_results_mold_r5.2.py", line 154, in <module>
    tke_points = tke_points / np.max(tke_points)
  File "<__array_function__ internals>", line 180, in amax
  File "/opt/conda/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 2791, in amax
    return _wrapreduction(a, np.maximum, 'max', axis, None, out,
  File "/opt/conda/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity
mv mold_stat.csv outputs/mold_1f_26.7.6/mold_stat_0000.csv
mv: cannot stat 'mold_stat.csv': No such file or directory
postproc is finished

Hi @yokoi.toshiaki

Please try turning on requires_grad=True in your PointVTKInferencer. By default this is off to save computation, but it also means that gradients cannot be computed. Turning it on enables gradients during inference at the cost of increased memory usage, so keep an eye on your batch size.
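For reference, a rough sketch of your inferencer with the flag flipped (all names are taken unchanged from your snippet; you may need to lower the batch size if memory becomes tight):

grid_inference = PointVTKInferencer(
    vtk_obj=vtk_obj,
    nodes=nodes,
    input_vtk_map={"x": "x", "y": "y", "z": "z"},
    output_names=["u", "v", "w", "p", "u__y", "u__z", "v__x", "v__z", "w__x", "w__y"],
    requires_grad=True,  # enables autograd so u__y, u__z, ... can be computed at inference time
    invar={
        "pred_inlet_vel": np.full([ngridx * ngridy * ngridz, 1], specific_inlet_vel),
        "t": np.full([ngridx * ngridy * ngridz, 1], specific_time),
    },
    batch_size=10000,  # watch memory usage; reduce this if you run out of memory with requires_grad=True
)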

The same is true if you are using a validator or monitor:
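A rough sketch along those lines, assuming the validator and monitor constructors in your Modulus version accept a requires_grad flag like the inferencer does (the import paths follow the 22.03 layout visible in your traceback, and openfoam_invar, openfoam_outvar, and monitor_points are hypothetical placeholders):

import torch
from modulus.continuous.validators.validator import PointwiseValidator
from modulus.continuous.monitors.monitor import PointwiseMonitor

# Hypothetical validator comparing derivative outputs against reference data.
validator = PointwiseValidator(
    invar=openfoam_invar,          # placeholder: dict of input arrays, e.g. {"x": ..., "y": ..., "z": ..., "t": ...}
    true_outvar=openfoam_outvar,   # placeholder: dict of reference arrays, e.g. {"u__y": ...}
    nodes=nodes,
    batch_size=1024,
    requires_grad=True,            # needed so u__y, u__z, ... can be evaluated
)
domain.add_validator(validator)

# Hypothetical monitor tracking the mean of a derivative over some sample points.
monitor = PointwiseMonitor(
    invar=monitor_points,          # placeholder: dict of sample-point arrays
    output_names=["u__y", "u__z"],
    metrics={"mean_u__y": lambda var: torch.mean(var["u__y"])},
    nodes=nodes,
    requires_grad=True,            # same flag as the inferencer
)
domain.add_monitor(monitor)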