I’m working with a use case similar to the aneurysm example, but in the left atrium. I’m pretty confused by the viscosity and density units employed. The aneurysm meshes are defined in mm, but the density appears to be in g/cm³. According to the literature, blood viscosity should be between 3×10⁻³ and 4×10⁻³ Pa·s, i.e. 3–4 centipoise (cP). Given the kinematic viscosity, the dynamic viscosity does not seem to be defined in any of these units. Am I missing something, or is the rheology different in aneurysms?
In fact, I have run some tests using CFD data for training, and the density and viscosity values for which the predicted velocity field does not conflict with the continuity and momentum equations are 1.06 and 0.00035. My meshes are defined in meters. These are odd values, and I’m quite sure the CFD data from Ansys is correct.
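For reference, here is a quick sanity check of the common blood-property unit conversions (my own arithmetic, not values taken from the Modulus sample):

```python
# Sanity-checking blood property units (my own arithmetic, not from the example).
# Literature values: dynamic viscosity mu ~ 3.5e-3 Pa*s, density rho ~ 1060 kg/m^3.

mu_si = 3.5e-3          # dynamic viscosity [Pa*s] = [kg/(m*s)]
rho_si = 1060.0         # density [kg/m^3]

# Kinematic viscosity in SI:
nu_si = mu_si / rho_si  # [m^2/s], roughly 3.3e-6

# Same quantities expressed in g/cm^3 and cP:
rho_cgs = rho_si / 1000.0   # 1 g/cm^3 = 1000 kg/m^3  -> 1.06 g/cm^3
mu_cp = mu_si * 1000.0      # 1 Pa*s = 1000 cP        -> 3.5 cP

# If the mesh is in mm, areas scale by 1e6, so kinematic viscosity in mm^2/s:
nu_mm = nu_si * 1e6         # 1 m^2/s = 1e6 mm^2/s    -> roughly 3.3 mm^2/s

print(nu_si, rho_cgs, mu_cp, nu_mm)
```

Note how the familiar numbers 1.06 and ~3.5 reappear depending on which unit system you read the constants in, which is exactly why mixed mm / g/cm³ inputs are confusing.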
I am also interested in this question. I understand that the values fed into an ML tool like this are more or less unitless as far as the NN is concerned, but what’s not clear to me is whether the inputs to Navier-Stokes (nu and rho) need to be in the same units as the meshes. Is there a benefit (e.g. convergence time, accuracy) to putting these in the same units even if they’re not convenient to read, or are they relatively independent?
Additionally, I’m working with an incompressible fluid and I want to substitute a volumetric value (which is documented as acceptable) for the following parameter from the aneurysm sample: outvar={"normal_dot_vel": 2.540},
My meshes are in mm and my flow rate is 2 gpm. Can I use a value of 2? I.e. outvar={"normal_dot_vel": 2},
I’m sure this question seems basic, but it helps me grow my understanding.
I understand that the values input into a ML tool like this are more or less unitless in terms of how the NN processes it
The key here is scaling, just like how you normalize data for data-driven problems. Feeding in numbers that are either very small or very large will produce undesirable results. It just so happens that non-dimensionalizing is a good, physically based normalization that keeps your PDE loss calculable from the model outputs. I’m probably oversimplifying; there may be more nuance to why non-dimensionalization is good, such as autograd / higher-order gradient stability…
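As a concrete illustration (my own sketch, not Modulus code): picking a characteristic length and velocity collapses density and viscosity into a single Reynolds number, and maps coordinates, velocity, and pressure to O(1) values the network handles well:

```python
# Minimal sketch of non-dimensionalizing Navier-Stokes inputs
# (my own example, not Modulus code).

L_ref = 0.01            # characteristic length [m], e.g. a vessel diameter (assumed)
U_ref = 0.5             # characteristic velocity [m/s] (assumed)
rho = 1060.0            # blood density [kg/m^3]
mu = 3.5e-3             # dynamic viscosity [Pa*s]

nu = mu / rho                       # kinematic viscosity [m^2/s]
reynolds = U_ref * L_ref / nu       # the only parameter the nondim PDE needs

def to_nondim(x_m, u_ms, p_pa):
    """Map physical coordinate/velocity/pressure to O(1) nondim values."""
    return x_m / L_ref, u_ms / U_ref, p_pa / (rho * U_ref**2)

x_star, u_star, p_star = to_nondim(0.005, 0.25, 100.0)
print(reynolds, x_star, u_star, p_star)
```

All the raw magnitudes (1060, 3.5e-3, mm-scale geometry) disappear into `reynolds`; the network only ever sees numbers near 1.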
but what’s not clear to me is whether the inputs to Navier Stokes (nu and rho) need to be in the same units as the meshes
The important part is that what is being evaluated in the PDE is consistent and makes sense. For example, if my PDE needs Kelvin, I could predict Celsius with my neural network and feed it through a converter node that simply adds 273.15 before the loss calculation.
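That converter-node idea can be sketched as a plain function (names here are my own and hypothetical, not actual Modulus classes):

```python
# Hypothetical converter node (names are my own, not the Modulus API):
# the network predicts Celsius, and this node derives a Kelvin variable
# so the downstream PDE residual sees consistent units.

def celsius_to_kelvin_node(outvars: dict) -> dict:
    """Add a 'theta_K' variable derived from the network's 'theta_C' output."""
    out = dict(outvars)
    out["theta_K"] = outvars["theta_C"] + 273.15
    return out

pde_inputs = celsius_to_kelvin_node({"theta_C": 25.0})
print(pde_inputs["theta_K"])   # ~298.15
```

The PDE loss then reads `theta_K` while the network is only ever trained to output `theta_C`.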
my meshes are in terms of mm, and my flow rate is 2 gpm, can I use a value of 2
If you have defined the state variable of the network to be in gpm, then this works. If this unit is consistent with your PDE loss, you should be good.
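For the 2 gpm case specifically, the raw number 2 only works if gpm is the unit everywhere in the loss; otherwise convert it to whatever system the mesh and PDE use (my own arithmetic below, not from the aneurysm sample):

```python
# Converting 2 US gal/min into units consistent with an mm-scale mesh
# (my own arithmetic, not from the aneurysm sample).

GAL_TO_M3 = 3.785411784e-3       # 1 US gallon in m^3 (exact by definition)
q_gpm = 2.0

q_m3s = q_gpm * GAL_TO_M3 / 60.0   # roughly 1.26e-4 m^3/s
q_mm3s = q_m3s * 1e9               # 1 m^3 = 1e9 mm^3 -> roughly 1.26e5 mm^3/s

print(q_m3s, q_mm3s)
```

So with an mm mesh and SI-style constants, the consistent number is on the order of 1e5, not 2.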
@ngeneva thanks for the reply. I think you’ve helped me identify a difference between how I’m trying to use the tool and how NVIDIA intends it to be used. In the examples I’ve investigated so far, none of them expand upon the provided Modulus code. Is the user (e.g. me) encouraged to create additional files within the Modulus directory, e.g. custom architectures, aggregators, loss functions, etc.?
I’m currently trying to get a simple pipe+flow case to work in Modulus, and despite it being simpler than the aneurysm sample, I am having difficulty getting it to converge correctly. It gives output that looks close, then shoots past it and garbles it up, oscillating between these outputs until it eventually settles on an erroneous solution. In my previous work with neural networks, this kind of overshoot was often a sign of a loss-function issue, but I didn’t see those kinds of modifications as being within the scope of this tool. After all, if the given sample can solve an aneurysm, it should be able to solve a pipe with a couple of bends in it. Should I be investigating custom loss functions, aggregators, and so on?
I believe I was mainly thinking of the from_sympy function. Quite a few examples use it to move from one variable to another; e.g. the turbulence examples have several.
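The general idea behind a from_sympy-style node can be sketched with plain SymPy (my own illustration of the concept, not the Modulus implementation):

```python
# Plain-SymPy sketch of the idea behind a from_sympy-style converter node
# (my own illustration, not the Modulus implementation): define a derived
# variable symbolically, then compile it into a callable applied to the
# network's outputs before the loss.
import sympy as sp

u, v = sp.symbols("u v")
vel_mag = sp.sqrt(u**2 + v**2)     # derived variable from network outputs

# Compile the symbolic expression into a callable "node":
node = sp.lambdify((u, v), vel_mag)

print(node(3.0, 4.0))   # 5.0
```

In Modulus the same pattern lets you predict one set of variables with the network and evaluate the PDE (or data loss) on symbolically derived ones.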