MDL compiler runs forever while exhausting RAM

I’m trying to compile an MDL material function using the API provided by the MDL SDK.

The issue I’m having is that when I call any of the API functions that translate the material to target code (CUDA, LLVM, or native), the application I built runs forever and gradually uses all of my system RAM.

I would appreciate help diagnosing and fixing this compilation issue.

For reference, I have included the MDL source below:
https://pastebin.com/Snt8svA2

The material function I’m compiling is iray_uber, and I’m compiling the “surface.scattering” expression of the material.
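
For context, the call that hangs looks roughly like this. It’s only a minimal sketch with error handling removed; I’m showing the CUDA PTX backend here, but the LLVM IR and native backends behave the same way, and the variable names (neuray, mdl_factory, transaction, compiled_material) are just placeholders for objects my application already has:

```cpp
#include <mi/mdl_sdk.h>

// Sketch only: "neuray", "mdl_factory", "transaction" and "compiled_material"
// (the compiled iray_uber material) are assumed to exist already.
mi::base::Handle<mi::neuraylib::IMdl_backend_api> backend_api(
    neuray->get_api_component<mi::neuraylib::IMdl_backend_api>());
mi::base::Handle<mi::neuraylib::IMdl_backend> be_cuda(
    backend_api->get_backend(mi::neuraylib::IMdl_backend_api::MB_CUDA_PTX));

mi::base::Handle<mi::neuraylib::IMdl_execution_context> context(
    mdl_factory->create_execution_context());

// This is the call that never returns and keeps allocating memory:
mi::base::Handle<const mi::neuraylib::ITarget_code> target_code(
    be_cuda->translate_material_expression(
        transaction.get(), compiled_material.get(),
        "surface.scattering", "iray_uber_scattering", context.get()));
```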

Happy new year, jys365!

I filed this as a bug and can reproduce it. I will report on the progress here.

cu
Jan

Hi jys365,

We have fixed the issue and will release the fix with the next regular release. The next release will also greatly reduce the amount of generated code for your material. In the meantime, there are workarounds you can use:

a) The bug does not occur in “instance compile” mode.

b) (highly recommended, not only as a workaround but as a general performance improvement)
You can disable switching BSDFs at runtime via ternary operators by setting the “fold_ternary_on_df” option to true on the mi::neuraylib::IExecution_context object used with the backend. This replaces the parameters used as the condition of those ternary operators with the values used at instantiation time.
The consequence is that you will need to recompile the material whenever the user changes those parameters (i.e., any of the boolean parameters). But not only will the material then compile, it will also result in much faster code (switching BSDFs at runtime is costly).
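
To make both workarounds concrete, here is a minimal sketch. It assumes the material instance, the IMdl_factory and the backend already exist in your code; in the current SDK headers the execution-context interface is named mi::neuraylib::IMdl_execution_context:

```cpp
// Workaround b): enable folding of ternary operators on DFs.
mi::base::Handle<mi::neuraylib::IMdl_execution_context> context(
    mdl_factory->create_execution_context());
context->set_option("fold_ternary_on_df", true);

// Workaround a): use instance compilation instead of class compilation.
mi::base::Handle<mi::neuraylib::ICompiled_material> compiled_material(
    material_instance->create_compiled_material(
        mi::neuraylib::IMaterial_instance::DEFAULT_OPTIONS, // instance compile
        context.get()));

// Pass the same context to the backend calls, e.g. translate_material_expression().
```

With the folding enabled you simply re-run the compilation call whenever one of the folded (boolean) parameters changes.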

Thanks a lot for your detailed response.

Yes, I noticed that instance compilation works.

If you don’t mind, I would like an opinion on the following:

I’m now trying to compile potentially a hundred materials in a single link unit.

Given that a link unit is designed to efficiently package shared code, I would think this is a good way to minimize kernel memory usage when many of the materials use the same MDL function (but with different parameters).

The downside I expect is that compilation will have to run single-threaded, which may take some time, as opposed to having multiple threads compile different materials simultaneously.

I wanted your thoughts on this: compiling a single link unit that contains many materials.

Using a link unit instead of separate compilation works best if your materials share a lot of code (which happens more frequently with class compilation than with instance compilation). If a lot of code is shared, a lot of compilation time is also saved: the costly part of compilation is the actual code generation, and that does not happen multiple times for shared code. On the other hand, instance compilation increases the uniqueness of the code for different materials, but it also reduces code size, which in turn has a positive effect on compilation time.
So it really depends on your use case whether single-threaded compilation of the link unit will be slower. You can also take a hybrid approach and group smaller numbers of materials into multiple link units.
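
For completeness, the link-unit path looks roughly like this. It is only a sketch: it assumes the backend, transaction, execution context and a vector of already compiled materials exist, and the entry point names are illustrative:

```cpp
#include <string>
#include <vector>

// Sketch: put many compiled materials into one link unit and generate code once.
mi::base::Handle<mi::neuraylib::ILink_unit> link_unit(
    be_cuda->create_link_unit(transaction.get(), context.get()));

for (size_t i = 0; i < compiled_materials.size(); ++i) {
    // Unique entry point name per material so the generated functions don't clash.
    const std::string fname = "mat_" + std::to_string(i) + "_scattering";
    link_unit->add_material_expression(
        compiled_materials[i].get(), "surface.scattering",
        fname.c_str(), context.get());
}

// A single code generation step for the whole unit; code shared between the
// materials is generated and emitted only once.
mi::base::Handle<const mi::neuraylib::ITarget_code> target_code(
    be_cuda->translate_link_unit(link_unit.get(), context.get()));
```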