Inferencing features on surfaces of objects, rather than whole objects

I see plenty of existing examples of SDG on whole objects (including the ability to produce clean, pixel-tight labelling), which is all well and good.

Is it possible to perform synthetic image generation of procedural features that lie on the surfaces of objects, for example rust, or cracks in walls?

A full-fledged solution might look like the following, with the same benefits as running Replicator on objects:

  • Ability to add nodes that make localized, procedural modifications to a material, e.g. to its base color or bump.
  • Ability to randomize the placement of features along the surface of a target object
    • Bonus: specify the vertices on the target object where these features will appear
  • Along with the generated images, produce pixel-tight labeled masks for those localized modifications
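To illustrate what I mean by the last two points: if the surface feature is generated procedurally, the tight label mask falls out for free, since it is exactly the set of texels the procedure touched. Here is a minimal self-contained sketch in plain numpy (not Replicator API; `stamp_feature` and the rust color are made up for illustration) that stamps a noisy rust-like blotch onto a base-color texture at a randomizable location and returns the matching mask:

```python
import numpy as np

def stamp_feature(base_color, center, radius, rng, threshold=0.35):
    """Stamp a procedural rust-like blotch onto a base-color texture.

    Returns (modified_texture, mask). Because the modification is
    procedural, the pixel-tight label mask comes for free: it is
    simply the set of texels the procedure modified.
    """
    h, w, _ = base_color.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial falloff perturbed by per-texel noise gives an irregular blotch.
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / radius
    noise = rng.random((h, w))
    blotch = np.clip(1.0 - dist, 0.0, 1.0) * noise
    mask = blotch > threshold            # pixel-tight label, by construction
    rust = np.array([0.45, 0.20, 0.08])  # illustrative rusty base color
    out = base_color.copy()
    out[mask] = 0.5 * out[mask] + 0.5 * rust  # blend the feature in
    return out, mask

rng = np.random.default_rng(0)
tex = np.full((64, 64, 3), 0.7)  # plain grey texture
# Randomize placement by sampling `center`/`radius` per frame.
tex2, mask = stamp_feature(tex, center=(32, 32), radius=20, rng=rng)
```

In a real pipeline the same idea would live in the material graph (the blotch driving base color/bump), with the mask routed out as a segmentation annotation in UV/screen space rather than computed on a flat array.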

Any hints on how to build a similar pipeline for materials with the current tools/APIs?