Is the result a camera-space normal buffer for post-processing? I believe Iray can give you that directly, but I would need to check.
That is ultimately the goal. And yes, Iray is apparently supposed to provide that, but at least as it’s implemented in Daz Studio, the canvas seems to render only a single sample (regardless of the render settings), so it’s often very noisy and frequently causes problems in post-processing.
One of my previous efforts at a shader can be used to export a surface’s albedo data*, so the hope was to be able to procedurally generate texture maps based on the surface normals, plug that into the input of that shader, and thus get the normals out.
*It sets the surface to diffuse black, so that it doesn’t react to any external light, and then plugs the base colour maps into an emission channel. It’s a slightly ugly workaround, but since it’s usually applied to everything in the scene at once, emitting while ignoring light works (and because the ray depth can be set to 1 and no expensive parameters need to be handled, it still renders pretty fast).
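For illustration, a minimal MDL sketch of that workaround might look something like this (the material and parameter names here are placeholders of mine, not the actual shader):

```mdl
mdl 1.6;

import ::df::*;
import ::base::*;

// Hedged sketch of the albedo-export trick described above:
// a black diffuse surface that reflects no external light,
// re-emitting its base colour map so the render shows raw albedo.
export material albedo_export(
    uniform texture_2d base_color_map = texture_2d() // placeholder texture
) = material(
    surface: material_surface(
        // Diffuse black: the surface ignores all incoming light.
        scattering: df::diffuse_reflection_bsdf(tint: color(0.0)),
        // Emit the base colour instead, so it reads as flat albedo.
        emission: material_emission(
            emission: df::diffuse_edf(),
            intensity: base::file_texture(texture: base_color_map).tint
        )
    )
);
```

Exact emission intensity scaling is renderer-dependent, so the output may need normalising in post.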
(In the long run, it’s also a learning experience; I want to be able to create more complex shaders and understand more about the parameters of Iray, and these were a relatively simple starting point for me to learn hands-on and get an idea of how things work “under the skin”).
In MDL you don’t have access to the camera position or camera space directly.
That surprises me. I would have thought there was a way to extract the incident ray direction so it could be used for things like pearlescent effects (and from that it should be possible to compute the angle between the surface normal and that vector).
Is that not the case? I’ve certainly seen MDL based shaders that affect the surface based on incident angle, so I’m now wondering how those work internally.
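For what it’s worth, my understanding is that angle-dependent MDL materials usually don’t read the view direction in the expression graph at all; instead they use layering BSDFs such as `df::fresnel_layer` or `df::custom_curve_layer`, where the renderer evaluates the directional falloff against the incident ray internally. A hedged sketch of that pattern (names and values are illustrative only):

```mdl
mdl 1.6;

import ::df::*;

// Hedged sketch: incident-angle effects via a layering BSDF.
// The renderer evaluates the Schlick-style curve against the
// incident direction inside the BSDF, so the material expression
// graph itself never needs access to the camera.
export material angle_tint_example() = material(
    surface: material_surface(
        scattering: df::custom_curve_layer(
            normal_reflectivity:  0.05, // contribution when facing the viewer
            grazing_reflectivity: 1.0,  // contribution at grazing angles
            exponent:             5.0,  // falloff shape
            layer: df::specular_bsdf(tint: color(0.8, 0.9, 1.0)),
            base:  df::diffuse_reflection_bsdf(tint: color(0.6, 0.2, 0.4))
        )
    )
);
```

If that’s right, it would explain why those shaders work without exposing the incident vector as a usable value elsewhere in the material.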