Calculating Normals in Perspective Space?

I’m trying to create shaders to use in Daz Studio that procedurally generate their colour maps based on their surface normals.

However, the “Normal” shader block seems to provide its vectors in world space, and I need the output in perspective/camera space to get the result I want.

What would be the correct method of converting from one to the other? I have tried searching the documentation, but while I can find some references to converting between world and object space, I can’t immediately find details on using perspective space (although it’s entirely possible I’m using the wrong terminology).

Hi David,
The normals you provide to the state are required to be in world space by default. If you are talking about the auxiliary output that we use to generate the normal output buffers from, these are in the same space. When processing them you are in renderer code and can apply whatever transform you need. In your case, assuming you have world-space normals, multiply by the rotation part of the view matrix (or its inverse transpose) to convert to view space. If your normals are in object space, you need to transform them back to world space first.
I’m not quite sure what you need the normals in projection space for, but from camera space you could project them using the camera’s projection matrix.
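To make the world-to-view conversion concrete, here is a minimal numpy sketch of what Kai describes. It assumes a 4×4 world-to-camera view matrix; the function name and the example matrix are mine, purely for illustration. Normals transform with the inverse transpose of the upper-left 3×3 block, which for a pure rotation (no scale or shear) is the rotation itself.

```python
import numpy as np

def world_normal_to_view(normal_world, view_matrix):
    """Transform a world-space normal into camera/view space.

    view_matrix: 4x4 world-to-camera matrix. Normals transform with
    the inverse transpose of the upper-left 3x3; for a pure rotation
    that equals the rotation itself.
    """
    rot = np.asarray(view_matrix, dtype=float)[:3, :3]
    normal_matrix = np.linalg.inv(rot).T        # inverse transpose
    n = normal_matrix @ np.asarray(normal_world, dtype=float)
    return n / np.linalg.norm(n)                # renormalize

# Example: a camera rotated 90 degrees about the world Y axis.
view = np.eye(4)
view[:3, :3] = [[0.0, 0.0, -1.0],
                [0.0, 1.0,  0.0],
                [1.0, 0.0,  0.0]]
print(world_normal_to_view([1.0, 0.0, 0.0], view))  # world +X ends up along view +Z here
```

If the view matrix contains non-uniform scale, the inverse-transpose step is what keeps the normals perpendicular to the surface after the transform; with a pure rotation it is a no-op but costs little.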


Hi, what Kai wrote would be possible if you modify the renderer code, but I believe you are talking about writing an MDL material that outputs camera-space normals as a colour, right?

In MDL you don’t have access to the camera position or space directly. You would need to make the camera position and orientation parameters of your material, feed them in manually, and then compute the transformation.
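As a sketch of that parameter-driven approach, the Python below shows the maths you would reimplement inside the material: build an orthonormal camera basis from position/target/up values fed in as parameters, express the world-space normal in that basis, and remap the [-1, 1] components to a [0, 1] colour. All function and parameter names here are hypothetical; this is not MDL code, just the transformation it would perform.

```python
import numpy as np

def camera_basis(cam_pos, cam_target, cam_up=(0.0, 1.0, 0.0)):
    """Build an orthonormal camera basis (right, up, forward) from
    the kind of values you could feed in as material parameters."""
    fwd = np.asarray(cam_target, float) - np.asarray(cam_pos, float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, cam_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, fwd)
    return right, up, fwd

def normal_to_color(normal_world, cam_pos, cam_target):
    """Express a world-space normal in camera space and remap its
    [-1, 1] components to a [0, 1] RGB colour."""
    right, up, fwd = camera_basis(cam_pos, cam_target)
    n = np.asarray(normal_world, float)
    n_view = np.array([n @ right, n @ up, n @ fwd])
    return 0.5 * n_view + 0.5

# A normal pointing straight back at a camera 5 units down +Z:
print(normal_to_color([0, 0, 1], cam_pos=[0, 0, 5], cam_target=[0, 0, 0]))
```

The drawback the reply hints at applies here too: because the camera data is baked into parameters, the result only stays correct while those parameters match the actual render camera, and it has to be updated whenever the camera moves.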

Is the result a camera-space normal buffer for post-processing? I believe Iray can give you that directly, but I would need to check.

Is the result a camera-space normal buffer for post-processing? I believe Iray can give you that directly, but I would need to check.

That is ultimately the goal. And yes, Iray is apparently supposed to provide that, but at least as it’s implemented in Daz Studio, the canvas seems to render only a single sample (regardless of the render settings), so it’s often very noisy and frequently causes problems with post-processing.

One of my previous efforts at a shader can be used to export a surface’s albedo data*, so the hope was to be able to procedurally generate texture maps based on the surface normals, plug that into the input of that shader, and thus get the normals out.

*It sets the surface to be diffuse black, so that it doesn’t react to any external light, and then plugs the base colour maps into an emission channel. It’s a slightly ugly workaround, but as it’s usually applied to everything in the scene at once, emitting but ignoring light works (and since it can have its ray depth set to 1 and doesn’t need to handle any expensive parameters, it still renders pretty fast).

(In the long run, it’s also a learning experience; I want to be able to create more complex shaders and understand more about the parameters of Iray, and these were a relatively simple starting point for me to learn hands-on and get an idea of how things work “under the skin”).

In MDL you don’t have access to the camera position or space directly.

That surprises me. I would have thought there was a way to extract the incident ray direction so that it could be used for things like pearlescent effects (and from that it should be possible to calculate the difference between the surface normal and that vector).

Is that not the case? I’ve certainly seen MDL based shaders that affect the surface based on incident angle, so I’m now wondering how those work internally.

First, there are two different things: view-direction dependent and camera dependent. Camera dependent is no problem, except that it will look unexpected in mirrors. Most tools like Max/Maya have means to feed such info into material parameters automatically. For you, camera dependence would be enough.
Second, there is view dependence. This means that a mirror will show the effect as if you were looking at the surface directly. This is problematic since it prevents some rendering techniques (and there are some physics issues with it).
For effects where you typically use view dependence (falloff, pearlescence), MDL provides BSDF modifiers (thin_film, measured_curve_factor, …). These allow the simulation of pearlescence in a physically correct manner (at the level of the microfacet normal of the BSDF, not the surface normal) and allow more efficient sampling.
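To illustrate the kind of view-dependent falloff those BSDF modifiers model, here is a small numerical sketch using Schlick’s approximation of Fresnel reflectance. This is a stand-in of my own choosing, not the actual curve MDL’s thin_film or measured_curve_factor uses; it just shows how a weight depending on the cosine between the view direction and the (microfacet) normal grows toward grazing angles.

```python
import math

def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of Fresnel reflectance, a common model
    for view-dependent falloff. cos_theta is the cosine between the
    view direction and the (microfacet) normal; f0 is the reflectance
    at normal incidence."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# The weight grows monotonically toward grazing angles:
for deg in (0, 45, 80, 89):
    print(deg, round(schlick_fresnel(math.cos(math.radians(deg))), 3))
```

Evaluating such a curve on the microfacet normal inside the BSDF, rather than on the shading normal in the material’s colour inputs, is what keeps the effect consistent in reflections and compatible with importance sampling, as the reply notes.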

As for the buffers, I will dig around a bit to see why this is so strange in DAZ. For “normal” there should not be noise, and I believe there is a distinction between aliased and antialiased buffers (multisampling “depth” is typically not useful). But I am currently not that well versed in DAZ.

Sorry, I’ve been a bit busy over the last few days.

Well, I don’t know if there’s a correct place to request this, but the addition of a function giving shaders access to the ray and/or eye vector would be very welcome for procedurally generating surface textures. I’ve been talking with several other artists who’ve had similar issues where they needed that information for a shader, but hit a roadblock because it wasn’t available.

This is certainly under heavy discussion for upcoming MDL versions.