That’s doing too much.
The ray.direction is a normalized vector in world space inside the closest hit program domain. Multiplying ray.direction by the intersection distance gives the vector from ray.origin to the surface hit point, and because ray.direction is normalized, the length() of that scaled vector is simply the intersection distance again. In your code that means:
float depth = t_hit;
If you want to write the depth as a float value, simply use a per-ray payload which contains a “float depth;” member and fill it when the primary ray hits something, using the variable bound to the rtIntersectionDistance semantic. That means the local depth variable isn’t needed:
prd_radiance.depth = t_hit;
If the ray hits nothing, you could either initialize that per-ray payload member to your desired zFar value inside the ray generation program and have no miss program, or if you have a miss program anyway, write your zFar value there.
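Put together, the closest hit and miss programs could look like this OptiX 6 style sketch (the payload struct name, program names, and the zFar variable are illustrative placeholders, not taken from your code):

```cuda
#include <optix.h>
#include <optix_world.h>

// Per-ray payload carrying the depth of the primary ray (names assumed).
struct PerRayData_radiance
{
    float3 result;
    float  depth;   // intersection distance of the primary ray
};

rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );
rtDeclareVariable(float, t_hit, rtIntersectionDistance, );
rtDeclareVariable(float, zFar, , );  // application-defined far distance

RT_PROGRAM void closest_hit()
{
    // ... shading calculations ...
    prd_radiance.depth = t_hit;  // radial distance from ray.origin to the hit point
}

RT_PROGRAM void miss()
{
    prd_radiance.depth = zFar;   // nothing hit: report the far value
}
```

If you have no miss program, initialize prd.depth to zFar inside the ray generation program instead.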
There is also no need to store the same value into a float3. You can save that bandwidth.
The output buffer for that could be a single float value, or, if you render colors as well, you could store RGB and depth in a single float4 (better performance than separate buffers).
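A ray generation excerpt packing RGB and depth into one float4 output buffer could look like this (buffer and variable names are assumptions; the payload struct matches the “float3 result; float depth;” layout described above):

```cuda
#include <optix.h>
#include <optix_world.h>

struct PerRayData_radiance
{
    float3 result;
    float  depth;
};

rtBuffer<float4, 2> output_buffer;                        // RGB in xyz, depth in w
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );

RT_PROGRAM void ray_generation()
{
    PerRayData_radiance prd;
    prd.depth = 0.0f;  // or initialize to your zFar value if you have no miss program
    // ... set up and rtTrace() the primary ray, which fills prd ...
    output_buffer[launch_index] = make_float4(prd.result, prd.depth);
}
```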
Be careful about depth values in case you want to merge this with a rasterizer!
If you simply store the intersection distance into the depth value with a pinhole camera, you will get a radial depth, but that’s not what rasterizers produce. There the depth is the distance to a plane orthogonal to the view direction, measured along that view direction, not a radial distance.