Instead of ray-tracing it would be field-tracing.
More will have to be pioneered, but here is the basic concept:
Field-tracing would color the camera's pixels using information from a field domain rather than from rays projected from the camera toward potential objects and light sources. The field would be simulated as voxel vector grids, fitted to the scene's mesh geometries on the fly by 3D tensor operations; once formed, they could be saved to storage for later reuse.

Looking up and calculating object colors from voxel vectors would be faster than solving millions of separate, individual ray equations, each with a random line of incidence that can terminate early or contribute noise. It would be more efficient to "trace lighting" from the summed data of a voxel vector field, because special shortcut algorithms can operate on the field as one unit instead of relying on the trial and error of separate random ray projections hunting for potential light sources.

Each "unit atom" of the voxel vector field would hold an RGBA value that dynamically tells the graphics engine's camera what colors its pixels should be, in relation to an object's polygons and to neighboring voxel RGBA values as a web. Any vantage point could then be visualized with less recalculation. If an object changes in the scene, the voxel vector field array in memory would update through a parallel single-unit domain equation, so the renderer would not have to calculate all new pixels from scratch; it would just "hue shift" what was already there for the new frame.
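To make the idea concrete, here is a minimal sketch of what a voxel RGBA field with an incremental "hue shift" update might look like. Everything here is a hypothetical illustration of the concept above, not an existing engine API: the class name, the nearest-voxel lookup, and the region-based update are all assumptions.

```python
import numpy as np

class VoxelField:
    """Hypothetical sketch: a coarse grid where each voxel holds an RGBA vector."""

    def __init__(self, shape=(32, 32, 32)):
        # One RGBA value per voxel, initialised to opaque black.
        self.rgba = np.zeros(shape + (4,), dtype=np.float32)
        self.rgba[..., 3] = 1.0

    def sample(self, point, cell_size=1.0):
        # Nearest-voxel lookup for a world-space point: this is the
        # "pinpoint" step that replaces per-ray intersection tests.
        idx = tuple(int(c // cell_size) for c in point)
        return self.rgba[idx]

    def hue_shift(self, region, delta_rgb):
        # Incremental update: shift color in only the affected region
        # instead of recomputing the whole frame from scratch.
        x, y, z = region
        self.rgba[x, y, z, :3] = np.clip(
            self.rgba[x, y, z, :3] + delta_rgb, 0.0, 1.0
        )

field = VoxelField()
# An object in the corner of the scene turns red: update just that region.
field.hue_shift((slice(0, 8), slice(0, 8), slice(0, 8)),
                np.array([0.2, 0.0, 0.0], dtype=np.float32))
print(field.sample((3.5, 3.5, 3.5)))  # shifted voxel: [0.2 0.  0.  1. ]
print(field.sample((20.0, 20.0, 20.0)))  # untouched voxel: [0. 0. 0. 1.]
```

The key design point the sketch tries to capture is that a scene change touches only a sub-array of the field, while the rest of the stored RGBA data is reused verbatim for the next frame.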
Field-tracing would enable more accurate simulations of refraction than ray-tracing.
For example: volumetric refraction of light as it crosses a medium boundary at the point of incidence.
Ray tracing fails here. In Nvidia's RTX Quake demo, when the camera transitions out from under water, the result is an inaccurate simulation, because the light is not being calculated through a volume of field.
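For reference, the underlying physics of a medium change is Snell's law, n₁ sin θ₁ = n₂ sin θ₂. A field-based approach could store a refractive index per voxel and bend the light path at each index change. The sketch below marches through a 1-D column of voxels (water below, air above) this way; the grid values and layer layout are illustrative assumptions, not part of the original proposal.

```python
import numpy as np

def refract_through_column(indices, theta0_deg):
    """Propagation angle (degrees from the surface normal) in each voxel layer.

    `indices` is a per-voxel refractive index, e.g. [1.33, 1.33, 1.0]
    for two water voxels under one air voxel.
    """
    theta = np.radians(theta0_deg)
    angles = [theta0_deg]
    for n1, n2 in zip(indices[:-1], indices[1:]):
        s = n1 * np.sin(theta) / n2  # Snell's law: n1*sin(t1) = n2*sin(t2)
        if abs(s) > 1.0:
            # Total internal reflection: the ray never exits this layer.
            angles.append(None)
            break
        theta = np.arcsin(s)
        angles.append(float(np.degrees(theta)))
    return angles

# Leaving water (n ~ 1.33) into air (n ~ 1.0) at 30 degrees:
# the ray bends away from the normal, to roughly 41.7 degrees.
print(refract_through_column([1.33, 1.33, 1.0], 30.0))
```

Because the index lives in the voxels themselves, the same march handles the underwater-to-air transition as a property of the volume, which is exactly the case the RTX Quake example gets wrong; at steeper angles the loop also reproduces total internal reflection for free.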
Here is a link to a physics paper that explains the quantum mechanics of light as an electromagnetic field propagation: