In the informative interviews Mike Seymour has done on FXGuide about deep compositing, he uses the example of fog and a row of candle lights to explain the technique. These kinds of shots can be achieved in Fusion with the Volume Fog tool.
The Volume Fog tool is a special-case renderer that takes a 3D camera and lighting along with 'deep images' and renders them out from the camera's point of view.
The 'deep images' carry X, Y and Z data, so they can describe a volume within a 3D scene. The World Position Pass (WPP) is used to occlude by depth, so beauty passes can be merged into the volume image.
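To make the occlusion idea concrete, here is a minimal Python sketch, assuming a simple per-pixel ray march rather than Fusion's actual internals: the fog march is clipped at the world position recorded by the beauty pass, so fog behind a rendered surface never contributes. The fog_density() function and the fog colour are hypothetical placeholders.

```python
# Minimal sketch (not Fusion's internal code): occluding fog with a
# World Position Pass (WPP). fog_density() is a hypothetical stand-in.
import numpy as np

def fog_density(p):
    # Hypothetical fog volume: constant density inside a slab of the scene.
    return 0.05 if 2.0 < p[2] < 8.0 else 0.0

def march_pixel(cam_pos, ray_dir, surface_world_pos, steps=64):
    """Accumulate fog along a camera ray, stopping at the surface the
    beauty pass rendered, using its WPP (world XYZ) as the occluder."""
    max_t = np.linalg.norm(surface_world_pos - cam_pos)  # distance to the surface
    dt = max_t / steps
    transmittance, fog_rgb = 1.0, np.zeros(3)
    for i in range(steps):
        p = cam_pos + ray_dir * (i + 0.5) * dt            # sample point in world space
        sigma = fog_density(p)
        absorb = np.exp(-sigma * dt)                      # extinction over this step
        fog_rgb += transmittance * (1.0 - absorb) * np.array([0.8, 0.85, 0.9])
        transmittance *= absorb
    return fog_rgb, transmittance
```

The final pixel would then be fog_rgb plus transmittance times the beauty colour, which is essentially what merging the beauty pass into the volume image amounts to.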
So, as in Anonymous, non-linear fog volumes of varying density can be integrated with the 3D scene. A rendered beauty pass, along with its WPP (RGBA-XYZ), is placed into the 3D environment of the Volume tool and interacts with the fog volume. Because the World Position Pass provides a fixed reference in world XYZ coordinates, the fog volume stays correctly locked to the scene. A Z-buffer, on the other hand, is relative to the camera's rendered point of view, so camera-based depth-cue fog is not fixed to the scene.
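The difference can be sketched with two toy density functions (both hypothetical, purely to illustrate the contrast): one evaluated at the world position supplied by the WPP, the other driven only by distance from the camera.

```python
# Illustrative only: world-locked fog (WPP-driven) vs camera-based depth-cue fog.
import numpy as np

def world_fog(world_pos):
    """Fog density defined in scene coordinates (fed by the WPP): the same
    XYZ always returns the same density, so the fog bank stays put no
    matter how the camera moves."""
    height = world_pos[1]
    return 0.1 * np.exp(-0.5 * max(height, 0.0))   # ground fog thinning with height

def depth_cue_fog(z_depth, near=1.0, far=50.0):
    """Classic Z-buffer fog: density depends only on distance from the
    camera, so the 'volume' is re-centred on the camera every frame."""
    return float(np.clip((z_depth - near) / (far - near), 0.0, 1.0))
```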
In Fusion, you can create volumes procedurally using tools like Fast Noise or particles, or you can load them via EXR. By working procedurally, or by mixing loaded and procedural data, the amount of data written to disk can be dramatically reduced.
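As a rough illustration of that saving, a value-noise volume like the sketch below (a stand-in for Fusion's Fast Noise, not its actual algorithm) can be regenerated on demand from a seed, whereas caching even a modest 64^3 float grid would cost roughly 1 MB per frame on disk.

```python
# Sketch of building a fog density volume procedurally instead of caching
# voxel data to disk. Simple value noise stands in for Fusion's Fast Noise.
import numpy as np

def value_noise_volume(res=64, octaves=4, seed=1):
    rng = np.random.default_rng(seed)
    vol = np.zeros((res, res, res))
    for o in range(octaves):
        n = 2 ** (o + 2)                         # lattice resolution for this octave
        lattice = rng.random((n, n, n))
        # Nearest-neighbour upsample of the lattice (trilinear would be smoother)
        idx = np.linspace(0, n - 1, res).astype(int)
        vol += lattice[np.ix_(idx, idx, idx)] / (2 ** o)
    return vol / vol.max()                       # normalised density field

density = value_noise_volume()                   # ~64^3 floats, generated on demand
```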
In Fusion 6.3, the addition of lighting models to the Volume Fog tool takes what can be achieved in this new deep arena to a whole new level. GPU acceleration behind this development makes rendering the deep volumes interactive, approaching real time.
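Conceptually, a lighting model inside the fog march adds something like single scattering at each step: light from a source such as a candle is attenuated by distance and by the fog between the light and the sample point, then scattered toward the camera. The sketch below is a hedged illustration of that idea, not Fusion 6.3's actual shader; on the GPU the same per-step math runs in a shader, which is what makes it interactive.

```python
# Hedged sketch of a lighting model inside a fog march: single scattering
# from one point light (the "candle"). Not Fusion's implementation.
import numpy as np

def light_at(p, light_pos, light_rgb, sigma=0.05):
    """Light reaching point p, attenuated by distance falloff and by a
    crude extinction term for the fog between the light and p."""
    d = np.linalg.norm(light_pos - p)
    return light_rgb * np.exp(-sigma * d) / (1.0 + d * d)

def scatter_step(p, transmittance, sigma, dt, light_pos, light_rgb):
    # Contribution of one ray-march step: fog scatters the light toward the camera,
    # weighted by how much of the ray has survived so far (transmittance).
    return transmittance * (1.0 - np.exp(-sigma * dt)) * light_at(p, light_pos, light_rgb)
```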