What is FP16 Demotion?
The game world would be a boring place if it didn't have light. Simulated lighting adds an extra layer of realism to any computer-generated image, and it's been used to great effect in 3D games for over a decade. However, accurate lighting poses a serious challenge - lighting a scene accurately with High Dynamic Range (HDR) requires significant hardware grunt.
This was initially achieved with the use of fixed-point Integer calculations, known as Int16, which are simple to work with but limited in range. Integer calculations remained the standard for HDR lighting models through to DirectX 9.0, only being superseded by Floating Point calculations, known as FP16, when the DirectX 9.0b update hit. Unlike Integer calculations, which are limited to 65,536 evenly spaced levels, Floating Point can squeeze a vastly wider dynamic range into the same 16 bits - trading uniform precision for the ability to store both very dim and very bright values in one buffer.
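The difference between the two approaches can be seen with Python's built-in half-precision support. This is just an illustrative sketch - the numbers below are properties of the IEEE 16-bit float format itself, not of any particular game engine:

```python
import struct

def to_fp16_and_back(x):
    """Round-trip a Python float through IEEE half precision (16 bits)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# A 16-bit integer format offers exactly 65,536 evenly spaced levels
# between its darkest and brightest values - nothing outside that range.
INT16_LEVELS = 2 ** 16

# FP16 spends those same 16 bits differently: spacing between values is
# uneven, but both tiny and huge intensities survive the round trip.
brightest = to_fp16_and_back(65504.0)   # largest finite FP16 value
dimmest = to_fp16_and_back(6.0e-5)      # a very dim intensity, still nonzero

print(INT16_LEVELS)   # 65536
print(brightest)      # 65504.0
print(dimmest)        # a tiny positive value, not flushed to zero
```

That ability to keep detail in both shadows and bright highlights simultaneously is exactly what HDR lighting needs.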
FP16, as the name suggests, is a method of HDR lighting that uses Floating Point calculations at a depth of 16 bits: 16 bits each for the Red, Green, Blue and Alpha channels, for a total of 64 bits per pixel. The data generated are stored in the frame buffer, and as the resolution increases, the demand for memory rises in turn. There also exist FP10, FP24 and FP32 formats. FP16 is a popular HDR rendering format used in many DirectX 9 titles, as it provides a nice balance between workload and accuracy.
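A quick back-of-the-envelope sketch shows how that 64 bits per pixel translates into frame buffer memory as resolution climbs (the resolutions here are just common examples):

```python
# Four 16-bit channels (R, G, B, A) make 64 bits, i.e. 8 bytes, per pixel.
BYTES_PER_PIXEL_FP16 = (16 * 4) // 8

def fp16_buffer_bytes(width, height):
    """Memory for a single FP16 render target at the given resolution."""
    return width * height * BYTES_PER_PIXEL_FP16

for w, h in [(1280, 720), (1920, 1080), (2560, 1600)]:
    mib = fp16_buffer_bytes(w, h) / (1024 * 1024)
    print(f"{w}x{h}: {mib:.1f} MiB")
```

And that's per render target - a real engine may keep several such buffers around at once, which is why the memory cost bites.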
|Standard lighting (left) vs HDR lighting (right) rendered with FP16 in Half Life 2: Lost Coast (from Wikipedia).|
However, FP16 Demotion aims to change how HDR lighting is calculated: it substitutes the R11G11B10 format, in which the Red and Green channels get 11 bits each and Blue gets 10, for a total of 32 bits per pixel - dropping Alpha channel data completely. The end result is a halving in frame buffer size, cutting the memory demanded by the HDR lighting engine in two.
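The arithmetic behind that claim is straightforward - a simple comparison of the two bit budgets (1920x1080 chosen purely as an example resolution):

```python
# R11G11B10 packs three floating-point channels into one 32-bit word:
# 11 bits red + 11 bits green + 10 bits blue, and no alpha at all.
R11G11B10_BITS = 11 + 11 + 10   # = 32 bits -> 4 bytes per pixel
FP16_RGBA_BITS = 16 * 4         # = 64 bits -> 8 bytes per pixel

def buffer_bytes(width, height, bits_per_pixel):
    """Frame buffer size in bytes for one render target."""
    return width * height * bits_per_pixel // 8

w, h = 1920, 1080
fp16_size = buffer_bytes(w, h, FP16_RGBA_BITS)
demoted_size = buffer_bytes(w, h, R11G11B10_BITS)
print(fp16_size, demoted_size)        # the demoted buffer is half the size
```

The saving is the same fifty per cent at any resolution, since only the per-pixel cost changes.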
Atomic contacted AMD Australia Technical Manager Garrath Johnson, who explains in the official AMD response (posted on page 7) that: "The alleged 'optimization' is the selective use of the HDR format R11G11B10 at times when the memory cost of the FP16 HDR format would otherwise impact game play." The ATI/AMD response then goes on to argue that NVIDIA itself has endorsed the format:
Given that in their own documents, NVIDIA indicates that the R11G11B10 format "offers the same dynamic range as FP16 but at half the storage", it would appear to us that our competitor shares the conviction that R11G11B10 is an acceptable alternative.
The passage the ATI response quoted appears verbatim in the NVIDIA DirectX 10 Technical Brief (PDF, page 11), seemingly giving full support to the use of FP16 Demotion. Wait, what? Didn't NVIDIA just claim this was a bad thing?