Don’t know your ambient occlusion from your chromatic aberration? Rick Lane takes you through the complicated world of graphics options menus.
PC games uniquely offer a huge amount of control over how they look when we play them. From adjusting the resolution to enabling post-process effects, we can tailor our playing experience to suit our rig perfectly, down to the finest detail. What’s more, new options are being introduced all the time, with new types of lighting, more advanced ambient occlusion techniques and all sorts of post-processing effects constantly changing how games look.
But what do all the different graphics settings in your games actually do? If you’ve looked at your settings and had no idea which type of ambient occlusion setting you should use, or whether or not you should enable motion blur, then we’re here to help you tune your playing experience to your PC. This month, we’ve compiled a complete guide to graphics settings. We’ll take you through the most common options (and a few less common ones) and explain what they do, how they work, and whether you should adjust them.
Resolution and texture quality
What is it? The number of pixels on your monitor and the fidelity of the textures that paint the game world.
What does it do? Before we go into the more advanced graphics settings, let’s briefly cover the basics. Resolution determines the number of pixels used to render an image on your monitor. Texture Quality, meanwhile, improves the overall look of a game by painting the world, characters, and objects with higher-quality textures.
How does it work? Any image on your screen comprises a set number of tiny pixels. Generally, the more pixels in an image, the sharper and more detailed it will appear, making curved lines appear smoother, for example.
Textures are two-dimensional skins that fit over 3D shapes in a game world to lend them detail. Choosing a higher texture quality setting will replace all the textures in the game with higher-resolution ones. Modern textures include far more data than a painted image, though, and can include transparency, lighting information (which is often embedded, or ‘baked’, into textures to improve performance) and mapping of surface bumps or ‘normals’.
Other detail settings
In terms of adjusting detail, other settings to look out for here include terrain quality (or geometric detail), which will replace the basic 3D map with one that has a greater number of lumps, bumps and contours. Vegetation quality will add denser trees and grasses to the game world, shadow quality will increase the number and resolution of shadows, and so on.
Should I tweak it? Resolution, texture quality and other general detail settings should be your first port of call for improving the look of your games. A very high resolution can often negate the need for other settings such as anti-aliasing, while detail settings will have a noticeable impact on what you see in-game.
V-sync
What is it? Vertical synchronisation.
What does it do? Synchronises the maximum framerate to your monitor’s refresh rate, preventing tearing artefacts.
How does it work? Whenever you run a game on your PC, it’s possible for your graphics card to render frames faster than your monitor can display them. This can result in an unsightly phenomenon known as tearing, where the top half of the image on your monitor ‘tears’ away from the bottom half. This effect most commonly occurs when you move the game camera quickly, such as when you’re spinning your character around. The monitor is part-way through drawing one frame when the graphics card swaps in the next, so the two halves of the screen show two different frames, resulting in a split image.
V-sync prevents this situation by locking the game’s maximum frame rate to the refresh rate of your monitor. So if your monitor runs at 60Hz, v-sync will prevent the frame rate from going over 60fps.
Should I use it? Yes, if you experience frequent and noticeable screen tearing, and if your graphics card and monitor don’t support a variable refresh technology, such as FreeSync or G-Sync. Rather than fixing the refresh rate like v-sync, these systems vary the refresh rate to match the frame rate, meaning you can run your game at faster frame rates without tearing, and in most cases, still maintain tear-free gaming if your graphics card drops below 60fps.
Standard v-sync can adversely impact performance, though, because it won’t allow the game to drop a small number of frames when it needs to render a more complex scene. Instead, the frame rate drops to the next whole divisor of the refresh rate, so a game that can’t quite maintain 60fps will suddenly run at 30fps. Unless you’re confident that your PC can run a game without dropping below 60fps, which is the standard refresh rate of most monitors, use v-sync with caution.
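To see why missing the frame-time budget hurts so much, here’s a tiny Python model of double-buffered v-sync. It’s an illustration of the maths only, not how any actual driver works: the GPU must wait for the next vertical refresh after finishing each frame, so frame times round up to whole refresh intervals.

```python
import math

def vsync_fps(refresh_hz, render_ms):
    """Effective frame rate under double-buffered v-sync: the frame
    time rounds up to a whole number of refresh intervals (16.7ms
    steps at 60Hz)."""
    interval_ms = 1000.0 / refresh_hz
    intervals = math.ceil(render_ms / interval_ms)
    return refresh_hz / intervals

# A frame that takes 10ms still displays at 60fps...
print(vsync_fps(60, 10))   # 60.0
# ...but one taking 17ms (just missing the 16.7ms budget) halves to 30fps.
print(vsync_fps(60, 17))   # 30.0
```

Note how there’s nothing between 60fps and 30fps here: that sudden halving is the performance cliff described above.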
Anti-aliasing
What is it? Edge smoothing.
What does it do? Anti-aliasing is a technique used by nearly all modern games to combat a problem known as aliasing. In general terms, aliasing refers to the distortion of a signal, or multiple signals that become indistinguishable from one another. The word is derived from ‘alias’, as the distortion conceals the true identity of the signal.
For computer graphics specifically, aliasing is the distortion of the edges of a rendered object. It occurs because your computer draws lines using pixels, and each pixel is an individual square, arranged like bricks in a wall. Any line that isn’t perfectly horizontal or vertical therefore has to be staggered like a staircase, with the corner of one pixel touching the corner of the next, making the edge of the line appear jagged. The lower the screen resolution, the more jagged that edge will appear. Anti-aliasing is designed to smooth out those jagged edges.
How does it work? There’s a wide range of anti-aliasing options available, which are split into roughly two categories – spatial anti-aliasing and post-process anti-aliasing. Spatial anti-aliasing works by rendering the image at a higher resolution than the game is currently running, and then downsampling, or scaling down the resolution to fit your monitor. Post-process anti-aliasing works by smoothing out edges after the image has been rendered, adjusting contrasts and employing colour correction between each pixel to trick the eye into seeing a smooth line. Both methods of anti-aliasing have various subcategories, which are as follows:
Full-scene anti-aliasing (FSAA)
FSAA is one of the earliest anti-aliasing systems, and works through brute force, rendering the scene at a much higher resolution and then downscaling it. It’s almost identical in function to supersampling (SSAA). These AA systems are ideal for photo-realistic images, but for sharper, more stylised images, such as animation, they can result in the image appearing blurred. They’re also extremely demanding on your graphics hardware.
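You can get a feel for the downsampling step with a toy Python sketch, where a ‘rendered’ image is just a grid of greyscale values rather than anything a GPU would produce:

```python
def downsample_2x(image):
    """Supersampling in miniature: average each 2x2 block of a
    high-resolution greyscale image into one output pixel."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1]
             + image[y + 1][x] + image[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A hard black/white diagonal edge rendered at 2x resolution...
hi_res = [
    [0, 0, 0, 0],
    [255, 0, 0, 0],
    [255, 255, 0, 0],
    [255, 255, 255, 0],
]
# ...downsamples to intermediate greys along the edge,
# which is exactly what softens the 'staircase'.
print(downsample_2x(hi_res))
```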
Multi-sampling anti-aliasing (MSAA)
MSAA is a more optimised form of supersampling. In supersampling, samples are taken from multiple locations within each pixel, and each sample is fully rendered before being blended with the others to produce the final pixel.
Multi-sampling, on the other hand, runs the pixel shader only once per pixel and shares that result across the sampling locations. It’s more efficient because the expensive shading calculation is performed once per pixel, rather than once for every sample location, as with SSAA.
MSAA offers various tiers of anti-aliasing, usually 2x, 4x, and 8x. These settings take two, four and eight samples per pixel respectively, each resulting in smoother-looking edges, but at a greater cost to performance.
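The per-pixel blending that makes MSAA cheaper than supersampling can be sketched as follows. This is a simplified model (real GPUs resolve coverage samples in fixed-function hardware), but it shows the key idea: one shading result, weighted by how many samples a triangle covers.

```python
def msaa_pixel(shaded_colour, background, samples_covered, total_samples=4):
    """MSAA in miniature: the pixel shader runs once, and the result is
    blended with the background according to how many of the pixel's
    coverage samples the triangle actually covers."""
    w = samples_covered / total_samples
    return tuple(w * c + (1 - w) * b for c, b in zip(shaded_colour, background))

# A red triangle edge covering 3 of 4 samples over a black background
# produces a partially darkened edge pixel:
print(msaa_pixel((255, 0, 0), (0, 0, 0), 3))  # (191.25, 0.0, 0.0)
```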
Fast approximate anti-aliasing (FXAA)
FXAA is a post-process anti-aliasing system created by Nvidia. It’s an algorithm that works in two stages. Firstly, it finds all the edges contained in an image using a specifically optimised search system, and then ‘smoothes’ those edges on a per-pixel basis. FXAA is far less performance-intensive than spatial anti-aliasing, but can result in a blurrier image.
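The edge-finding first stage can be illustrated with a few lines of Python. The luma weights below are one common formula for perceived brightness, and the threshold is a plausible placeholder rather than Nvidia’s actual tuning:

```python
def luma(rgb):
    # Perceived brightness; FXAA analyses luma rather than full colour.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_edge(centre, neighbours, threshold=30):
    """FXAA-style first stage: flag a pixel as an edge when the luma
    contrast between it and its neighbours exceeds a threshold."""
    lumas = [luma(centre)] + [luma(n) for n in neighbours]
    return max(lumas) - min(lumas) > threshold

# A white pixel next to a black one is a strong edge...
print(is_edge((255, 255, 255), [(0, 0, 0), (250, 250, 250)]))     # True
# ...while gentle shading variation is left alone, avoiding wasted blurring.
print(is_edge((100, 100, 100), [(110, 110, 110), (95, 95, 95)]))  # False
```

Only the pixels flagged here would be smoothed in the second stage, which is why FXAA is so much cheaper than rendering everything at a higher resolution.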
Temporal anti-aliasing (TXAA)
Temporal anti-aliasing is slightly different to other anti-aliasing techniques, as it’s designed to smooth out a moving image rather than a still one. Even edges that look smooth in a single frame can appear to crawl or shimmer as the game camera moves – a form of temporal aliasing. TXAA uses a blend of hardware anti-aliasing and specially designed filters that reuse samples from previous frames to combat this effect, resulting in not just a smoother image, but a smoother sequence of images.
Morphological anti-aliasing (MLAA)
MLAA can be roughly considered as AMD’s equivalent to Nvidia’s FXAA, although it functions slightly differently. It’s a colour-driven form of anti-aliasing, which searches for jagged edges via stark colour differences between pixels, and smoothes them out by blending the colours around them. Recently, a more advanced version of MLAA called SMAA, or sub-pixel morphological anti-aliasing, has been introduced, and is quickly gaining traction among many developers for its efficiency.
Should I use it? Using any anti-aliasing system will immediately improve the overall quality of the rendered image, particularly at lower resolutions. For lower-end computers, you’re better off using FXAA or MSAA and then focusing on other settings to improve image quality. However, higher-end systems are better off being set at a higher resolution, with any anti-aliasing added later as a bonus.
Anisotropic filtering
What is it? A method of filtering textures to improve their visual appearance in any given scene.
What does it do? While anti-aliasing improves the look of objects’ edges, anisotropic filtering focuses on the texture in the middle of an object.
As we already discussed, using higher-quality textures will improve how a game looks, but high-resolution textures can crush a game’s performance. To get around this problem, game developers employ different versions of the same texture – known as mipmaps – which can be called into the rendering process relative to the player’s position in the game world. For example, if you’re standing next to a tree in-game, the game will use a high-quality texture, but if you’re standing on a mountain looking down at a forest, the game will use smaller mipmap levels to paint the woodland in front of you.
Because mipmaps are so much smaller than high-res textures, the game will sample several of them to render any given scene, which can lead to blurring or unsightly artefacts appearing on-screen as you look at objects further away. Anisotropic filtering is designed to counter these side effects of mipmapping, producing a sharper image.
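The mipmap chain itself is simple to picture: each level is half the size of the one before it, down to a single texel. A quick Python sketch:

```python
def mipmap_chain(width, height):
    """Mipmapping in miniature: each level halves the previous one
    until the texture is reduced to a single texel."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

# A 256x256 texture carries eight progressively smaller copies of itself:
print(mipmap_chain(256, 256))
```

Because each level is a quarter the area of the last, the whole chain adds only about a third more memory than the base texture alone.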
How does it work? To fully understand anisotropic filtering, you need to understand the kinds of texture filtering that came before it. The most basic form of texture filtering is bilinear filtering. Here, four texel samples (a texel is an individual pixel in a texture) are taken across the x and y axis of a mipmap (hence ‘bilinear’), and the final colour of the texel is calculated.
Bilinear filtering is computationally cheap but only provides rudimentary results. A more advanced form is trilinear filtering, which takes its texel samples from the mipmap in question and the next nearest mipmap as well. The problem with both these filtering methods, however, is they assume that the camera is viewing the texture as a flat square, when in modern games, the player could be viewing the texture from countless possible angles. These filtering methods don’t take into account that, when viewed from a sharp angle, a single screen pixel covers a long, stretched area of the texture rather than a square one. This leads to visual artefacts and blurring.
This is where anisotropic filtering comes in. When something is described as ‘isotropic’, it means it’s the same in every direction, so anisotropic filtering is filtering that’s relative to direction. Anisotropic filtering works by increasing the number of texel samples taken in relation to the sharpness of the angle between the camera and the texture.
Like multisampling anti-aliasing, most games offer several tiers of anisotropic filtering, usually 2x, 4x, 8x, and 16x. These settings mean the game will take up to the number of samples set by the player, using trilinear filtering as the baseline (1x).
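As a very loose Python model of that idea – real hardware measures the stretched texture footprint of each individual pixel rather than working from a single camera angle, so treat this purely as an illustration:

```python
import math

def af_samples(angle_degrees, max_af=16):
    """Roughly speaking, the anisotropy ratio grows as the viewing angle
    gets shallower (about 1/cos of the tilt), and the filter takes that
    many extra samples, capped at the player's chosen AF level."""
    tilt = math.radians(angle_degrees)
    ratio = 1 / max(math.cos(tilt), 1e-6)
    return min(max_af, max(1, round(ratio)))

print(af_samples(0))    # head-on: 1 sample, plain trilinear filtering
print(af_samples(60))   # tilted floor: 2 samples
print(af_samples(85))   # near-grazing angle: 11 samples
```

This is also why the performance cost of 16x AF is so modest: most pixels in a scene are viewed fairly straight-on, so only the few at grazing angles ever use the full sample budget.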
Should I use it? Absolutely. Anisotropic filtering dramatically increases the sharpness of in-game textures for a relatively low performance cost, even at 16x AF. The effect of anisotropic filtering tails off beyond 8x AF, but the performance cost on modern machines is negligible enough that you might as well shoot straight for the highest setting first and see how you get on.
Lighting
What is it? Lighting isn’t a graphical setting itself, but there are many important graphical settings that come under the umbrella term of ‘lighting’, and they can be crucial for infusing atmosphere and believability into a game world.
What does it do? Lighting is the most important component in making a game look realistic. Simulating how light bounces off objects, scatters and creates shadows and reflections massively impacts how a game looks and feels – it’s ultimately the key to photorealism.
How does it work? From the player’s perspective, there are several lighting-related settings you may encounter while playing a game, all of which perform slightly different tasks.
Bloom
Bloom is an effect designed to simulate the feathering effect produced by extremely bright lights. No lens can focus perfectly, and when light passes through it, the lens produces an ‘Airy disc’ around the image, like the ring of light around a total eclipse. If the light source is particularly bright relative to the location of the lens, this Airy disc becomes visible.
Bloom is intended to recreate this effect, and can be used to replicate things like the glare of bright sunlight through a window, or a helicopter spotlight shining into your eyes.
Bloom lighting is hard to explain without going into considerable detail, but in essence, it’s a kind of blurring filter, requiring a floating-point frame buffer and a convolution kernel to work. In (very) simplified terms, the lighting image in the frame buffer is passed through the convolution kernel, which adds a portion of each pixel’s shaded value to its neighbouring pixels, resulting in a blurring of the light designed to replicate the Airy disc phenomenon.
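Here’s a one-dimensional toy version of that process in Python. The kernel weights and brightness threshold are arbitrary illustrative values: bright pixels are extracted, blurred with a small convolution kernel, then added back onto the original image.

```python
def bloom(row, kernel=(0.25, 0.5, 0.25), threshold=200):
    """Bloom in one dimension: extract pixels above a brightness
    threshold, blur them with a convolution kernel, and add the
    blurred light back onto the original row of pixels."""
    bright = [p if p > threshold else 0 for p in row]
    blurred = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in zip(range(i - 1, i + 2), kernel):
            if 0 <= k < len(row):
                acc += w * bright[k]
        blurred.append(acc)
    return [min(255, p + b) for p, b in zip(row, blurred)]

# A single very bright pixel bleeds light onto its dark neighbours,
# like the Airy disc around a bright window:
print(bloom([0, 0, 255, 0, 0]))
```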
High dynamic range (HDR) lighting
HDR is an advanced set of lighting calculations that go beyond what’s considered to be standard dynamic range (SDR) – the latter is the dynamic range of visual images along a conventional gamma curve, based on the limits of old cathode ray tube (CRT) displays. In computer graphics, high dynamic range lighting enables a much greater ratio of contrast to be preserved. In other words, it can render very light or very dark images and colours without being forced to convert them to hard black or white.
HDR lighting also enables game developers to simulate certain lighting phenomena, such as the way the glare of bright light affects the eyes if you move suddenly from a dark area. It also enables more accurate simulation of reflections, refractions and the way light moves through transparent objects such as glass.
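The compression of very bright values into a displayable range is called tone mapping. One classic curve is the Reinhard operator, shown below in Python purely as an illustration – individual games use their own tone-mapping curves:

```python
def reinhard(luminance):
    """A classic tone-mapping curve (Reinhard): compresses an unbounded
    HDR luminance value into the 0-1 range a display can show, instead
    of clipping every bright value to pure white."""
    return luminance / (1.0 + luminance)

# SDR would clip both of these to 1.0 (pure white);
# HDR tone mapping keeps a bright lamp and the sun distinguishable.
print(reinhard(4.0))    # 0.8
print(reinhard(100.0))  # ~0.99
```

The key property is that the curve never quite reaches 1.0, so even extreme brightness ratios are preserved rather than flattened into a white blob.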
Volumetric lighting
Volumetric lighting is a graphical effect that enables players to ‘see’ light in the game world. It usually comes in the form of crepuscular rays, also known as god rays or light shafts, but can also be used to render other types of lighting such as spotlights. Rather than treating light as an abstraction, volumetric lighting models it as a transparent 3D object, almost like a drinking glass that’s been turned upside down.
And like a glass, the 3D light has a ‘volume’, and can therefore simulate the effect of passing through other physical elements, such as clouds or smoke.
Should I use it? The appearance of these graphics settings varies greatly between games. Some games conflate bloom and HDR lighting under the same setting, while volumetric lighting might appear under the title ‘Light shafts’ or ‘God rays’. Generally, though, the answer is yes.
As we mentioned at the start of this section, lighting has a huge effect on the appearance of a game world. That said, there are some games that go completely overboard in their lighting, particularly with regard to bloom effects, where almost every light source seems to glare like the sun (the Syndicate reboot is a good example of horribly overdone bloom).
In these cases, you might want to switch off the bloom lighting or, if possible, reduce the amount of bloom to make it slightly less obnoxious.
Ambient occlusion
What is it? A rendering technique used to calculate how ambient light illuminates a scene, surface or object.
What does it do? Ambient occlusion is a method of representing how lights affect a given scene at various points. Generally, occlusion refers to the blocking or obscuring of something. In computer graphics, ambient occlusion refers specifically to lighting. In a tunnel, for example, the interior of the tunnel is darker, or more occluded, than its entrance and exit. Ambient occlusion is a way of calculating which areas should be illuminated or shaded, as well as the gradient of light or shade on any given point.
How does it work? Ambient occlusion is traditionally calculated by casting ‘rays’ in every direction from each point on a surface. In outdoor scenes, points whose rays reach the sky are illuminated, while points whose rays are blocked by nearby geometry stay shaded. In an indoor scene, the walls of the room take the place of the sky as the source of ambient light.
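The core calculation can be sketched in a few lines of Python – a toy version of the ray-cast approach, where we simply assume some external test has already told us which rays were blocked:

```python
def ambient_occlusion(ray_hits):
    """Classic ray-cast AO: fire rays in every direction from a surface
    point and light it by the fraction that escape to the sky.
    ray_hits is True for each ray blocked by nearby geometry."""
    blocked = sum(1 for hit in ray_hits if hit)
    return 1.0 - blocked / len(ray_hits)

# An open, exposed surface point (no rays blocked) is fully lit...
print(ambient_occlusion([False] * 8))               # 1.0
# ...while a point deep in a corner, with 6 of 8 rays blocked, is dark.
print(ambient_occlusion([True] * 6 + [False] * 2))  # 0.25
```

The expensive part in practice is the ray-versus-geometry testing itself, which is exactly what the screen-space approximations below avoid.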
Ray casting is computationally expensive, however, so there are several types of ambient occlusion techniques available to create a similar effect, some of which are commonly encountered in games. The most common types are SSAO, HBAO and HDAO.
Screen space ambient occlusion (SSAO)
SSAO is an algorithm that’s implemented in a game as a pixel shader. It calculates occlusion only for the area visible to the player (hence ‘screen space’), using the Z-buffer – a two-dimensional map of the scene that stores the depth of every rendered pixel – as an approximation of the surrounding geometry.
Horizon-based ambient occlusion (HBAO)
Unlike SSAO, Nvidia’s HBAO technology uses a physically based (meaning it accurately models the flow of light) algorithm to create deeper, richer shadows. However, originally HBAO could only achieve this task at half-scaled resolution, resulting in a blurring effect. The more recent HBAO+ technology does it at the default resolution, though, and at a smaller cost to performance.
High-definition ambient occlusion (HDAO)
An AMD technology, HDAO is the most advanced form of ambient occlusion, creating an extremely sharp image. However, it’s also the most computationally expensive technique used in games.
Should I use it? Most gaming PCs will be able to handle SSAO without much problem, and HBAO+ brings this more advanced technology to a wider range of rigs. It definitely makes game lighting look more realistic. We only recommend using HDAO on very powerful PCs.
Tessellation
What is it? A relatively recent advancement in computer graphics, tessellation refers to the breaking down of polygons into smaller components to produce much more detailed surfaces.
What does it do? Tessellation can be used in a number of ways. One of the most common functions of tessellation is in displacement mapping. Remember that earlier we mentioned the bumps or ‘normals’ that can be applied to a texture to lend it definition? Well, displacement mapping is a much more accurate, dramatic version of normal mapping. It can be used not just to make walls look bumpy, but also to simulate mountains and canyons on terrain.
However, displacement mapping is performance-intensive because its surface maps require a large number of vertices to create the desired effect. Tessellation is a huge help here because it generates those extra vertices on the GPU, without the additional geometry having to be stored in the original model. Interestingly, tessellation can also be used to make surfaces look smoother, as those added vertices can be flattened out to create more graceful curves.
How does it work? Tessellation was introduced with DirectX 11, and essentially enables graphical primitives (such as vertices) to be generated on the GPU. The GPU can take a standard mesh of a character model from a game, and then dynamically subdivide that mesh by generating more vertices on the fly. This process can make heads look smoother, clothes more ruffled, stone walls rougher and so on.
Tessellation adds several stages to the standard graphics pipeline. The first is a tessellation control shader (called a hull shader in DirectX), which analyses a mesh and calculates how finely to subdivide it. That’s followed by the generation of the added vertices, and finalised by a tessellation evaluation shader (a domain shader in DirectX), which runs for every new vertex and positions it correctly.
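The geometric idea behind subdivision can be shown with a midpoint split in Python. The real pipeline works on GPU patches rather than Python tuples, but the principle – one triangle becoming four – is the same:

```python
def subdivide(triangle):
    """One tessellation step: split a triangle into four by adding a
    new vertex at the midpoint of each edge."""
    (ax, ay), (bx, by), (cx, cy) = triangle
    ab = ((ax + bx) / 2, (ay + by) / 2)
    bc = ((bx + cx) / 2, (by + cy) / 2)
    ca = ((cx + ax) / 2, (cy + ay) / 2)
    a, b, c = triangle
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

mesh = [((0, 0), (4, 0), (0, 4))]
for _ in range(2):  # two subdivision levels
    mesh = [t for tri in mesh for t in subdivide(tri)]
print(len(mesh))  # 16 triangles from the original one
```

Each level quadruples the triangle count, which is why tessellation sliders ramp up the GPU load so quickly – and why the new vertices can then be displaced or smoothed for extra detail.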
Should I use it? Tessellation is hardware-intensive and all about fine detailing, and it’s best to think of it in this way when changing graphics settings. If you’ve got the basics covered with plenty of performance headroom to spare then switch it on. If not, leave it be.
Motion blur and depth of field
What is it? Post-process effects intended to replicate the function of the human eye.
What does it do? Motion blur and depth of field are similar effects that function in slightly different ways. Motion blur is designed to replicate the blurring effect created when you move your head or a camera around quickly. Depth of field, on the other hand, aims to simulate focus, so objects in the foreground will be sharp and clear, while objects in the background will be blurry and fuzzy.
How does it work? There are various ways to achieve motion blur in games. The most common technique, however, is the use of a velocity buffer; in simple terms, this takes two frames from a game and calculates the velocity vector between them through a pixel shader. Then, in the post-process stage, it can sample across the velocity direction on a per-pixel basis, the result of which is a blurred image for any fast-moving object.
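The sampling step can be sketched as a one-dimensional toy in Python – here the ‘velocity buffer’ is reduced to a single velocity vector for one pixel, and the frame to a row of greyscale values:

```python
def motion_blur(image, x, y, velocity, samples=4):
    """Post-process motion blur: sample the frame at several points
    along a pixel's velocity vector and average the results."""
    vx, vy = velocity
    total, count = 0.0, 0
    for i in range(samples):
        t = i / (samples - 1)  # 0..1 along the velocity direction
        sx = round(x + vx * t)
        sy = round(y + vy * t)
        if 0 <= sy < len(image) and 0 <= sx < len(image[0]):
            total += image[sy][sx]
            count += 1
    return total / count

# A bright pixel moving right smears into its darker neighbours:
frame = [[255, 0, 0, 0]]
print(motion_blur(frame, 0, 0, (3, 0)))  # 63.75
```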
Depth of field, meanwhile, is most efficiently achieved using a technique called forward-mapped Z-buffering. It essentially maps sprites over the rendered image, which are then blended with the pixels, matching their colour and diameter, but inverting their alpha value, which results in a blurring effect.
Should I use it? These settings really come down to personal taste – if used subtly, they can add a touch more realism to a scene, but they can become really distracting if they’re overdone. Some people also consider motion blur and depth of field to be rather pointless effects. Given how much effort goes into rendering a sharp and crisp image, it seems nonsensical to then blur that image through post-processing, especially when your eye will naturally focus on different parts of the screen and blur the image when it moves quickly anyway. If you’re capturing images or video while playing, you should definitely turn them off, as you want the images to be as sharp as possible.
Chromatic aberration
What is it? A post-process effect that simulates a specific camera phenomenon.
What does it do? Chromatic aberration is one of the newest and strangest post-processing effects available. It replicates what’s essentially a flaw in old or poor-quality cameras. Chromatic aberration occurs when the lens in a camera can’t focus colours precisely, resulting in a slight blurring around the edges of the image. It’s also known as ‘colour fringing’, because the blurred outline is marked by a red, green and blue tinge.
How does it work? Like motion blur and depth of field, chromatic aberration can be replicated in a bunch of ways, mainly using shaders in the post-process stage.
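One common approach is simply to sample the colour channels at slightly offset positions. A one-dimensional Python sketch of that idea (real implementations scale the offset with distance from the screen centre):

```python
def chromatic_aberration(row, shift=1):
    """Chromatic aberration in one dimension: offset the red channel
    one way and the blue channel the other, leaving green in place,
    so colour fringes appear at hard edges."""
    out = []
    n = len(row)
    for i in range(n):
        r = row[min(n - 1, max(0, i - shift))][0]
        g = row[i][1]
        b = row[min(n - 1, max(0, i + shift))][2]
        out.append((r, g, b))
    return out

# A hard white-to-black edge picks up coloured fringes on either side:
row = [(255, 255, 255), (255, 255, 255), (0, 0, 0), (0, 0, 0)]
print(chromatic_aberration(row))
```

On a uniform area of colour the shifted channels all read the same values, which is why the fringing only shows up around high-contrast edges.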
Should I use it? In the vast majority of cases, no. Chromatic aberration is essentially simulating a rubbish camera, so unless you want your game to look like it was filmed on Super 8 film in 1980, we recommend switching it off. There are a few exceptions, however, where it adds to a game’s style, and where natural realism isn’t necessarily appropriate. Alien: Isolation made effective use of chromatic aberration because it was specifically replicating the look of the 1979 film.