John Gillooly is worried that both major GPU manufacturers are falling into old, ugly habits.
Open standards are truly wonderful things: the glue that holds technology together. Without industry-wide agreement on everything from software protocols to hardware interconnects, computing simply wouldn’t be where it is today, and we would be stuck in a series of walled gardens, with choice very much a secondary concern.
This is especially true when it comes to the PC, where standards enable computers to be built from parts manufactured by a massive array of companies. It means that the Taiwanese motherboard you buy will fit into a case designed in Europe, powered by a PSU engineered in the US, and then be populated largely with components of your choosing, rather than parts all made by the same company. You can build a PC with a different manufacturer for each individual part, and the standards glue will ensure that everything works harmoniously.
This importance carries across to the realm of software – after all, it means nothing if your parts communicate at a hardware level but then refuse to cooperate at a software one. Software standards enable a variety of operating systems to exist, from juggernauts such as Windows and Linux to more niche products such as BeOS.
There are, of course, myriad points on the continuum between closed and open standards. Let’s take graphics, currently the focus of my ire, as an example.
When the first 3D accelerator cards launched, before the term GPU even existed, they required specific programming for each brand of chip. 3dfx had Glide and Rendition had RRedline, for example, and games needed to be coded specifically for each of these APIs in order to enable hardware support.
This created a significant barrier in what was a very nascent market. Not only did coding for 3D accelerators target a minimal subset of the PC gaming audience, but that audience was further fragmented across the different APIs. Even then, before the concept of AAA games with Hollywood-sized budgets existed, it took significant developer time, and hence money, to code in support for each API. If it hadn’t been for intensely curious genius coders such as John Carmack writing 3D support into iconic games such as Quake, 3D would have had a much rougher road to acceptance.
Some people won’t even remember companies such as 3dfx or Rendition, because ultimately this insistence on proprietary development sowed the seeds of their demise. Despite both companies gaining early footholds in the market, they ended up losing to upstart newcomers that focused on developing for standardised APIs. These were OpenGL, which had its roots in the professional rendering arena, and DirectX, which was to many a Windows-specific OpenGL analogue – and a major play by Microsoft to shore up Windows as the PC gaming platform by acting as a middleman between the 3D hardware and the games themselves.
The upstarts in question were Nvidia and ATI, both of which moved to implement the fixed-function transform and lighting stages of the graphics pipeline in silicon, a feature dubbed Hardware Transform and Lighting. This enabled developers to write their games in OpenGL or Direct3D and know that such 3D accelerators would be able to understand their code, no matter who made them. 3dfx had a workaround known as a Glide wrapper, which translated between APIs, but the growth of these standards-compliant cards proved more compelling to developers and consumers than the raw market share that 3dfx enjoyed as a pioneer of the industry.
With the introduction of DirectX 8, which brought with it the concept of programmable shading engines, Nvidia and ATI solidified their positions in the market with the GeForce 3 GPU and the Radeon 8500 VPU (Visual Processing Unit – eventually ATI just gave up and started using GPU as well). Gaming graphics effectively became a two-horse race, one made possible by both players’ support for a common API. The pairing of programmable hardware with DirectX also enabled Microsoft to minimise any threat from Linux, as OpenGL was effectively stagnant while DirectX went through a massive phase of evolution to support the advances in GPU design.
A happy medium
For consumers, this meant competition focused on price/performance and quality-of-life improvements such as anti-aliasing and anisotropic filtering. For developers, it meant an end to decisions about which subset of graphics cards to support. Having a consistent API meant that games could reach the widest possible audience – the only real prerequisite was a GPU that supported the version of DirectX used by the developers. And unlike coding for different proprietary APIs, games could easily straddle generations of DirectX, which in turn meant that gamers weren’t compelled to upgrade every time a new card emerged.
Even though the work done on DirectX created one of the most enduring threats to PC gaming – namely Microsoft’s Xbox consoles and the relegation of Windows gaming to second-class citizen status on the Microsoft priority list – it also ensured PC gaming’s survival, by allowing games to target the broadest range of systems and in turn ensuring that no single GPU manufacturer came to dominate the landscape.
This somewhat harmonious focus on DirectX functionality lasted for quite some time, but slowly we have been seeing proprietary ‘features’ creeping back into the GPU market. Sometimes they are niche add-ons, like some anti-aliasing technologies, but the most prominent of these features have come from Nvidia. Not only did the company acquire physics accelerator manufacturer AGEIA, but it has pushed its CUDA GPGPU technology heavily to the detriment of the standards-based OpenCL tech.
After some befuddling initial wins for CUDA, such as its use as the exclusive GPU acceleration option in the initial versions of the Mercury Playback Engine used by Adobe Premiere Pro, it has never really hit critical mass with consumers (where, as with gaming, it’s insanity for developers to artificially preclude a large chunk of potential software sales by requiring a specific brand of GPU). It has made more wins in high-performance and enterprise software, but we’re seeing more and more products return to OpenCL, which has become even more compelling now that Intel’s integrated graphics offerings support it.
PhysX has certainly been used in a wide range of games, but ultimately it’s only ever employed as a tool for visual effects, such as increased debris and really floppy cloth. The initial promise of physics acceleration – the ability for developers to bake high-level physics processing into the DNA of the games themselves – has never been fulfilled, because, again, doing so would exclude a fairly large number of potential customers.
ATI and its current owner AMD have always been big advocates of open standards, yet despite the criticism levelled at Nvidia over the years, AMD is now starting down the path of proprietary technology as well. There have been some nods to open technologies, such as the hair-rendering TressFX technology deployed in Tomb Raider. Unfortunately, this just ended up running poorly on AMD cards and even worse on Nvidia ones, but you could at least run it across different manufacturers’ cards, unlike PhysX.