For the last few years, there’s been an ongoing debate about the benefits (or lack thereof) of DirectX 12. It hasn’t helped that the argument has been bitterly partisan, with Nvidia GPUs often showing minimal benefits or even performance regressions, while AMD cards have often shown significant performance increases.
[H]ardOCP recently compared AMD and Nvidia performance in Ashes of the Singularity, Battlefield 1, Deus Ex: Mankind Divided, Hitman, Rise of the Tomb Raider, Sniper Elite 4, and Tom Clancy’s The Division. Bear in mind that this was specifically designed as a high-end test that would compare the two APIs in GPU-limited scenarios, at high resolutions and detail levels, with a Core i7-6700K clocked at 4.7GHz powering the testbed. The GTX 1080 Ti was tested at 4K, while the less-powerful GTX 1080 and RX 480 were tested at 1440p. Before you squawk about comparing the GTX 1080 and the RX 480, keep in mind that each GPU was only compared against itself in DX11 versus DX12.
The answer to whether DirectX 12 is better or worse than DirectX 11 boils down to “It depends.” Specifically, it depends on whether you’re using an AMD or an Nvidia GPU, and it depends on the game itself. AMD GPUs were less likely to show a large performance delta between the two APIs, while Nvidia cards still tended to tilt towards DX11 overall. [H]ardOCP notes in its conclusion that DX11 is still the better overall API option, but that DX12 support has improved from both companies, performance deltas between the two APIs have shrunk, and in a few cases, DX12 pulls out strong wins.
Why DirectX 12 hasn’t transformed gaming
A few years ago, before low-overhead APIs like DirectX 12 and Vulkan had been released and while even Mantle was in its infancy, a lot of overconfident predictions were made about how these upcoming APIs would fundamentally transform gaming and unleash the latent power in all of our computers. The truth, thus far, has been more prosaic. How much a game benefits from DirectX 12 depends on what kind of CPU you test it on, how GPU-limited your quality settings are, how much experience the developer has with the API, and whether the title was developed from the ground up to take advantage of DX12, or had support for the API patched in at a later date.
And the components you choose can have a significant impact on what kind of scaling you see. Consider the graph below, from TechSpot, which compares a variety of CPUs while using the Fury X.
Intel’s Core i7-6700K barely twitches when moving to DX12, while the Core i3-6100T sees its average frame rate rise 1.14x but its minimum frame rate fall to less than half its DX11 level. AMD’s FX-6350 and FX-8370 both see average frame rates rise by nearly 27%, but, again, minimum frame rates drop severely.
A similar point is demonstrated below with a graph of Hitman results. The 6700K is capable of driving the Fury X almost as fast in DX11 as in DX12, while the FX-8370 improves enormously.
One reason things play out this way is that the goals and performance-improving mechanisms of low-overhead APIs have been widely misunderstood. It’s been known for years that Nvidia GPUs are often faster with lower-end Intel or (pre-Ryzen) AMD CPUs than AMD’s own GPUs are. Part of the reason is that Nvidia’s DX11 driver implements multi-threading, whereas AMD’s does not. That’s one reason why, in games like Ashes of the Singularity, AMD’s GPU performance skyrocketed so much in DX12. But fundamentally, DX12, Vulkan, and Mantle are methods of compensating for weak single-threaded CPU performance (or of spreading a workload out more evenly so it isn’t bottlenecked by a single thread).
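The thread-spreading idea can be sketched with a toy model. This is not real graphics API code; the draw-call count and per-call overhead below are hypothetical numbers chosen purely for illustration, and the model assumes submission work parallelizes evenly across cores:

```python
# Illustrative model: per-frame CPU cost of issuing draw calls when the
# driver is single-threaded vs. when submission is spread across cores.

def cpu_ms_per_frame(draw_calls, us_per_call, threads=1):
    """CPU milliseconds spent per frame issuing draw calls, assuming the
    submission overhead divides evenly across `threads` cores."""
    return draw_calls * us_per_call / 1000 / threads

# Hypothetical workload: 10,000 draw calls at 5 microseconds of driver
# overhead each.
single = cpu_ms_per_frame(10_000, 5, threads=1)  # one submission thread
spread = cpu_ms_per_frame(10_000, 5, threads=4)  # work split four ways

print(f"single-threaded submission: {single:.1f} ms/frame")  # 50.0 ms
print(f"spread across 4 threads:    {spread:.1f} ms/frame")  # 12.5 ms
```

A weak single thread that needs 50ms just to issue draw calls caps the game at 20fps no matter how fast the GPU is; dividing that same work four ways is exactly the relief DX12-style multi-threaded command submission offers.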
This article from Eurogamer is older, but it still makes an important point: the performance improvements Mantle and DX12 deliver come from allowing the CPU to process more draw calls per second. If the GPU is already saturated with all the work it can handle, stuffing more draw calls into the pipe isn’t going to improve anything.
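The saturation argument reduces to a simple bottleneck model: a frame can’t finish faster than the slower of the CPU (issuing draw calls) and the GPU (rendering them). All timings below are hypothetical, chosen only to show why cutting CPU overhead helps in one case and not the other:

```python
# Toy bottleneck model: frame time is bounded by whichever side of the
# pipeline is slower, the CPU or the GPU. Numbers are hypothetical.

def frame_ms(cpu_ms, gpu_ms):
    """Per-frame time when CPU submission and GPU rendering overlap."""
    return max(cpu_ms, gpu_ms)

# CPU-limited case (weak CPU, modest settings): halving draw-call
# overhead cuts frame time in half.
print(frame_ms(cpu_ms=20, gpu_ms=10))  # 20 -> CPU is the bottleneck
print(frame_ms(cpu_ms=10, gpu_ms=10))  # 10 -> overhead reduction pays off

# GPU-limited case (4K, maximum detail): the GPU is saturated, so an
# even bigger CPU-side improvement changes nothing.
print(frame_ms(cpu_ms=8, gpu_ms=25))   # 25 -> GPU is the bottleneck
print(frame_ms(cpu_ms=4, gpu_ms=25))   # 25 -> no gain from a faster CPU
```

This is why the high-end, GPU-limited [H]ardOCP scenarios above showed such modest DX12 gains, while the slower FX-series chips in the CPU-scaling graphs benefited dramatically.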
Now, having said all this, was there any point to DirectX 12 at all? Absolutely yes. Games, as a category of applications, have been among the slowest to embrace and benefit from multi-core processors. Even today, the number of games that can scale above four cores is quite small. Giving lower-end CPUs the freedom to utilize their resources more effectively can absolutely pay dividends for consumers on lower-end hardware. DirectX 12 is also still fairly new, with just a handful of supporting titles. It’s not unusual for a new API to take several years to find its feet and for developers to begin supporting it as a primary option. Game engines have to be developed to work well with it. Developers have to become comfortable using it. AMD, Nvidia, and Intel need to release drivers that use it more effectively, and in some cases may make hardware changes to their own GPUs so that low-overhead APIs run more efficiently.
Neither the fact that DX12’s gains over DX11 are less dramatic than many would prefer nor its limited adoption at this point in time is unusual for a new API that makes as many fundamental changes as DX12 does relative to DX11. How those changes will shape the games of the future remains to be seen.