
NVIDIA GeForce RTX 3080 with AMD Ryzen 3900XT vs. Intel Core i9-10900K


Conclusion

After hundreds of benchmark runs, we have much better insight into the fight between AMD and Intel for the gaming performance crown. Paired with the currently fastest graphics card—the NVIDIA GeForce RTX 3080, released today, which finally lets you game at fluid frame rates in 4K—both AMD and Intel show only marginal differences, especially at 4K.

For our test setup, we decided to pick the optimum settings for AMD: Infinity Fabric running at its maximum stable frequency of 1866 MHz. This significantly improves latencies on Ryzen processors and is the recommended configuration for maximum performance. The second key ingredient is that the memory clock has to tick at the same rate as the Infinity Fabric to activate "linked" mode, which further reduces latencies because the memory and IF data paths no longer have to buffer data between clock cycles. For our Intel Core i9-10900K setup, we used the same memory settings, even though Intel's monolithic CPU design makes it much less susceptible to sub-optimal memory configurations. A third data point comes in the form of our battle-tested Core i9-9900K, overclocked to 5 GHz all-core. It has served us well through hundreds of graphics card reviews, and I was always curious how it holds up against these newer processors.
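As a quick illustration of that 1:1 relationship (the DDR4 rating below is just the arithmetic consequence of our 1866 MHz FCLK, not an additional test result):

```python
# Ryzen 3000 "linked" mode: FCLK (Infinity Fabric), MCLK (memory clock)
# and UCLK (memory controller) all run 1:1. DDR4 transfers data twice
# per memory clock, so the matching module rating is simply 2x MCLK.

def ddr4_rating_for_fclk(fclk_mhz: float) -> float:
    """DDR4 transfer rate (MT/s) that runs 1:1 with a given FCLK."""
    return fclk_mhz * 2  # double data rate: two transfers per clock

fclk = 1866  # our test setup's Infinity Fabric clock in MHz
print(f"FCLK {fclk} MHz -> DDR4-{ddr4_rating_for_fclk(fclk):.0f}")
# FCLK 1866 MHz -> DDR4-3732 (marketed as DDR4-3733 kits)
```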

As we've seen in our in-depth CPU reviews of the Core i9-10900K and Ryzen 9 3900XT, manual overclocking using the multiplier really has no substantial benefits for gaming these days. Manufacturers have become better and better at squeezing the last bits of performance out of their products. That's why we run the i9-10900K and 3900XT at stock; it's simply higher gaming performance at much lower heat and power.

Looking at the summary results, there's quite a big difference between Intel and AMD at 1080p Full HD: the Core i9-10900K is 10% faster than the AMD Ryzen 3900XT on average. 10% is a big gap, especially considering that the CPU choice effectively wastes part of what you spent on your graphics card by shaving 10% off your FPS. On the other hand, in terms of averages we're talking about 188 FPS vs. 172 FPS. Big deal? Maybe not when you look at it like that.
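For reference, the back-of-the-envelope math on those two rounded averages works out to just over 9%; the headline 10% figure comes from the full relative-performance summary:

```python
# Relative gap between the two 1080p averages quoted above.
intel_fps = 188
amd_fps = 172

gap = (intel_fps / amd_fps - 1) * 100
print(f"Core i9-10900K lead at 1080p: {gap:.1f}%")  # ~9.3%
```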

Things get better for AMD at 1440p. Here, the Ryzen is only 7% slower than the i9-10900K—as resolution increases, the bottleneck shifts further and further from the CPU to the GPU. The CPU workload per frame is roughly constant regardless of resolution, which means higher FPS are more CPU-intensive in the same title, while lower FPS give the CPU room to breathe as the GPU works as hard as it can. You also have to realize that resolutions like 1080p and 1440p are not exactly what the RTX 3080 was designed for—the RTX 3080 is a 4K card, as that's the resolution you need to ensure the card can run unconstrained on most CPUs.
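A simple way to picture this: the frame rate is set by whichever stage takes longer per frame. The frame times in this sketch are purely illustrative, not measured values:

```python
# Simplified frame-pipeline model: per-frame CPU cost is roughly constant
# across resolutions, while per-frame GPU cost grows with pixel count.
# The slower of the two stages sets the frame rate.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 5.0  # hypothetical CPU time per frame, same at every resolution
for res, gpu_ms in [("1080p", 3.5), ("1440p", 5.5), ("4K", 11.0)]:
    limiter = "CPU" if cpu_ms >= gpu_ms else "GPU"
    print(f"{res}: {fps(cpu_ms, gpu_ms):5.1f} FPS ({limiter}-bound)")
# 1080p: 200.0 FPS (CPU-bound)  <- CPU choice matters here
# 1440p: 181.8 FPS (GPU-bound)
# 4K:     90.9 FPS (GPU-bound)  <- CPU choice nearly irrelevant
```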

Results at 4K are highly interesting. Here, the difference between AMD and Intel blurs—with just 1% between them, I would call them "equal"; there's no way you'd subjectively notice any difference. This is great news if you're looking to build a powerful 4K gaming rig with the GeForce RTX 3080: no matter whether you pick AMD or Intel, everything will run great. It will be interesting to see how that balance of power changes with upcoming titles optimized for the next-gen consoles, which are based on an 8-core CPU using AMD's Ryzen architecture. Our current selection of titles hints that newer games with more modern engines are slightly less susceptible to the CPU bottleneck, but the differences between games are still huge.

Take a look at Anno 1800, for example. With Ryzen, it is always completely CPU-limited: 52 FPS, no matter the resolution. On Intel, it is bottlenecked by the CPU at 1080p, but 1440p gets around that a little bit, and 4K is even better. Not by much, though, as Anno seems much more dependent on the CPU than the GPU even at the highest settings. Quite the opposite happens with Red Dead Redemption 2. Thanks to a modern Vulkan-based engine that achieves outstanding eye candy (= high GPU requirements), we only see marginal differences between AMD and Intel, no matter the resolution. Last but not least, Civilization VI is worth a mention. This strategy title scales extremely well with multiple cores and is the only win for AMD in our long list of games. Here, the Ryzen 9 3900XT can inch past the Core i9-10900K with a paper-thin lead.

After finishing the "normal at stock" testing, I got curious about how much of the performance difference can be attributed to the PCI-Express link speed advantage Ryzen has over the i9-10900K. Remember, Intel's Comet Lake CPUs still max out at PCI-Express Gen 3. The NVIDIA GeForce RTX 3080 introduces PCIe 4.0 support, so one could assume that Ampere will work better on AMD CPUs than on Intel because the former support the faster PCI-Express Gen 4 mode. That's what the second AMD data point is for, colored in light red. However, we see only minor differences that average out to 1% vs. the Gen 4 results, no matter the resolution. Even the RTX 3080 doesn't come anywhere close to saturating the PCI-Express Gen 3 interface, so Gen 4 has little room to provide a performance increase. Still, it got me curious, and I ran many additional tests, which are presented in our NVIDIA GeForce RTX 3080 PCI-Express Scaling article that just went live, too.
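The raw numbers back this up. Accounting for the 128b/130b line encoding both generations use, an x16 link offers roughly the following bandwidth:

```python
# Usable x16 link bandwidth for PCIe Gen 3 vs. Gen 4.
def x16_bandwidth_gb_s(gt_per_s: float, lanes: int = 16) -> float:
    """GB/s after 128b/130b line-code overhead."""
    return gt_per_s * (128 / 130) * lanes / 8  # bits -> bytes

print(f"PCIe 3.0 x16: {x16_bandwidth_gb_s(8):.2f} GB/s")   # ~15.75 GB/s
print(f"PCIe 4.0 x16: {x16_bandwidth_gb_s(16):.2f} GB/s")  # ~31.51 GB/s
```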

While traditional multiplier-based overclocking on both AMD and Intel usually results in negative scaling for gaming, Intel's Core i9-10900K is additionally constrained by its power limit. In order to achieve a 125 W TDP rating for their 10-core Comet Lake processor, Intel capped long-term power at 125 W, with short bursts of up to 250 W allowed for a few seconds. Especially for multiplier-locked Intel CPUs, lifting the power limit is a great way to easily unlock more performance. The dark blue bar "10900K @ Max PL" shows the Intel CPU running with all its power restrictions removed. Surprisingly, there are no noteworthy gains from the unlock. Here and there, it's a percent or two, which is really not all that spectacular. The underlying reason is that even when games load a CPU with many threads running in parallel, these are very light threads that aren't running heavy calculations and are often idle, waiting for results from other threads or for things to happen in the render loop.
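To make the mechanism concrete, here is a minimal sketch of how those two limits interact, assuming Intel's default values for this CPU (Tau, the burst window, is 56 seconds per spec). Real hardware tracks a moving average (RAPL), so the hard cut-off below is a simplification; "Max PL" effectively removes both caps:

```python
# Simplified model of the i9-10900K's stock power limits: bursts up to
# PL2 are allowed until the Tau window expires, then the sustained
# limit PL1 applies. Real silicon uses a moving average, so this step
# function is only an approximation.

PL1, PL2, TAU = 125.0, 250.0, 56.0  # watts, watts, seconds (Intel defaults)

def package_power_cap(seconds_under_load: float) -> float:
    return PL2 if seconds_under_load < TAU else PL1

for t in (0, 30, 56, 300):
    print(f"t = {t:3d} s -> cap {package_power_cap(t):.0f} W")
```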

So the choice between Intel and AMD becomes extremely tough if you use the GeForce RTX 3080 as intended—4K gaming. Here, the Ryzen blunts Intel's advantage to almost naught, and AMD sweetens the deal with two additional cores that should help in specific productivity use cases. The situation will be similar for more affordable processors, too. PCIe Gen 4 is a forward-looking feature that might make a difference in the future, but not today. If, however, you find yourself gaming at lower resolutions with higher refresh rates, Intel is your pick—just look at the higher frame rates Intel offers at these lower resolutions, which your high-refresh-rate monitor can take advantage of.