
Overview of modern processors. How to choose a central processor, and why you need it

The first quad-core processor was released in the fall of 2006. It was the Intel Core 2 Quad, based on the Kentsfield core. At the time, bestsellers such as The Elder Scrolls IV: Oblivion and Half-Life 2: Episode One were the popular games. Crysis, the "killer of all gaming computers," had not yet appeared, and the DirectX 9 API with Shader Model 3.0 was the standard.

How to choose a processor for a gaming PC. We study the effect of processor dependence in practice

But now it is the end of 2015. The desktop segment offers 6- and 8-core central processors, yet 2- and 4-core models remain the popular choice. Gamers are raving about the PC versions of GTA V and The Witcher 3: Wild Hunt, and yet no gaming graphics card exists that can deliver a comfortable FPS level in Assassin's Creed Unity at 4K resolution with maximum graphics quality settings. In addition, Windows 10 has been released, which means the era of DirectX 12 has officially begun. As you can see, a lot of water has flowed under the bridge in nine years, so the question of choosing a central processor for a gaming computer is more relevant than ever.

The essence of the problem

There is such a thing as the effect of processor dependence, and it can appear in absolutely any computer game. If the performance of the video card is limited by the capabilities of the central processor, the system is said to be processor-dependent (CPU-bound). There is no single formula for determining the strength of this effect: it all depends on the features of a particular application and on the selected graphics quality settings. However, in any game, tasks such as preparing geometry, lighting and physics calculations, artificial intelligence, and many other jobs fall on the "shoulders" of the central processor. As you can see, there is plenty of work.

The most difficult thing is to choose a central processor for several graphics adapters at once

In processor-dependent games, the number of frames per second can depend on several parameters of the chip: architecture, clock frequency, number of cores and threads, and cache size. The main purpose of this material is to identify the main criteria that affect the performance of the graphics subsystem and to form an understanding of which central processor is suitable for a particular discrete video card.

Frequency

How do you identify processor dependence? The most effective way is empirical. Since the CPU has several key parameters, let's analyze them one by one. The first characteristic that buyers most often pay close attention to is the clock frequency.

The clock frequency of central processors has not been growing for quite a long time. At first (in the 1980s and 1990s), it was the increase in megahertz that drove the rapid growth in overall performance. Now the frequency of AMD and Intel CPUs has settled into the 2.5-4 GHz range. Anything below is too budget-oriented for a gaming computer; anything above is already overclocking. This is how processor lines are formed. For example, there is the Intel Core i5-6400 running at 2.7 GHz ($182) and the Core i5-6500 running at 3.2 GHz ($192). These processors have absolutely identical characteristics except for clock frequency and price.

Overclocking has long since become a marketing "weapon." For example, practically every motherboard manufacturer brags about the excellent overclocking potential of its products.

Chips with an unlocked multiplier, which allows you to overclock the processor yourself, are available on sale. Intel marks such chips with the letters "K" and "X" in the name, for example Core i7-4770K and Core i7-5960X. There are also separate unlocked models: Pentium G3258, Core i5-5675C, and Core i7-5775C. AMD processors are marked similarly: hybrid chips carry the letter "K" in the name, and every processor in the FX line (AM3+ platform) has an unlocked multiplier.

Modern AMD and Intel processors support automatic overclocking: in AMD's case it is called Turbo Core, in Intel's, Turbo Boost. The principle is simple: given adequate cooling, the processor raises its clock frequency by several hundred megahertz under load. For example, the Core i5-6400 operates at 2.7 GHz, but with Turbo Boost active this figure can temporarily rise to 3.3 GHz - a gain of exactly 600 MHz.

It is important to remember: the higher the clock frequency, the hotter the processor runs! So you need to take care of high-quality cooling for the chip

I'll take the NVIDIA GeForce GTX TITAN X - the most powerful single-GPU gaming solution of our time - and the Intel Core i5-6600K, a mainstream processor equipped with an unlocked multiplier. Then I'll fire up Metro: Last Light, one of the most CPU-intensive games of our day. The graphics quality settings are chosen so that the frame rate is always limited by the processor rather than the video card: for the GeForce GTX TITAN X and Metro: Last Light, that means maximum graphics quality but no anti-aliasing. I will then measure the average FPS across the 2 GHz to 4.5 GHz range in Full HD, WQHD, and Ultra HD resolutions.

Processor dependency effect

The effect of processor dependence is, logically, most noticeable in light modes. In 1080p, as the frequency increases, the average FPS steadily rises. The results were impressive: raising the Core i5-6600K from 2 GHz to 3 GHz increased the frame rate in Full HD from 70 FPS to 92 FPS, that is, by 22 frames per second; going from 3 GHz to 4 GHz added another 13 FPS. Thus, with these graphics quality settings, the processor was only able to fully load the GeForce GTX TITAN X in Full HD at 4 GHz - from that mark onward, the frame rate stopped growing with CPU frequency.

As the resolution increases, the effect of processor dependence becomes less noticeable: the frame rate stops growing starting from 3.7 GHz, and in Ultra HD resolution we run into the potential of the graphics adapter almost immediately.
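The plateau described above can be sketched with a toy model: frame time is whichever is larger, the CPU's per-frame work (which shrinks as frequency grows) or the GPU's fixed per-frame time for a given resolution. All constants below are illustrative, not measured values from the test bench.

```python
# Toy model of processor dependence: effective FPS is capped by whichever
# component takes longer per frame. All constants are illustrative only.

def fps(cpu_ghz, gpu_ms):
    cpu_ms = 40.0 / cpu_ghz          # hypothetical: CPU frame time shrinks with frequency
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical GPU frame times (ms) for three resolutions
resolutions = {"1080p": 10.0, "1440p": 14.0, "2160p": 30.0}

for name, gpu_ms in resolutions.items():
    rates = [round(fps(f, gpu_ms)) for f in (2.0, 3.0, 4.0, 4.5)]
    print(name, rates)
```

In this sketch the 1080p frame rate grows with frequency and then flattens once the CPU frame time drops below the GPU's, while at 2160p the GPU is the limiter at every frequency - the same pattern the measurements show.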

There are many discrete graphics cards on the market, customarily divided into three segments: Low-end, Middle-end, and High-end. Captain Obvious suggests that graphics adapters of different performance levels call for processors with different frequencies.

The dependence of performance in games on the frequency of the central processor

Now I'll take the GeForce GTX 950, a representative of the upper Low-end (or lower Middle-end) segment - the polar opposite of the GeForce GTX TITAN X. The device belongs to the entry level, yet it can provide a decent level of performance in modern games at Full HD resolution. As the graphs below show, a processor running at 3 GHz fully loads the GeForce GTX 950 in both Full HD and WQHD. The difference from the GeForce GTX TITAN X is visible to the naked eye.

It is important to understand: the lower the load on the video card, the higher the frequency of the central processor needs to be. It is irrational to buy an adapter of GeForce GTX TITAN X caliber, for example, and use it for gaming at a resolution of 1600x900 pixels.

Low-end video cards (GeForce GTX 950, Radeon R7 370) are well served by a central processor running at 3 GHz; Middle-end adapters (Radeon R9 280X, GeForce GTX 770) need 3.4-3.6 GHz; High-end flagship video cards (Radeon R9 Fury, GeForce GTX 980 Ti), 3.7-4 GHz; and high-performance SLI/CrossFire setups, 4-4.5 GHz

Architecture

In reviews of each new generation of central processors, authors continually note that the year-over-year difference in x86 performance is a meager 5-10%. This has become a kind of tradition: neither AMD nor Intel has shown serious progress for a long time, and phrases like "I'll keep sitting on my Sandy Bridge and wait for next year" have become catchphrases. As I said, games also require the processor to crunch a lot of data, so a reasonable question arises: how strong is the effect of processor dependence on systems with different architectures?

For both AMD and Intel chips, one can list the modern architectures that are still in widespread use. All of them remain relevant, and in the grand scheme the difference in performance between them is not that large.

Let's take two chips - the Core i7-4790K and Core i7-6700K - and run them at the same frequency. Processors based on the Haswell architecture appeared in the summer of 2013, and Skylake solutions in the summer of 2015; that is, exactly two years passed between these "tock" updates (Intel's term for chips based on a fundamentally new architecture).

Impact of architecture on game performance

As you can see, there is practically no difference between the Core i7-4790K and Core i7-6700K running at the same frequencies. Skylake leads Haswell in only three games out of ten: Far Cry 4 (by 12%), GTA V (by 6%), and Metro: Last Light (by 6%) - the same processor-dependent applications as before. And 6% is a trifling difference.

Comparison of processor architectures in games (NVIDIA GeForce GTX 980)

A few platitudes: it is obvious that it is better to assemble a gaming computer based on the most modern platform. After all, not only the performance of the chips themselves is important, but also the functionality of the platform as a whole.

Modern architectures with a few exceptions have the same performance in computer games. Owners of processor families Sandy Bridge, Ivy Bridge and Haswell can feel quite calm. With AMD, the situation is similar: various variations of the modular architecture (Bulldozer, Piledriver, Steamroller) in games have approximately the same level of performance

Cores and Threads

The third and perhaps the determining factor that limits the performance of a video card in games is the number of CPU cores. It's no coincidence that a growing number of games have a quad-core CPU in their minimum system requirements. Vivid examples include such modern hits as GTA V, Far Cry 4, The Witcher 3: Wild Hunt, and Assassin's Creed Unity.

As I said at the very beginning, the first quad-core processor appeared nine years ago. Now there are 6- and 8-core solutions on sale, but 2- and 4-core models are still in use. Below is a table of markings for some popular AMD and Intel lines, divided by the number of cores.

AMD hybrid processors (A4, A6, A8, and A10) are sometimes advertised as 8-, 10-, and even 12-core: the company's marketers simply add the integrated graphics module's units to the compute cores. There are indeed applications that can use heterogeneous computing (where the x86 cores and the integrated GPU process the same data together), but this scheme is not used in computer games - the compute part does its job and the graphics part does its own.

Some Intel processors (Core i3 and Core i7) have a certain number of cores but double the number of threads, thanks to Hyper-Threading technology, first used in Pentium 4 chips. Threads and cores are slightly different things, but more on that a little later. In 2016, AMD will release processors based on the Zen architecture, and for the first time the "red" chips will gain a technology similar to Hyper-Threading.
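The cores-versus-threads distinction is easy to see on your own machine. A minimal sketch using only the Python standard library: `os.cpu_count()` reports *logical* CPUs, i.e. hardware threads, which on a Hyper-Threading (or SMT) chip is double the physical core count.

```python
import os

# os.cpu_count() reports *logical* CPUs: on a chip with Hyper-Threading
# (or AMD's SMT) this is twice the number of physical cores.
logical = os.cpu_count()
print("logical CPUs:", logical)

# The physical core count is not exposed by the standard library; the
# third-party psutil package provides it as psutil.cpu_count(logical=False).
```

On a Core i7-6700K, for example, this would report 8 logical CPUs for 4 physical cores.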

In fact, the Core 2 Quad on the Kentsfield core is not a true quad-core: it is based on two Conroe dies placed in a single LGA775 package

Let's do a little experiment. I took ten popular games. Granted, such a small number of applications is not enough to claim with 100% certainty that the effect of processor dependence has been fully studied, but the list includes only hits that clearly demonstrate the trends in modern game development. The graphics quality settings were chosen so that the results were not limited by the video card: for the GeForce GTX TITAN X, that means maximum quality (without anti-aliasing) at Full HD resolution. The choice of this adapter is obvious: if a processor can fully load the GeForce GTX TITAN X, it will cope with any other video card. The bench used the top Core i7-5960X on the LGA2011-v3 platform. Testing was carried out in four modes - with only 2, 4, 6, or all 8 cores active - with Hyper-Threading disabled, and at two frequencies: the nominal 3.3 GHz and overclocked to 4.3 GHz.

Processor dependence in GTA V

GTA V is one of the few modern games that use all eight cores of the processor, so it can be called the most processor-dependent. On the other hand, the difference between six and eight cores was not that impressive. Judging by the results, two cores fall far behind the other configurations: the game stutters, and many textures are simply not drawn. The four-core bench shows noticeably better results, lagging only 6.9% behind the six-core setup and 11% behind the eight-core one. Whether the game is worth the candle in this case is up to you, but GTA V clearly demonstrates how the number of processor cores affects the performance of the video card in games.

The vast majority of games behave similarly. In seven out of ten applications, the two-core system turned out to be processor-dependent; that is, the FPS level was limited by the central processor. At the same time, in three out of ten games the six-core bench showed an advantage over the quad-core one, though the difference cannot be called significant. Far Cry 4 turned out to be the most radical: it simply refused to start on a system with two cores.

The gain from six and eight cores was in most cases either too small or absent altogether.
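These diminishing returns follow the classic pattern of Amdahl's law: speedup is limited by the fraction of the frame's CPU work that can actually run in parallel. The sketch below assumes a hypothetical 70% parallel portion; real games vary widely.

```python
# Amdahl's law: speedup from n cores when only a fraction p of the
# per-frame CPU work is parallelizable. p = 0.7 is a hypothetical value.

def speedup(n, p=0.7):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 6, 8):
    print(n, "cores ->", round(speedup(n), 2), "x")
```

Even with this generous 70% assumption, going from 6 to 8 cores buys far less than going from 2 to 4 - which matches the benchmark behaviour described above.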

Processor dependence in The Witcher 3: Wild Hunt

The three games tolerant of a dual-core system were The Witcher 3, Assassin's Creed Unity, and Tomb Raider: all configurations produced the same results.

For those who are interested, I will give a table with the full test results.

performance of multi-core systems in games

Four cores is the optimal number for today. At the same time, it is clear that gaming computers should not be built around a dual-core processor: in 2015, such a chip is the system's bottleneck

We've sorted out the cores. The test results clearly show that in most cases four cores in a processor are better than two. At the same time, some Intel models (Core i3 and Core i7) boast support for Hyper-Threading. Without going into details: such chips have a certain number of physical cores and twice as many virtual ones. In ordinary applications Hyper-Threading is certainly useful, but how does the technology fare in games? The question is especially relevant for the Core i3 line - nominally dual-core solutions.

To determine the effectiveness of multithreading in games, I assembled two test benches: with a Core i3-4130 and a Core i7-6700K. In both cases, a GeForce GTX TITAN X graphics card was used.

Core i3 Hyper-Threading Efficiency

In almost all games, Hyper-Threading affected the performance of the graphics subsystem - for the better, naturally. In some cases the difference was enormous: in The Witcher 3, for example, the frame rate increased by 36.4%. True, without Hyper-Threading this game exhibited nasty stuttering every now and then; no such problems were observed with the Core i7-5960X.

As for the quad-core Core i7, Hyper-Threading made itself felt only in GTA V and Metro: Last Light - just two games out of ten - though it also noticeably raised the minimum FPS. Overall, with Hyper-Threading the Core i7-6700K was 6.6% faster in GTA V and 9.7% faster in Metro: Last Light.

Hyper-Threading in the Core i3 really pulls its weight, especially when the system requirements specify a quad-core processor. But in the case of the Core i7, the performance gain in games is not so significant.

Cache

We've covered the main parameters of the central processor. Each processor has a certain amount of cache memory, and modern integrated solutions use up to four levels of it. The first- and second-level caches are, as a rule, determined by the chip's architecture, while the third-level cache varies from model to model. Here is a small table for reference.

So, the more productive Core i7 processors have 8 MB of L3 cache, while the slower Core i5s have 6 MB. Will those 2 MB affect performance in games?

The Broadwell family and some Haswell processors use 128 MB of eDRAM (Level 4 cache). In some games, it can seriously speed up the system.

It's very easy to check: take two processors from the Core i5 and Core i7 lines, set them to the same frequency, and disable Hyper-Threading. The result: of the nine games tested, only F1 2015 showed a noticeable difference, at 7.4%. The rest showed no reaction at all to the Core i5-6600K's 2 MB smaller L3 cache.

Impact of L3 cache on gaming performance

The difference in L3 cache between Core i5 and Core i7 processors in most cases does not affect system performance in modern games

AMD or Intel?

All the tests above used Intel processors, but that does not mean we dismiss AMD solutions as the basis for a gaming computer. Below are test results for the FX-6350, a chip for AMD's highest-performing platform, AM3+, run with four and with six cores active. Unfortunately, I did not have an 8-core AMD chip at my disposal.

Comparison of AMD and Intel in GTA V

GTA V has already established itself as the most processor-intensive game. With four cores active, the AMD system's average FPS was higher than that of, say, a Core i3 (without Hyper-Threading), and the image rendered smoothly, without slowdowns. But in all other cases, Intel's cores were consistently faster, and the difference between the processors is significant.

Below is a table with full testing of the AMD FX processor.

Processor dependency in AMD system

A noticeable difference between AMD and Intel is absent in only two games: The Witcher 3 and Assassin's Creed Unity. On the whole, the results are perfectly logical and reflect the real balance of power in the CPU market: Intel's cores are noticeably more powerful, including in games. Four AMD cores compete with two Intel cores - and even then the latter often deliver higher average FPS. Six AMD cores compete with the four threads of a Core i3. Logically, the eight cores of the FX-8000/9000 series should put up a fight against the Core i5. AMD's cores are deservedly called "half-cores"; such are the features of the modular architecture.

The conclusion is predictable: Intel solutions are better suited for gaming. However, among budget chips (Athlon X4, FX-4000, A8, Pentium, Celeron), AMD products are preferable: testing showed that four slower AMD cores perform better in CPU-intensive games than two faster Intel cores. In the middle and upper price ranges (Core i3, Core i5, Core i7, A10, FX-6000, FX-8000, FX-9000), Intel solutions are preferable

DirectX 12

As mentioned at the very beginning of the article, DirectX 12 became available to game developers with the release of Windows 10. This API finally settled the direction of modern game development: developers needed low-level programming interfaces. The main task of the new API is to use the system's hardware rationally - including all of the processor's compute threads, general-purpose computing on the GPU, and direct access to the graphics adapter's resources.

Windows 10 has only just arrived, yet applications supporting DirectX 12 already exist. For example, Futuremark integrated an API Overhead subtest into 3DMark. This test measures system performance not only under DirectX 12 but also under AMD Mantle. The principle is simple: DirectX 11 imposes limits on the number of draw commands the processor can issue, while DirectX 12 and Mantle lift this restriction, allowing far more draw calls. During the test, an ever-increasing number of objects is displayed until the graphics adapter can no longer cope and the FPS drops below 30 frames. For testing, I used a bench with a Core i7-5960X processor and a Radeon R9 NANO video card. The results turned out to be very interesting.

Notably, under DirectX 11 the number of CPU cores has almost no effect on the overall result. But with DirectX 12 and Mantle, the picture changes dramatically. First, the gap between DirectX 11 and the low-level APIs is enormous (roughly an order of magnitude). Second, the number of CPU cores significantly affects the final result - especially when moving from two cores to four, where the difference is almost two-fold, and from four to six. At the same time, there is little difference between six cores, eight cores, and sixteen threads.
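The logic of such an overhead test can be mimicked with a back-of-the-envelope calculation: if every draw call costs a fixed slice of CPU time, how many calls fit into a 33.3 ms frame (30 FPS)? The per-call costs and the perfect multi-core scaling below are hypothetical, chosen only to illustrate why a cheaper, multi-threadable API changes the result by an order of magnitude.

```python
# Rough sketch of an API-overhead test: how many draw calls fit in a
# 33.3 ms frame (30 FPS) if each call costs fixed CPU time? The per-call
# costs and the perfect core scaling below are hypothetical.

FRAME_BUDGET_MS = 1000.0 / 30.0

def max_draw_calls(cost_us_per_call, cores=1):
    # Assume submission work splits perfectly across the given cores.
    return int(FRAME_BUDGET_MS * 1000.0 * cores / cost_us_per_call)

dx11_like = max_draw_calls(cost_us_per_call=25.0, cores=1)  # one submission thread
dx12_like = max_draw_calls(cost_us_per_call=2.5, cores=4)   # cheaper calls, multi-core
print("DX11-like:", dx11_like, "calls; DX12-like:", dx12_like, "calls")
```

Lower per-call cost multiplied by multi-core submission is exactly why the low-level APIs both dwarf DirectX 11 and keep scaling as cores are added.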

As you can see, the potential of DirectX 12 and Mantle (in the 3DMark benchmark) is simply huge. But do not forget that this is a synthetic test - nobody actually plays it. The real profit from the latest low-level APIs can only be judged in actual games.

The first PC games supporting DirectX 12 are already on the horizon: Ashes of the Singularity and Fable Legends, both in active beta testing.

The world's leading PC processor manufacturer, Intel, has unveiled a new line of high-performance desktop processors, the X-series, at Computex in Taipei. The top chip has as many as 18 processing cores and a corresponding price tag: $2,000.

For extrovert gamers

In fact, an entire new platform is being introduced: X-series processors work with a new chipset, the X299. It is aimed mainly at gamers (especially those who want to stream their gameplay in high definition to an online audience while playing), professionals working with 3D graphics and video, software developers, and everyone else prepared to part with serious money for the "most-most" productive hardware.

At the same time, it is important to understand that the entry-level models in the line most likely (testing will tell for sure) will not provide a significant advantage in games over the more affordable regular Core i3 or i5 without the prestigious "X-series" sticker - at least unless paired with the most powerful video cards on the market today.

Watch your hands

The structure of Intel's new chip line is not the most obvious. Its "junior" Kaby Lake-X processors, the i5-7640X and i7-7740X, use the same seventh-generation Core cores as the previously released mainstream Core i5 and i7 Kaby Lake chips: four cores and four (i5) or eight (i7) processing threads, two memory channels, and 16 PCIe lanes on the processor. The new X-chips differ in a "hotter" thermal package (up to 112 W versus 91 W for comparable mainstream parts) and the new LGA2066 socket used by X299 boards.

Clock speeds are also higher than those of ordinary Core family parts: the i7-7740X has a base frequency of 4.3 GHz and a Turbo Boost frequency of up to 4.5 GHz. The i7-7700K, at the same $399 price point, has a 100 MHz lower base clock, though the boost is the same. Price parity should nudge gamers toward the new X-series for its greater overclocking potential - if only because it lacks an integrated graphics core - though a motherboard based on the X299 chipset will cost more than a mainstream one.

Higher up the line, Skylake-X replaces the Kaby Lake-X architecture - and this is not just a 6th-generation Skylake chip in a new socket: it uses Skylake-SP cores, developed for the upcoming generation of Xeon chips. Skylake-X chips support Turbo Boost Max 3.0: the chip itself identifies the cores capable of running at the highest frequency and directs 1-2-core workloads to them. The on-chip cache structure has also been reworked: each core's private cache has grown to 2 MB, while the cache shared by all cores has shrunk; Intel says this will improve performance. At the same time, compared to Xeon there are a number of limitations - for example, only 4 memory channels instead of six.

The entry level of Skylake-X is the 6-core, 12-thread i7-7800X (3.5/4.0 GHz), which starts at $389 but supports neither Turbo Boost Max 3.0 nor (officially, at least) memory clocked above 2400 MHz. A step up is the $599 8-core, 16-thread i7-7820X (3.6/4.3 GHz, up to 4.5 GHz with Turbo Boost Max 3.0) with support for memory up to 2666 MHz. Finally, the $999 10-core i9-7900X (3.3/4.3/4.5 GHz) offers 44 PCIe lanes, Turbo Boost Max 3.0, and 2666 MHz memory support. All three processors have a TDP of 140 W.
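One way to compare the three SKUs above is dollars per core. The prices and core counts come from the text; the dictionary layout is simply our own bookkeeping.

```python
# Price per core for the Skylake-X models listed above (data from the text).
skus = {
    "i7-7800X": {"cores": 6,  "price_usd": 389},
    "i7-7820X": {"cores": 8,  "price_usd": 599},
    "i9-7900X": {"cores": 10, "price_usd": 999},
}

for name, s in skus.items():
    per_core = s["price_usd"] / s["cores"]
    print(name, "->", round(per_core, 2), "USD per core")
```

The figures show Intel charging a rising premium per core as you climb the stack, from roughly $65 on the i7-7800X to nearly $100 on the i9-7900X.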

At the top of the new lineup are processors with a 165 W TDP and 12, 14, 16, or 18 cores (with twice as many threads), for which Intel is not yet ready to announce operating frequencies - unlike prices, which start at $1,199. It is known that the flagship of the line will be called the i9-7980XE (not just i9, but i9 Extreme) and will cost $1,999.

Free multiplier and other joys

All new Intel X-series chips have an unlocked multiplier - that is, they are positioned from the start as solutions for overclockers. The X299 chipset designed for them supports Optane non-volatile memory, which can serve both as RAM and as data storage. The chipset can connect up to three PCIe/NVMe SSDs, 8 SATA devices, and 10 USB 3.1 Gen 1 devices. Thunderbolt 3 and USB 3.1 Gen 2 support is not built into the chipset and will have to be implemented with additional controllers.

What about AMD?

Although the new high-performance AMD Ryzen processors made a lot of noise this spring, Intel is not about to cede ground and still prices its chips higher. A 16-thread Intel X-series chip will cost $599, while a comparable 16-thread AMD Ryzen chip costs $499. True, they are not identical - the Intel processor has twice as many memory channels - but a motherboard-plus-processor combination from Intel will in any case cost more than a similar AMD solution. At the same time, AMD has no answer yet to the 18-core Intel Core i9 Extreme: the company has yet to scale Zen's performance to that many cores.

Those building a new computer in 2018 can make a big mistake when choosing a processor. Over the last year and the beginning of this one, major events took place in the processor industry: much has changed, and new generations of CPUs are entering the scene.

Computer stores now offer an abundance of processor models, with old and new generations mixed together - and buying a previous-generation processor means losing out seriously both in money and in platform longevity.

Generations of processors in 2018

A year ago, the market for desktop and mobile processors experienced, if not a revolution, then at least a serious shake-up. AMD, which had lagged behind Intel in processor performance for many years, released processors on a completely new architecture:

  • Ryzen 3 1200/1300X/2200G
  • Ryzen 5 1400/1500X/1600/1600X/2400G
  • Ryzen 7 1700/1700X/1800X
  • Ryzen Threadripper 1900X/1920X/1950X

The first three lines use the AM4 socket; Threadripper uses the premium TR4. These are new AMD platforms that will live on for at least a few more years. They use the latest RAM standard, DDR4, and also support PCIe 3.0, NVMe SSDs, and other modern features.

Ryzen performed so well against Intel's lineup that Intel, too, updated its platform in the fall of 2017, releasing 8th-generation Coffee Lake processors:

  • Core i3 8100/8350K
  • Core i5 8400/8600K
  • Core i7 8700K
  • Core i9 7900X/7920X/7960X/7980XE

As with AMD, the first three lines use the desktop platform, LGA1151-2, and the last uses the premium LGA2066 platform. And they likewise use DDR4, PCIe 3.0, and all the rest.

When assembling a new computer, these are the platforms to focus on. But stores are still full of processors from previous generations, for the AM3, AM3+, LGA1150, and LGA2011 sockets. Buying them makes no sense, for several reasons:

  1. They use the outdated DDR3 RAM standard, with lower frequencies, smaller capacities, and higher power consumption. That memory cannot be carried over to a new computer in a few years; you will have to buy new modules.
  2. Neither current nor future processors work in these sockets. In 3-4 years you will not be able to simply swap in a processor two generations newer; you will have to buy a new motherboard and RAM as well.
  3. They lack PCIe 3.0, NVMe SSD support, and so on.
  4. Previous-generation processors are much weaker than the latest ones; this is especially noticeable with AMD.

The first revision of the LGA1151 socket looks somewhat better by comparison: it does not support 8th-generation Intel processors (Coffee Lake) but works with the previous generations, Kaby Lake and Skylake. This platform already uses DDR4 and other innovations, but it too is no longer supported and will have to be replaced when you upgrade the processor.

Buying Kaby Lake or Skylake processors now is simply a bad deal: for the same price you get fewer cores and lower frequencies than with Coffee Lake. For example, the old 4-core Core i5 is equivalent to the current 4-core Core i3, while the current i5 already has 6 cores, and the Core i7-8700K runs 12 threads at once versus the 8 threads of the Core i7-7700K/6700K.

So it’s best to limit your choice of processor when building a new computer to Ryzen and Coffee Lake models, especially since new programs are increasingly using many cores. Then the assembled computer will be relevant for at least 5 years.

How much money to spend on a processor?

Conventionally, processors can be divided into several categories based on their price and performance.

  • Ultra-budget (low-end): Intel Celeron and Pentium, as well as AMD A6/A8/A10/A12/Athlon. As a rule, these are 2 cores without HT at low frequencies. Up to 4,000 rubles.
  • Office (low-middle): Intel Core i3 and older i5, the latest Pentiums with HT (each core is seen as two, i.e. 2 cores appear as 4), along with AMD Ryzen 3/5 with SMT (AMD's equivalent of HT). From 2 to 4 cores (up to 8 threads thanks to SMT); 4,000 to 12,000 rubles.
  • Middle segment (middle): here you can already count on 6 cores in the latest Intel Core i5 and 6 (12) cores in AMD Ryzen 5. Price range: 12,000-20,000 rubles.
  • Top (top): the most powerful processors for the LGA1151 and AM4 platforms, with 6 (12) to 8 (16) cores. These are Intel Core i7 and AMD Ryzen 7. From 20,000 to 30,000 rubles.
  • Premium segment (HEDT): workstation processors using separate sockets (LGA2066 and TR4), with 8 (16) to 18 (36) cores. This covers everything above 30,000 rubles; the most powerful models can cost around 140,000 rubles.
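For reference, the segment boundaries listed above can be expressed as a small helper. This is purely illustrative; the thresholds and labels come from the list, while the function name is ours:

```python
def price_segment(price_rub: int) -> str:
    """Map a CPU price in rubles to the segment names used above."""
    if price_rub <= 4_000:
        return "low-end"     # Celeron, Pentium, AMD A-series/Athlon
    if price_rub <= 12_000:
        return "low-middle"  # Core i3, Pentium with HT, Ryzen 3/5
    if price_rub <= 20_000:
        return "middle"      # 6-core Core i5, Ryzen 5
    if price_rub <= 30_000:
        return "top"         # Core i7, Ryzen 7
    return "HEDT"            # LGA2066 / TR4 workstation chips

print(price_segment(11_210))  # low-middle
print(price_segment(28_000))  # top
```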

There are two approaches to spending on a processor: buy cheaper and upgrade in a few years, or immediately pick at least mid-range price and performance. The first approach, however, mostly makes sense only with AMD processors: the company rarely changes sockets, so the latest processor can be installed in a 3-5-year-old motherboard. All it takes is a BIOS update.

Intel, on the other hand, changes sockets much more often, and this will most likely happen again after Coffee Lake. So it makes no sense to buy an Intel processor with future headroom in mind. The only option is not to spend a lot of money on a powerful processor right away, but to take the minimum suitable one, for example a Core i3, and in 4 years pick up a used Core i7 at a much lower price. Keep in mind, though, that by the time of that replacement the platform will already be outdated.

If you need performance right now, it is better to invest in top or premium models immediately. With such a processor you may not feel a shortage of power and cores for 5-7 years. Even in 2018, computers based on 2012 Core i7 processors remain very fast, and the lack of performance is felt only in heavy tasks like video encoding and compilation.

On the other hand, it is not uncommon for processor power to sit idle, which means money was simply wasted on it. To avoid this, start from the tasks the computer is bought for. After all, even low-end processors are not bad in themselves; for some tasks they are quite enough for comfortable work.

Which processor to choose for ...

…computer games

Although more and more games are now being written with multi-core in mind, for the vast majority of new titles 4 cores are still more than enough. High frequency and fast memory access matter much more here. That is why AMD Ryzen processors, with their emphasis on core count, generally do not shine in games even against previous generations of Intel Core. The gap, however, is small.

To play comfortably with a sufficiently powerful video card, the 4-core Intel Core i3-8100 will do in most games, though the Core i3-8350K with its 4 GHz frequency is better. With a 6-core Core i5-8400/8600K there will be a good reserve of cores for games over the next 5 years, and with a 6 (12)-core Core i7 the reserve is greater still. Intel processors are also attractive here because K-models can be overclocked to around 5 GHz with good cooling.

Does it make sense to buy AMD Ryzen processors for games? Yes, if you plan to play and do something else at the same time, for example record and encode video. The Ryzen 5/7's lag behind Intel in games is rarely felt, while the higher-end Ryzen models have many cores, doubled by SMT: we are talking about 6 (12) and 8 (16) configurations. An excellent reserve for the future.

Buying premium processors from either company for games makes no sense: a large number of cores comes with reduced frequency, which is bad for games.

Office and low-end processors will do for older games and light titles without graphical frills. For those it is not even necessary to buy a discrete video card: integrated graphics can handle them. This is especially true of the Ryzen 3 2200G and Ryzen 5 2400G, whose video core is comparable in performance to an Nvidia GeForce GT 1030.

…Internet and office tasks

Here, as with games, what matters is a high frequency and a reasonably powerful core, while the number of cores is less important. The office segment is therefore 2 (4) cores, or a full 4 cores, at up to 4 GHz. In fact, ultra-budget 2-core Intel processors are quite enough for the Internet and office programs. Even the cheapest Pentiums come with capable integrated graphics; with hardware acceleration in the browser and the office suite, the processor does not struggle under load.

AMD looks worse here: for such tasks it is reasonable to take only the entry-level Ryzen 3 with 4 cores or Ryzen 5 with 4 (8) cores, which is already the office segment. The ultra-budget Athlon and A-series are hopelessly outdated and weak even for office work.

The Internet and document work are tasks on which it makes no sense to spend money for top-end or HEDT processors. Even if many office and Internet applications run at once, mid-segment power is more than enough: a 6-core Intel Core i5 or an AMD Ryzen 5 in a 6 (12) configuration. The exception is intensive work with large, complex spreadsheets, where top processors come in handy.

…video and 3D work

This is an area where there is no such thing as too much processor power. Although much of the work with video and 3D graphics is offloaded to the video card, working without a powerful processor is very uncomfortable. It all comes down to budget: if it allows, it is better to take Intel Core i7 and i9 HEDT processors on the LGA2066 socket, or AMD Threadripper on TR4. AMD processors are the better value here, being more powerful than Intel processors at the same price.

Another good option is the top Intel Core i7 and AMD Ryzen 7 with 6 (12) and 8 (16) cores. For enthusiasts who cannot afford expensive hardware, the mid-segment AMD Ryzen 5 1600/1600X with its 6 (12) cores can be recommended; it outperforms previous-generation Core i7s.

Office and low-end processors for video and 3D work are an option only out of desperation. Running such heavy tasks on such weak processors causes inconvenience bordering on suffering.

…programming

Building programs from source also requires a powerful processor: the more cores and the higher the frequency, the more comfortable the programmer's work. Premium AMD Threadripper and Intel Core i9 processors deliver maximum productivity here, but the top AMD Ryzen 7 and Intel Core i7 also show excellent results. In compilation, a shortage of cores can sometimes be offset by frequency, which is higher on top processors than on HEDT ones.

The mid-range Ryzen 5 1600/1600X is also suitable for programming, while its similarly priced Core i5 counterparts already lack cores for fast compilation. You can, of course, work on office processors like the Core i3 and Ryzen 3 as well, but there is no question of high speed when compiling large projects.

Final theses

  • AMD platforms last longer, their processors can be upgraded after years.
  • Don't overpay for power that will almost never be used.
  • A new computer with Intel processors should only be on the Coffee Lake generation.
  • AMD Ryzen 5 can compete with top processors in heavy tasks.
  • To work with video, 3D, compilation, you should take the most powerful tops and HEDT.

More on the site: Best PC processors of 2018 (updated March 29, 2018, by Alex Ferman)


The most difficult thing is to choose a central processor for several graphics adapters at once

In processor-dependent games, the number of frames per second can depend on several parameters of the "stone": architecture, clock frequency, number of cores and threads, and cache size. The main goal of this article is to identify the key criteria affecting graphics subsystem performance and to build an understanding of which central processor suits a particular discrete video card.

Frequency

How do you detect processor dependence? The most reliable way is empirical testing. Since the CPU has several parameters, let's analyze them one by one. The first characteristic that usually draws the closest attention is the clock frequency.

The clock frequency of central processors stopped growing quite a while ago. At first (in the 80s and 90s), it was the increase in megahertz that drove the rapid growth in overall performance. Now the frequency of AMD and Intel CPUs has settled in the 2.5-4 GHz range. Anything below is too budget-oriented and not really suitable for a gaming computer; anything above is already overclocking. This is how processor lines are formed. For example, there is the Intel Core i5-6400 running at 2.7 GHz ($182) and the Core i5-6500 running at 3.2 GHz ($192). These processors are identical in absolutely every characteristic except clock frequency and price.

Overclocking has long since become a marketing "weapon". Practically every motherboard manufacturer, for example, brags about the excellent overclocking potential of its products.

Chips with an unlocked multiplier, which lets you overclock the processor yourself, are available on sale. Intel marks such "stones" with the letters "K" and "X" in the name, for example the Core i7-4770K and Core i7-5960X. There are also separate models with an unlocked multiplier: the Pentium G3258, Core i5-5675C and Core i7-5775C. AMD processors are marked similarly: hybrid chips carry the letter "K" in the name, and the entire FX line (AM3+ platform) has a free multiplier.

Modern AMD and Intel processors support automatic overclocking, called Turbo Core in the former case and Turbo Boost in the latter. The idea is simple: with adequate cooling, the processor raises its clock frequency by several hundred megahertz under load. For example, the Core i5-6400 runs at 2.7 GHz, but with Turbo Boost active this can temporarily rise to 3.3 GHz. That is exactly 600 MHz.
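The turbo headroom quoted above is simple arithmetic; a quick illustrative check (the function is ours, not part of any vendor API):

```python
def turbo_headroom_mhz(base_ghz: float, turbo_ghz: float) -> int:
    """Turbo Boost / Turbo Core headroom in MHz over the base clock."""
    return round((turbo_ghz - base_ghz) * 1000)

# Core i5-6400: 2.7 GHz base, 3.3 GHz maximum turbo
print(turbo_headroom_mhz(2.7, 3.3))  # 600
```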

It is important to remember: the higher the clock speed, the hotter the processor! So you need to take care of high-quality cooling of the “stone”

I'll take the NVIDIA GeForce GTX TITAN X, the most powerful single-GPU gaming card of its day, and the Intel Core i5-6600K, a mainstream model with an unlocked multiplier. Then I'll fire up Metro: Last Light, one of the most CPU-intensive games around. The graphics settings are chosen so that the frame rate is limited each time by the processor, not the video card: for the GeForce GTX TITAN X and Metro: Last Light, that means maximum quality but no anti-aliasing. I'll then measure the average FPS across the 2 GHz to 4.5 GHz range in Full HD, WQHD and Ultra HD resolutions.

Processor dependency effect

As is logical, the effect of processor dependence shows most clearly in light modes. In 1080p, the average FPS rises steadily along with frequency, and the results are striking: raising the Core i5-6600K from 2 GHz to 3 GHz lifted the Full HD frame rate from 70 FPS to 92 FPS, i.e. by 22 frames per second; going from 3 GHz to 4 GHz added another 13 FPS. It turns out that at these graphics settings the processor could fully load the GeForce GTX TITAN X in Full HD only from 4 GHz: from that mark onward, the frame rate stopped growing with CPU frequency.
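The relative gains implied by these numbers follow from the standard percent-change formula; a quick illustrative check:

```python
def fps_gain_pct(fps_before: float, fps_after: float) -> float:
    """Relative FPS gain, in percent."""
    return (fps_after - fps_before) / fps_before * 100

# Core i5-6600K in Metro: Last Light, Full HD, 2 GHz -> 3 GHz
print(round(fps_gain_pct(70, 92), 1))  # 31.4 (% faster)
```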

As the resolution increases, the effect of processor dependence becomes less noticeable: the frame rate now stops growing from 3.7 GHz. In Ultra HD resolution, we ran into the potential of the graphics adapter almost immediately.

There are many discrete graphics cards on the market, and it is customary to divide them into three segments: Low-end, Middle-end and High-end. Captain Obvious suggests that graphics adapters of different performance levels need processors of different frequencies.

The dependence of performance in games on the frequency of the central processor

Now I'll take the GeForce GTX 950, a representative of the upper Low-end (or lower Middle-end) segment, the polar opposite of the GeForce GTX TITAN X. It is an entry-level device, yet capable of a decent level of performance in modern games at Full HD. As the graphs below show, a processor running at 3 GHz fully loads the GeForce GTX 950 in both Full HD and WQHD. The difference from the GeForce GTX TITAN X is visible to the naked eye.

It is important to understand that the lighter the load on the video card's "shoulders", the higher the central processor's frequency needs to be. It is irrational to buy an adapter of GeForce GTX TITAN X level and then game at 1600x900.

Low-end video cards (GeForce GTX 950, Radeon R7 370) are satisfied with a central processor running at 3 GHz. Middle-end adapters (Radeon R9 280X, GeForce GTX 770) need 3.4-3.6 GHz; High-end flagships (Radeon R9 Fury, GeForce GTX 980 Ti), 3.7-4 GHz; and high-performance SLI/CrossFire setups, 4-4.5 GHz

Architecture

In reviews of each new generation of central processors, authors invariably state that the year-on-year difference in x86 performance is a meager 5-10%. It has become something of a tradition: neither AMD nor Intel has shown serious progress for a long time, and phrases like "I'll keep sitting on my Sandy Bridge and wait for next year" have become proverbial. As I said, in games the processor also has to crunch a large amount of data, so a reasonable question arises: how strongly does the effect of processor dependence show up in systems with different architectures?

For both AMD and Intel chips, one can draw up a list of modern architectures that are still popular. They are all current, and on the whole the difference in performance between them is not that big.

Let's take a pair of chips, the Core i7-4790K and Core i7-6700K, and run them at the same frequency. Processors on the Haswell architecture appeared, as we know, in the summer of 2013, and Skylake solutions in the summer of 2015. That is, exactly two years passed between "tock" updates of the processor line ("tock" is what Intel calls chips based on a completely new architecture).

Impact of architecture on game performance

As you can see, there is essentially no difference between the Core i7-4790K and Core i7-6700K at the same frequency. Skylake is ahead of Haswell in only three games out of ten: Far Cry 4 (by 12%), GTA V (by 6%) and Metro: Last Light (by 6%), i.e. in the same processor-dependent applications as before. And 6% is a trifle.

Comparison of processor architectures in games (NVIDIA GeForce GTX 980)

A few truisms: it is obviously better to build a gaming computer on the most modern platform. After all, what matters is not only the performance of the chips themselves but also the functionality of the platform as a whole.

With few exceptions, modern architectures perform the same in computer games. Owners of Sandy Bridge, Ivy Bridge and Haswell processors can rest easy. The situation is similar at AMD: the various iterations of the modular architecture (Bulldozer, Piledriver, Steamroller) deliver roughly the same level of performance in games

Cores and Threads

The third and perhaps the determining factor that limits the performance of a video card in games is the number of CPU cores. It's no coincidence that a growing number of games have a quad-core CPU in their minimum system requirements. Vivid examples include such modern hits as GTA V, Far Cry 4, The Witcher 3: Wild Hunt, and Assassin's Creed Unity.

As I said at the very beginning, the first quad-core processor appeared nine years ago. Now 6- and 8-core solutions are on sale, but 2- and 4-core models are still in use. Below is a table of markings for some popular AMD and Intel lines, divided by the number of "heads".

AMD hybrid processors (A4, A6, A8 and A10) are sometimes called 8-, 10- or even 12-core. The company's marketers simply add the built-in graphics module's units to the compute cores. There are indeed applications that can use heterogeneous computing (where the x86 cores and the integrated video process the same data together), but this scheme is not used in computer games: the compute part does its job and the graphics part does its own.

Some Intel processors (Core i3 and Core i7) have a certain number of cores but twice as many threads. This is the work of Hyper-Threading technology, first used in Pentium 4 chips. Threads and cores are slightly different things, but more on that a little later. In 2016 AMD will release processors based on the Zen architecture, and for the first time the "red" chips will gain a technology similar to Hyper-Threading.
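The core/thread relationship described above is trivial to express. A minimal illustrative sketch, assuming the usual two threads per core for Intel HT and AMD SMT (the function is ours):

```python
def thread_count(physical_cores: int, smt: bool) -> int:
    """Hardware threads exposed by a CPU, with or without HT/SMT.

    Assumes two threads per physical core, as in Intel
    Hyper-Threading and AMD SMT.
    """
    return physical_cores * (2 if smt else 1)

print(thread_count(4, smt=True))   # Core i7-6700K: 8 threads
print(thread_count(2, smt=True))   # Core i3: 4 threads
print(thread_count(4, smt=False))  # Core i5: 4 threads
```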

In fact, the Core 2 Quad on the Kentsfield core is not a full-fledged quad-core: it is based on two Conroe dies placed together in a single LGA775 package

Let's run a small experiment. I took 10 popular games. Granted, such a small number of applications is not enough to claim with 100% certainty that the effect of processor dependence has been fully studied, but the list includes only hits that clearly demonstrate the trends of modern game development. The graphics settings were chosen so that the results would not run into the video card's limits: for the GeForce GTX TITAN X, that means maximum quality (without anti-aliasing) at Full HD. The choice of this adapter is obvious: if the processor can fully load a GeForce GTX TITAN X, it will cope with any other video card. The bench used the top Core i7-5960X for the LGA2011-v3 platform. Testing was carried out in four modes, with only 2, 4, 6, or all 8 cores active; Hyper-Threading was disabled. Each mode was also tested at two frequencies: the nominal 3.3 GHz and an overclocked 4.3 GHz.
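The test grid described above (core counts crossed with frequencies, HT off throughout) is small enough to enumerate mechanically; an illustrative sketch of the methodology:

```python
from itertools import product

# Test matrix used here: active core counts x CPU frequencies (GHz),
# with Hyper-Threading disabled in every run.
core_counts = (2, 4, 6, 8)
frequencies = (3.3, 4.3)

configs = list(product(core_counts, frequencies))
print(len(configs))  # 8 runs per game
for cores, ghz in configs:
    print(f"{cores} cores @ {ghz} GHz")
```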

Processor dependence in GTA V

GTA V is one of the few modern games that uses all eight processor cores, so it can be called the most processor-dependent of all. On the other hand, the difference between six and eight cores was not that impressive. Judging by the results, two cores fall far behind all the other modes: the game stutters, and many textures simply fail to draw. The four-core bench performs noticeably better, trailing the six-core setup by only 6.9% and the eight-core one by 11%. Whether the game is worth the candle here is up to you, but GTA V clearly demonstrates how the number of processor cores affects the video card's performance in games.

The vast majority of games behave similarly. In seven out of ten applications, the two-core system proved processor-dependent, i.e. the FPS level was limited by the central processor. In three out of ten games, the six-core bench showed an advantage over the quad-core one, though the difference can hardly be called significant. Far Cry 4 turned out to be the most radical: it simply refused to start on the two-core system.

The gain from six and eight cores was in most cases either too small or absent altogether.

Processor dependence in The Witcher 3: Wild Hunt

Three games proved loyal to the dual-core system: The Witcher 3, Assassin's Creed Unity and Tomb Raider. They showed the same results in all modes.

For those who are interested, I will give a table with the full test results.

Performance of multi-core systems in games

Four cores is the optimal number today. It is equally obvious that a gaming computer should not be built with a dual-core processor: in 2015, such a "stone" is the system's bottleneck

We have dealt with cores. The test results clearly show that in most cases four "heads" in a processor are better than two. Meanwhile, some Intel models (Core i3 and Core i7) support Hyper-Threading technology. Without going into details: such chips have a certain number of physical cores and twice as many virtual ones. Hyper-Threading is certainly useful in ordinary applications, but how does the technology fare in games? The question is especially relevant for the Core i3 line, nominally dual-core solutions.

To determine the effectiveness of multithreading in games, I assembled two test benches: one with a Core i3-4130 and one with a Core i7-6700K. A GeForce GTX TITAN X graphics card was used in both cases.

Core i3 Hyper-Threading Efficiency

In almost all games, Hyper-Threading affected graphics subsystem performance, naturally for the better, and in some cases the difference was enormous. In The Witcher 3, for example, the frame rate increased by 36.4%. True, without Hyper-Threading this game exhibited nasty freezes time and again; no such problems were observed with the Core i7-5960X.

As for the quad-core Core i7 with Hyper-Threading, the technology made itself felt in only two games out of ten, GTA V and Metro: Last Light, where it also noticeably raised the minimum FPS. Overall, the Core i7-6700K with Hyper-Threading was 6.6% faster in GTA V and 9.7% faster in Metro: Last Light.

Hyper-Threading in the Core i3 really pulls its weight, especially if the system requirements call for a quad-core processor. In the Core i7's case, however, the gaming performance gain is not so significant.

Cache

We have covered the main parameters of the central processor. Each processor also carries a certain amount of cache: modern solutions use up to four levels of this memory. The level 1 and 2 caches are, as a rule, determined by the chip's architecture, while the level 3 cache can vary from model to model. A small reference table is given below.

So, the more productive Core i7 processors have 8 MB of L3 cache, while the slower Core i5s have 6 MB. Will those 2 MB affect gaming performance?

The Broadwell family and some Haswell processors use 128 MB of eDRAM (a level 4 cache), which can seriously speed up the system in some games.

This is easy to check: take two processors from the Core i5 and Core i7 lines, set them to the same frequency and disable Hyper-Threading. As a result, of the nine games tested, only F1 2015 showed a noticeable difference, 7.4%. The rest of the 3D entertainment did not react in any way to the Core i5-6600K's 2 MB deficit of L3 cache.

Impact of L3 cache on gaming performance

The difference in L3 cache between Core i5 and Core i7 processors in most cases does not affect system performance in modern games

AMD or Intel?

All the tests discussed above involved Intel processors, but that does not mean we do not consider AMD solutions as the basis of a gaming computer. Below are test results for the FX-6350 chip, used in AMD's highest-performing AM3+ platform, with four and six cores active. Unfortunately, I did not have an 8-core AMD "stone" at my disposal.

Comparison of AMD and Intel in GTA V

GTA V has already established itself as the most processor-intensive game. With four cores active, the AMD system's average FPS turned out higher than, for example, a Core i3's (without Hyper-Threading), and the image rendered smoothly, without slowdowns. In all other cases, however, the Intel cores were consistently faster, and the difference between the processors is significant.

Below is a table with full testing of the AMD FX processor.

Processor dependence in an AMD system

A noticeable difference between AMD and Intel is absent in only two games: The Witcher 3 and Assassin's Creed Unity. On the whole, the results follow logic perfectly and reflect the real balance of power in the CPU market: Intel cores are noticeably more powerful, including in games. Four AMD cores compete with two Intel cores, and even then average FPS is often higher on the latter. Six AMD cores compete with the four threads of a Core i3. Logically, the eight "heads" of the FX-8000/9000 should challenge the Core i5. AMD's cores are quite deservedly called "half-cores"; such are the features of the modular architecture.

The conclusion is banal: Intel solutions are better suited for games. Among budget solutions (Athlon X4, FX-4000, A8, Pentium, Celeron), however, AMD products are preferable: testing showed that four slower cores handle CPU-intensive games better than two faster Intel cores. In the middle and high price ranges (Core i3, Core i5, Core i7, A10, FX-6000, FX-8000, FX-9000), Intel solutions are once again preferable

DirectX 12

As mentioned at the very beginning of the article, DirectX 12 became available to game developers with the release of Windows 10. The architecture of DirectX 12 finally set the direction of modern game development: developers came to need low-level programming interfaces. The new API's main task is to use the system's hardware rationally, which means employing all of the processor's compute threads, general-purpose computing on the GPU, and direct access to the graphics adapter's resources.

Windows 10 has only just arrived, but applications supporting DirectX 12 already exist. Futuremark, for example, has built the API Overhead subtest into 3DMark. This test can measure a system's performance not only with the DirectX 12 API but also with AMD Mantle. The way it works is simple: DirectX 11 imposes limits on the number of draw commands the processor can issue, while DirectX 12 and Mantle ease this problem by allowing far more draw calls. During the test, an ever-greater number of objects is displayed until the graphics adapter can no longer cope and the frame rate falls below 30 FPS. For testing, I used a bench with a Core i7-5960X processor and a Radeon R9 NANO video card. The results proved very interesting.
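The ramp-up logic of the Overhead test can be modeled with a toy sketch. The FPS curves below are invented purely for illustration (they are not measured data); only the "increase objects until FPS drops below 30" procedure mirrors the test's description:

```python
def max_objects(fps_model, fps_floor=30, step=1000):
    """Ramp up the object (draw-call) count until the frame rate
    drops below the floor; return the last count that still held it.

    fps_model is any function mapping object count -> FPS.
    """
    n = 0
    while fps_model(n + step) >= fps_floor:
        n += step
    return n

# Toy FPS models: a low-level API with cheap draw calls sustains
# roughly an order of magnitude more objects before hitting 30 FPS.
dx11 = lambda n: 120 - n / 200    # heavy per-call overhead
dx12 = lambda n: 120 - n / 2000   # ~10x cheaper draw calls

print(max_objects(dx11))  # 18000
print(max_objects(dx12))  # 180000
```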

Notably, in the DirectX 11 runs, changing the number of CPU cores has almost no effect on the overall result. With DirectX 12 and Mantle, the picture changes dramatically. First, the difference between DirectX 11 and the low-level APIs is simply cosmic (roughly an order of magnitude). Second, the number of CPU "heads" significantly affects the final result, especially when moving from two cores to four and from four to six: in the first case the difference approaches two-fold. At the same time, there is little difference between six cores and eight cores with sixteen threads.

As you can see, the potential of DirectX 12 and Mantle (in the 3DMark benchmark) is simply enormous. But don't forget that this is synthetics; nobody plays benchmarks. In reality, the benefit of the new low-level APIs can only be judged in actual computer entertainment.

The first PC games to support DirectX 12, Ashes of the Singularity and Fable Legends, are already on the horizon and are in active beta testing.

Processors are usually tested in tandem with top-end video cards of GTX 1080 Ti or Titan X level. Such tests show off the "stones'" capabilities well, but do not answer what to buy for simpler systems. We ordered three Coffee Lake "stones" from Citylink and prepared a computer around a 1070 Ti Strix.

Test bench

Let's start with the computer. The base is the ASUS TUF Z370-Pro, a mid-segment board with a solid power delivery system, a good set of ports and a flexible BIOS. Why TUF and not Strix? We wanted a break from RGB lighting while still getting a decent feature set, well-implemented audio circuitry, DTS support and fan control.

Specifications: ASUS TUF Z370-PRO GAMING
Chipset: Intel Z370
Socket: LGA1151
Form factor: ATX (305 x 244 mm)
RAM: 4x DIMM, DDR4-4000, up to 64 GB
PCI slots: 3x PCIe x16, 3x PCIe x1
Disk subsystem: 2x M.2, 6x SATA III 6 Gb/s
Sound subsystem: 7.1 HD (Realtek ALC887)
Network: 1 Gb Ethernet (Intel I219V)
Rear I/O panel: PS/2, DVI-D, HDMI, RJ45, 2x USB 3.1 Type-A, 4x USB 3.0, 2x USB 2.0, Optical S/PDIF, 5x 3.5mm audio
Price as of February 2018: 11,500 rubles ($205)

A DeepCool MAELSTROM 120K AIO liquid cooler was installed to cool the "stones". It suits both the top i5 and i7 and the i3. The Intel chips do run hot, reaching 71°C under load.

The case is spacious, comes with a pair of stock fans, and is designed to accept two liquid-cooling radiators. Note that the bundled fans sit on the front panel, so for an air-cooled build you would either have to relocate one of them or buy another.

For the 1070 Ti we took the ASUS Strix version. This series has been covered more than once, so we will only note the key points: the card is cooled by an aluminum heatsink with three fans, the main components contact it through thermal pads, and the GPU boosts to 1962 MHz against the reference 1683 MHz while staying within 53 °C.

Finally, power was handled by a 650 W Seasonic unit: cool-running and highly efficient. Anticipating comments along the lines of "why such an expensive PSU?", let's say right away: the computer would run fine on a 2,500-ruble FSP unit, but we value reliability and stability. If you don't like this choice, we don't insist.

Processors

And now for the tests. We ended up with a near-top system with a budget of about 100 thousand rubles. "About", because the graphics card is counted at its recommended price, and if you don't chase quality, flexibility, and maximum frequencies, you can save on the chipset, memory, and power supply. But that's not the point. Let's see which processor suits such a computer.

So, we have three chips on hand: the i3-8350K, i5-8600K, and i7-8700K. All were tested at stock clocks and in total went through seven gaming and thirteen processor tests, covering both synthetic and real applications. The results are interesting.

CPU | Core i7-8700K | Core i5-8600K | Core i3-8350K
Microarchitecture | Coffee Lake | Coffee Lake | Coffee Lake
Process technology | 14 nm | 14 nm | 14 nm
Socket | LGA1151 | LGA1151 | LGA1151
Cores/threads | 6/12 | 6/6 | 4/4
L3 cache | 12 MB | 9 MB | 8 MB
Frequency | 3.7-4.7 GHz | 3.6-4.3 GHz | 4.0 GHz
Memory channels | 2 | 2 | 2
Memory type | DDR4-2666 | DDR4-2666 | DDR4-2666
PCI Express lanes | 16 | 16 | 16
Thermal package (TDP) | 95 W | 95 W | 91 W
Price for February 2018 | 28,000 rubles ($500) | 19,390 rubles ($345) | 11,210 rubles ($200)
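The prices above, combined with the multi-core Cinebench R15 scores from our processor tests, make it easy to work out value for money. A quick sketch (prices in rubles, scores as measured in this article):

```python
# Rubles per Cinebench R15 point: a simple price/performance metric.
# Prices (Feb 2018) and scores are taken from the tables in this article.
chips = {
    "Core i7-8700K": {"price": 28000, "cb15": 1543},
    "Core i5-8600K": {"price": 19390, "cb15": 1059},
    "Core i3-8350K": {"price": 11210, "cb15": 678},
}

value = {name: d["price"] / d["cb15"] for name, d in chips.items()}
for name, v in sorted(value.items(), key=lambda kv: kv[1]):
    print(f"{name}: {v:.2f} rub per Cinebench point")
```

By this metric the i3-8350K is the cheapest per point (about 16.5 rubles), while the i5 and i7 land close together at around 18 rubles per point.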

With the 1070 Ti there is little difference between them in games. This means that, for the first time in a long while, a Core i3 can be bought for a purely gaming system, even one with a powerful graphics card.

The conclusion is simple: for a gaming computer in the 80-100 thousand ruble range, a Core i3 is enough. The older processors are worth buying if you also care about productivity workloads. Which model to take is up to you; the processor tests below show how the three compare.

Once again: the choice in favor of the i3 applies only to systems with graphics cards up to the GTX 1080 level. With a 1080 Ti or Titan X, the older Core i5 and i7 pull ahead, though this can be partly offset by overclocking. All three chips are unlocked, and we squeezed 4.4 GHz out of that same i3 and 4.7 GHz out of the i7.

CPU tests
Core i7-8700K / Core i5-8600K / Core i3-8350K

3ds Max 2017, scene rendering (V-Ray), s (less is better): 180 / 239 / 387
Photoshop CS6, filter overlay, s (less is better): 135 / 164 / 216
MediaCoder, MPEG-2 to MPEG-4 (H.264) encoding, s (less is better): 113 / 163 / 183
Cinebench R15, points (more is better): 1543 / 1059 / 678
7-Zip, rating, MIPS (more is better): 43138 / 29197 / 18764
WinRAR 5.10, archiving speed, KB/s (more is better): 19533 / 10318 / 6903
Corona 1.3, render time, s (less is better): 129 / 212 / 343
V-Ray Benchmark, render time, s (less is better): 82 / 114 / 182
ZBrush 4R7 P3, render time (Best, 4x SS), s (less is better): 94 / 132 / 200
x265 benchmark, encoding time, s (less is better): 39 / 45 / 71
CPU tests (continued)
Core i7-8700K / Core i5-8600K / Core i3-8350K

SPECwpc 2.1, performance index (more is better):
  Media and Entertainment: 3.45 / 2.84 / 2.65
  Product Development: 2.31 / 1.81 / 1.67
SVPmark 3.0.3, performance index (more is better):
  Video decoding: 36 / 27 / 18
  Vector search: 3.34 / 2.53 / 1.6
  Frame composition: 6.27 / 5.88 / 4.42
Geekbench 4.2.0, performance index (more is better):
  Multi-core CPU: 26940 / 22573 / 15785
  AES (multi-core): 15421 / 16771 / 16743
Game tests
Results in average FPS (more is better).

Battlefield 1 (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 102 / 102 / 102
2560x1440, Ultra: 91 / 92 / 91
1920x1080, High: 141 / 139 / 137
1920x1080, Ultra: 126 / 124 / 125

Total War: WARHAMMER II (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 72 / 72 / 72
2560x1440, Ultra: 55 / 55 / 56
1920x1080, High: 113 / 113 / 113
1920x1080, Ultra: 81 / 80 / 82

For Honor (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 105 / 105 / 105
2560x1440, Very High: 81 / 81 / 81
1920x1080, High: 167 / 166 / 167
1920x1080, Very High: 129 / 129 / 129

Tom Clancy's Ghost Recon: Wildlands (i7-8700K / i5-8600K / i3-8350K)
2560x1440, Very High: 67 / 66 / 67
2560x1440, Ultra: 44 / 45 / 45
1920x1080, Very High: 89 / 89 / 90
1920x1080, Ultra: 57 / 58 / 58

DiRT 4 (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 163 / 136 / 134
2560x1440, Ultra: 111 / 97 / 96
1920x1080, High: 204 / 170 / 170
1920x1080, Ultra: 147 / 135 / 133

PLAYERUNKNOWN'S BATTLEGROUNDS (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 104 / 106 / 98
2560x1440, Ultra: 71 / 71 / 71
1920x1080, High: 141 / 142 / 143
1920x1080, Ultra: 113 / 104 / 109

Mass Effect: Andromeda (i7-8700K / i5-8600K / i3-8350K)
2560x1440, High: 94 / 98 / 96
2560x1440, Ultra: 65 / 64 / 64
1920x1080, High: 100 / 102 / 100
1920x1080, Ultra: 96 / 95 / 96
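The "little difference in games" claim is easy to quantify from the game tables above. A quick sketch averaging the 1920x1080 results at each game's top tested preset (Ultra, or Very High where that was the maximum tested):

```python
# Average 1920x1080 FPS at each game's highest tested preset,
# using the numbers from the game tables in this article.
fps = {  # game: (Core i7-8700K, Core i5-8600K, Core i3-8350K)
    "Battlefield 1":           (126, 124, 125),
    "Total War: WARHAMMER II": (81, 80, 82),
    "For Honor":               (129, 129, 129),
    "Ghost Recon: Wildlands":  (57, 58, 58),
    "DiRT 4":                  (147, 135, 133),
    "PUBG":                    (113, 104, 109),
    "Mass Effect: Andromeda":  (96, 95, 96),
}

# Transpose: collect each CPU's column, then average it.
averages = [sum(col) / len(fps) for col in zip(*fps.values())]
i7_avg, i5_avg, i3_avg = averages
spread = (max(averages) - min(averages)) / max(averages) * 100
print(f"i7: {i7_avg:.1f}, i5: {i5_avg:.1f}, i3: {i3_avg:.1f} fps "
      f"(spread {spread:.1f}%)")
```

The averages land in the 103-107 fps range, a spread of roughly 3%, and the i3 even edges out the i5; DiRT 4 is the one title where the i7 pulls noticeably ahead.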
