
What is a GPU and how does it work? Graphics processors in solving modern IT problems

Graphics processing units (GPUs) are a prime example of how a technology designed for graphics processing tasks has expanded into the unrelated realm of high performance computing. Today's GPUs are at the heart of many of the most complex projects in machine learning and data analysis. In our overview article, we'll explore how Selectel customers are using GPU hardware and think about the future of data science and computing devices with faculty from the Yandex School of Data Analysis.

GPUs have changed a lot over the last ten years. Besides the huge increase in performance, devices have split up by type of use. Video cards for home gaming systems and virtual reality installations have become a separate category, and powerful, highly specialized devices have appeared for server systems: one of the leading accelerators is the NVIDIA Tesla P100, designed specifically for industrial use in data centers. Beyond GPUs, research is also actively under way on a new type of processor that mimics the workings of the brain. An example is the Kirin 970 single-chip platform with its own neuromorphic processor for tasks related to neural networks and pattern recognition.

This situation makes us think about the following questions:

  • Why has the field of data analysis and machine learning become so popular?
  • How did GPUs come to dominate the data-intensive hardware market?
  • What research in the field of data analysis will be the most promising in the near future?

Let's work through these questions in order, starting with the first simple video processors and ending with modern high-performance devices.

The era of the GPU

First, let's recall what a GPU is. A GPU (Graphics Processing Unit) is a graphics processor widely used in desktop and server systems. Its distinctive feature is a focus on massively parallel computing. By contrast, the architecture of the other main computing module, the CPU (Central Processing Unit), is designed for sequential data processing. While the number of cores in a conventional CPU is measured in tens, in a GPU they number in the thousands, which imposes restrictions on the types of instructions executed but ensures high performance in tasks that parallelize well.
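
To make the contrast concrete, here is a minimal sketch (ours, not from the original article) of the same elementwise task written two ways: as a sequential loop of the kind a single CPU core executes, and as a single data-parallel operation, the pattern that thousands of GPU cores are designed to run simultaneously. NumPy here only stands in for the idea; it still runs on the CPU.

import numpy as np

x = np.random.rand(1_000_000)

# CPU-style sequential processing: one element after another
squares_sequential = [value * value for value in x]

# Data-parallel formulation: one operation expressed over the whole array at once;
# this is the shape of workload that maps well onto thousands of GPU cores
squares_parallel = x * x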

First steps

The early development of video processors was closely tied to the growing need for a separate computing device to process two- and three-dimensional graphics. Before dedicated video controller chips appeared in the 1970s, image output was built from discrete logic, which meant higher power consumption and large printed circuit boards. Specialized chips allowed the development of graphics hardware to branch off into a direction of its own.

The next revolutionary event was the appearance of a new class of more complex and multifunctional devices: video processors. In 1996, 3dfx Interactive released the Voodoo Graphics chipset, which quickly captured 85% of the dedicated graphics market and became the leader in 3D graphics of the time. After a series of unsuccessful decisions by the company's management, among them the purchase of video card manufacturer STB, 3dfx lost its lead to NVIDIA and ATI (later AMD) and declared bankruptcy in 2002.

General GPU computing

In 2006, NVIDIA announced the GeForce 8 series product line, which launched a new class of devices designed for general-purpose computing on GPUs (GPGPU). During development, NVIDIA concluded that many cores running at a lower frequency are more efficient for parallel workloads than a few faster cores. The new generation of video processors provided parallel computing support not only for processing video streams, but also for problems in machine learning, linear algebra, statistics, and other scientific or commercial tasks.

Recognized leader

The difference in the tasks originally assigned to the CPU and the GPU led to significant differences in their architectures: high frequency versus many cores. For graphics processors, this laid the foundation of a computing potential that is only now being fully realized. Video processors with an impressive number of weaker compute cores do an excellent job with parallel computing, while the central processor, historically designed for sequential tasks, remains the best in its own field.

For example, let's compare the performance of a central and a graphics processor on a task common in neural networks: multiplying high-order matrices. We'll choose the following devices for testing:

  • CPU. Intel Xeon E5-2680 v4 - 28 threads with Hyper-Threading, 2.4 GHz;
  • GPU. NVIDIA GTX 1080 - 2560 CUDA cores, 1607 MHz, 8 GB GDDR5X.

Let's use an example of computing matrix multiplication on CPU and GPU in Jupyter Notebook:
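
The original notebook is not reproduced here, so below is a minimal sketch of the same measurement. It assumes NumPy and PyTorch with a CUDA-capable GPU; the original article may well have used a different framework.

import time
import numpy as np
import torch  # assumed here; requires a CUDA build of PyTorch and a CUDA-capable GPU

def time_cpu(n):
    # Multiply two random n x n matrices held in ordinary RAM on the CPU
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    start = time.time()
    np.matmul(a, b)
    return time.time() - start

def time_gpu_with_transfer(n):
    # Create the data in RAM, transfer it to GPU memory, then multiply (the "orange" case below)
    start = time.time()
    a = torch.from_numpy(np.random.rand(n, n)).cuda()
    b = torch.from_numpy(np.random.rand(n, n)).cuda()
    torch.matmul(a, b)
    torch.cuda.synchronize()  # wait for the kernel to finish before stopping the clock
    return time.time() - start

def time_gpu_on_device(n):
    # Data already resides in the video card's memory (the "green" case below)
    a = torch.rand(n, n, device="cuda")
    b = torch.rand(n, n, device="cuda")
    torch.cuda.synchronize()
    start = time.time()
    torch.matmul(a, b)
    torch.cuda.synchronize()
    return time.time() - start

for n in (500, 1000, 2000, 4000):
    print(f"n={n}: CPU {time_cpu(n):.3f}s, "
          f"GPU with transfer {time_gpu_with_transfer(n):.3f}s, "
          f"GPU only {time_gpu_on_device(n):.3f}s")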

In the code above, we measure the time it takes to multiply matrices of the same order on the CPU and on the GPU ("Execution Time"). The data can be presented as a graph in which the horizontal axis shows the order of the multiplied matrices and the vertical axis shows the execution time in seconds:

The line highlighted in orange shows the time it takes to create the data in ordinary RAM, transfer it to GPU memory, and then perform the calculation. The green line shows the time it takes to process data that was already generated in the video card's memory (with no transfer from RAM). The blue line is the computation time on the CPU. Matrices of order below 1000 are multiplied on the GPU and the CPU in almost the same time. The performance difference shows up clearly with matrices larger than 2000 by 2000, when the computation time on the CPU jumps to around a second while the GPU stays close to zero.

More complex, practical tasks are also solved more efficiently on a machine with GPUs than on one without them. Since the problems our customers solve on GPU hardware are very diverse, we decided to find out the most popular use cases.

Who at Selectel lives well with a GPU?

The first option that comes to mind, and it turns out to be a correct guess, is mining. Curiously, though, some customers use it only as an auxiliary way to load their equipment to the maximum: when renting a dedicated server with video cards, the time free from the main workload is used to mine cryptocurrencies that do not require specialized rigs (farms).

Tasks that have become classics of a sort, graphics processing and rendering, invariably find their place on Selectel servers with graphics accelerators. Using high-performance equipment for such tasks is more efficient than setting up dedicated workstations with video cards.

During conversations with our clients we also met representatives of the Yandex School of Data Analysis, which uses Selectel capacity to run test learning environments. We decided to learn more about what its students and teachers are doing, which areas of machine learning are popular now, and what the future holds for the industry once young specialists join the ranks of leading organizations or launch their own startups.

Data science

Perhaps some of our readers have not yet heard the phrases "neural networks" or "machine learning". Setting aside the marketing spin around these words, what remains is the nascent and promising field of data science.

The modern approach to working with data includes several main areas:

  • Big data. The main problem in this area is the enormous volume of information that cannot be processed on a single server. From the infrastructure point of view, this requires solving the problems of building cluster systems, scalability, fault tolerance, and distributed data storage;
  • Resource-intensive tasks (machine learning, deep learning, and others). Here the question is one of high-performance computing that requires a large amount of RAM and processor resources. Systems with graphics accelerators are actively used for such tasks.

The boundary between these two directions is gradually blurring: the main tools for working with big data (Hadoop, Spark) are introducing support for computing on the GPU, and machine learning tasks cover new areas and require larger amounts of data. Teachers and students of the School of Data Analysis will help us to understand in more detail.

It is difficult to overestimate the importance of competent work with data and the appropriate implementation of advanced analytical tools. This is not even about big data, their “lakes” or “rivers”, but about intelligent interaction with information. What is happening now is a unique situation: we can collect a wide variety of information and use advanced tools and services for in-depth analysis. Businesses implement such technologies not only to get advanced analytics, but also to create a unique product in any industry. It is the last point that largely shapes and stimulates the growth of the data analysis industry.

New direction

Information surrounds us everywhere: from the logs of Internet companies and banking transactions to evidence from experiments at the Large Hadron Collider. The ability to work with this data can bring millions in profit and provide answers to fundamental questions about the structure of the Universe. That is why data analysis has become a separate field of research for both the business and scientific communities.

The School of Data Analysis trains top specialists and scientists who will in the future become a major source of scientific and industrial advances in this field. The development of the industry affects us as an infrastructure provider, since more and more customers request server configurations for data analysis tasks.

The specifics of our customers' tasks determine what equipment we should offer and in what direction to develop our product line. Together with Stanislav Fedotov and Oleg Ivchenko, we interviewed students and teachers of the School of Data Analysis and found out which technologies they use to solve practical problems.

Data analysis technologies

During their training, students progress from the basics (higher mathematics, algorithms, and programming) to the most advanced areas of machine learning. We collected information on the areas that use GPU servers:

  • Deep learning;
  • Reinforcement learning;
  • Computer vision;
  • Natural language processing (automatic text processing).

Students use specialized tools in their study assignments and research. Some libraries are designed to bring data to the desired form, others are designed to work with a specific type of information, such as text or images. Deep learning is one of the most complex areas in data analysis that actively uses neural networks. We decided to find out which frameworks teachers and students use to work with neural networks.

The tools mentioned enjoy varying levels of support from their creators, but they all continue to be actively used for study and work. Many of them require high-performance hardware to run tasks in a reasonable time.

Further development and projects

Like any science, data analysis will keep changing. The experience students gain today will undoubtedly form the basis of future developments. It is therefore worth noting the program's strong practical orientation: some students, during or after their studies, go to work at Yandex and apply their knowledge on real services (search, computer vision, speech recognition, and others).

We talked about the future of data analysis with teachers from the School of Data Analysis, who shared with us their vision of the development of data science.

According to Vlad Shakhuro, who teaches the "Image and Video Analysis" course, the most interesting problems in computer vision are security in crowded places, autonomous driving, and augmented reality applications. Solving them requires the ability to analyze video data well and, above all, to develop algorithms for detecting and tracking objects, recognizing people by their faces, and three-dimensional reconstruction of the observed scene. Victor Lempitsky, who leads the Deep Learning course, singles out autoencoders as well as generative adversarial networks in his area.

One of the mentors of the School of Data Analysis shares his opinion on the spread and start of mass use of machine learning:

“Machine learning has gone from being the domain of a few obsessive researchers to being just another tool for the average developer. Previously (for example, in 2012), people wrote low-level code to train convolutional networks on a pair of video cards. Now, anyone can in a matter of hours:

  • download the weights of an already trained neural network (for example, in keras);
  • use it to make a solution for your task (fine-tuning, zero-shot learning);
  • embed it in your website or mobile app (tensorflow / caffe 2).

Many large companies and startups have already benefited from this strategy (Prisma, for example), but there are still more tasks waiting to be discovered and solved. And maybe the whole machine / deep learning story will someday become as commonplace as Python or Excel are now.”
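
To illustrate the workflow the mentor describes, here is a rough sketch (our example, not code from the School) of taking pretrained ImageNet weights in Keras and fine-tuning a small head for a hypothetical two-class task:

import tensorflow as tf

# Download a network already trained on ImageNet and freeze its convolutional features
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False

# Attach a new task-specific head and train only that part (fine-tuning)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=3)  # train_images / train_labels are placeholders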

No one can accurately predict the technology of the future today, but when there is a certain vector of movement, you can understand what should be studied now. And there are a lot of opportunities for this in the modern world.

Opportunities for beginners

The study of data analysis is limited by high entry requirements: extensive knowledge of mathematics and algorithms and the ability to program. Truly serious machine learning tasks also require specialized hardware. And for those who want to learn more about the theoretical side of data science, the School of Data Analysis, together with the Higher School of Economics, has launched an online course.

Instead of a conclusion

The growth of the GPU market is driven by growing interest in the capabilities of such devices. GPUs are used in home gaming systems, in rendering and video processing, and wherever general-purpose high-performance computing is needed. Practical applications of data analysis will penetrate ever deeper into our daily lives, and such programs run most efficiently on the GPU.

We thank our clients, as well as teachers and students of the School of Data Analysis for the joint preparation of the material, and invite our readers to get to know them better.

And for those already experienced and sophisticated in machine learning, data analysis and beyond, we suggest taking a look at Selectel's rental servers with graphics accelerators: from the simple GTX 1080 to the Tesla P100 and K80 for the most demanding tasks.

What do we look at first when choosing a smartphone? Setting cost aside for a moment, the first thing we choose is, of course, the screen size. Then we are interested in the camera, the amount of RAM, the number of cores and the frequency of the processor. And here everything is simple: the more, the better. However, modern devices also use a graphics processor, also known as a GPU. What it is, how it works, and why it is important to know about, we will describe below.

GPU (Graphics Processing Unit) is a processor designed exclusively for graphics processing operations and floating point calculations. It primarily exists to ease the work of the main processor when it comes to resource-intensive games or applications with 3D graphics. When you play a game, the GPU is responsible for creating graphics, colors, and textures, while the CPU can handle artificial intelligence or game mechanics.

The architecture of a GPU is not that different from a CPU's, but it is optimized for efficient work with graphics. If you force the GPU to do any other kind of calculation, it will perform poorly.


Video cards that are installed separately and run at high power exist only in laptops and desktop computers. Android devices use integrated graphics as part of what we call an SoC (System-on-a-Chip). For example, a smartphone's processor may include an integrated Adreno 430 GPU. The memory it uses for its work is system memory, whereas video cards in desktop PCs are given memory that is available only to them. True, hybrid chips also exist.

While a CPU has a few cores running at high clock speeds, a GPU has many cores running at lower speeds that mainly perform vertex and pixel calculations. Vertex processing mostly revolves around the coordinate system: the GPU handles geometric tasks by creating a three-dimensional space on the screen and letting objects move around in it.
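
As a toy, simplified illustration of the vertex stage (our 2D sketch, not actual shader code): the same transformation matrix is applied independently to every vertex, exactly the kind of work that spreads naturally across many cores.

import numpy as np

angle = np.radians(30)
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

vertices = np.random.rand(10_000, 2)   # 10,000 vertices of some hypothetical 2D model
transformed = vertices @ rotation.T    # one matrix multiply rotates every vertex at once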

Pixel processing is a more complex stage that requires a lot of computing power. Here the GPU overlays layers and applies effects, doing everything needed to create complex textures and realistic graphics. Once both stages are complete, the result is sent to the screen of your smartphone or tablet. All of this happens millions of times per second while you are playing a game.


Of course, this story about the work of the GPU is very superficial, but it is enough to get the right general idea and be able to keep up a conversation with comrades or an electronics seller, or understand why your device got so hot during the game. Later, we will definitely discuss the advantages of certain GPUs in working with specific games and tasks.

According to AndroidPit

This article explains that the GPU in a computer is the graphics processor, or, as many find it convenient to say, the video card. It can be integrated or discrete, and this determines what cooling and power supply you should choose.

Integrated GPU

An integrated video card is located on the motherboard or inside the processor. The fact that it is a GPU does not mean it can handle demanding games or high-quality movies: video cards of this type are designed for simple applications that do not require large resources. In addition, they do not consume much power.

As for memory, an integrated GPU has none of its own: it uses a share of system RAM, so its performance depends on the amount and frequency of that RAM.

Many users rely on a card of this type only until the drivers for a discrete graphics card are installed.

Discrete GPU

What is a discrete GPU in a computer? Unlike an integrated graphics processor, a discrete video card is a separate module that consists of the GPU itself, several heatsinks, cooling fans, memory chips, capacitors and, in higher-powered versions, water cooling.

Such video cards can be either gaming or office models. At NVIDIA, for example, they are distinguished by the model number: the GT630 is an office model, while the GTX660 is a gaming one. The first digit indicates the generation of the GPU, and the next two indicate the series. Numbering below the 50 series marks office hardware, while 50 to 90 marks gaming cards; the larger the number, the more powerful the chip used in the video card. The "X" in the prefix assigns the card to the gaming category, since such video cards have overclocking potential. They also require separate additional power, because they consume a lot of energy. This should give a general idea of what a GPU in a computer is.
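
A small sketch that applies the naming rules just described to NVIDIA-style model numbers (the rules are the article's simplification, not an official specification):

def classify_nvidia_model(model: str):
    # "GT630" -> generation 6, series 30 (office); "GTX660" -> generation 6, series 60 (gaming)
    digits = "".join(ch for ch in model if ch.isdigit())
    generation, series = int(digits[0]), int(digits[1:3])
    category = "gaming" if series >= 50 else "office"
    return generation, series, category

print(classify_nvidia_model("GT630"))   # (6, 30, 'office')
print(classify_nvidia_model("GTX660"))  # (6, 60, 'gaming')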

As for Radeon, the identification system is very simple. In the four-digit scheme, the first digit is the generation, the second is the series, and the last two digits indicate the model's position within the series, which is what separates office models from gaming ones.

Normal temperature of the GPU in the computer

For normal operation, every component must be kept at its own optimal temperature. As for the GPU, its operating temperature usually should not exceed 65 degrees Celsius. The chip can withstand heating up to 90 degrees, but it is better not to let it get there, otherwise the video chip's components begin to degrade.
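
If you want to check the temperature from a script rather than a monitoring utility, the sketch below assumes an NVIDIA card with the driver's nvidia-smi tool installed; other vendors have their own utilities.

import subprocess

# Ask the driver for the current GPU temperature in degrees Celsius
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)
temperature = int(result.stdout.strip().splitlines()[0])
print(f"GPU temperature: {temperature} C")
if temperature > 65:
    print("Running above the comfortable range mentioned above")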

Several parts of the video card are responsible for keeping the temperature normal: the thermal paste, the fans, the heatsinks and the power delivery system.

Thermal paste needs to be changed regularly, as it hardens over time and loses its ability to conduct heat. Replacing it does not take much time: just remove the remains of the old paste and carefully apply a new layer.

Another way to lower the GPU temperature is to choose the right fans. A gaming video card comes with one to three of them; the more fans, the better the heatsinks are cooled. On office models, manufacturers mostly install only a heatsink or a single fan.

Power for the GPU

Integrated GPUs do not require additional power, but discrete ones need a more powerful power supply. Office graphics cards will work fine with a 450-watt unit, while gaming graphics accelerators require a power supply of more than 500 watts. With the right choice you can fully unlock the potential of the video card, and the cooling system of a discrete card will also work better when it gets enough power.

Power plays an important role: without it, the graphics processor cannot put an image on the screen. To see how the video card appears in the system, open Device Manager (available from the Control Panel) and expand the "Display adapters" section. If a message such as "Device not recognized" is displayed, you need to install drivers for your graphics processor. After installing the drivers, the card model will be displayed correctly in the system.

The CPU and the GPU are very similar: both are made of millions of transistors and capable of performing thousands of operations per second. But what exactly is the difference between the CPU and the GPU?

What is a CPU?

CPU (Central Processing Unit) is the central processor, in other words, the "brain" of the computer. It is a collection of many millions of transistors that can perform complex calculations. A standard processor has one to four cores clocked at 1 to 4 GHz.

The CPU is a fairly powerful device that can handle any task on a computer. The number of cores and the clock speed are among its key characteristics.

What is a GPU?

GPU (Graphics Processing Unit) is a specialized type of microprocessor optimized for displaying graphics and for certain specific tasks. Its clock speed is significantly lower than a CPU's, but it usually has many more cores.

What is the difference between CPU and GPU?

The GPU can perform only a fraction of the operations a CPU can, but it does so at incredible speed. It uses its hundreds of cores to compute thousands of pixels in real time, which lets complex game graphics be rendered smoothly.

However, CPUs are more flexible than GPUs. Central processing units have a larger instruction set, so they can perform a wider range of tasks. CPUs operate at higher maximum frequencies and can control the input and output of all computer components. CPUs are capable of working with virtual memory, which is needed for modern operating systems, while GPUs are not.

A bit about GPU computing

Even though GPUs are best known for rendering video, they are technically capable of more. Graphics processing is just one type of repetitive, highly parallel task. Other tasks, such as Bitcoin mining or password cracking, rely on the same kinds of large data sets and mathematical operations. This is why many people use the GPU for "non-graphical" purposes.

Outcome

CPUs and GPUs have similar purposes but are optimized for different computing tasks. This is the difference between CPU and GPU. To work properly and efficiently, a computer must have both types of microprocessors.

Hello friends.

Do you like playing realistic games on your computer? Or watching a movie in a quality that shows every little detail? Then you should have an idea of what the GPU in a computer is. Don't know anything about it? My article will help you clear up this misunderstanding ;-).


GPU is not a graphics card

This combination of letters, unknown to many, stands for "graphics processing unit", that is, a graphics processor. It is what is responsible for rendering the picture on your hardware, and the better its characteristics, the better the image will be.

Always thought the video card performs these functions? You are right, of course, but the video card is a complex device, and its main component is precisely the graphics processor. A GPU can also exist separately from the video card. We will talk about this a little later.

GPU: not to be confused with CPU

Despite the similarity of the abbreviations, do not confuse the subject of our conversation with the CPU (Central Processing Unit). Yes, they are similar in both name and function, and the latter can also render graphics, although it is weaker at it. Still, they are completely different devices.

They differ in architecture. The CPU is a general-purpose device responsible for all the processes in the computer. To do this it needs just a few cores, with whose help it processes one task after another sequentially.

The GPU, in turn, was originally designed as a specialized device for rendering graphics and processing textures and complex images at high speed. For these purposes it was given a multi-threaded structure and many cores, so that it could work on large amounts of data simultaneously rather than sequentially.

Given this advantage, leading video adapter manufacturers have released models in which the GPU can take on part of the central processor's work. NVIDIA markets such devices as the GTX 10xx line, while its main competitor AMD uses the RX label.

Types of graphics processors

So that you can navigate the GPU market, I suggest that you familiarize yourself with the types of this device:

  • Discrete. Part of a video adapter. It connects to the motherboard through a dedicated connector (most often PCIe or AGP) and has its own video memory. Are you a demanding gamer or do you work with complex graphics editors? Take a discrete model.

  • Integrated (IGP). It used to be soldered onto the motherboard; now it is built into the central processor. Initially it was not suitable for realistic games and heavy graphics programs, but newer models cope with these tasks. Still, keep in mind that such chips are somewhat slower, because they have no RAM of their own and use the CPU's system memory.

  • Hybrid graphics. This is a 2-in-1 setup, when both the first and the second type of GPU are installed in the computer. Depending on the task at hand, one or the other takes over. There are also laptops in which both types of device can work at the same time.
  • External. As you might guess, this is a graphics processor located outside the computer. Most often this option is chosen by laptop owners who cannot fit a discrete video card into their hardware but really want decent graphics.

How to choose?

When choosing a video adapter for yourself, pay attention to the following characteristics:

  • Clock frequency. Specified in megahertz. The higher it is, the more information per second the device can process. That said, frequency is not the only thing that affects performance; architecture matters too.
  • The number of compute units. These are the shader units responsible for vertex, geometry, pixel and general-purpose calculations.

  • Fill rate. This parameter shows how fast the GPU can draw the picture. It comes in two kinds: pixel fill rate and texture fill rate (texel rate). The first depends on the number of ROP units in the processor, the second on the texture units (TMUs).

The latest GPU models usually have fewer ROP units. They write the pixels computed by the video adapter into buffers and blend them, an operation aptly called blending. TMUs sample and filter textures and other data needed for building the scene and for general calculations.
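
A rough sketch of the arithmetic implied above, with hypothetical unit counts and clock: theoretical pixel fill rate is roughly the ROP count times the core clock, and texel rate is the TMU count times the core clock.

rops, tmus, core_clock_mhz = 64, 160, 1600      # assumed example values, not a real card

pixel_fillrate_gps = rops * core_clock_mhz / 1000   # gigapixels per second
texel_rate_gts = tmus * core_clock_mhz / 1000       # gigatexels per second

print(f"{pixel_fillrate_gps:.1f} GP/s, {texel_rate_gts:.1f} GT/s")  # 102.4 GP/s, 256.0 GT/s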

Geometry blocks

Previously, no one paid attention to them, because games had simple geometry. This parameter began to matter after tessellation appeared in DirectX 11. Don't understand what I mean? Let's take it in order.

DirectX is an environment (a set of tools) for writing games. To help you navigate the topic, I will say that the latest version of the product is the 12th, released in 2015.

Tessellation is the subdivision of a surface into smaller parts so they can be filled with new detail, which increases the realism of the game.
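
A toy illustration of one tessellation step (a simplified sketch, not how DirectX implements it): a triangle is split into four smaller ones by connecting the midpoints of its edges, giving the engine more geometry to which detail can be added.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def tessellate(triangle):
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # one triangle becomes four smaller ones
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

print(tessellate(((0, 0), (1, 0), (0, 1))))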

Thus, if you want to plunge headlong into the atmosphere of Metro 2033, Crysis 2, HAWX 2 and the like, consider the number of geometry blocks when choosing a GPU.

Memory

Ready to get a new video card? Then you need to take into account a few characteristics of its memory:

  • Capacity. The importance of memory size is somewhat overrated, since not only the capacity but also the memory type and bus affect the card's performance.
  • Bus width. This is a more significant parameter: the wider the bus, the more information the memory can exchange with the chip in a given time. A minimum of 128 bits is needed for games.
  • Frequency. It also determines the memory throughput. Keep in mind, though, that memory on a 256-bit bus at 800 (3200 effective) MHz works more efficiently than memory on a 128-bit bus at 1000 (4000 effective) MHz; see the quick calculation after this list.
  • Type. I will not burden you with unnecessary detail and will only name the types that are optimal today: GDDR3 and GDDR5.
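
A quick sanity check of the claim in the frequency item, treating the numbers in parentheses as effective data rates: theoretical bandwidth is the bus width in bytes multiplied by the effective frequency.

def bandwidth_gb_per_s(bus_width_bits, effective_mhz):
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_per_s(256, 3200))  # 256-bit bus at 800 (3200 effective) MHz -> 102.4 GB/s
print(bandwidth_gb_per_s(128, 4000))  # 128-bit bus at 1000 (4000 effective) MHz -> 64.0 GB/s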

A word about cooling

Thinking of installing a powerful chip? Take care of additional cooling right away in the form of heatsinks and fans, and if you plan to regularly squeeze every last drop out of the device, consider a liquid cooling system.

In general, keep an eye on the temperature of your video card. A program such as GPU-Z can help you with this; besides temperature, it will tell you everything else about the device.

Of course, modern video cards have a protection system that is supposed to prevent overheating. The temperature limit differs between models; on average it is about 105°C, at which point the adapter shuts itself down. But it is better to spare an expensive device and provide auxiliary cooling.
