
How the T1 chip works in the Touch Bar and why it is needed. Chip key in a car: everything you need to know about it Chips come in several varieties

Today almost everyone owns a phone, a media player, a computer, a tablet, or some other device that contains integrated circuits, or chips. We have long taken these things for granted and rarely think about how much work and engineering goes into creating a single such chip, the first test sample, before conveyor lines and robotic systems replicate it in tens of thousands, hundreds of thousands, even millions of copies. In this article I will describe the difficult path the microprocessor industry has traveled, how it managed to survive, and the main stages ordinary quartz sand passes through on its way to becoming the silicon heart of your iPad, video card, or mobile phone.


A Brief History of an Economy within an Economy

Not knowing history means always being a child.
Cicero

The 20th century in the consciousness of mankind will remain one of the outstanding centuries. This is the century of the widespread introduction of electricity, grand discoveries, bloody wars, unprecedented revolutions in industry and, of course, the century that prepared humanity for the transition to the information society, with all its pros and cons. The basis of this society is a very simple device - a transistor, which allows you to amplify, generate and convert electrical signals.

In 1928, Julius Edgar Lilienfeld registered a patent in Germany on the operating principle of the field-effect transistor, and in 1934 the German physicist Oskar Heil patented a field-effect transistor, yet the MOS (metal-oxide-semiconductor) transistor was not actually manufactured until 1960. During the Second World War there was an urgent need for fast calculating machines that could encrypt and decrypt orders sent to the troops and, more importantly, break the keys to enemy directives (the British Colossus is a striking example). Work on the components of electronic machines continued in the post-war years, and in 1947 William Shockley, John Bardeen and Walter Brattain at Bell Labs created the first working bipolar transistor, for which they received the 1956 Nobel Prize in Physics "for research on semiconductors and the discovery of the transistor effect". Field-effect transistors operate on much simpler physical principles (the voltage applied to the gate either lets current flow or blocks it), but manufacturing a field-effect transistor is much harder than manufacturing a bipolar one (developing the theory of such a device took years), which is why the bipolar transistor reached practical realization first.

A copy of the world's first working transistor

The subsequent invention of the integrated circuit (1958, Jack Kilby and Robert Noyce) effectively predetermined the development of the microelectronics industry. A few years later Gordon Moore, while preparing a talk as head of R&D (research and development) at Fairchild Semiconductor, noticed an interesting empirical fact: the number of transistors on a chip doubles roughly every two years. In July 1968, Moore and Robert Noyce left Fairchild Semiconductor, the company they had helped create, and founded Intel Corporation, which became one of the titans of the modern microprocessor industry.

Moore's law, or rather empirical rule, to which adjustments have to be made today

Strictly speaking, Moore's law is not a law at all, just an empirical observation, and we periodically have to amend it so that it keeps describing the current state of the industry.

Within a very short period, some 20-30 years, microprocessors and the industry that produces them (refining quartz sand, growing single-crystal silicon, fabricating processors in clean rooms, etc.) became a kind of economy within an economy. Alongside the well-known Moore's law there is another observation: the cost of a microchip fabrication plant grows exponentially with the complexity of the circuits it produces. A simple example: an Intel factory producing chips on a 45 nm process technology (i.e., with a minimum feature size of about 45 nm) costs roughly $4 billion, while a similar factory on a 32 nm process costs about $5.5 billion. At the same time, a factory is expected to pay for itself in 3-4 years on average. For comparison, the market value of Intel itself in 2008 was $128 billion.

Companies with microchip production technologies using relevant technical processes

How to create a microchip. Theory

Most theories are just a translation of old thoughts into new terminology.
Grigory Landau

As we have seen, transistors come in two main types: field-effect and bipolar. Today bipolar transistors have largely given way to field-effect ones. So how does a field-effect transistor work?

A field-effect transistor consists of three main elements: the drain, the source, and the gate. The metal gate is separated from the current-carrying channel between source and drain by a so-called high-k material (a material with a high dielectric constant). Such a material, first, reliably insulates the gate from the channel through which current flows and, second, makes it possible to shrink the geometric dimensions of an individual chip element. Hafnium oxide and hafnium silicate, as well as compounds based on them, are currently used for this purpose.

A field-effect transistor works by creating a potential difference between the gate and the silicon crystal: depending on the sign of the applied voltage, current between drain and source either flows or does not, i.e. the gate's electric field either lets electrons travel from source to drain or deflects them away. This is the very foundation of what we are accustomed to calling microelectronics.

On the left is a schematic diagram of a field-effect transistor, on the right is a micrograph of a section of a field-effect transistor obtained using a transmission electron microscope.

The next question every reader will want to ask is: how do you create 3-nm-thick layers, "attach" drains, sources, and gates, and ultimately obtain a microprocessor? The procedure consists of several stages. The first is the special preparation of quartz sand: it is reduced with coke in arc furnaces, where currents of thousands of amperes heat the charge to about 1800°C, yielding so-called technical (metallurgical-grade) silicon:

SiO₂ + 2C = Si + 2CO

The technical silicon is then converted into trichlorosilane for further purification:

3SiCl₄ + 2H₂ + Si = 4SiHCl₃

After several more stages we obtain high-purity silicon, free of foreign impurities and containing only about one foreign atom per billion silicon atoms:

2SiHCl₃ = SiH₂Cl₂ + SiCl₄

2SiH₂Cl₂ = SiH₃Cl + SiHCl₃

2SiH₃Cl = SiH₄ + SiH₂Cl₂

SiH₄ = Si + 2H₂

After such purification the silicon is melted in special furnaces, and a huge single crystal is grown from the melt by the Czochralski method, pulled out at a rate of a few millimeters per minute. The resulting boule, weighing more than 100 kg, is sawn into thousands of thin (about 1 mm) plates, or "wafers". Each wafer is then polished to a mirror finish, and only after that do tens or hundreds of chips begin to take shape on the substrate through the lithography process.

On the left is a schematic diagram of the lithographic process, on the right is the wavelength of the laser used and the characteristic size of the transistor.

Immediately before lithography begins, a thin oxide layer is formed on the wafer, and an even thinner layer of high-k material is deposited on top by magnetron sputtering at high temperature. Next, a small amount of photosensitive polymer (photoresist) is dispensed onto the spinning substrate, forming another thin surface layer; this polymer changes its properties under ultraviolet light. The wafer is then placed under a special lens system, behind which sit a photomask and a UV laser source. The robotic system passes over the substrate hundreds of times, leaving "prints" on it. When this is done, the wafer is immersed in a solvent that dissolves the exposed areas of the polymer and removes them from the plate. A three-dimensional relief is thus formed on the substrate; the cavities of this relief are filled with the required substances, and the lithographic step (exposing the plate to the laser beam) is repeated several dozen more times. In all, "printing" a chip takes several hundred technological stages, most of them carried out in ultra-clean rooms.

So, layer by layer, a superb three-dimensional composition of copper conductors and transistors emerges on one side of the wafer, which after a short period of time will be cut out of the wafer and become the heart of the computer.

Once the individual elements of the transistors have been formed layer by layer, it is time to "grow" the contacts

Until recently the lithographic process was straightforward, because the wavelength of the radiation was smaller than or comparable to the size of the individual "printed" elements on the substrate. At the turn of the 21st century, the leading microprocessor manufacturers crossed the so-called diffraction limit: using a 248 nm laser, they began producing chips whose individual elements measured only 190, 130, or 90 nm, which would have been unthinkable with classical optics. Accordingly, innovative approaches to mask design were developed and introduced (for example, so-called phase-shift masks), and computing power was harnessed to design microchip masks with the wave nature of light taken into account. Suppose we want to print an element shaped like two joined letters T and ask the computer to help. What the computer draws will differ slightly from what we intended; the structure of the mask will differ even more, and the structure printed on the substrate will barely resemble the original. But what can we do: we are working at the edge of human capability and have already outwitted nature and the wave properties of light more than once.

On the left is the difference between a conventional mask and a mask using a phase shift; on the right is a clear example of a geometric discrepancy between the desired and actually obtained pattern on the substrate

"There's a lot of room down there." Practice

You cannot have a true idea of something that has not been experienced.
Voltaire Francois Marie Arouet

About 3-4 years ago fate decreed that an Asus G2S laptop fell into my hands. My happiness lasted exactly until last winter, when, out of the blue, artifacts (various image distortions) began appearing on the screen, especially when launching games or other "heavy" applications that work the video chip hard. It turned out that this was a known problem: for almost the entire G2 gaming line, Nvidia had supplied video chips with a defect (detachment of the contacts between the die and the substrate), which showed up only after a couple of years of intensive use. The solution was clear: replace the video chip. But what to do with the old one? The answer came very quickly... A day later the old video chip lay under the diamond wheel of a microtome (a device for the fine sectioning of materials and samples).

About the benefits of polishing

To my deep regret, the microtome cut the chip rather roughly, though without chipping or cracking the silicon die itself. The cut surface therefore had to be ground and polished long and persistently to bring it to the desired state. The benefits of polishing are visible not so much to the naked eye as to the eye armed with an optical microscope:

On the left are photos before polishing, on the right are after. Top row of photos - magnification 50x, bottom - 100x

After polishing (photos on the right), the copper contacts connecting the chip's individual structures are visible even at 50x magnification. Before polishing they can also be made out through the dust and debris left by cutting, but individual contacts can hardly be distinguished.

Electron microscopy

Optical microscopy gives 100-200x magnification, which cannot compare with the 100,000x or even 1,000,000x that an electron microscope can provide (in theory, TEM resolution reaches tenths or even hundredths of an angstrom, though for a number of reasons such resolution is not achieved in practice). Moreover, the chip is made on a 90 nm process, so seeing individual elements of the integrated circuit optically is problematic: the diffraction limit gets in the way again. Electrons, however, coupled with appropriate detection modes (for example, SE2, secondary electrons), let us visualize differences in the chemical composition of the material and thus look into the very silicon heart of our patient, namely see the drains and sources, but more on that below.

Printed circuit board

So let's get started. The first thing we see is the printed circuit board on which the silicon die itself is mounted. It is attached to the laptop motherboard by BGA soldering. BGA (Ball Grid Array) is an array of tin balls about 500 microns in diameter, placed in a specific pattern, which play the same role as processor pins, i.e. connect the electronic components of the motherboard to the microchip. Of course, nobody places these balls on the PCB by hand; a special machine does it, rolling the balls over a "mask" with holes of the appropriate size.

BGA soldering

The board itself is made of textolite (a fiberglass laminate) and has 8 copper layers interconnected in a specific way. The die is mounted on this substrate by an analogue of BGA, call it "mini-BGA": the same tin balls connecting a small piece of silicon to the printed circuit board, only much smaller in diameter, under 100 microns, comparable to the thickness of a human hair.

Comparison of BGA and mini-BGA soldering (in each microphoto there is a regular BGA on the bottom, a “mini” BGA on top)

To increase the strength of the printed circuit board, it is reinforced with fiberglass. These fibers are clearly visible in micrographs obtained using a scanning electron microscope.

Textolite is a real composite material consisting of a matrix and reinforcing fiber

The space between the die and the printed circuit board is filled with many "balls", which apparently serve as a heat sink and keep the die from shifting out of its "correct" position.

Many spherical particles fill the space between the chip and the printed circuit board

Supporting components. SMD devices

The beauty of using a microtome is that, unlike other cutting tools, it allowed a precise cut through one of the chip's supporting components, which, judging by its layered structure, is an SMD (surface-mount device, i.e. a component mounted directly on the surface of the printed circuit board), namely a solid-state capacitor. Optical and electron microscopy both showed the same layered picture.

Separate logical elements of modern computer technology

The barely noticeable contrast differences in the microphoto above are those very drains and sources that let you and me work at the computer, play games, watch movies, and listen to music. By my measurements the structures are about 114 nm in size; allowing for roughly 10% error in scale and measurement, plus the peculiarities of lithography, this agrees well with the declared process technology. Now we can sleep peacefully, knowing that giants like Intel, Nvidia, and AMD really do produce microchips whose individual elements measure 90, 60, 45, or even 32 nm.

Nvidia 8600M GT microchip internals

Conclusion

Much of what I saw inside the video chip amazed me. The sectioned solid-state capacitor alone is simply stunning. Of course, Intel's publications, photos found through search engines, beautiful pictures and animations are wonderful things that let you obtain the necessary information and knowledge quickly. But when you cut open a chip yourself, study it for hours without looking away from the monitor, and see that the process technology really is 90 nm, that someone was able to design and calculate this entire structure down to the smallest detail, you feel joy and pride in the humanity that created such a perfect product.

Computer technology has been developing, one way or another, for the past 60-70 years. In that time it has come a long way from military computers the size of a house to the iPad, from punch cards to Windows 7. The industry created its own market and an entire era, the information era. Today information technology (not only the production of computer components) is one of the fastest-growing segments of the world economy.

There is no doubt that the information age we have already entered will push the development of computer technology, accelerating the pace of innovation and the introduction of ever more advanced technologies. In the near future we will see a transition from silicon to carbon as the basis of computer technology, and from electrons to photons as the carrier of information. All this will make it possible to cut the weight of devices several times over, multiply their performance, develop new embedded systems, and fully immerse a person in the digital world, with all its advantages and disadvantages.

The principle of operation of keys with a chip


What is a chip key for a car, how does it work and why is it all needed?

In this article we will talk a little about the principles of operation of the immobilizer system, give some useful tips and try to answer the most frequently asked questions.

Let's start with the principle of operation of the immobilizer system.

Put simply, an immobilizer is an electronic system that works in conjunction with the engine control unit and either permits or prohibits starting the engine.

Thus, the engine will only start if the “correct” key is in the ignition switch.

How is the key identified? The key itself contains an electronic component, a transponder (chip). It stores an electronic code; by reading it, the immobilizer system determines whether this is "its" key or not.

Many car owners are not even aware of the presence of a chip in their car ignition key.

You can be sure there is no chip only if the key is a bare piece of metal. If the key has a plastic head, it very likely contains a chip, especially since immobilizer systems built on this principle began appearing in cars as early as 1995.

Chips come in several varieties.

The carbon chip is very small, yet it contains a number of electronic components hermetically sealed in carbon.

The glass chip takes the form of a miniature glass capsule. Nowadays they are extremely rare. It contains the same set of components as the carbon chip, but thanks to its larger transceiver antenna it works much better at low temperatures. We recommend these chips for installation in remote-start systems: although they cost slightly more than carbon chips, they work far more stably.

The next variety is the emulator chip, battery-powered or battery-free. Found everywhere in ignition keys with a radio channel (buttons), it is a board with a microcircuit and a recorded program that emulates a chip in operation.

One of the most common misconceptions is that owners believe that without a battery, such a key will not start the car. This is completely untrue! The battery in the key is only needed to operate the buttons on it and remotely open/close the doors. The chip is independent of power supply and works perfectly without a battery.

The system operates over an extremely short distance. Therefore, it is almost impossible to intercept exchange data.

Almost all modern cars have an ignition key that is not an ordinary key but a so-called chip key. What is it, and how is it replaced? I recently received a very interesting letter on the blog; I won't retell it, but the reader asks: how does a chip key work? The question seemed interesting, so I decided to write an article on the topic...


Indeed, there are no boards, contact groups, or anything similar on the outside of the key that would attach to any reader in the car. There is the key blade itself, which goes into the keyhole, but that is not a contact group! So what is the working principle?

If we go into technical details...

The car's ignition switch carries a special loop antenna (coil) connected directly to the immobilizer unit. When the ignition is switched on, the unit sends a pulse to this antenna and goes into reading mode, i.e. begins listening for a response from the chip key. The chip in the key is powered by that pulse and begins transmitting the code programmed into it back to the immobilizer's antenna. The immobilizer receives the code, and if everything checks out, it allows the engine to start.

If you just...

Its operation is easy to picture. Probably everyone (or at least many) has an intercom at the entrance to their building. You walk up and present a special key fob; the intercom reads it and opens the door. This is a rough analogy for the chip key and the car's immobilizer.

It should be noted that without this chip key the immobilizer will not allow the car to start! It blocks various car functions:

— in some cars the immobilizer is located in the ignition switch itself and blocks various functions in the lock;

— in others it is built into the dashboard and breaks certain circuits of the car (for example, the fuel pump circuit);

— in still others the immobilizer unit sits in the engine compartment and, with the help of amplifiers, can block both the lock and the circuits at the same time.

As you can see, the design and operating principle of the chip key are simple but effective. However, many alarm systems with remote start now bypass the standard immobilizer (in particular, by hiding an extra key inside the car's panel). I personally do not recommend doing this, because it makes the car easier prey for thieves.


Prices should be fair, neither inflated nor understated, and they must be published on the service's website. Without fail: no asterisks, clear and detailed, and wherever technically possible, as accurate and specific as possible.

If spare parts are in stock, up to 85% of complex repairs can be completed in 1-2 days. Modular repairs take far less time. The website lists the approximate duration of every repair.

Warranty and responsibility

A warranty must be given for any repair. Everything is described on the website and in the documents. A warranty means the service is confident in its work and respects you. A warranty of 3-6 months is good and sufficient: it exists to verify quality and catch hidden defects that cannot be detected immediately. When you see honest, realistic terms (not 3 years), you can be sure they will actually help you.

Half the success of an Apple repair is the quality and reliability of the spare parts, so a good service works directly with suppliers, maintains several reliable supply channels, and keeps its own warehouse of proven parts for current models, so you don't waste extra time.

Free diagnostics

This is very important and has already become a rule of good manners for service centers. Diagnostics is the hardest and most important part of a repair, yet you shouldn't have to pay a penny for it, even if you decide not to repair the device based on its results.

Service repairs and delivery

A good service values your time, so it offers free courier delivery. For the same reason, repairs are performed only in the service center's workshop: they can be done correctly, following the technology, only in a properly equipped place.

Convenient schedule

If the service works for you and not for itself, it is always open. The schedule should be convenient enough to drop by before or after work. A good service is open on weekends and holidays. We are waiting for you and working on your devices every day from 9:00 to 21:00.

The reputation of professionals is built on several points

Company age and experience

A reliable, experienced service becomes known over time.
If a company has been on the market for many years and has established itself as an expert, people turn to it, write about it, and recommend it. We know what we are talking about: 98% of the devices that come into our service center are restored.
Other service centers trust us and refer complex cases to us.

How many technicians work in each area

If several engineers are always available for each type of equipment, you can be sure that:
1. there will be no queue (or it will be minimal), and your device will be taken care of right away;
2. you are handing your MacBook to an expert in Mac repairs who knows all the secrets of these devices.

Technical literacy

If you ask a question, a specialist should answer it as precisely as possible, so that you understand exactly what you need and how they intend to solve the problem. In most cases the description alone is enough to understand what happened and how to fix it.

Japan is now building the exascale supercomputer Post-K, its most powerful machine yet, intended among other things for research in nuclear physics; the Japanese will be among the first to launch a machine with this level of computing power.

Commissioning is scheduled for 2021.

Last week Fujitsu revealed the technical characteristics of the A64FX chip, which will form the basis of the new machine. Here is more about the chip and its capabilities.

A64FX Specifications

Post-K is expected to have nearly ten times the computing power of the world's most powerful supercomputer, IBM Summit (as of June 2018).

The supercomputer owes this performance to the A64FX chip, based on the Arm architecture. The chip contains 48 compute cores plus four cores that manage them, divided evenly into four groups, the Core Memory Groups (CMGs).

Each group has 8 MB of L2 cache and interfaces with a memory controller and the NoC ("network on chip"). The NoC connects the CMGs with the PCIe and Tofu controllers; the latter handles communication between the processor and the rest of the system and has ten ports with a throughput of 12.5 GB/s each.


The total amount of HBM2 memory in the processor is 32 gigabytes, and its bandwidth is 1024 GB/s. Fujitsu says the processor's floating-point performance reaches 2.7 teraflops for 64-bit operations, 5.4 teraflops for 32-bit operations and 10.8 teraflops for 16-bit operations.

The creation of Post-K is monitored by the editors of the Top500 resource, who compile a list of the most powerful computing systems. According to their estimates, more than 370 thousand A64FX processors are used in the supercomputer to achieve performance of one exaflop.

The device will be the first to use a vector-extension technology called Scalable Vector Extension (SVE). It differs from other SIMD architectures in that it does not fix the length of the vector registers but instead defines an acceptable range for them: SVE supports vectors from 128 to 2048 bits long. As a result, a program compiled once can run on any processor supporting SVE, whatever its vector length, without recompilation.

With SVE (as a SIMD extension), the processor can perform the same computation on multiple data elements simultaneously. Here is an example of one such instruction for NEON, the SIMD extension used for vector computation in earlier Arm processor architectures:

vadd.i32 q1, q2, q3

This instruction adds the four 32-bit integers in the 128-bit register q2 to the corresponding numbers in the 128-bit register q3 and writes the resulting array to q1. The C equivalent of this operation looks like this:

for (i = 0; i < 4; i++) a[i] = b[i] + c[i];
SVE also supports autovectorization: the automatic vectorizer analyzes the loops in the code and, where possible, uses vector registers to execute them. This improves the code's performance.

For example, a function in C:

void vectorize_this(unsigned int *a, unsigned int *b, unsigned int *c)
{
    unsigned int i;
    for (i = 0; i < SIZE; i++) {
        a[i] = b[i] + c[i];
    }
}
It will be compiled as follows (for a 32-bit Arm processor):

104cc: ldr.w r3, !
104d0: ldr.w r1, !
104d4: cmp r4, r5
104d6: add r3, r1
104d8: str.w r3, !
104dc: bne.n 104cc
If you use autovectorization, it will look like this:

10780: vld1.64 {d18-d19},
10784: adds r6, #1
10786: cmp r6, r7
10788: add.w r5, r5, #16
1078c: vld1.32 {d16-d17},
10790: vadd.i32 q8, q8, q9
10794: add.w r4, r4, #16
10798: vst1.32 {d16-d17},
1079c: add.w r3, r3, #16
107a0: bcc.n 10780
Here the SIMD registers q8 and q9 are loaded with data from the arrays pointed to by r5 and r4, and the vadd instruction then adds four 32-bit integer values at a time. The code is longer, but each loop iteration processes far more data.

Who else is building exascale supercomputers?

The creation of exascale supercomputers is not limited to Japan. For example, work is also underway in China and the USA.

In China, they are creating Tianhe-3. Its prototype is already being tested at the National Supercomputing Center in Tianjin. The final version of the computer is scheduled to be completed in 2020.


Photo: O01326. The Tianhe-2 supercomputer, predecessor of Tianhe-3

Tianhe-3 is based on Chinese Phytium processors. Each processor contains 64 cores, delivers 512 gigaflops of performance, and has a memory bandwidth of 204.8 GB/s.

A working prototype has also been built for a machine in the Sunway series. It is being tested at the National Supercomputing Center in Jinan. According to the developers, about 35 applications are already running on it: biomedical simulators, big-data processing applications, and programs for studying climate change. Work on the computer is expected to finish in the first half of 2021.

As for the United States, the Americans plan to build their own exascale computer by 2021. The project, called Aurora A21, is being developed by the US Department of Energy's Argonne National Laboratory together with Intel and Cray.

