Binary system codes. The meaning of binary code - why computers work with ones and zeros


1.5.1. Converting information from continuous to discrete form
To solve his problems, a person often has to transform existing information from one form of representation to another. For example, when reading aloud, information is converted from discrete (text) form to continuous (sound). During a dictation in a Russian language lesson, on the contrary, information is transformed from a continuous form (the teacher’s voice) into a discrete one (students’ notes).
Information presented in discrete form is much easier to transmit, store or automatically process. Therefore, in computer technology, much attention is paid to methods for converting information from continuous to discrete form.
Discretization of information is the process of converting information from a continuous form of representation to a discrete one.
Let's look at the essence of the information sampling process using an example.
Meteorological stations have recorders for continuous recording of atmospheric pressure. The result of their work is barograms - curves showing how pressure has changed over long periods of time. One of these curves, drawn by the device during seven hours of observation, is shown in Fig. 1.9.

Based on the information received, you can build a table containing the instrument readings at the beginning of measurements and at the end of each hour of observation (Fig. 1.10).

The resulting table does not give a complete picture of how the pressure changed during the observation period: for example, the highest pressure value, which occurred during the fourth hour of observation, is not indicated. But if we tabulate the pressure values observed every half hour or every 15 minutes, the new table will give a more complete picture of how the pressure changed.
Thus, we converted information presented in continuous form (barogram, curve) into discrete form (table) with some loss of accuracy.
In the future, you will become familiar with ways to discretely represent audio and graphic information.

Chains of three binary symbols are obtained by appending the symbol 0 or 1 on the right to the two-digit binary codes. As a result, there are 8 code combinations of three binary symbols - twice as many as of two binary symbols:
Accordingly, a four-bit binary code yields 16 code combinations, a five-bit one - 32, a six-bit one - 64, and so on. The length of a binary chain - the number of symbols in a binary code - is called the bit depth of the binary code.
Note that:
4 = 2 * 2,
8 = 2 * 2 * 2,
16 = 2 * 2 * 2 * 2,
32 = 2 * 2 * 2 * 2 * 2 etc.
Here the number of code combinations is a product of identical factors (twos), and the number of factors is equal to the bit depth of the binary code.
If the number of code combinations is denoted by the letter N, and the bit depth of the binary code by the letter i, then the identified pattern in general form is written as follows:
N = 2 * 2 * ... * 2 (i factors).
In mathematics such products are written as:
N = 2^i.
The entry 2^i is read as "2 to the i-th power".

Task. The leader of the Multi tribe instructed his minister to develop a binary code and to translate all important information into it. What bit depth of binary code will be required if the alphabet used by the Multi tribe contains 16 characters? Write down all the code combinations.
Solution. Since the Multi tribe's alphabet consists of 16 characters, 16 code combinations are needed. In this case, the length (bit depth) of the binary code is determined from the relation 16 = 2^i, hence i = 4.
To write down all code combinations of four 0s and 1s, we use the scheme in Fig. 1.13: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111.
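For readers who want to check the pattern programmatically, here is a short Python sketch (not part of the textbook; the function name code_combinations is my own) that enumerates all binary chains of a given bit depth and confirms that N = 2^i:

```python
# A minimal sketch: enumerate all binary chains of length i and check N = 2**i.
from itertools import product

def code_combinations(i):
    """Return all binary chains of length i as strings."""
    return ["".join(bits) for bits in product("01", repeat=i)]

for i in range(1, 5):
    codes = code_combinations(i)
    print(f"bit depth i = {i}: N = {len(codes)} = 2**{i}")

# For the Multi tribe task: i = 4 gives exactly 16 combinations.
print(code_combinations(4))
```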

1.5.3. The versatility of binary coding
At the beginning of this section, you learned that information represented in continuous form can be expressed using symbols of some natural or formal language. In turn, the characters of an arbitrary alphabet can be converted into binary code. Thus, with the help of binary code, any natural and formal languages, as well as images and sounds, can be represented (Fig. 1.14). This is what is meant by the universality of binary coding.
Binary codes are widely used in computer technology, where they require only two states of an electronic circuit - "on" (corresponding to the digit 1) and "off" (corresponding to the digit 0).
Simplicity of technical implementation is the main advantage of binary coding. The disadvantage of binary coding is the large length of the resulting code.

1.5.4. Uniform and non-uniform codes
There are uniform and non-uniform codes. Uniform codes contain the same number of symbols in every code combination; non-uniform codes contain different numbers of symbols.
Above we looked at uniform binary codes.
An example of a non-uniform code is Morse code, in which a sequence of short and long signals is defined for each letter and digit. Thus, the letter E corresponds to a short signal ("dot"), and the letter Ш corresponds to four long signals (four "dashes"). Non-uniform codes make it possible to increase the speed of message transmission, because the symbols that occur most frequently in the transmitted information are given the shortest code combinations.

The information that this symbol carries is equal to the entropy of the system and is maximal when both states are equally probable; in this case the elementary symbol conveys 1 binary unit of information. Therefore, the basis of optimal encoding is the requirement that the elementary symbols occur in the encoded text, on average, equally often.

Let us present here a method for constructing a code that satisfies the stated condition; this method is known as the Shannon-Fano code. Its idea is that the symbols to be encoded (letters or combinations of letters) are divided into two groups of approximately equal probability: for the first group of symbols, 0 is placed in the first position of the combination (the first digit of the binary number representing the symbol); for the second group - 1. Then each group is again divided into two approximately equally probable subgroups; for the symbols of the first subgroup, zero is placed in the second position, for the second subgroup - one, and so on.

Let us demonstrate the principle of constructing the Shannon-Fano code using the Russian alphabet (Table 18.8.1). Take the first six letters (from "-" to "t"); summing their probabilities (frequencies), we get 0.498; all the remaining letters (from "n" to "f") have a total probability of approximately 0.502. The first six letters (from "-" to "t") get a binary 0 in the first position. The remaining letters (from "n" to "f") get a one in the first position. Next, we again divide the first group into two approximately equally probable subgroups: from "-" to "o" and from "e" to "t"; for all letters of the first subgroup we put zero in the second position, and for the second subgroup - one. We continue the process until exactly one letter remains in each subdivision, which is then encoded by a certain binary number. The mechanism for constructing the code is shown in Table 18.8.2, and the code itself is given in Table 18.8.3.
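Since the tables themselves are not reproduced here, the following Python sketch illustrates the division procedure on a few made-up letter frequencies; it is an illustration of the idea, not the values of Table 18.8.1:

```python
# Illustrative Shannon-Fano construction on made-up frequencies.
def shannon_fano(symbols):
    """symbols: list of (symbol, probability), sorted by descending probability.
    Returns a dict mapping each symbol to its binary code string."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    best_diff, split = float("inf"), 1
    for k in range(1, len(symbols)):           # find the most even split point
        left_sum = sum(p for _, p in symbols[:k])
        diff = abs(2 * left_sum - total)
        if diff < best_diff:
            best_diff, split = diff, k
    left = shannon_fano(symbols[:split])        # first group gets prefix 0
    right = shannon_fano(symbols[split:])       # second group gets prefix 1
    return {s: "0" + c for s, c in left.items()} | {s: "1" + c for s, c in right.items()}

freqs = [(" ", 0.175), ("o", 0.090), ("e", 0.072), ("a", 0.062), ("i", 0.062), ("t", 0.053)]
print(shannon_fano(freqs))   # frequent symbols receive the shortest codes
```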

Table 18.8.2. Binary signs (the mechanism for constructing the code).

Table 18.8.3. The resulting code.

Using Table 18.8.3, you can encode and decode any message.

As an example, let’s write the phrase “information theory” in binary code.

01110100001101000110110110000

0110100011111111100110100

1100001011111110101100110

Note that there is no need to separate the letters from each other with a special sign, since decoding is performed unambiguously even without this. You can verify this by decoding the following phrase using Table 18.8.2:

10011100110011001001111010000

1011100111001001101010000110101

010110000110110110

(“encoding method”).

However, it should be noted that any encoding error (random confusion of 0 and 1 characters) with such a code is disastrous, since decoding all text following the error becomes impossible. Therefore, this coding principle can be recommended only in cases where errors in encoding and transmitting a message are practically eliminated.
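To see both properties - decoding without separators and the destructive effect of a single bit error - here is a small Python sketch with a toy prefix code (a hypothetical table, not Table 18.8.3):

```python
# A toy prefix code: no codeword is a prefix of another, so decoding is unambiguous.
code = {"e": "00", "t": "01", "a": "10", "o": "110", "n": "111"}
decode_table = {v: k for k, v in code.items()}

def decode(bits):
    result, buffer = [], ""
    for b in bits:
        buffer += b
        if buffer in decode_table:          # a complete codeword has been read
            result.append(decode_table[buffer])
            buffer = ""
    return "".join(result)

message = "".join(code[ch] for ch in "neat")
print(message)                # '111001001' - letters run together, no separators
print(decode(message))        # 'neat' - decoded unambiguously

corrupted = "0" + message[1:] # flip the very first bit
print(decode(corrupted))      # decoding derails from the error onward
```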

A natural question arises: is the code we have constructed really optimal in the absence of errors? To answer this question, let us find the average information per elementary symbol (0 or 1) and compare it with the maximum possible information, which is equal to one binary unit. To do this, we first find the average information contained in one letter of the transmitted text, i.e., the entropy per letter:

H = −Σᵢ pᵢ log₂ pᵢ,

where pᵢ is the probability that a letter takes the i-th of its possible states ("-", o, e, a, ..., f).

From Table 18.8.1 we obtain the numerical value of the entropy H (in binary units per letter of text).

Using Table 18.8.2, we determine the average number of elementary symbols per letter,

n̄ = Σᵢ pᵢ nᵢ,

where nᵢ is the length of the code combination assigned to the i-th letter.

Dividing the entropy H by n̄, we obtain the information per elementary symbol, H/n̄ (binary units).

Thus, the information per character is very close to its upper limit of 1, and the code we have chosen is very close to the optimal one. Remaining within the confines of the task of encoding letters, we cannot achieve anything better.

Note that if we simply encoded the letters with equal-length binary numbers, each letter would be represented by five binary symbols, and the information per symbol would be H/5 (binary units), i.e., noticeably less than with optimal letter-by-letter coding.
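The comparison described above can be sketched in Python; the probabilities and codeword lengths below are made-up stand-ins for the values of Tables 18.8.1 and 18.8.2:

```python
# A sketch of the entropy / average-code-length comparison on made-up numbers.
from math import log2

probs   = {"e": 0.4, "t": 0.3, "a": 0.2, "o": 0.1}   # hypothetical letter probabilities
lengths = {"e": 1,   "t": 2,   "a": 3,   "o": 3}     # hypothetical codeword lengths

H     = -sum(p * log2(p) for p in probs.values())     # entropy per letter
n_avg = sum(probs[s] * lengths[s] for s in probs)     # average symbols per letter

print(f"entropy per letter          H   = {H:.3f} binary units")
print(f"average code length         n   = {n_avg:.3f} symbols")
print(f"information per symbol      H/n = {H / n_avg:.3f}  (1.0 would be optimal)")
print(f"with a uniform 2-bit code:  H/2 = {H / 2:.3f}  (noticeably less)")
```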

However, it should be noted that coding “by letter” is not economical at all. The fact is that there is always a dependence between adjacent letters of any meaningful text. For example, after a vowel in the Russian language there cannot be “ъ” or “ь”; “I” or “yu” cannot appear after hissing ones; after several consonants in a row, the probability of a vowel increases, etc.

We know that when dependent systems are combined, the total entropy is less than the sum of the entropies of the individual systems; therefore, the information conveyed by a piece of connected text is always less than the information per character multiplied by the number of characters. Taking this circumstance into account, a more economical code can be constructed by encoding not each letter individually, but entire "blocks" of letters. For example, in a Russian text it makes sense to encode as a whole some frequently occurring combinations of letters, such as "tsya", "aet", "nie", etc. The encoded blocks are arranged in descending order of frequency, like the letters in Table 18.8.1, and binary coding is carried out according to the same principle.

In some cases, it turns out to be reasonable to encode not even blocks of letters, but entire meaningful pieces of text. For example, to relieve the telegraph during the holidays, it is advisable to encode entire standard texts with conventional numbers, such as:

“Congratulations on the New Year, I wish you good health and success in your work.”

Without dwelling specifically on block coding methods, we will limit ourselves to formulating Shannon's theorem that relates to this question.

Let there be a source of information and a receiver connected by a communication channel (Fig. 18.8.1).

The productivity of the information source is assumed known, i.e., the average number of binary units of information coming from the source per unit of time (numerically it is equal to the average entropy of the messages produced by the source per unit of time). Let, in addition, the channel capacity be known, i.e., the maximum amount of information (for example, binary symbols 0 or 1) that the channel is capable of transmitting in the same unit of time. The question arises: what must the channel capacity be for it to "cope" with its task, that is, for information to arrive from the source to the receiver without delay?

The answer to this question is given by Shannon's first theorem. Let us formulate it here without proof.

Shannon's 1st theorem

If the capacity of the communication channel C is greater than the entropy of the information source per unit time H, that is,

C > H,

then it is always possible to encode a sufficiently long message so that it is transmitted over the communication channel without delay. If, on the contrary,

C < H,

then the transmission of information without delay is impossible.

Computers don't understand words and numbers the way people do. Modern software allows the end user to ignore this, but at the lowest levels your computer operates on a binary electrical signal that has only two states: whether there is current or not. To "understand" complex data, your computer must encode it in binary format.

The binary system is based on two digits, 1 and 0, corresponding to the on and off states that your computer can understand. You are probably familiar with the decimal system. It uses ten digits, from 0 to 9, and then moves on to the next place to form two-digit numbers, with each place worth ten times more than the previous one. The binary system is similar, with each place worth twice as much as the previous one.

Counting in binary format

In binary notation, the first digit is equivalent to 1 in the decimal system. The second digit is worth 2, the third 4, the fourth 8, and so on - doubling each time. Adding up the values of all the places that contain a 1 gives you the number in decimal.

1111 (in binary) = 8 + 4 + 2 + 1 = 15 (in decimal)

Counting zero, that gives 16 possible values for four binary bits. Move up to 8 bits and you get 256 possible values. Binary takes up a lot more space to write down, since four decimal digits can already express 10,000 values. But computers handle binary much better than they would the decimal system, and for some things, like logic processing, binary is better than decimal.
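A tiny Python sketch of this counting rule (purely illustrative):

```python
# Doubling place values: convert a binary string to decimal by hand.
def binary_to_decimal(bits):
    value = 0
    for bit in bits:              # read left to right; each step doubles the value
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("1111"))  # 8 + 4 + 2 + 1 = 15
print(2 ** 4)                     # 16 values fit in four bits
print(2 ** 8)                     # 256 values fit in eight bits
```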

It should be said that there is another base system used in programming: hexadecimal. Although computers do not operate in hexadecimal internally, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of a hexadecimal number can represent a whole byte, i.e., they replace eight binary digits. The hexadecimal system uses the digits 0-9, as well as the letters A through F, as the additional six digits.
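For example, a single byte and its two-digit hexadecimal form can be compared like this (an illustrative Python snippet):

```python
# Two hexadecimal digits represent one byte (eight binary digits).
byte = 0b01100001
print(format(byte, "08b"))   # '01100001' - eight binary digits
print(format(byte, "02X"))   # '61'       - the same byte as two hex digits
print(int("61", 16))         # 97         - and back to decimal
```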

Why do computers use binary?

Short answer: hardware and the laws of physics. Every signal in your computer is electrical, and in the early days of computing, measuring electrical signals precisely was much more difficult. It made more sense to distinguish only an "on" state, represented by a negative charge, and an "off" state, represented by a positive charge.

For those who don't know why "off" is represented by a positive charge, it is because electrons have a negative charge, and more electrons mean more current with a negative charge.

Thus, early room-sized computers used binary signals to build their systems, and although they used older, bulkier hardware, they worked on the same fundamental principles. Modern computers use what is called a transistor to perform calculations with binary code.

Here is a diagram of a typical transistor:

Essentially, it allows current to flow from the source to the drain if there is a voltage on the gate. This forms a binary switch. Manufacturers can make these transistors incredibly small - down to 5 nanometers, about the size of two strands of DNA. This is how modern processors work, and even they can have trouble distinguishing between on and off states (though this is mostly because at such tiny, near-molecular sizes they become subject to the quirks of quantum mechanics).

Why only the binary system?

So you might be thinking: "Why only 0 and 1? Why not add another digit?" Although this is partly due to the tradition of how computers have been built, adding another digit would mean having to distinguish yet another state of the current, not just "off" or "on".

The problem here is that if you want to use multiple voltage levels, you need a way to easily perform calculations on them, and the current hardware capable of this is not viable as a replacement for binary computing. For example, there is the so-called ternary computer, developed in the 1950s, but development stopped there. Ternary logic is more efficient than binary, but there is as yet no effective replacement for the binary transistor, or at least no transistor working on the same tiny scale as the binary one.

The reason we can't use ternary logic comes down to the way transistors are connected in a computer and the way they are used for calculations. A logic gate receives information at two inputs, performs an operation, and returns the result at a single output.

Thus, binary mathematics is easier for a computer than anything else. Boolean logic maps easily onto binary systems, with True and False corresponding to the on and off states.

A binary truth table running on binary logic has four possible outputs for each fundamental operation. But since ternary gates take three inputs, a ternary truth table would have 9 or more. Whereas the binary system has 16 possible operators (2^2^2 = 16), the ternary system would have 19,683 (3^3^3 = 19,683). Scaling becomes an issue because, while ternary is more efficient, it is also exponentially more complex.

Who knows? In the future, we may well see ternary computers as binary logic faces miniaturization challenges. For now, the world will continue to operate in binary mode.

Binary code represents text, computer processor instructions, or other data using any two-character system. Most commonly, it is a system of 0s and 1s that assigns a pattern of binary digits (bits) to each symbol and instruction. For example, a binary string of eight bits can represent any of 256 possible values ​​and can therefore generate many different elements. Reviews of binary code from the global professional community of programmers indicate that this is the basis of the profession and the main law of the functioning of computer systems and electronic devices.

Deciphering the binary code

In computing and telecommunications, binary codes are used for various methods of encoding data characters into bit strings. These methods can use fixed-width or variable-width strings. There are many character sets and encodings for converting to binary code. In fixed-width code, each letter, number, or other character is represented by a bit string of the same length. This bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal, or hexadecimal notation.

Binary Decoding: A bit string interpreted as a binary number can be converted to a decimal number. For example, the lowercase letter a, if represented by the bit string 01100001 (as in standard ASCII code), can also be represented as the decimal number 97. Converting binary code to text is the same procedure, just in reverse.
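This round trip can be checked directly, for example in Python:

```python
# The ASCII example from the text: lowercase "a" is the bit string 01100001.
print(int("01100001", 2))        # 97  - the bit string read as a binary number
print(ord("a"))                  # 97  - the ASCII code of "a"
print(chr(97))                   # 'a' - converting the number back to text
print(format(ord("a"), "08b"))   # '01100001'
```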

How it works

What does binary code consist of? The code used in digital computers is based on a system in which there are only two possible states: on and off, usually denoted by one and zero. While in the decimal system, which uses 10 digits, each position represents a successive power of 10 (100, 1000, etc.), in the binary system each digit position represents a power of 2 (4, 8, 16, etc.). A binary code signal is a series of electrical pulses that represent numbers, symbols, and operations to be performed.

A device called a clock sends out regular pulses, and components such as transistors are turned on (1) or off (0) to transmit or block the pulses. In binary code, each decimal number (0-9) is represented by a set of four binary digits or bits. The four basic operations of arithmetic (addition, subtraction, multiplication, and division) can be reduced to combinations of fundamental Boolean algebraic operations on binary numbers.
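As an illustration of how arithmetic reduces to Boolean operations, here is a one-bit half adder written in Python (a standard textbook construction, not something described in the article itself):

```python
# A one-bit half adder built only from XOR and AND.
def half_adder(a, b):
    """Add two single bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b          # XOR gives the sum digit, AND gives the carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```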

A bit in communication and information theory is a unit of data equivalent to the result of a choice between two possible alternatives in the binary number system commonly used in digital computers.

Binary code reviews

The nature of code and data is a basic part of the fundamental world of IT. This tool is used by specialists from the global IT “behind the scenes” - programmers whose specialization is hidden from the attention of the average user. Reviews of binary code from developers indicate that this area requires a deep study of mathematical fundamentals and extensive practice in the field of mathematical analysis and programming.

Binary code is the simplest form of computer code or programming data. It is represented entirely by a two-digit system. According to reviews of binary code, it is often associated with machine code, because binary sets can be combined to form source code that is interpreted by a computer or other hardware. This is partly true: machine code uses sets of binary digits to form instructions.

Along with being the most basic form of code, binary also represents the smallest unit of data that flows through all the complex, end-to-end hardware and software systems that process today's resources and data assets. The smallest amount of data is called a bit. Strings of bits become the code or data that the computer interprets.

Binary number

In mathematics and digital electronics, a binary number is a number expressed in the base-2 number system, or binary numeric system, which uses only two characters: 0 (zero) and 1 (one).

The base-2 number system is a positional notation with a radix of 2. Each digit is referred to as a bit. Thanks to its straightforward implementation in digital electronic circuits using logic gates, the binary system is used by almost all modern computers and electronic devices.

History

The modern binary number system, as the basis of binary code, was invented by Gottfried Leibniz in 1679 and presented in his article "Explanation of Binary Arithmetic". Binary numbers were central to Leibniz's theology. He believed that binary numbers symbolized the Christian idea of creatio ex nihilo, or creation out of nothing. Leibniz was trying to find a system that would convert verbal statements of logic into purely mathematical data.

Binary systems that predate Leibniz also existed in the ancient world. An example is the Chinese binary system of the I Ching, where the divination text is based on the duality of yin and yang. In Asia and Africa, slit drums with binary tones were used to encode messages. The Indian scholar Pingala (circa 5th century BC) developed a binary system for describing prosody in his work Chandahsutra.

The inhabitants of the island of Mangareva in French Polynesia used a hybrid binary-decimal system before 1450. In the 11th century, the scholar and philosopher Shao Yong developed a method of organizing the hexagrams that corresponds to the sequence 0 to 63 represented in binary format, with yin as 0 and yang as 1. This order is also a lexicographic order over blocks of elements chosen from a two-element set.

Modern times

In 1605, Francis Bacon discussed a system in which the letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as barely noticeable variations of typeface in any arbitrary text. It is important to note that it was Francis Bacon who supplemented the general theory of binary coding with the observation that this method can be used with any objects.

Another mathematician and philosopher, George Boole, published a paper in 1847 called "The Mathematical Analysis of Logic", which described an algebraic system of logic known today as Boolean algebra. The system was based on a binary approach consisting of three basic operations: AND, OR and NOT. It did not become practical until an MIT graduate student named Claude Shannon noticed that the Boolean algebra he was studying resembled electrical circuits.

Shannon wrote a dissertation in 1937 that made important findings. Shannon's thesis became the starting point for the use of binary code in practical applications such as computers and electrical circuits.

Other forms of binary code

Bitstring is not the only type of binary code. A binary system in general is any system that allows only two options, such as a switch in an electronic system or a simple true or false test.

Braille is a type of binary code widely used by blind people to read and write by touch, named after its creator Louis Braille. This system consists of grids of six points each, three per column, in which each point has two states: raised or recessed. Different combinations of dots can represent all letters, numbers, and punctuation marks.

American Standard Code for Information Interchange (ASCII) uses a 7-bit binary code to represent text and other characters in computers, communications equipment, and other devices. Each letter or symbol is assigned a number from 0 to 127.

Binary-coded decimal, or BCD, is a binary-encoded representation of integer values that uses a four-bit group (nibble) to encode each decimal digit. Four binary bits can encode up to 16 different values.

In BCD-encoded numbers, only the first ten values of each nibble are valid, and they encode the decimal digits from zero to nine. The remaining six values are invalid and may cause either a machine exception or unspecified behavior, depending on how the computer implements BCD arithmetic.

BCD arithmetic is sometimes preferred over floating point number formats in commercial and financial applications where complex number rounding behavior is undesirable.
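A minimal illustration of BCD packing in Python (the helper to_bcd is my own name, not a standard function):

```python
# Binary-coded decimal: each decimal digit gets its own four-bit group,
# and only the first ten of the 16 possible nibbles are used.
def to_bcd(number):
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(2024))   # '0010 0000 0010 0100'
print(to_bcd(95))     # '1001 0101'
```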

Application

Most modern computers use a binary code program for instructions and data. CDs, DVDs, and Blu-ray Discs represent audio and video in binary form. Telephone calls are carried digitally in long-distance and mobile telephone networks using pulse code modulation and in voice over IP networks.

Binary code is a form of recording information as ones and zeros. It is a positional number system with a base of 2. Today, binary code (the table presented a little below contains some examples of writing numbers) is used in all digital devices without exception. Its popularity is explained by the high reliability and simplicity of this form of recording. Binary arithmetic is very simple, and accordingly it is easy to implement at the hardware level. Digital components (or, as they are also called, logic components) are very reliable, since they operate in only two states: logical one (current present) and logical zero (no current). In this they compare favorably with analog components, whose operation is based on transient processes.

How is binary notation composed?

Let's figure out how such a notation is formed. One bit of binary code can contain only two states: zero and one (0 and 1). With two bits, it becomes possible to write four values: 00, 01, 10, 11. A three-bit record contains eight states: 000, 001 ... 110, 111. As a result, we find that the length of the binary code depends on the number of bits. This relationship can be written with the following formula: N = 2^m, where m is the number of digits and N is the number of combinations.

Types of binary codes

In microprocessors, such codes are used to record various kinds of processed information. The bit width of a binary code can significantly exceed the width of the processor's registers and built-in memory cells. In such cases, a long number occupies several memory locations and is processed by several instructions. All the memory sectors allocated to such a multi-byte binary code are then treated as a single number.

Depending on the need to provide this or that information, the following types of keys are distinguished:

  • unsigned codes;
  • direct (sign-magnitude) signed integer codes;
  • signed reverse (one's complement) codes;
  • signed two's complement codes;
  • Gray code;
  • Gray express code;
  • fractional codes.

Let's take a closer look at each of them.

Unsigned binary code

Let's figure out what this type of recording is. In unsigned integer codes, each digit (binary place) represents a power of two. The smallest number that can be written in this form is zero, and the maximum is given by the formula M = 2^n − 1, where n is the number of bits. These two numbers completely define the range of numbers that such a binary code can express. Let's look at the capabilities of this form of recording. With an unsigned code of eight bits, the range of possible numbers is from 0 to 255. A sixteen-bit code gives a range from 0 to 65535. In eight-bit processors, two adjacent memory locations are used to store and handle such numbers, and special instructions provide for working with them.

Direct integer signed codes

In this type of binary code, the most significant bit is used to record the sign of the number: zero corresponds to plus and one to minus. As a result of introducing this digit, the range of encoded numbers shifts toward the negative side. An eight-bit signed integer code can record numbers in the range from -127 to +127, a sixteen-bit one in the range from -32767 to +32767. Eight-bit microprocessors use two adjacent sectors to store such codes.

The disadvantage of this form of recording is that the sign bit and the digit bits of the code must be processed separately. The algorithms of programs working with such codes turn out to be complex: to change or extract the sign bit, masking mechanisms are needed, which sharply increases the size of the software and reduces its performance. To eliminate this drawback, a different type of code was introduced - the reverse (one's complement) binary code.

Signed reverse code

This form of recording differs from direct codes only in that a negative number is obtained by inverting all the bits of the code. The digit and sign bits are then treated identically, which significantly simplifies the algorithms for working with this type of code. However, the reverse code requires a special algorithm to recognize the sign bit, compute the absolute value of the number, and restore the sign of the result. Moreover, in both the reverse and direct codes, two representations are used for zero, even though this value has neither a positive nor a negative sign.

Signed two's complement binary number

This type of record does not have the disadvantages of the previous codes. Such codes allow direct summation of both positive and negative numbers without any analysis of the sign bit. This is possible because complement numbers form a natural ring of symbols rather than artificial constructions such as the direct and reverse codes. Moreover, a two's complement value is extremely easy to compute: it is enough to add one to the reverse code. With a signed code of this type consisting of eight digits, the range of possible numbers is from -128 to +127; a sixteen-bit code has a range from -32768 to +32767. Eight-bit processors also use two adjacent sectors to store such numbers.

Two's complement binary code is also interesting because of an effect called sign propagation (sign extension). Let's figure out what this means. When converting a single-byte value into a double-byte one, it is enough to assign the value of the sign bit of the low byte to every bit of the high byte. In other words, the most significant bits can simply be filled with copies of the sign bit, and the value of the number does not change at all.
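A small Python sketch of two's complement representation and sign extension (the helper twos_complement is illustrative, not a standard library function):

```python
# Widening an 8-bit two's complement value to 16 bits just copies the sign bit
# into the new high byte; the value of the number does not change.
def twos_complement(value, bits):
    """Bit pattern of `value` in two's complement with the given width."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(-7, 8))    # '11111001'
print(twos_complement(-7, 16))   # '1111111111111001' - high byte is all sign bits
print(twos_complement(7, 16))    # '0000000000000111' - high byte is all zeros
```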

Gray code

This form of recording is essentially a one-step code: in the transition from one value to the next, only one bit of information changes. An error in reading the data then leads only to a transition from one position to a neighboring one with a slight time shift, and obtaining a completely wrong angular position is excluded. Another advantage of this code is its ability to mirror information: for example, by inverting the most significant bit, you can simply reverse the counting direction. This is done via the Complement control input, so the output value can be made either increasing or decreasing for one and the same physical direction of axis rotation. Since the information recorded in the Gray code is purely encoded in nature and does not carry real numerical data, before further work it must first be converted into the usual binary form of recording. This is done by a special converter - a Gray-to-binary decoder. This device is easily implemented on elementary logic elements, both in hardware and in software.
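The conversion in both directions is simple enough to sketch in a few lines of Python (an illustration using the standard XOR formulas):

```python
# Binary <-> Gray code: adjacent values differ in exactly one bit ("one-step" property).
def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:          # fold the shifted value back in with XOR
        n ^= g
        g >>= 1
    return n

for n in range(8):
    g = binary_to_gray(n)
    print(f"{n}: {format(g, '03b')}", end="  ")
    assert gray_to_binary(g) == n
print()
```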

Gray Express Code

Gray's standard one-step code is suitable for resolutions that are expressed as powers of two. In cases where other resolutions need to be implemented, only the middle section of this form of recording is cut out and used; this preserves the one-step property of the code. However, in such a code the beginning of the numerical range is not zero: it is shifted by a specified value. During data processing, half the difference between the initial and the reduced resolution is subtracted from the generated pulses.

Representation of a fractional number in fixed-point binary key

In practice one has to operate not only with whole numbers but also with fractions. Such numbers can be written using direct, reverse, and complement codes. The principle of constructing these codes is the same as for integers. Until now we assumed that the binary point is to the right of the least significant digit, but that need not be so: it can be placed to the left of the most significant digit (in which case the variable can hold only fractional numbers), or in the middle of the variable (in which case mixed values can be written).

Binary floating point representation

This form is used to write very large numbers or, conversely, very small ones. Examples include interstellar distances or the sizes of atoms and electrons. Calculating such values with a fixed-point notation would require binary codes of enormous width, yet we do not need to track cosmic distances with millimeter precision, so the fixed-point form is ineffective here. An algebraic form is used instead: the number is written as a mantissa multiplied by ten raised to a power that reflects the desired order of the number. The mantissa should not be greater than one, and a zero should not be written after the decimal point.
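As a rough illustration of the idea: Python's math.frexp splits a number into a mantissa and an exponent, though it uses base 2 rather than the base 10 of the description above:

```python
# Mantissa-and-exponent decomposition (base 2 in Python's case).
import math

for x in (0.000000123, 98765432.1):
    mantissa, exponent = math.frexp(x)          # x == mantissa * 2**exponent
    print(f"{x} = {mantissa} * 2**{exponent}")
    assert math.isclose(x, mantissa * 2**exponent)
```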

Binary calculus is believed to have been invented in the early 18th century by the German mathematician Gottfried Leibniz. However, as scientists have recently discovered, this type of arithmetic was used long before that on the Polynesian island of Mangareva. Although colonization almost completely destroyed the original number systems, scientists have reconstructed complex binary and decimal types of counting. In addition, the cognitive scientist Núñez claims that binary coding was used in ancient China as early as the 9th century BC. Other ancient civilizations, such as the Maya, also used complex combinations of decimal and binary systems to track time intervals and astronomical phenomena.

08.06.2018

Blog of Dmitry Vassiyarov.

Binary code - where and how is it used?

Today I am especially glad to meet you, my dear readers, because I feel like a teacher who, at the very first lesson, begins to introduce the class to letters and numbers. And since we live in a world of digital technology, I will tell you what binary code is, which is their basis.

Let's start with the terminology and figure out what "binary" means. For clarity, let's turn to our usual number system, which is called "decimal". That is, we use 10 digits, which make it possible to conveniently operate with various numbers and keep the corresponding records.

Following this logic, the binary system provides for the use of only two characters. In our case these are simply "0" (zero) and "1" (one). And here I want to warn you that, hypothetically, other symbols could stand in their place, but it is precisely these values, indicating the absence of a signal (0, empty) and its presence (1, a "stick"), that will help us understand the structure of binary code further on.

Why is binary code needed?

Before the advent of computers, various automatic systems were used, the operating principle of which was based on receiving a signal. The sensor is triggered, the circuit is closed and a certain device is turned on. No current in the signal circuit - no operation. It was electronic devices that made it possible to achieve progress in processing information represented by the presence or absence of voltage in a circuit.

Their further complication led to the emergence of the first processors, which also did their job by processing a signal consisting of pulses alternating in a certain way. We will not delve into program-level details now; what matters for us is this: electronic devices turned out to be able to distinguish a given sequence of incoming signals. Of course, such a conditional combination can be described this way: "there is a signal"; "no signal"; "there is a signal"; "there is a signal". You can even simplify the notation: "yes"; "no"; "yes"; "yes".

But it is much easier to denote the presence of a signal with a unit “1”, and its absence with a zero “0”. Then we can use a simple and concise binary code instead: 1011.

Of course, processor technology has stepped far forward and now chips are able to perceive not just a sequence of signals, but entire programs written with specific commands consisting of individual characters.

But to record them, the same binary code is used, consisting of zeros and ones corresponding to the presence or absence of a signal. Whether the signal is there or not does not matter: for the chip, either option is a single piece of information, which is called a "bit" (the official unit of measurement).

A symbol can be conventionally encoded as a sequence of several characters. Two signals (or their absence) can describe only four options: 00; 01; 10; 11. This encoding method is called two-bit. But the code can also be:

  • Four-bit (as in the example in the paragraph above 1011) allows you to write 2^4 = 16 symbol combinations;
  • Eight-bit (for example: 0101 0011; 0111 0001). At one time it was of greatest interest to programming because it covered 2^8 = 256 values. This made it possible to describe all decimal digits, the Latin alphabet and special characters;
  • Sixteen-bit (1100 1001 0110 1010) and higher. But records with such a length are already for modern, more complex tasks. Modern processors use 32 and 64-bit architecture;

Frankly, there is no single official explanation, but it so happened that it was the combination of eight characters that became the standard unit of stored information, called a "byte". This can apply even to a single letter written in 8-bit binary code. So, my dear friends, please remember (if anyone didn't know):

8 bits = 1 byte.

That's how it is. Although, nominally, a character written as a 2-bit or 32-bit value could also be called a byte. By the way, thanks to binary code we can estimate the size of files measured in bytes and the speed of information transmission over the Internet (bits per second).

Binary encoding in action

To standardize the recording of information for computers, several coding systems have been developed, one of which - ASCII, based on 8-bit recording - has become widespread. The values in it are distributed in a special way:

  • the first 32 characters, codes 0 through 31 (00000000 – 00011111), are control characters; they serve for service commands, output to a printer or screen, sound signals, and text formatting;
  • codes 32 to 127 (00100000 – 01111111) are the Latin alphabet, digits, punctuation marks, and auxiliary symbols;
  • the rest, up to 255 (10000000 – 11111111), is the alternative part of the table, used for special tasks and for displaying national alphabets;

The decoding of these values is shown in the code table.

If you think that “0” and “1” are located in a chaotic order, then you are deeply mistaken. Using any number as an example, I will show you a pattern and teach you how to read numbers written in binary code. But for this we will accept some conventions:

  • We will read a byte of 8 characters from right to left;
  • If in ordinary numbers we use the places of ones, tens and hundreds, then here each bit (listed from the most significant to the least significant) represents a power of two: 128-64-32-16-8-4-2-1;
  • Now we look at the binary code of a number, for example 00011011. Wherever there is a "1" in the corresponding position, we take the value of that bit and add them up in the usual way: 0+0+0+16+8+0+2+1 = 27 (see the quick check below). You can verify the correctness of this method by looking at the code table.
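The same worked example, checked in Python:

```python
# Reading the byte 00011011 with place values 128-64-32-16-8-4-2-1.
bits = "00011011"
value = sum(int(b) * 2**power for power, b in enumerate(reversed(bits)))
print(value)           # 27
print(int(bits, 2))    # 27 - the built-in conversion agrees
```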

Now, my inquisitive friends, you not only know what binary code is, but also know how to convert the information encrypted by it.

Language understandable to modern technology

Of course, the algorithm for reading binary code by processor devices is much more complicated. But you can use it to write down anything you want:

  • Text information with formatting options;
  • Numbers and any operations with them;
  • Graphic and video images;
  • Sounds, including those beyond our hearing range;

In addition, due to the simplicity of the “presentation”, various ways of recording binary information are possible:

  • by changing a magnetic field;

The advantages of binary coding are complemented by almost unlimited possibilities for transmitting information over any distance. This is the method of communication used with spacecraft and artificial satellites.

So, today the binary number system is a language understood by most of the electronic devices we use. And, most interestingly, no alternative to it is foreseen for now.

I think the information I have presented will be quite enough for you to get started. And then, if the need arises, everyone can dig deeper into this topic on their own.

I will say goodbye, and after a short break I will prepare a new article for my blog on some interesting topic.

It's better if you suggest it to me yourself ;)

See you soon.
