
What kind of processing can be attributed to encryption. The simplest methods of encrypting text

Data encryption is extremely important for protecting privacy. In this article, I will discuss the different types of encryption and the encryption methods used to protect data today.

Did you know?
Back in the days of the Roman Empire, encryption was used by Julius Caesar to make letters and messages unreadable to the enemy. It played an important role as a military tactic, especially during wars.

As the possibilities of the Internet continue to grow, more and more of our business is conducted online. The most important of these activities are Internet banking, online payments, email, and the exchange of private and business messages, all of which involve the exchange of confidential data and information. If this data falls into the wrong hands, it can harm not only the individual user but the entire online business system.

To prevent this from happening, a number of network security measures have been adopted to protect the transmission of personal data. Chief among these are the processes of encrypting and decrypting data, known as cryptography. There are three main encryption methods used in most systems today: hashing, symmetric encryption, and asymmetric encryption. In the following sections, I will cover each of these encryption types in more detail.

Encryption types

Symmetric encryption

With symmetric encryption, normal human-readable data, known as plain text, is scrambled so that it becomes unreadable. This scrambling of data is done with a key. Once the data is encrypted, it can be safely transferred to the receiver. On the recipient's side, the encrypted data is decoded using the same key that was used for encryption.

Thus, it is clear that the key is the most important part of symmetric encryption. It must be hidden from outsiders, since anyone who has access to it can decrypt the private data. This is why this type of encryption is also known as "secret key" encryption.

In modern systems, the key is usually a string of data derived from a strong password or from a completely random source. It is fed into symmetric encryption software, which uses it to scramble the input data. The scrambling is achieved using a symmetric encryption algorithm such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), or the International Data Encryption Algorithm (IDEA).
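As a concrete illustration, here is a minimal Python sketch of symmetric encryption, assuming the third-party PyCryptodome library is available; the key, mode, and message are purely illustrative choices.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                # 256-bit secret key shared by sender and receiver
plaintext = b"Transfer 100 EUR to account 42"

cipher = AES.new(key, AES.MODE_EAX)       # EAX mode also authenticates the data
ciphertext, tag = cipher.encrypt_and_digest(plaintext)

# The receiver decrypts with the SAME key (plus the transmitted nonce and tag)
decipher = AES.new(key, AES.MODE_EAX, nonce=cipher.nonce)
assert decipher.decrypt_and_verify(ciphertext, tag) == plaintext

Anyone holding the key can run the second half of this sketch, which is exactly why the key must stay secret.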

Restrictions

The weakest link in this type of encryption is the security of the key, both in terms of its storage and its transmission to the authenticated user. If a hacker is able to get hold of this key, he can easily decrypt the encrypted data, defeating the whole point of encryption.

Another drawback stems from the fact that software that processes data cannot work with encrypted data. Therefore, to be able to use such software, the data must first be decrypted. If the software itself is compromised, an attacker can easily obtain the data.

Asymmetric encryption

An asymmetric encryption key works in a similar way to a symmetric key, in that a key is used to encrypt transmitted messages. However, instead of using the same key, a completely different key is used to decrypt the message.

The key used for encryption is available to anyone and everyone on the network. As such, it is known as the "public" key. On the other hand, the key used for decryption is kept secret and intended to be used privately by the user himself. Hence, it is known as the "private" key. Asymmetric encryption is also known as public key encryption.

Since, with this method, the secret key required to decrypt the message does not have to be transmitted every time, and is usually known only to the user (receiver), the likelihood that a hacker will be able to decrypt the message is much lower.

Diffie-Hellman and RSA are examples of algorithms that use public key encryption.
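As a rough illustration of the Diffie-Hellman idea, the toy Python sketch below reuses the article's characters Sasha and Natasha and deliberately small, insecure parameters; real deployments use 2048-bit groups or elliptic curves.

# Toy Diffie-Hellman key agreement (illustrative parameters only)
p = 2 ** 61 - 1               # a Mersenne prime standing in for the public modulus
g = 3                         # public generator

sasha_secret = 123456789      # never transmitted
natasha_secret = 987654321    # never transmitted

A = pow(g, sasha_secret, p)   # Sasha publishes A
B = pow(g, natasha_secret, p) # Natasha publishes B

# Each side combines its own secret with the other's public value
shared_sasha = pow(B, sasha_secret, p)
shared_natasha = pow(A, natasha_secret, p)
assert shared_sasha == shared_natasha    # both now hold the same shared secret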

Restrictions

Many hackers use man-in-the-middle attacks to bypass this type of encryption. In asymmetric encryption, you are given a public key that is used to exchange data securely with another person or service. However, hackers use network trickery to make you communicate with them while leading you to believe that you are on a secure line.

To better understand this type of attack, consider two communicating parties, Sasha and Natasha, and a hacker, Sergey, who intends to intercept their conversation. First, Sasha sends a message over the network intended for Natasha, asking for her public key. Sergey intercepts this message, obtains the public key associated with it, and uses it to encrypt and send Natasha a false message containing his own public key instead of Sasha's.

Natasha, thinking that this message came from Sasha, now encrypts her reply with Sergey's public key and sends it back. The message is again intercepted by Sergey, decrypted, changed (if desired), encrypted once more with the public key that Sasha originally sent, and forwarded to Sasha.

Thus, when Sasha receives this message, he is led to believe that it came from Natasha and remains unaware of the foul play.

Hashing

The hashing technique uses an algorithm known as a hash function to generate a special string from the given data, known as the hash. This hash has the following properties:

  • The same data always produces the same hash.
  • It is impossible to recover the original data from the hash alone.
  • It is impractical to try different combinations of inputs in an attempt to generate the same hash.

Thus, the main difference between hashing and the other two forms of encryption is that once the data is hashed, it cannot be recovered in its original form (decrypted). This ensures that even if a hacker gets his hands on the hash, it will be useless to him, since he will not be able to recover the contents of the message.

Message Digest 5 (MD5) and Secure Hashing Algorithm (SHA) are two widely used hashing algorithms.
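A quick sketch with Python's standard hashlib module shows the properties listed above; the input strings are arbitrary.

import hashlib

message = b"my secret message"
print(hashlib.md5(message).hexdigest())      # 128-bit MD5 digest (legacy, collisions known)
print(hashlib.sha256(message).hexdigest())   # 256-bit SHA-256 digest

# The same input always gives the same hash; changing one character changes it completely
print(hashlib.sha256(b"my secret messagE").hexdigest())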

Restrictions

As mentioned earlier, it is nearly impossible to recover data from a given hash. However, this is only true if strong hashing is implemented. If the hashing technique is implemented weakly, a persistent hacker with sufficient resources and brute-force attacks can find data that matches the hash.

Combination of encryption methods

As discussed above, each of these three encryption methods suffers from certain disadvantages. However, when these methods are used in combination, they form a reliable and highly effective encryption system.

Most often, the secret (private) key and public key techniques are combined and used together. The secret key method allows fast encryption and decryption of the data, while the public key method offers a more secure and convenient way to transfer the secret key. This combination of techniques is known as a "digital envelope". The PGP email encryption program is based on the digital envelope technique.
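A rough sketch of the digital envelope idea in Python, assuming the PyCryptodome library; the key sizes and message are illustrative, and this shows only the general pattern, not PGP itself.

from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

recipient_key = RSA.generate(2048)       # in practice only the public half is shared
message = b"confidential report"

session_key = get_random_bytes(16)       # one-time symmetric key

# Fast symmetric encryption of the bulk data
aes = AES.new(session_key, AES.MODE_EAX)
ciphertext, tag = aes.encrypt_and_digest(message)

# The small session key travels inside the "envelope", under the recipient's public key
wrapped_key = PKCS1_OAEP.new(recipient_key.publickey()).encrypt(session_key)

# Receiver: unwrap the session key with the private key, then decrypt the data
unwrapped = PKCS1_OAEP.new(recipient_key).decrypt(wrapped_key)
recovered = AES.new(unwrapped, AES.MODE_EAX, nonce=aes.nonce).decrypt_and_verify(ciphertext, tag)
assert recovered == message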

Hashing is used as a means of verifying passwords. If a system stores the hash of a password instead of the password itself, it is more secure, since even if a hacker gets his hands on this hash, he will not be able to read it. During verification, the system hashes the incoming password and checks whether the result matches what is stored. Thus, the actual password is visible only in the brief moments when it needs to be changed or checked, which significantly reduces the likelihood of it falling into the wrong hands.
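A minimal sketch of this idea using only Python's standard library; the salt size, iteration count, and passwords are illustrative choices, not a prescription.

import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest            # only these two values are stored

def verify_password(password, salt, stored):
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False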

Hashing is also used to authenticate data with a secret key. A hash is generated from the data and this key. Therefore, only the data and the hash are visible, while the key itself is never transmitted. This way, if changes are made to either the data or the hash, they will be easily detected.
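This keyed-hash idea is what the standard HMAC construction implements; here is a short sketch with Python's hmac module (key and data are illustrative).

import hashlib, hmac

key = b"shared secret key"
data = b"amount=100;currency=EUR"

tag = hmac.new(key, data, hashlib.sha256).hexdigest()   # sent along with the data

# The receiver recomputes the tag with the same key and compares in constant time
recomputed = hmac.new(key, data, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, recomputed))             # True: data is unchanged

tampered = hmac.new(key, b"amount=999;currency=EUR", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, tampered))               # False: modification detected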

In conclusion, these techniques can be used to efficiently encode data into an unreadable format that ensures it remains secure. Most modern systems typically use a combination of these encryption techniques together with strong implementations of the algorithms to improve security. In addition to security, these systems also provide many additional benefits, such as verifying a user's identity and ensuring that the received data cannot be tampered with.

Sergey Panasenko,
head of the software development department of the company "Ankad",
[email protected]

Basic concepts

The process of converting plain data into encrypted data and back is commonly called encryption, and the two components of this process are called, respectively, encryption and decryption. Mathematically, this transformation is represented by the following relations describing the operations performed on the original information:

C = Ek1(M)

M' = Dk2(C),

where M (message) is the plain information (often referred to in the information security literature as the "original text" or plaintext);
C (cipher text) is the ciphertext (or cryptogram) obtained as a result of encryption;
E (encryption) is the encryption function, which performs the cryptographic transformation of the original text;
k1 (key) is a parameter of the function E, called the encryption key;
M' is the information obtained as a result of decryption;
D (decryption) is the decryption function, which performs the cryptographic transformation inverse to encryption on the ciphertext;
k2 is the key used to decrypt the information.

The concept of a "key" is defined in the GOST 28147-89 standard (a symmetric encryption algorithm) as "a specific secret state of certain parameters of the cryptographic transformation algorithm, ensuring the selection of one transformation out of all the transformations possible for this algorithm." In other words, the key is a unique element with which you can change the result of the encryption algorithm: the same source text will be encrypted differently when different keys are used.

In order for the result of decryption to coincide with the original message (that is, for M' = M), two conditions must be satisfied simultaneously. First, the decryption function D must match the encryption function E. Second, the decryption key k2 must match the encryption key k1.

If a cryptographically strong encryption algorithm was used, then without the correct key k2 it is impossible to obtain M' = M. Cryptographic strength is the main characteristic of encryption algorithms and indicates, first of all, the degree of difficulty of recovering the original text from the ciphertext without the key k2.

Encryption algorithms can be divided into two categories: symmetric and asymmetric. For the former, the relationship between the encryption and decryption keys is defined as k1 = k2 = k (that is, the functions E and D use the same encryption key). With asymmetric encryption, the encryption key k1 is calculated from the key k2 in such a way that the reverse transformation is impossible, for example, by the formula k1 = a^k2 mod p (a and p are parameters of the algorithm used).

Symmetric encryption

Symmetric encryption algorithms date back to antiquity: it was this method of hiding information that was used by the Roman emperor Gaius Julius Caesar in the 1st century BC, and the algorithm he invented is known as "Caesar's cryptosystem".

Currently, the best-known symmetric encryption algorithm is DES (Data Encryption Standard), developed in 1977. Until recently it was the "US standard", since the government of that country recommended using it to implement various data encryption systems. Although DES was originally planned to be used for no more than 10-15 years, attempts to replace it began only in 1997.

We will not consider DES in detail (almost all the books in the list of additional materials contain a detailed description of it), but will turn to more modern encryption algorithms. It should only be noted that the main reason for changing the encryption standard is its relatively weak cryptographic strength, caused by the fact that the DES key length is only 56 significant bits. It is known that any cryptographically strong algorithm can be broken by trying all possible encryption keys (the so-called brute-force attack). It is easy to calculate that a cluster of 1 million processors, each testing 1 million keys per second, would check all 2^56 DES keys in almost 20 hours. Since such computing power is quite realistic by today's standards, it is clear that a 56-bit key is too short and that the DES algorithm must be replaced with a stronger one.
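The 20-hour figure is simple arithmetic, reproduced here in a few lines of Python for the cluster described in the text:

keys = 2 ** 56                    # size of the DES key space
rate = 10 ** 6 * 10 ** 6          # 1 million processors x 1 million keys per second
hours = keys / rate / 3600
print(round(hours, 1))            # about 20.0 hours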

Today, two modern cryptographically strong encryption algorithms are increasingly used: the domestic standard GOST 28147-89 and the new US crypto standard - AES (Advanced Encryption Standard).

Standard GOST 28147-89

The algorithm defined by GOST 28147-89 (Fig. 1) has an encryption key length of 256 bits. It encrypts information in blocks of 64 bits (such algorithms are called block algorithms), which are then split into two 32-bit sub-blocks (N1 and N2). Sub-block N1 is processed in a certain way, after which its value is added to the value of sub-block N2 (the addition is performed modulo 2, i.e. the logical XOR operation, "exclusive or"), and then the sub-blocks are swapped. This transformation is performed a certain number of times ("rounds"): 16 or 32, depending on the mode of operation of the algorithm. Two operations are performed in each round.

The first is key mixing. The contents of sub-block N1 are added modulo 2^32 with a 32-bit part of the key Kx. The full encryption key is represented as a concatenation of 32-bit subkeys: K0, K1, K2, K3, K4, K5, K6, K7. One of these subkeys is used in the encryption process, depending on the round number and the mode of operation of the algorithm.

The second operation is table substitution. After the key is applied, sub-block N1 is divided into 8 parts of 4 bits each, and the value of each part is replaced according to the substitution table for that part of the sub-block. The sub-block is then cyclically shifted to the left by 11 bits.

Table substitutions (substitution boxes, or S-boxes) are often used in modern encryption algorithms, so it is worth explaining how this operation is organized. The output values of the block are written into a table. A data block of a certain size (in our case, 4 bits) has a numerical representation that determines the index of the output value. For example, if the S-box looks like 4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1 and the 4-bit block "0100" (value 4) arrives at the input, then, according to the table, the output value will be 15, that is, "1111" (0 maps to 4, 1 to 11, 2 to 2, and so on).
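A few lines of Python reproduce this lookup using the example S-box above; the function name is an arbitrary choice for illustration.

S_BOX = [4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1]

def substitute(nibble):
    # the 4-bit input value is used as an index; the table entry is the output
    return S_BOX[nibble & 0xF]

print(substitute(0b0100))   # input 4 -> output 15 (binary 1111), as in the text
print(substitute(0))        # 0 -> 4
print(substitute(1))        # 1 -> 11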

The algorithm defined by GOST 28147-89 provides for four modes of operation: simple substitution, gamma, gamma with feedback, and generation of an imitation insert (a cryptographic message authentication code). They all use the same encryption transformation described above, but since the purposes of the modes differ, the transformation is carried out differently in each of them.

In simple substitution mode, the 32 rounds described above are performed to encrypt each 64-bit block of information. The 32-bit subkeys are used in the following sequence:

K0, K1, K2, K3, K4, K5, K6, K7, K0, K1, etc. - in rounds 1 to 24;

K7, K6, K5, K4, K3, K2, K1, K0 - in rounds 25 to 32.

Decryption in this mode is carried out in exactly the same way, but with a slightly different sequence of using subkeys:

K0, K1, K2, K3, K4, K5, K6, K7 - in rounds 1 to 8;

K7, K6, K5, K4, K3, K2, K1, K0, K7, K6, etc. - in rounds 9 to 32.

All blocks are encrypted independently of one another, that is, the encryption result of each block depends only on its contents (the corresponding block of the source text). If there are several identical blocks of plain text, the corresponding ciphertext blocks will also be identical, which gives additional useful information to a cryptanalyst trying to break the cipher. Therefore, this mode is used mainly to encrypt the encryption keys themselves (multi-key schemes are very often implemented, in which, for a number of reasons, keys are encrypted with one another). Two other modes of operation, gamma and gamma with feedback, are intended for encrypting the information itself.

In gamma mode, each block of plain text is added bitwise modulo 2 to a 64-bit block of the cipher gamma. The cipher gamma is a special sequence produced by certain operations on registers N1 and N2 (see Fig. 1).

1. The initial filling of registers N1 and N2 is written to them: a 64-bit value called the synchronization message (initialization vector).

2. The contents of registers N1 and N2 (in this case, the synchronization message) are encrypted in simple substitution mode.

3. The contents of register N1 are added modulo (2^32 - 1) to the constant C1 = 2^24 + 2^16 + 2^8 + 2^4, and the result is written to register N1.

4. The contents of register N2 are added modulo 2^32 to the constant C2 = 2^24 + 2^16 + 2^8 + 1, and the result is written to register N2.

5. The contents of registers N1 and N2 are output as a 64-bit block of the cipher gamma (in this case, N1 and N2 form the first gamma block).

If the next gamma block is needed (that is, encryption or decryption must continue), the process returns to step 2.

For decryption, the gamma is generated in the same way, and the XOR operation is again applied to the ciphertext and gamma bits. Since this operation is reversible, if the gamma has been generated correctly, the original plain text is recovered.
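The reversibility rests on nothing more than the XOR operation being its own inverse, as the sketch below shows; random bytes stand in for the real GOST gamma generator here.

import os

plaintext = b"attack at dawn"
gamma = os.urandom(len(plaintext))        # stand-in for the cipher gamma

ciphertext = bytes(p ^ g for p, g in zip(plaintext, gamma))
recovered  = bytes(c ^ g for c, g in zip(ciphertext, gamma))
assert recovered == plaintext             # XOR-ing with the same gamma twice restores the text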

Encryption and decryption in gamma mode

To generate the cipher gamma required for decryption, the user decrypting the cryptogram must have the same key and the same synchronization message value that were used to encrypt the information. Otherwise, it will not be possible to recover the original text from the encrypted one.

In most implementations of GOST 28147-89, the synchronization message is not secret, but there are systems where it is as much a secret element as the encryption key. In such systems, the effective key length of the algorithm (256 bits) is increased by the 64 bits of the secret synchronization message, which can also be regarded as a key element.

In gamma-with-feedback mode, starting from the second block, registers N1 and N2 are filled not with the previous gamma block but with the result of encrypting the previous plaintext block (Fig. 2). The first block in this mode is generated exactly as in the previous mode.

Fig. 2. Generation of the cipher gamma in gamma-with-feedback mode.

Before considering the imitation insert generation mode, the concept itself should be defined. An imitation insert is a cryptographic checksum computed using an encryption key and intended to verify the integrity of messages. When generating the imitation insert, the following operations are performed: the first 64-bit block of the data array for which the imitation insert is computed is written to registers N1 and N2 and encrypted in a reduced simple substitution mode (the first 16 rounds out of 32 are performed). The result is added modulo 2 to the next block of data, and the sum is stored in N1 and N2.

The cycle repeats until the last block of data. The resulting 64-bit contents of registers N1 and N2, or a part of it, is called the imitation insert. Its size is chosen according to the required assurance of message integrity: if the length of the imitation insert is r bits, the probability that a change to the message will go unnoticed is 2^-r. Most often a 32-bit imitation insert is used, that is, half of the register contents. This is sufficient because, like any checksum, the imitation insert is intended primarily to protect against accidental corruption of information. To protect against deliberate data modification, other cryptographic techniques are used, primarily an electronic digital signature.

When exchanging information, the imitation insert serves as an additional means of control. It is computed for the plain text when information is encrypted and is sent along with the ciphertext. After decryption, a new value of the imitation insert is computed and compared with the transmitted one. If the values do not match, the ciphertext was corrupted in transmission or the wrong keys were used for decryption. The imitation insert is especially useful for checking the correct decryption of key information when multi-key schemes are used.

The GOST 28147-89 algorithm is considered very strong: at present, no methods for breaking it have been proposed that are more effective than the brute-force method mentioned above. Its high security is achieved primarily through the large key length of 256 bits. When a secret synchronization message is used, the effective key length increases to 320 bits, and keeping the substitution table secret adds further bits. In addition, cryptographic strength depends on the number of transformation rounds, which, according to GOST 28147-89, must be 32 (the full diffusion effect of the input data is achieved after 8 rounds).

AES standard

Unlike the GOST 28147-89 algorithm, which long remained secret, the American AES encryption standard, designed to replace DES, was selected through an open competition in which all interested organizations and individuals could study and comment on the candidate algorithms.

A competition to replace DES was announced in 1997 by the US National Institute of Standards and Technology (NIST - National Institute of Standards and Technology). Fifteen candidate algorithms were presented for the competition, developed both by organizations well-known in the field of cryptography (RSA Security, Counterpane, etc.) and by individuals. The results of the competition were announced in October 2000: the winner was the Rijndael algorithm, developed by two cryptographers from Belgium, Vincent Rijmen and Joan Daemen.

The Rijndael algorithm is unlike most well-known symmetric encryption algorithms, whose structure is called a "Feistel network" and which are, in this respect, similar to the Russian GOST 28147-89. The peculiarity of the Feistel network is that the input value is divided into two or more sub-blocks, some of which are processed in each round according to a certain rule and then superimposed on the unprocessed sub-blocks (see Fig. 1).

Unlike the domestic encryption standard, the Rijndael algorithm represents a data block as a two-dimensional byte array of size 4x4, 4x6, or 4x8 (several fixed sizes of the encrypted data block are allowed). All operations are performed on individual bytes of the array, as well as on its independent columns and rows.

The Rijndael algorithm performs four transformations: BS (ByteSub), a table substitution of each byte of the array (Fig. 3); SR (ShiftRow), a shift of the array rows (Fig. 4). In this operation the first row remains unchanged, while the others are cyclically shifted left, byte by byte, by a fixed number of bytes that depends on the array size. For example, for a 4x4 array, rows 2, 3, and 4 are shifted by 1, 2, and 3 bytes, respectively. Next comes MC (MixColumn), an operation on the independent columns of the array (Fig. 5), in which each column is multiplied, according to a certain rule, by a fixed matrix c(x). Finally, AK (AddRoundKey) adds the key: each bit of the array is added modulo 2 to the corresponding bit of the round key, which in turn is computed in a certain way from the encryption key (Fig. 6).
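As an illustration of just the SR step, the sketch below rotates the rows of a 4x4 byte array as described; it is not a full Rijndael implementation.

def shift_rows(state):
    # state is a list of 4 rows of 4 bytes; row i is rotated left by i positions
    return [row[i:] + row[:i] for i, row in enumerate(state)]

state = [
    [0x00, 0x01, 0x02, 0x03],
    [0x10, 0x11, 0x12, 0x13],
    [0x20, 0x21, 0x22, 0x23],
    [0x30, 0x31, 0x32, 0x33],
]
for row in shift_rows(state):
    print([hex(b) for b in row])
# Row 1 becomes 11 12 13 10, row 2 becomes 22 23 20 21, row 3 becomes 33 30 31 32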


Fig. 3. Operation BS.

Fig. 4. Operation SR.

Fig. 5. Operation MC.

The number of rounds of encryption (R) in the Rijndael algorithm is variable (10, 12 or 14 rounds) and depends on the size of the block and the encryption key (there are also several fixed sizes for the key).

Decryption is performed using the corresponding inverse operations. The table is inverted, and table substitution is performed using the table inverse to the one used for encryption. The inverse of SR is rotating the rows to the right instead of to the left. The inverse of MC is multiplication, by the same rules, by another matrix d(x) satisfying the condition c(x) * d(x) = 1. Adding the key, AK, is its own inverse, since it uses only the XOR operation. These inverse operations are applied during decryption in the order opposite to that used during encryption.

Rijndael became the new data encryption standard because of a number of advantages over the other algorithms. First of all, it provides high encryption speed on all platforms, in both software and hardware implementations. It offers incomparably better opportunities for parallelizing computations than the other algorithms submitted to the competition. In addition, its resource requirements are minimal, which is important when it is used in devices with limited computing capabilities.

The only disadvantage of the algorithm is its unconventional structure. The properties of algorithms based on the Feistel network have been well studied, whereas Rijndael may contain hidden vulnerabilities that will be discovered only some time after its widespread deployment.

Asymmetric encryption

As already noted, asymmetric encryption algorithms use two keys: k1 is the encryption, or public, key, and k2 is the decryption, or secret, key. The public key is calculated from the secret key: k1 = f(k2).

Asymmetric encryption algorithms are based on the use of one-way functions. By definition, a function y = f(x) is one-way if it is easy to compute for all possible values of x, while for most possible values of y it is quite difficult to find a value of x for which y = f(x).

An example of a one-way function is the multiplication of two large numbers: N = P * Q. In itself, such multiplication is a simple operation. However, the inverse function (the decomposition of N into two large factors), called factorization, is, by current estimates, a rather complex mathematical problem. For example, factoring an N of 664 bits with P ≈ Q would require about 10^23 operations, while inverting x in the modular exponentiation y = a^x mod p for known a, p, and y (with a and p of the same size) requires about 10^26 operations. The latter example is called the discrete logarithm problem (DLP), and such functions are often used in asymmetric encryption algorithms, as well as in electronic digital signature algorithms.
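The asymmetry is easy to feel in code: the Python sketch below computes the forward direction instantly and inverts it only by brute force, which works here solely because the modulus is tiny.

p, a = 104729, 5          # a small prime and base; real systems use moduli of hundreds of digits
x = 2020                  # the "secret" exponent

y = pow(a, x, p)          # forward direction: fast even for enormous numbers

# Naive inversion: brute-force search over all exponents
found = next(e for e in range(p) if pow(a, e, p) == y)
print(found, pow(a, found, p) == y)   # recovers an equivalent exponent, feasible only for tiny p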

Another important class of functions used in asymmetric encryption are one-way functions with a trapdoor. A function is said to be one-way with a trapdoor if it is one-way and, in addition, the inverse function x = f^-1(y) can be computed efficiently when the "trapdoor" is known (a certain secret value; in asymmetric encryption algorithms, the value of the secret key).

One-way functions with a trapdoor are used in the widely known asymmetric encryption algorithm RSA.

RSA Algorithm

Developed in 1978 by three authors (Rivest, Shamir, and Adleman), the algorithm got its name from the first letters of their surnames. Its reliability is based on the difficulty of factoring large numbers and of computing discrete logarithms. The main parameter of the RSA algorithm is the system modulus N, with respect to which all calculations in the system are carried out; N = P * Q (P and Q are secret, random, large primes, usually of the same size).

The secret key k2 is chosen at random and must satisfy the following conditions:

1 < k2 < F(N) and GCD(k2, F(N)) = 1,

where GCD is the greatest common divisor, i.e., k2 must be coprime with the value of the Euler function F(N); the latter is equal to the number of positive integers in the range from 1 to N that are coprime with N, and is calculated as F(N) = (P - 1) * (Q - 1).

The public key k1 is calculated from the relation (k2 * k1) mod F(N) = 1, using the generalized Euclidean algorithm (the algorithm for computing the greatest common divisor). A data block M is encrypted with the RSA algorithm as follows: C = M^k1 mod N. Note that since in a real RSA cryptosystem the number k1 is very large (currently its size can be up to 2048 bits), a direct computation of M^k1 is unrealistic. To obtain it, a combination of repeated squaring of M and multiplication of the intermediate results is used.

Inverting this function is infeasible for large sizes; in other words, it is impossible to find M from known C, N, and k1. However, with the secret key k2, M can be computed using simple transformations: M = C^k2 mod N. Obviously, in addition to the secret key itself, the secrecy of the parameters P and Q must be ensured. If an attacker obtains their values, he will be able to compute the secret key k2.
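A textbook-style worked example in Python, using the article's notation (P, Q, N, F(N), k1, k2) with deliberately tiny primes; real keys use primes of 1024 bits or more together with padding schemes.

P, Q = 61, 53
N = P * Q                      # system modulus, 3233
F = (P - 1) * (Q - 1)          # Euler's function F(N) = 3120

k2 = 2753                      # secret key, coprime with F(N)
k1 = pow(k2, -1, F)            # public key from (k2 * k1) mod F(N) = 1, here 17

M = 65                         # message block, M < N
C = pow(M, k1, N)              # encryption: C = M^k1 mod N
assert pow(C, k2, N) == M      # decryption: M = C^k2 mod N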

Which encryption is better?

The main disadvantage of symmetric encryption is the need to hand the keys over "from hand to hand". This disadvantage is serious, since it makes it impossible to use symmetric encryption in systems with an unlimited number of participants. In other respects, however, symmetric encryption has certain advantages, which stand out clearly against the background of the serious shortcomings of asymmetric encryption.

The first of these shortcomings is the low speed of encryption and decryption operations, caused by the presence of resource-intensive computations. Another, "theoretical", drawback is that the cryptographic strength of asymmetric encryption algorithms has not been proven mathematically. This is primarily related to the discrete logarithm problem: so far it has not been possible to prove that it cannot be solved in acceptable time. Unnecessary difficulties are also created by the need to protect public keys from substitution: by replacing the public key of a legitimate user, an attacker can have an important message encrypted with his own public key and subsequently decrypt it easily with his private key.

However, these shortcomings do not prevent the widespread use of asymmetric encryption algorithms. Cryptosystems exist today that support public key certification, as well as a combination of symmetric and asymmetric encryption algorithms. But this is already a topic for a separate article.

Additional sources of information

For those readers who are interested in encryption, the author recommends expanding their horizons with the help of the following books.

  1. Brassard J. "Modern cryptology".
  2. Petrov A. A. "Computer security: cryptographic methods of protection".
  3. Romanets Yu. V., Timofeev P. A., Shangin V. F. "Information protection in modern computer systems".
  4. Sokolov A. V., Shangin V. F. "Information security in distributed corporate networks and systems".

A full description of encryption algorithms can be found in the following documents:

  1. GOST 28147-89. Information processing system. Cryptographic protection. Algorithm for cryptographic transformation. - M .: Gosstandart USSR, 1989.
  2. AES algorithm: http://www.nist.gov/ae.
  3. RSA Algorithm: http://www.rsasecurity.com/rsalabs/pkcs/pkcs-1.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION FEDERAL STATE EDUCATIONAL INSTITUTION OF HIGHER PROFESSIONAL EDUCATION

"SOUTH FEDERAL UNIVERSITY"

TECHNOLOGICAL INSTITUTE OF SOUTHERN FEDERAL UNIVERSITY IN TAGANROG
Faculty of Information Security, Department of BIT

Abstract on the topic

"Cryptography and types of encryption"

Student of group I-21

Completed by: V. I. Mishchenko
Checked by: E. A. Maro

Taganrog - 2012

Introduction

1. History of cryptography

1.1 The emergence of ciphers

1.2 Evolution of cryptography

2. Cryptanalysis

2.1 Characteristics of messages

2.2 Properties of natural text

2.3 Criteria for determining naturalness

3. Symmetric encryption

4. Asymmetric encryption

Conclusion

Introduction As part of my educational practice, I chose the topic "Cryptography and types of encryption". In the course of the work, such issues as the history of the emergence of cryptography, its evolution, and the types of encryption were considered. I reviewed existing encryption algorithms, and as a result it can be noted that humanity does not stand still and constantly invents new ways of storing and protecting information.

The problem of protecting valuable information by modifying it so that it cannot be read by an outsider has troubled the best human minds since ancient times. The history of encryption is almost as old as the history of human speech. Moreover, writing itself was originally a cryptographic system, since in ancient societies only a select few possessed such knowledge. The sacred manuscripts of various ancient states are examples of this.

Since writing became widespread, cryptography has become a completely independent science. The first cryptographic systems can be found already at the beginning of our era. For example, Julius Caesar used a systematic code in his personal correspondence, which was later named after him.
Encryption systems underwent serious development during the First and Second World Wars. From the early post-war period to the present day, the advent of modern computing devices has accelerated the creation and improvement of encryption methods.
Why has the issue of using encryption methods in computer systems (CS) become especially urgent in our time?
First, the scope of computer networks such as the World Wide Web has expanded; they are used to transmit huge volumes of information of a state, military, commercial, and personal nature, which must not be accessible to third parties.
Secondly, the emergence of modern super-powerful computers and advanced network and neural computing technologies makes it possible to discredit encryption systems that only yesterday were considered completely secure.

1. History of cryptography With the very emergence of human civilization, it became necessary to transmit information to the right people so that it would not become known to outsiders. At first, people used only voice and gestures to broadcast messages.

With the advent of writing, the issue of ensuring the secrecy and authenticity of broadcast messages has become especially important. As a result, it was after the invention of writing that the art of cryptography arose, the method of "writing secretly" - a set of techniques designed to secretly transfer recorded messages from one initiate to another.

Humanity has invented a considerable number of secret-writing techniques: in particular, sympathetic inks that disappear soon after the text is written or are invisible from the start, "dissolving" valuable information inside a longer text with a completely unrelated meaning, and preparing messages with strange, incomprehensible symbols.

Encryption arose precisely as a practical discipline that studies and develops methods of encrypting information; that is, when a message is transferred, it does not hide the very fact of transmission but makes the text of the message unreadable to the uninitiated. To this end, the text of the message must be recorded in such a way that no one except the addressees themselves can become familiar with its content.

The appearance of the first computers in the middle of the 20th century changed the situation dramatically - practical encryption made a huge leap forward in its development and such a term as "cryptography" significantly departed from its original meaning - "secret writing", "secret writing". Nowadays, this subject combines methods of protecting information of a completely heterogeneous nature, based on the transformation of data using secret algorithms, including algorithms that use various secret parameters.

1.1 The emergence of ciphers Some cryptographic systems have come down to us from deep antiquity. Most likely, they were born simultaneously with writing in the 4th millennium BC. Methods of secret correspondence were independently invented in many ancient states, such as Egypt, Greece, and Japan, but the detailed workings of cryptology in them are now unknown. Cryptograms are found even in ancient times, although, owing to the ideographic writing of the ancient world in the form of stylized pictograms, they were rather primitive. The Sumerians appear to have used the art of secret writing.

Archaeologists have found a number of clay cuneiform tablets in which the first inscription was often covered with a thick layer of clay on which a second inscription was made. The appearance of such strange tablets could be explained by cryptography as well as by disposal. Since the number of characters in ideographic writing exceeded a thousand, memorizing them was already a difficult task, and there was little room left for encryption. Nevertheless, codes, which appeared at the same time as dictionaries, were well known in Babylon and the Assyrian state, and the ancient Egyptians used at least three encryption systems. With the origin of phonetic writing, writing immediately became simpler: in the ancient Semitic alphabet of the 2nd millennium BC there were only about 30 characters, denoting consonants as well as some vowel sounds and syllables. The simplification of writing stimulated the development of cryptography and encryption.

Even in the books of the Bible we can find examples of encryption, although almost no one notices them. In the book of the prophet Jeremiah (22,23) we read: "...and the king of Sheshach shall drink after them." This king and this kingdom did not exist - is it really the author's mistake? No; it is just that sacred Jewish manuscripts were sometimes encrypted with a simple substitution: instead of the first letter of the alphabet they wrote the last, instead of the second the penultimate, and so on. This ancient method of cryptography is called atbash. Reading the word SHESHACH with its help, in the original language we obtain the word BABYLON, and the whole meaning of the biblical manuscript becomes clear even to those who do not blindly believe in the truth of scripture.
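A toy version of atbash for the Latin alphabet takes only a few lines of Python; the alphabet choice is illustrative, since the biblical example of course used Hebrew letters.

import string

ALPHABET = string.ascii_uppercase
ATBASH = str.maketrans(ALPHABET, ALPHABET[::-1])

def atbash(text):
    # first letter maps to the last, second to the penultimate, and so on
    return text.upper().translate(ATBASH)

print(atbash("BABYLON"))          # -> YZYBOLM
print(atbash(atbash("BABYLON")))  # applying the cipher twice restores the original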

1.2 Evolution of Cryptography The evolution of encryption in the twentieth century was very rapid, but completely uneven. Looking at the history of its development as a specific area of ​​human life, three fundamental periods can be distinguished.

Elementary. The first period dealt only with manual (hand) ciphers. It began in remote antiquity and ended only at the very end of the 1930s. During this time, cryptography traveled a long way from the magical art of prehistoric priests to the everyday applied profession of employees of secret agencies.

The next period was marked by the creation and widespread introduction into practice of mechanical, then electromechanical, and finally electronic cryptographic devices, and by the creation of entire networks of encrypted communication.

The birth of the third period in the development of encryption is usually dated to 1976, when the American mathematicians Diffie and Hellman invented a fundamentally new way of organizing encrypted communication that does not require subscribers to be supplied with secret keys in advance - so-called public key encryption. As a result, encryption systems began to emerge based on the method proposed back in the 1940s by Shannon, who suggested constructing a cipher in such a way that breaking it would be equivalent to solving a complex mathematical problem requiring computations beyond the capabilities of modern computer systems. This period of the development of encryption is characterized by the appearance of fully automated encrypted communication systems, in which any user has a personal password for verification, stores it, for example, on a magnetic card or somewhere else, and presents it when logging into the system, while everything else happens automatically.

2. Cryptanalysis There is a huge gap between manual and computer-based encryption methods. Hand ciphers are very diverse and can be most surprising; in addition, the messages they encrypt are quite laconic and short. They are therefore broken much more efficiently by humans than by machines. Computer ciphers are more stereotyped, mathematically very complex, and designed to encrypt messages of considerable length. Of course, it is not even worth trying to break them by hand. Nevertheless, cryptanalysts play the leading role in this area as well, acting as the commanders of a cryptographic attack even though the battle itself is fought only by hardware and software. Underestimation of this fact led to the fiasco of the Enigma cipher machine's ciphers during the Second World War.

The type of encryption and the language of the message are almost always known: the alphabet and the statistical features of the cryptogram may well suggest them. Often, however, information about the language and the type of cipher is obtained from undercover sources. The situation is a bit like breaking into a safe: even if the "burglar" does not know the design of the safe in advance, which seems rather unlikely, he still quickly identifies it by its appearance and corporate logo. In this respect, the only unknown is the key that has to be unraveled. The difficulty lies in the fact that, just as not all diseases can be cured by the same medicine and each requires its own specific remedies, so specific types of ciphers can be broken only by their own methods.

2.1 Characteristics of Messages Messages, no matter how complex, can be represented as some sequence of symbols. These symbols are drawn from a predetermined set, for example the Russian alphabet or a palette of colors (red, yellow, green). Different characters may occur in messages with different frequencies, so the amount of information carried by different symbols may differ. In the sense proposed by Shannon, the amount of information is determined by the average number of yes/no questions needed to predict the next character of the message. If the characters of the text are arranged in a sequence that is independent of one another, the average amount of information per character in such a message is

H = - Σ Pi * Ld(Pi),

where the sum runs over all characters i, Pi is the frequency of occurrence of the i-th character, and Ld is the binary logarithm. Three features of this measure of information should be noted.

First, it does not depend at all on the semantics or meaning of the message and can be used even when the exact meaning is unclear. Second, it assumes that the probability of a symbol appearing does not depend on its preceding history.

Third, the symbolic system in which the message is written, that is, the language and the encryption method, is known in advance.

In what units is Shannon's measure of information expressed? The most precise answer is given by the encoding theorem, which states that any message can be encoded with the symbols 0 and 1 in such a way that the resulting amount of information is arbitrarily close from above to H. This theorem also allows us to name the unit of information: the bit.
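The formula above is easy to evaluate for any sample text; here is a small Python sketch (assuming independent characters, as the formula does).

import math
from collections import Counter

def entropy(text):
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy("AAAA"))    # 0.0 bits: a single repeated symbol carries no information
print(entropy("ABCD"))    # 2.0 bits: four equally likely symbols
print(round(entropy("ABRACADABRA"), 3))   # something in between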

2.2 Natural text properties Now let us consider one way of applying knowledge of the characteristics of natural text for encryption purposes. The task is to determine from a piece of text whether it is a message carrying a semantic load or just a sequence of random characters. A number of cryptographic methods have to be broken on a computer by simple enumeration of keys, but manually checking over a thousand text fragments a day is impossible, and the search speed would be very low. For this reason, such a task must be implemented on a computer.

Suppose we have to try approximately one billion keys on a computer at a rate of one thousand keys per second. This will take us about ten days. In doing so, we risk falling into two extremes. If we are too cautious in our assessments, some meaningless fragments of text will be identified as messages and returned to the human operator. This error is most commonly referred to as a "false alarm", or a type I error.

If the number of such errors exceeds a thousand per day, a person sitting at a computer will tire and may start checking text fragments inattentively. This means that no more than one error of this kind should be allowed per 100,000 checks. At the other extreme, if the check is approached carelessly, it is quite possible to miss a meaningful text, and at the end of the full search it will have to be repeated from the beginning. In order not to risk having to repeat the entire volume of work, errors of the second kind, also called "fragment omissions", should be allowed in only one case out of 100 or 1,000.

2.3 Criteria for determining naturalness At first glance, the simplest criterion that comes to mind is the alphabet of the message fragment. Since, in theory, only punctuation marks, digits, and uppercase and lowercase Russian letters can appear in it, no more than half of the characters of the ASCII table can occur in the text of a message fragment.

This means that when a computer encounters an inadmissible character in a fragment of text, it can definitely declare the fragment not meaningful; with a well-functioning communication channel, errors of the second kind are practically excluded.
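A first-pass filter of this kind is a few lines of Python; the allowed alphabet below is an illustrative Latin-oriented choice rather than the Russian set discussed in the text.

import string

ALLOWED = set(string.ascii_letters + string.digits + string.punctuation + " \n\t")

def looks_like_text(fragment):
    # reject a candidate decryption as soon as it contains a character outside the alphabet
    try:
        decoded = fragment.decode("ascii")
    except UnicodeDecodeError:
        return False
    return all(ch in ALLOWED for ch in decoded)

print(looks_like_text(b"Meet me at noon."))   # True: passes the first filter
print(looks_like_text(b"\x07\x9f\x00\x12"))   # False: rejected immediately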

In order to reduce the theoretical probability of "false alarms" to the value indicated in the previous section, the message fragment must consist of at least twenty-three characters. The question becomes more complicated if the letter code used is not redundant, like the ASCII representation of Russian text, but contains exactly as many characters as there are letters in the alphabet.

In this case, we have to introduce estimates of the theoretical probabilities of characters occurring in the text. To achieve the accepted probabilities of errors of the first and second kind when estimating by maximum likelihood, about 100 characters already need to be analyzed, and analyzing the probabilities of bigrams reduces this figure only slightly.

Therefore, short message fragments encrypted with a long key are, in general, practically impossible to decode unambiguously, since random text fragments may well coincide with meaningful phrases. The same problem has to be solved in the quality control of cryptography. In that case, however, the probability of a false alarm can be relaxed to no more than one in a thousand, with the same probability of missing a message fragment, which allows us to limit checking of texts to only twenty or thirty characters.

3. Symmetric encryption Symmetric cryptosystems (also symmetric encryption, symmetric ciphers) are an encryption method in which the same cryptographic key is used for encryption and decryption. Before the invention of the asymmetric encryption scheme, the only existing method was symmetric encryption. The algorithm key must be kept secret by both parties. The encryption algorithm is chosen by the parties prior to starting the exchange of messages.

Symmetric ciphers currently fall into two categories:

Block ciphers. The information is processed in blocks of a certain length (usually 64 or 128 bits); the key is applied to the block in a prescribed order, usually over several cycles of mixing and substitution called rounds. The result of repeating the rounds is an avalanche effect: a growing loss of bit correspondence between the blocks of plain and encrypted data.

Stream ciphers, in which encryption is performed over each bit or byte of the original (plain) text using a gamma (keystream). A stream cipher can easily be built from a block cipher (for example, GOST 28147-89 in gamma mode) run in a special mode.

Most symmetric ciphers use a complex combination of many substitutions and permutations. Many such ciphers are executed in several (sometimes up to 80) passes, using a "pass key" on each pass. The set of "pass keys" for all passes is called a "key schedule". As a rule, it is created from a key by performing certain operations on it, including permutations and substitutions.

A typical way of building symmetric encryption algorithms is the Feistel network. The algorithm builds an encryption scheme based on a function F(D, K), where D is a piece of data half the size of the encryption block and K is the "pass key" for the given pass. The function is not required to be invertible; its inverse may even be unknown. The advantage of the Feistel network is that decryption almost completely coincides with encryption (the only difference being the reverse order of the "pass keys" in the schedule), which greatly simplifies hardware implementation.
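Here is a minimal Feistel-network sketch in Python: the round function F below is an arbitrary stand-in (a truncated hash), chosen only to show that the same code encrypts and decrypts when the pass keys are reversed.

import hashlib

def F(half, key):
    return hashlib.sha256(half + key).digest()[:len(half)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block, keys):
    left, right = block[:len(block) // 2], block[len(block) // 2:]
    for k in keys:
        left, right = right, xor(left, F(right, k))
    return right + left            # final swap so decryption mirrors encryption

keys = [b"k1", b"k2", b"k3", b"k4"]
block = b"16-byte message!"
ciphertext = feistel(block, keys)
assert feistel(ciphertext, list(reversed(keys))) == block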

The permutation operation shuffles the bits of the message according to a certain rule. In hardware it is implemented trivially, as wire crossing. It is the permutation operations that make it possible to achieve the "avalanche effect". A permutation operation is linear: f(a) xor f(b) == f(a xor b).

Substitution operations replace the value of some part of the message (often 4, 6, or 8 bits) with another number, hard-coded into the algorithm, by referring to a constant array. The substitution operation introduces non-linearity into the algorithm.

The robustness of an algorithm, especially against differential cryptanalysis, often depends on the choice of values in the lookup tables (S-boxes). At a minimum, it is considered undesirable to have fixed points S(x) = x, or to have some bit of the input byte fail to influence some bit of the result - that is, cases where the result bit is the same for all pairs of input words that differ only in that bit.

Figure 1. Types of keys

4. Asymmetric encryption A public key cryptographic system (or asymmetric encryption, asymmetric cipher) is an encryption and/or electronic digital signature (EDS) system in which the public key is transmitted over an open (that is, unprotected, observable) channel and is used to verify an EDS and to encrypt messages. A secret key is used to generate an EDS and to decrypt messages. Public key cryptographic systems are now widely used in various network protocols, in particular in TLS and its predecessor SSL (which underlie HTTPS), and in SSH.

The idea of public key cryptography is very closely related to the idea of one-way functions, that is, functions for which it is quite easy to compute y = f(x) from a known x, while determining x from y is impossible in any reasonable time.

But a one-way function by itself is useless: it can encrypt a message, but it cannot decrypt it. Therefore, public key cryptography uses one-way functions with a loophole (trapdoor). A loophole is a secret that helps with decryption: a value such that, knowing it together with the function's output, one can recover the input. By way of analogy, if you disassemble a clock into many component parts, it is very difficult to assemble a working clock again.

The following example helps to understand the ideas and methods of public key cryptography: the storage of passwords on a computer. Each user on the network has his own password. When logging in, he specifies his name and enters the secret password. But if the password is stored on the computer's disk, someone can read it (this is especially easy for the administrator of the computer) and gain access to secret information. A one-way function is used to solve the problem: when a password is created, the computer stores not the password itself but the result of applying the function to the password and the username. For example, suppose the user Alice chose the password "GLADIOLUS". When this data is saved, the result of the function applied to GLADIOLUS is computed; let the result be the string CHAMOMILE, which is what will be stored in the system. As a result, the password file will look like this:

The login now looks like this:

When Alice enters her "secret" password, the computer checks whether the function applied to GLADIOLUS gives the correct result, CHAMOMILE, stored on the computer's disk. Change even one letter in the name or in the password, and the result of the function will be completely different. The "secret" password is not stored in the computer in any form. The password file can now be viewed by other users without loss of secrecy, since the function is practically irreversible.

The previous example uses a one-way function without a trapdoor, since the original message does not need to be recovered from the encrypted one. The next example considers a scheme with the ability to recover the original message using a "loophole", that is, hard-to-find information. To encrypt a text, one can take a large subscriber directory consisting of several thick volumes (it is very easy to find the number of any city resident with it, but almost impossible to find a subscriber from a known number). For each letter of the message to be encrypted, a name beginning with that letter is chosen. The letter is thus mapped to the subscriber's telephone number. The message being sent, for example "BOX", will be encrypted as follows:

Message | Selected name | Cryptotext
(for each letter of the message a name beginning with that letter is chosen, for example Kirsanova or Arsenyev, and that subscriber's telephone number becomes the next element of the cryptotext)

The cryptotext will be a chain of numbers written in the order in which they were selected from the directory. To make decryption more difficult, random names beginning with the required letter should be chosen. Thus, the original message can be encrypted by many different lists of numbers (cryptotexts).

Examples of such cryptotexts: Cryptotext 1, Cryptotext 2, Cryptotext 3, i.e. three different lists of numbers encrypting the same message.

To decrypt the text, you need a directory compiled in ascending order of the numbers. This directory is the loophole (the secret that helps obtain the initial text), known only to legitimate users. Without a copy of the directory, a cryptanalyst will spend a great deal of time on decryption.

Public Key Encryption Scheme Let K be the key space, and let e and d be the encryption and decryption keys, respectively. Ee is the encryption function for an arbitrary key e ∈ K, such that

Ee(m) = c.

Here c ∈ C, where C is the space of ciphertexts, and m ∈ M, where M is the space of messages.

Dd is the decryption function, with which the original message can be found from the ciphertext c:

Dd(c) = m.

{Ee : e ∈ K} is the encryption set, and {Dd : d ∈ K} is the corresponding decryption set. Each pair (Ee, Dd) has the property that, knowing Ee, it is impossible to solve the equation Ee(m) = c, that is, for a given arbitrary ciphertext it is impossible to find the message. This means that the corresponding decryption key d cannot be determined from Ee. Ee is a one-way function, and d is the loophole.

Below is a diagram of the transfer of information from person A to person B. They can be individuals, organizations, and so on, but for easier perception the participants are customarily identified with people, most often called Alice and Bob. The participant who seeks to intercept and decrypt Alice's and Bob's messages is most often called Eve.

Figure 2. Asymmetric encryption. Bob chooses a key pair (e, d) and sends the encryption key e (the public key) to Alice over an open channel, while the decryption key d (the private key) is protected and kept secret (it must not be transmitted over an open channel).

To send a message to Bob, Alice uses the encryption function defined by the public key: c = Ee(m), the resulting ciphertext.

Bob decrypts the ciphertext using the inverse transformation Dd, which is uniquely determined by the value d.

The Scientific Basis Asymmetric ciphers began with "New Directions in Cryptography" by Whitfield Diffie and Martin Hellman, published in 1976. Influenced by Ralph Merkle's work on public key distribution, they proposed a method for obtaining secret keys over an open channel. This method of exponential key exchange, which became known as Diffie-Hellman key exchange, was the first published practical method for establishing a shared secret key between authenticated users of a channel. In 2002, Hellman proposed calling the algorithm "Diffie-Hellman-Merkle", recognizing Merkle's contribution to the invention of public key cryptography. The same scheme was developed by Malcolm Williamson in the 1970s but was kept secret until 1997. Merkle's method for distributing a public key was invented in 1974 and published in 1978; it is also known as Merkle's puzzles.

In 1977, the scientists Ronald Rivest, Adi Shamir, and Leonard Adleman of the Massachusetts Institute of Technology developed an encryption algorithm based on the factorization problem. The system was named after the first letters of their surnames (RSA - Rivest, Shamir, Adleman). The same system had been invented in 1973 by Clifford Cocks, who worked at the Government Communications Headquarters (GCHQ), but that work was kept only in the centre's internal documents, so its existence was not known until 1997. RSA was the first algorithm suitable for both encryption and digital signatures.

In general, known asymmetric cryptosystems are based on one of several hard mathematical problems that make it possible to build one-way functions and trapdoor functions. For example, the Merkle-Hellman and Chor-Rivest cryptosystems rely on the so-called knapsack packing problem.

Basic principles of building public key cryptosystems

Start with a hard problem P. It should be hard in the theoretical sense: there should be no algorithm that could solve all instances of the problem P in time polynomial in the size of the problem. It is more accurate to say that there should be no known polynomial algorithm solving this problem, since it has not yet been proven for any problem that no suitable algorithm exists in principle.

Select an easy subproblem P' from P. It should be solvable in polynomial time, and preferably in linear time.

"Shuffle and shake" P' to obtain a problem P'' that is completely different from the original. The problem P'' should at least look like the original hard-to-solve problem P.

P'' is published, together with a description of how it can be used as an encryption key. How P' is obtained from it is kept secret as the secret trapdoor.

The cryptosystem is organized so that the decryption algorithms for a legitimate user and for a cryptanalyst differ significantly. While the cryptanalyst has to solve the hard problem P'', the legitimate user uses the secret trapdoor and solves the easy problem P'.

Cryptography with multiple public keys

The following example shows a scheme in which Alice encrypts a message so that only Bob can read it, and vice versa, Bob encrypts a message so that only Alice can decrypt it.

Suppose there are three keys K1, K2 and K3, distributed as follows: Alice holds K1, Bob holds K2, Carol holds K3, Dave holds K1 and K2, Ellen holds K2 and K3, and Frank holds K1 and K3.

Then Alice can encrypt a message with key K1, and Ellen can decrypt it with keys K2 and K3; Carol can encrypt with key K3, and Dave can decrypt with keys K1 and K2. If Dave encrypts a message with key K1, it can be read by Ellen; if with key K2, Frank can read it; and if with both keys K1 and K2, the message will be read by Carol. The other participants act analogously. Thus, if one subset of the keys is used for encryption, the remaining keys of the set are required for decryption. This scheme can be used with n keys.

Now you can send messages to groups of agents without knowing the composition of the group in advance.

Let's start with a set of three agents: Alice, Bob, and Carol. Alice is given keys K1 and K2, Bob is given keys K1 and K3, and Carol is given keys K2 and K3. Now, if the message being sent is encrypted with key K3, only Alice can read it, by sequentially applying keys K1 and K2. If a message needs to be sent to Bob, it is encrypted with key K2; to Carol, with key K1. If a message needs to be sent to both Alice and Carol, keys K1 and K3 are used for encryption.

The advantage of this scheme is that only one message and n keys are needed to implement it (in a scheme with n agents). If individual messages are transmitted, that is, a separate key is used for each agent (n keys in total) and for each message, then a separate key is needed for every possible subset of recipients, i.e., on the order of 2^n keys, to be able to address all the different subsets.

The disadvantage of this scheme is that the subset of agents for which the message is intended (and the list of names can be long) also has to be broadcast; otherwise each of them would have to try all combinations of keys in search of a suitable one. Agents also have to store a considerable amount of key information.

Cryptanalysis of public key algorithms

It would seem that a public key cryptosystem is an ideal system that does not require a secure channel for transmitting the encryption key, which would imply that two legitimate users could communicate over an open channel without meeting to exchange keys. Unfortunately, this is not so. The figure illustrates how Eve, acting as an active interceptor, can attack the system (decrypt a message intended for Bob) without breaking the encryption scheme itself.

Figure 3. Public key cryptosystem with an active interceptor

In this model, Eve intercepts the public key sent by Bob to Alice. She then creates her own key pair, disguises herself as Bob, and sends Alice her own public key, which Alice believes to be the public key Bob sent her. Eve intercepts encrypted messages from Alice to Bob, decrypts them with her own private key, re-encrypts them with Bob's public key, and sends the messages on to Bob. Thus none of the participants realizes that there is a third party who can either simply intercept a message or replace it with a false one. This highlights the need for public key authentication, which is usually done using certificates. Distributed key management in PGP solves this problem with the help of guarantors.

Another form of attack is computing the private key from the known public key (figure below). The cryptanalyst knows the encryption algorithm and, by analyzing it, tries to find the decryption key. This process is simplified if the cryptanalyst has intercepted several cryptotexts sent by person A to person B.

Figure 4. An asymmetric cryptosystem with a passive interceptor.

Most public key cryptosystems are based on the problem of factoring large numbers. For example, RSA uses as its public key n the product of two large prime numbers, and the difficulty of breaking the algorithm is the difficulty of factoring n. But this problem can realistically be attacked, and the factorization process becomes faster every year; published factoring records, obtained for instance with the quadratic sieve algorithm, illustrate this progress.

The factorization problem could also potentially be solved with Shor's algorithm on a sufficiently powerful quantum computer.

For many methods of asymmetric encryption, the cryptographic strength obtained as a result of cryptanalysis differs significantly from the values declared by the developers of the algorithms on the basis of theoretical estimates. Therefore, in many countries the use of data encryption algorithms is subject to legislative regulation. In particular, in Russia only data encryption software that has passed state certification by administrative bodies, in particular the FSB, is allowed for use in government and commercial organizations.

Conclusion

In the course of work on the chosen topic within the framework of educational practice, I carried out a review of the history of the development of cryptography and cryptanalysis, and an analytical review of existing types of cryptographic algorithms (symmetric and asymmetric ciphers are considered) and of methods for assessing their strength. I hope that the development of cryptography will only benefit mankind.


Basic encryption algorithms

Basic concepts and definitions

With the formation of the information society, large states gain access to technical means of total surveillance over millions of people. Therefore, cryptography is becoming one of the main tools for ensuring confidentiality, trust, authorization, electronic payments, corporate security and many other important things.

The problem of protecting information by transforming it is dealt with by cryptology, which is divided into two directions: cryptography and cryptanalysis. The goals of these directions are exactly opposite.

Cryptography is engaged in the search for and study of mathematical methods of transforming information. The area of interest of cryptanalysis is the investigation of the possibility of decrypting information without knowledge of the keys.

Modern cryptography includes 4 main sections:

1. Symmetric cryptosystems.

2. Cryptosystems with a public key.

3. Electronic signature systems.

4. Key management.

The main directions of using cryptographic methods are the transfer of confidential information through communication channels, the authentication of transmitted messages, and the storage of information on media in encrypted form.

Cryptography makes it possible to transform information in such a way that its reading (recovery) is possible only with knowledge of the key. As information subject to encryption and decryption, texts based on a certain alphabet will be considered.

Alphabet - a finite set of characters used to encode information. Examples:

· the alphabet Z33 contains the 32 letters of the Russian alphabet and a space;

· the alphabet Z256 contains the characters of the standard ASCII and KOI-8 codes;

· the binary alphabet Z2 consists of two characters (0 and 1);

· octal or hexadecimal alphabets.

Text - an ordered set of elements of the alphabet.

Encryption - the process of transforming the original (plain) text into ciphertext.

Decryption (the inverse of encryption) - the process of transforming the ciphertext back into the original text using the key.

Key - the information necessary for unhindered encryption and decryption of texts.

A cryptographic system is a family T = {T1, T2, ..., Tk} of plaintext transformations. The members of this family are indexed by the symbol k; the parameter k is the key. The key space K is the set of possible key values. Usually the key is a sequential series of characters of the alphabet.

Cryptosystems are divided into symmetric and asymmetric. In symmetric cryptosystems, the same key is used for both encryption and decryption. In asymmetric (public key) systems, two keys are used - a public key and a private key - which are mathematically related to each other. Information is encrypted with the public key, which is available to everyone, and decrypted with the private key, known only to the recipient of the message.

The terms key distribution and key management refer to the processes of generating keys and distributing them among users.

An electronic (digital) signature is a cryptographic transformation attached to the text that allows another user, upon receiving the text, to verify the authorship and authenticity of the message.

Cryptographic strength is a characteristic of a cipher that determines its resistance to decryption without knowledge of the key (i.e., its resistance to cryptanalysis). There are several indicators of cryptographic strength, including:

the number of all possible keys;

average time required for cryptanalysis.

Requirements for cryptosystems

The process of cryptographic protection of data can be carried out either in software or in hardware. The hardware implementation is significantly more expensive, but it offers higher performance, simplicity, and tamper resistance. The software implementation is more practical and allows a certain flexibility of use.

Generally accepted requirements for cryptographic systems:

· The encrypted message should be readable only if the key is present;

· The number of operations required to determine the used key from the fragment of the encrypted message and the corresponding plain text must be at least the total number of possible keys;

· The number of operations required to decrypt information by enumerating possible keys must have a strict lower bound and go beyond the capabilities of modern computers (taking into account the capabilities of network computing);

· Knowledge of the encryption algorithm should not affect the reliability of protection;

· A slight change in the key should lead to a significant change in the type of encrypted message;

· Structural elements of the encryption algorithm must be unchanged;

· Additional bits introduced into the message during the encryption process must be completely and reliably hidden in the cipher text;

· The length of the cipher text must be equal to the length of the original text;

· There should be no simple and easily established dependencies between keys that are sequentially used in the encryption process;

· Any key from the set of possible ones should provide reliable information protection;

· The algorithm should allow both software and hardware implementation, while changing the key length should not lead to a qualitative deterioration of the encryption algorithm.

Basic encryption algorithms

The method of encryption and decryption is called a cipher. The key used for decryption may not coincide with the key used for encryption, but in most algorithms the keys are the same.

Key algorithms are divided into two classes: symmetrical (with secret key) and asymmetrical (with public key). Symmetric algorithms use the same encryption key and decryption key, or the decryption key is simply calculated from the encryption key. Asymmetric algorithms use different keys and the decryption key cannot be calculated from the encryption key.

Symmetric algorithms are subdivided into stream ciphers and block ciphers. Stream ciphers encrypt information bit by bit, while block ciphers work with a certain set of data bits (the block size is usually 64 bits) and encrypt this set as a whole.

Typically, the encryption key is a file or data array and is stored on a personal key carrier (for example, a USB flash drive or smart card); it is imperative to take measures to ensure that the personal key carrier is inaccessible to anyone other than its owner.



Authenticity is ensured by the fact that, without preliminary decryption, it is practically impossible to carry out semantic modification or forgery of a cryptographically protected message: a fake message cannot be correctly encrypted without knowledge of the secret key.

Data integrity is ensured by attaching to the transmitted data a special code (an imitation insert, i.e., a message authentication code) generated using the secret key. The imitation insert is a kind of checksum, i.e., a reference characteristic of the message against which its integrity is checked. The algorithm for generating the imitation insert must ensure that it depends, according to some complex cryptographic law, on every bit of the message. The integrity check is performed by the recipient, who generates the imitation insert corresponding to the received message using the secret key and compares it with the received value; if they match, it is concluded that the information was not modified on the way from the sender to the recipient.

Symmetric encryption is ideal for encrypting information "for yourself", for example in order to prevent unauthorized access to it in the owner's absence. Possessing a high encryption speed, single-key cryptosystems can solve many important information security problems. However, the autonomous use of symmetric cryptosystems in computer networks raises the problem of distributing encryption keys between users.

Before starting the exchange of encrypted data, it is necessary to exchange secret keys with all recipients. The secret key of a symmetric cryptosystem cannot be transferred over public communication channels; it must be handed to the sender and the recipient over a secure channel (or by courier). To ensure effective protection of the messages circulating in a network, a huge number of frequently changing keys is required (one key for each pair of users). The problem of distributing secret keys among a large number of users is a very laborious and complex task: in a network of N users, N(N-1)/2 secret keys must be distributed (for example, about 4,950 keys for just 100 users).

Asymmetric ciphers allow the public key to be available to everyone (for example, published in a newspaper). This allows anyone to encrypt the message. However, only the user with the decryption key can decrypt this message. The encryption key is called public key , and the decryption key is private key or secret key .

The private and public keys are generated in pairs. The secret key must remain with its owner and be reliably protected from tampering (similar to the encryption key in symmetric algorithms). A copy of the public key must be kept by each subscriber of the cryptographic network with whom the owner of the secret key exchanges information.

Public key cryptographic systems use so-called irreversible, or one-way, functions, which have the following property: given a value x, it is relatively easy to compute f(x); however, if y = f(x), there is no easy way to compute the value x. The whole variety of public key systems is built on various classes of irreversible functions.
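A minimal sketch of this "easy forward, hard backward" property, using modular exponentiation as the candidate one-way function (the same function underlies Diffie-Hellman key exchange). The modulus, generator and search budget below are arbitrary demo values; the point is only that the forward computation is cheap while the naive inverse search is hopeless.

# Forward direction: y = g^x mod p is cheap even for very large numbers.
import secrets

p = 2**127 - 1                      # a Mersenne prime, used here purely as a demo modulus
g = 3
x = secrets.randbelow(p - 2) + 1    # the secret exponent

y = pow(g, x, p)                    # fast modular exponentiation
print("y =", y)

# Inverse direction (discrete logarithm): no efficient general algorithm is known.
# The naive search below is capped, because it would not finish in any reasonable time.
def find_exponent(y, g, p, budget=1_000_000):
    acc = 1
    for k in range(budget):
        if acc == y:
            return k
        acc = acc * g % p
    return None                     # gives up; the search space grows with p

print("recovered exponent within budget:", find_exponent(y, g, p))  # almost surely None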

The process of transferring encrypted information in an asymmetric cryptosystem is as follows.

Preparatory stage:

· Subscriber B generates a pair of keys: a secret key k_B and a public key K_B;

· The public key K_B is sent to subscriber A and the other subscribers (or made available, for example, on a shared resource).

Usage (information exchange between A and B):

· Subscriber A encrypts the message with subscriber B's public key K_B and sends the ciphertext to subscriber B;

· Subscriber B decrypts the message using his secret key k_B; no one else can decrypt this message, because they do not have subscriber B's secret key.

Information protection in an asymmetric cryptosystem is based on the secrecy of the recipient's private key k_B.

Advantages of asymmetric cryptosystems over symmetric ones:

· in asymmetric cryptosystems the complex problem of distributing keys between users is solved, since each user can generate his own key pair himself, and users' public keys can be freely published and distributed over network communications;

· the quadratic dependence of the number of keys on the number of users disappears; in an asymmetric cryptosystem the number of keys is related to the number of subscribers linearly (in a system of N users, 2N keys are used), not quadratically, as in symmetric systems;

· asymmetric cryptosystems make it possible to implement protocols of interaction between parties that do not trust each other, since when asymmetric cryptosystems are used the private key must be known only to its owner.

Disadvantages of asymmetric cryptosystems:

· at the moment there is no mathematical proof of the irreversibility of the functions used in asymmetric algorithms;

· asymmetric encryption is significantly slower than symmetric encryption, since very resource-intensive operations are used for encryption and decryption; for the same reason, implementing a hardware encryptor for an asymmetric algorithm is much more difficult than implementing a symmetric algorithm in hardware;

· the need to protect public keys from substitution.

Modern encryption-decryption algorithms are quite complex and cannot be performed manually. Real cryptographic algorithms are designed to be used by computers or specialized hardware devices. In most applications, cryptography is done in software and there are many cryptographic packages available.

Symmetric algorithms are faster than asymmetric ones. In practice, the two types are often used together: a public key algorithm is used to transmit a randomly generated secret key, which is then used to encrypt and decrypt the message itself.
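A minimal sketch of that combination, assuming a toy RSA key pair for key transport and a hash-based XOR keystream as the symmetric part. Every parameter here (the small primes, the 4-byte session key, the SHA-256 counter construction) is an illustrative simplification, not a recommended design.

# Hybrid scheme sketch: an asymmetric cipher transports a random session key,
# a symmetric (gamma/XOR) cipher encrypts the bulk data. Toy parameters, no padding.
import hashlib, secrets
from math import gcd

# --- toy RSA key pair (recipient side) ---
p, q = 1_000_003, 999_983            # small primes for readability
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
assert gcd(e, phi) == 1
d = pow(e, -1, phi)

# --- toy symmetric "gamma" cipher: XOR with a keystream derived from the key ---
def keystream(key: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- sender: wrap a random session key with RSA, encrypt the message symmetrically ---
session_key = secrets.token_bytes(4)                 # kept short so it fits below the modulus
wrapped_key = pow(int.from_bytes(session_key, "big"), e, n)
ciphertext = xor_encrypt(session_key, b"attack at dawn")

# --- recipient: unwrap the session key with the private exponent, then decrypt ---
recovered = pow(wrapped_key, d, n).to_bytes(4, "big")
plaintext = xor_encrypt(recovered, ciphertext)       # XOR is its own inverse
print(plaintext)                                     # b'attack at dawn'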

Many high-quality cryptographic algorithms are widely available. The best-known symmetric algorithms are DES and IDEA; the best-known asymmetric algorithm is RSA. In Russia, GOST 28147-89 was adopted as the encryption standard.

Table 1 shows a classification of methods of cryptographic protection (closure) of information.

Table 1

Conversion type | Conversion method | Variety of the method | Implementation
Encryption | Substitution (replacement) | Simple (mono-alphabetic) | Software
 | | Multi-alphabetic single-loop ordinary | Software
 | | Multi-alphabetic single-loop monophonic | Software
 | | Multi-alphabetic multi-loop | Software
 | Permutation | Simple | Software
 | | Complicated by a table | Software
 | | Complicated by routes | Software
 | Analytical transformation | By the rules of matrix algebra | Software
 | | By special dependencies | Software
 | Gamming | With a finite short gamma | Hardware and software
 | | With a finite long gamma | Hardware and software
 | | With an infinite gamma | Hardware and software
 | Combined | Substitution + permutation | Hardware and software
 | | Substitution + gamming | Hardware and software
 | | Permutation + gamming | Hardware and software
 | | Gamming + gamming | Hardware and software
Coding | Semantic | By special tables (dictionaries) | Software
 | Symbolic | By a code alphabet | Software
Other types | Dissection/scattering | Semantic | Hardware and software
 | | Mechanical | Software
 | Compression/expansion | |

I. Encryption is understood as the kind of cryptographic protection in which every character of the protected message is subjected to transformation.

All known encryption methods can be divided into five groups: substitution (substitution), permutation, analytical transformation, gamma and combined encryption. Each of these methods can be of several varieties.

Varieties of the substitution (replacement) method:

1) Simple (mono-alphabetic) - the characters of the text being encrypted are replaced by other characters of the same alphabet. If the volume of the ciphertext is large, the frequencies of occurrence of letters in the ciphertext approach the frequencies of occurrence of letters in the language in which the text is written, and decoding becomes very simple. This method is now used rarely, and only in cases where the text to be encrypted is short.

2) Multi-alphabetic substitution is the simplest type of transformations, which consists in replacing characters of the original text with characters of other alphabets according to a more or less complex rule. To ensure high cryptographic strength, the use of large keys is required.

In multi-alphabetic single-loop ordinary substitution, several alphabets are used to replace the characters of the original text, and the alphabets are changed sequentially and cyclically: the first character is replaced by the corresponding character of the first alphabet, the second by a character of the second alphabet, and so on, until all the selected alphabets have been used, after which their use is repeated.

The feature of multi-alphabetic single-loop monophonic substitution is that the number and composition of the alphabets are chosen so that the frequencies of occurrence of all characters in the ciphertext are the same. In this situation, cryptanalysis of the ciphertext by means of statistical processing becomes difficult. The equalization of the character frequencies is achieved by providing a larger number of replacement elements for frequently occurring characters of the source text than for rarely occurring ones.

Multi-alphabetic multi-loop substitution consists in using several sets (loops) of alphabets cyclically, with each loop in the general case having its own individual period of application. This period is counted, as a rule, in the number of characters after whose encryption the set of alphabets changes.

The permutation method is a simple method of cryptographic transformation. It is used, as a rule, in combination with other methods. It consists in rearranging the characters of the text being encrypted according to certain rules within the encrypted block of characters. All encryption and decryption procedures of the permutation method are sufficiently formalized and can be implemented algorithmically.

Simple permutation encryption is done as follows:

· A keyword with non-repeating characters is selected;

· The encrypted text is written in sequential lines under the symbols of the keyword;

· The ciphertext is written out column by column, in the order in which the letters of the key appear in the alphabet (or in the order of the numbers in the natural series, if the key is numeric).

Example:

plain text: MEET ME AT SEVEN

key: 5 8 1 3 7 4 6 2

encryption scheme:

M E E T q M E q (where q denotes a space)

A T q S E V E N

Writing out the columns in the order given by the key digits and grouping the result into pairs of characters gives the ciphertext:

Eq qN TS MV MA EE qE ET

The disadvantage of simple permutation encryption is that when the length of the encrypted text is large, the ciphertext may exhibit patterns in the key symbols. To eliminate this drawback, you can change the key after encrypting a certain number of characters. With a fairly frequent key change, the encryption strength can be significantly increased. This, however, complicates the organization of the encryption and decryption process.
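A minimal sketch of the simple permutation just described, with a numeric key: the text is written row by row under the key and read off column by column in key order. Padding of an incomplete last row and key validation are left out for brevity.

# Simple permutation (columnar transposition) with a numeric key.
def permutation_encrypt(plaintext: str, key: list[int]) -> str:
    cols = len(key)
    rows = [plaintext[i:i + cols] for i in range(0, len(plaintext), cols)]
    order = sorted(range(cols), key=lambda i: key[i])   # columns in ascending key order
    return "".join(row[i] for i in order for row in rows if i < len(row))

def permutation_decrypt(ciphertext: str, key: list[int]) -> str:
    cols = len(key)
    rows_n = len(ciphertext) // cols          # assumes the text fills the table exactly
    order = sorted(range(cols), key=lambda i: key[i])
    table = [[""] * cols for _ in range(rows_n)]
    k = 0
    for i in order:                           # refill the columns in the same order
        for r in range(rows_n):
            table[r][i] = ciphertext[k]
            k += 1
    return "".join("".join(row) for row in table)

key = [5, 8, 1, 3, 7, 4, 6, 2]
ct = permutation_encrypt("MEET ME AT SEVEN", key)
print(repr(ct))                               # 'E  NTSMVMAEE EET' (spaces kept as characters)
print(permutation_decrypt(ct, key) == "MEET ME AT SEVEN")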

Permutation complicated by a table consists in using a special table for writing the characters of the text being encrypted, into which some complicating elements are introduced. The table is a matrix whose dimensions can be chosen arbitrarily. The characters of the text are written into it as in the case of a simple permutation. The complication is that a certain number of cells of the table are not used; the number and location of the unused elements is an additional encryption key. The text being encrypted is written into the table in blocks of (m x n - S) elements (m x n are the dimensions of the table, S is the number of unused elements). The rest of the encryption procedure is similar to a simple permutation.

By varying the size of the table, the sequence of key characters, the number and location of unused elements, you can get the required strength of the cipher text.

Permutation complicated by routes possesses high encryption strength; it uses a complicated method of permutation along routes of Hamiltonian type. The vertices of a certain hypercube are used to write the characters of the text, the characters of the ciphertext are read off along Hamiltonian routes, and several different routes are used.

The encryption method based on analytical transformations provides fairly reliable protection of information. For this, methods of matrix algebra can be applied, for example multiplication of a matrix by a vector. If the matrix is used as the key and the characters of the original text are substituted for the components of the vector, then the components of the resulting vector are the characters of the ciphertext. Decryption is carried out using the same rule of multiplying a matrix by a vector, only the inverse of the matrix used for encryption is taken as the basis, and the corresponding number of ciphertext characters is taken as the factor vector. The values of the resulting vector are the numeric equivalents of the plaintext characters.
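A minimal sketch of the matrix method for a 26-letter alphabet: the key is an invertible 2x2 matrix, encryption multiplies it by 2-letter vectors of the plaintext modulo 26, and decryption uses the modular inverse of the key matrix. The particular key matrix and message are arbitrary illustrative choices.

# Analytical (matrix) transformation: a Hill-style cipher over a 26-letter alphabet.
M = 26
KEY = [[3, 3],
       [2, 5]]                      # det = 9, gcd(9, 26) = 1, so the matrix is invertible mod 26

def inverse_key(k):
    det = (k[0][0] * k[1][1] - k[0][1] * k[1][0]) % M
    det_inv = pow(det, -1, M)       # modular inverse of the determinant
    return [[( k[1][1] * det_inv) % M, (-k[0][1] * det_inv) % M],
            [(-k[1][0] * det_inv) % M, ( k[0][0] * det_inv) % M]]

def apply(k, text):
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):            # process the text in 2-letter vectors
        x, y = nums[i], nums[i + 1]
        out.append((k[0][0] * x + k[0][1] * y) % M)
        out.append((k[1][0] * x + k[1][1] * y) % M)
    return "".join(chr(n + ord('A')) for n in out)

plain = "HELP"                                   # uppercase letters, even length (no padding logic)
cipher = apply(KEY, plain)
print(cipher)                                    # 'HIAT'
print(apply(inverse_key(KEY), cipher) == plain)  # True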

Gamming consists in superimposing on the source text a pseudo-random sequence (gamma) generated on the basis of the key. The procedure of superimposing the gamma on the original text can be done in two ways. In the first method, the characters of the original text and of the gamma are replaced by their numeric equivalents, which are then added modulo K, where K is the number of characters in the alphabet, i.e.

t_c = (t_p + t_g) mod K, where t_c, t_p, t_g are the symbols of the ciphertext, the source text, and the gamma, respectively.

In the second method, the symbols of the original text and of the gamma are represented as binary code, and the corresponding bits are added modulo 2. Instead of addition modulo 2, other logical operations can be used in gamming, for example transformation according to the rule of logical equivalence or non-equivalence. Such a replacement is equivalent to introducing one more key: the choice of the rule by which the characters of the encrypted message are formed from the characters of the original text and the gamma.
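A minimal sketch of both gamma variants just described: addition modulo K over a 26-letter alphabet and bitwise addition modulo 2 (XOR) over bytes. The gamma here comes from Python's general-purpose random generator seeded with the key, which is convenient for illustration but is not a cryptographically strong gamma source.

# Gamming: superimposing a key-dependent pseudo-random sequence on the text.
import random

# Variant 1: addition modulo K over a 26-letter alphabet, t_c = (t_p + t_g) mod K.
def gamma_mod_k(text, key, decrypt=False):
    rng = random.Random(key)                 # demo gamma source, NOT cryptographically strong
    sign = -1 if decrypt else 1
    out = []
    for ch in text:
        g = rng.randrange(26)
        out.append(chr((ord(ch) - ord('A') + sign * g) % 26 + ord('A')))
    return "".join(out)

# Variant 2: bitwise addition modulo 2 (XOR) over bytes; the same call also decrypts.
def gamma_xor(data: bytes, key) -> bytes:
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

c1 = gamma_mod_k("ATTACKATDAWN", key=2024)
print(c1, gamma_mod_k(c1, key=2024, decrypt=True))

c2 = gamma_xor(b"attack at dawn", key=2024)
print(c2, gamma_xor(c2, key=2024))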

The strength of encryption by the gamma method is determined mainly by the properties of the gamma - the length of the period and the uniformity of the statistical characteristics. The latter property ensures that there are no patterns in the appearance of various symbols within a period.

With good statistical properties of the gamma, the encryption strength is determined only by the length of its period. Moreover, if the length of the gamma period exceeds the length of the encrypted text, such a cipher is theoretically absolutely secure. Any sequence of random symbols can be used as an infinite gamma, for example the sequence of digits of the number pi. When encrypting with a computer, the gamma sequence is generated using a pseudo-random number generator.

Combined encryption methods use several different methods at the same time, i.e. sequential encryption of the original text using two or more methods. This is a fairly effective means of increasing the strength of encryption.

A typical example of a combined cipher is the US national standard for cryptographic data protection, DES.

II. Coding is understood as the kind of cryptographic protection in which some elements of the protected data (not necessarily individual characters) are replaced with pre-selected codes (numeric, alphabetic, alphanumeric combinations, etc.).

This method has two varieties: semantic and symbolic coding. In semantic coding, the coded elements have a well-defined meaning (words, sentences, groups of sentences). In symbolic coding, every character of the protected message is coded; symbolic coding is essentially the same as substitution encryption.

When used correctly, codes are much more difficult to break than other classical systems, for three reasons. First, the effective key is large (for encryption, several hundred bits; for a codebook, hundreds of thousands to a million bits). Second, codes remove redundancy, which makes the cryptanalyst's job harder. Third, codes operate on relatively large blocks of plaintext (words and phrases) and therefore hide local information that could otherwise provide valuable clues to a cryptanalyst.

The disadvantages of coding include the fact that the key is used poorly: when coding an individual word or phrase, only a very small part of the codebook is used. As a result, a heavily used code becomes amenable to partial analysis and is especially sensitive to attack with known plaintext. For these reasons, codes must be changed more often to ensure reliability.

III. Other methods of cryptographic protection include dissection/scattering and compression of data. Dissection/scattering of data consists in splitting the array of protected data into elements, each of which does not by itself reveal the content of the protected information, and placing the elements so obtained in different memory areas. The reverse procedure is called data assembly. Obviously, the dissection and assembly algorithm must be kept secret.

Data compression is the replacement of frequently occurring identical data strings or sequences of the same characters with some preselected characters.

Hash functions

A hash function is a one-way function designed to obtain a digest, or "fingerprint", of a file, message, or some block of data.

Initially, hash functions were used to create a unique image of information sequences of arbitrary length for the purpose of identifying them and verifying their authenticity. The image itself must be a small block of fixed length, typically 30, 60, 64, 128, 256, or 512 bits. This greatly simplifies search, sorting, and other operations on large arrays or databases, i.e., they take much less time. To ensure the required probability of error, a number of requirements are imposed on the hash function:

· the hash function must be sensitive to all kinds of changes in the text M, such as insertions, deletions, and permutations;

· The hash function must have the property of irreversibility, that is, the task of selecting a document M ", which would have the required value of the hash function, must be computationally unsolvable;

· The probability that the values ​​of hash functions of two different documents (regardless of their lengths) coincide should be negligible.

A large number of existing mathematical functions can satisfy these requirements when used for sorting, searching, and so on. However, later, based on Simmons' work on authentication theory, it became clear that it is advisable to use hashing methods in message authentication schemes in communication channels and telecommunication systems. This opened up a number of research directions in cryptography connected with the development of new and the improvement of existing hash functions. The main idea of using hash functions is to obtain one-way functions on their basis, which are the main building block of modern cryptographic mechanisms and authentication methods.
Let's consider the basic concepts related to one-way hashing functions.

Most hash functions are built from a one-way function f() that produces an output value of length n when given two input values of length n. These inputs are the current block of the source text M_i and the hash value H_(i-1) of the previous block of text (Fig. 1):

H_i = f(M_i, H_(i-1)).

The hash value calculated when you enter the last block of text becomes the hash value of the entire message M.

Fig. 1. One-way hash function diagram

As a result, a one-way hash function always produces output of a fixed length n, regardless of the length of the input text. The hashing algorithm is iterative, so hash functions are also called iterative algorithms. The essence of the hashing algorithm is its one-wayness: the function must work in one direction only - compress, mix, and scatter, but never restore. Such schemes make it possible to track changes in the source text, which ensures data integrity, and in digital signature algorithms also the authenticity of the data. In their pure form, however, these functions alone cannot confirm authenticity.
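A minimal sketch of the iterative construction H_i = f(M_i, H_(i-1)): the message is split into fixed-size blocks and a compression function chains them from a fixed initial value. The toy compression function below is an arbitrary mixing formula chosen only to illustrate the chaining; it has none of the strength of a real hash function.

# Toy iterative hash: H_i = f(M_i, H_(i-1)); the final H is the digest of the whole message.
BLOCK = 8             # block size in bytes
MASK = (1 << 64) - 1  # work with 64-bit chaining values

def f(block: bytes, h_prev: int) -> int:
    # toy compression: mix the block into the chaining value
    m = int.from_bytes(block.ljust(BLOCK, b"\x00"), "big")
    h = h_prev
    for _ in range(4):
        h = (h * 0x100000001B3 ^ m) & MASK   # multiply-and-xor mixing (FNV-like constant)
        h = ((h << 13) | (h >> 51)) & MASK   # rotate to spread the bits
    return h

def toy_hash(message: bytes) -> int:
    h = 0x6A09E667F3BCC908                   # fixed initial value (borrowed constant)
    padded = message + len(message).to_bytes(BLOCK, "big")  # crude length padding
    for i in range(0, len(padded), BLOCK):
        h = f(padded[i:i + BLOCK], h)        # chain block by block
    return h

print(hex(toy_hash(b"The quick brown fox")))
print(hex(toy_hash(b"The quick brown fox.")))  # a one-byte change gives a very different digest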

In some sources, steganography, coding and compression of information belong to the branches of knowledge adjacent to, but not included in, cryptography.

Traditional (classical) encryption methods include permutation ciphers, simple and complex replacement ciphers, as well as some of their modifications and combinations. Combinations of permutation ciphers and replacement ciphers form the whole variety of symmetric ciphers used in practice.

Permutation ciphers. In permutation encryption, the characters of the encrypted text are rearranged according to a specific rule within the block of this text. Permutation ciphers are the simplest and probably the most ancient ciphers.

Encryption tables. As a key in encryption tables, the following are used: the size of the table, the word or phrase specifying the permutation, the features of the structure of the table.

One of the most primitive table permutation ciphers is a simple permutation, for which the size of the table is the key. Naturally, the sender and receiver of the message must agree in advance on a common key in the form of a table size. It should be noted that the combination of letters of the ciphertext into 8-letter groups is not included in the cipher key and is carried out for the convenience of writing nonsense text. When decrypting, the actions are performed in the reverse order.


The encryption method called single permutation by key is somewhat more resistant to disclosure. It differs from the previous one in that the columns of the table are rearranged according to a keyword, a phrase, or a set of numbers the length of a row of the table.

For additional secrecy, you can re-encrypt a message that has already been encrypted. This encryption method is called double permutation. In the case of a double permutation of columns and rows of the table, permutations are defined separately for columns and for rows. First, the text of the message is written into the table column by column, and then the columns are rearranged, and then the rows.

The number of double permutation options increases rapidly with an increase in the size of the table: for a 3 × 3 table - 36 options, for a 4 × 4 table - 576 options, for a 5 × 5 table - 14,400 options. However, the double permutation is not very strong and is relatively easy to "crack" for any size of the encryption table.

Simple replacement ciphers. In encryption by substitution (replacement), the characters of the text being encrypted are replaced by characters of the same or another alphabet according to a predetermined replacement rule. In a simple substitution cipher, each character of the original text is replaced by a character of the same alphabet by the same rule throughout the whole text. Simple substitution ciphers are often called mono-alphabetic substitution ciphers.

Caesar's encryption system. The Caesar cipher is a special case of a simple substitution cipher (mono-alphabetic substitution). It is named after the Roman general and statesman Gaius Julius Caesar, who used this cipher in his correspondence.

When the original text was encrypted, each letter was replaced by another letter of the same alphabet according to the following rule: the substituting letter was determined by shifting the original letter k positions down the alphabet, and when the end of the alphabet was reached, a cyclic transition to its beginning was made. Caesar used the Latin alphabet of m = 26 letters and a substitution cipher with shift k = 3. Such a substitution cipher can be specified by a substitution table containing the corresponding pairs of plaintext and ciphertext letters. The set of substitutions for k = 3 is shown in Table 6.1.

Table 6.1 - One-alphabetical substitutions (k = 3, m = 26)

Caesar's encryption system essentially forms a family of mono-alphabetic substitutions for selectable key values k, 0 <= k < m. The advantage of the Caesar encryption system is the simplicity of encryption and decryption.
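A minimal sketch of the Caesar system for the 26-letter Latin alphabet: encryption shifts each letter by k positions with wrap-around, and shifting back by k decrypts.

# Caesar cipher over the 26-letter Latin alphabet.
def caesar(text: str, k: int) -> str:
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord('A') + k) % 26 + ord('A')))
        else:
            result.append(ch)                 # leave spaces and punctuation unchanged
    return "".join(result)

cipher = caesar("VENI VIDI VICI", 3)
print(cipher)                                 # 'YHQL YLGL YLFL'
print(caesar(cipher, -3))                     # shifting back by k decrypts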

The disadvantages of the Caesar system include the following:

Substitutions performed according to the Caesar system do not mask the frequencies of the various letters in the original plaintext;

The alphabetical order is preserved in the sequence of substitute letters; when the value of k changes, only the initial positions of such a sequence change;

The number of possible keys k is small;

Caesar's cipher is easily revealed based on the analysis of the frequencies of the appearance of letters in the ciphertext.

A cryptanalytic attack against a mono-alphabetic substitution system begins with counting the frequencies of occurrence of characters: the number of occurrences of each letter in the ciphertext is determined. The obtained distribution of letter frequencies in the ciphertext is then compared with the distribution of letter frequencies in the alphabet of the original messages. The letter with the highest frequency of occurrence in the ciphertext is replaced by the letter with the highest frequency of occurrence in the language, and so on. The probability of successfully breaking the system increases with the length of the ciphertext. At the same time, the ideas embodied in Caesar's encryption system turned out to be very fruitful, as evidenced by their numerous modifications.
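A minimal sketch of the frequency-counting step of such an attack on a shift cipher: count the letters of the ciphertext and guess the shift by assuming that the most frequent ciphertext letter stands for the most frequent letter of the language (here 'E' for English). The heuristic needs a reasonably long, E-rich ciphertext; the sample string below was prepared for this example.

# Guessing a Caesar shift by frequency analysis (assumes English plaintext).
from collections import Counter

def guess_shift(ciphertext: str) -> int:
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    # assume the most frequent ciphertext letter stands for 'E'
    return (ord(most_common) - ord('E')) % 26

sample = "PHHW PH DW WKH JUHHQ WUHH QHDU WKH WKUHH EHHFKHV"  # an E-heavy sentence shifted by 3
print(guess_shift(sample))   # 3 for this sample, since 'E' dominates the plaintext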

The affine Caesar substitution system. In this transformation, the letter corresponding to the number t is replaced by the letter corresponding to the numeric value (a*t + b) mod m. Such a transformation is a one-to-one mapping of the alphabet onto itself if and only if gcd(a, m), the greatest common divisor of the numbers a and m, is equal to one, i.e., if a and m are coprime.

The advantage of the affine system is convenient key management: encryption and decryption keys are presented in a compact form as a pair of numbers ( a, b). The disadvantages of the affine system are similar to those of Caesar's encryption system. In practice, the affine system was used several centuries ago.

Complex replacement ciphers. Complex replacement ciphers are called multi-alphabetic because a separate simple replacement cipher is used to encrypt each character of the original message. Multi-alphabetic substitution changes the alphabets used sequentially and cyclically. In an r-alphabet substitution, character x0 of the original message is replaced by a character y0 from alphabet B0, character x1 by a character y1 from alphabet B1, and so on; character x(r-1) is replaced by a character y(r-1) from alphabet B(r-1), character xr is replaced by a character yr again from alphabet B0, etc.

The effect of using multi-alphabetic substitution is that it masks the natural statistics of the source language, since a specific character of the source alphabet A can be converted into several different characters of the cipher alphabets Bj. The degree of protection provided is theoretically proportional to the length of the period r of the sequence of alphabets Bj used.

The Vigenère encryption system. The Vigenère system is similar to the Caesar encryption system, except that the substitution key changes from letter to letter. This poly-alphabetic substitution cipher can be described by an encryption table called the Vigenère table (square). The Vigenère table is used for both encryption and decryption.

It has two entries:

the top row of characters, used to look up the next letter of the original plaintext;

the leftmost column, which contains the key.

The sequence of keys is usually obtained from the numerical values ​​of the letters of the keyword. When encrypting the original message, it is written out in a string, and a keyword (or phrase) is written under it.

If the key is shorter than the message, then it is repeated cyclically. In the process of encryption, the next letter of the source text is found in the top row of the table and the next key value in the left column. The next letter of the ciphertext is located at the intersection of the column defined by the letter being encrypted and the line defined by the numerical value of the key.
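A minimal sketch of the Vigenère system in which each table lookup is computed as a Caesar shift by the numeric value of the current key letter, with the keyword repeated cyclically; the keyword and message are the usual textbook example.

# Vigenere cipher: a running Caesar shift driven by the repeated keyword.
def vigenere(text: str, keyword: str, decrypt=False) -> str:
    out, j = [], 0
    sign = -1 if decrypt else 1
    for ch in text.upper():
        if ch.isalpha():
            k = ord(keyword[j % len(keyword)].upper()) - ord('A')
            out.append(chr((ord(ch) - ord('A') + sign * k) % 26 + ord('A')))
            j += 1                          # advance the key only on letters
        else:
            out.append(ch)
    return "".join(out)

ct = vigenere("ATTACK AT DAWN", "LEMON")
print(ct)                                   # 'LXFOPV EF RNHR'
print(vigenere(ct, "LEMON", decrypt=True))  # 'ATTACK AT DAWN'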

One-time ciphers. Almost all ciphers used in practice are characterized as conditionally reliable, since they can, in principle, be broken with unlimited computational capabilities. Absolutely strong ciphers cannot be broken, even with unlimited computational power. There is only one such cipher in practice - a one-time encryption system. A characteristic feature of a one-time encryption system is the one-time use of a key sequence.

This cipher is absolutely reliable if the key sequence K_i is truly random and unpredictable. If a cryptanalyst tries all possible keys for a given ciphertext and restores all possible variants of the original text, they will all turn out to be equally probable, and there is no way to select the original text that was actually sent. It has been theoretically proven that one-time systems are undecipherable, since their ciphertext does not contain sufficient information to recover the plaintext.

The use of the disposable system is limited to purely practical aspects. The essential point is the requirement for a one-time use of a random key sequence. The key sequence with a length not less than the length of the message must be transmitted to the recipient of the message in advance or separately via some secret channel. Such a requirement is practically difficult to implement for modern information processing systems, where it is required to encrypt many millions of characters, but in justified cases, the construction of systems with one-time ciphers is the most expedient.
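A minimal sketch of a one-time system: the key is a random sequence as long as the message, combined with it by XOR and used exactly once. secrets.token_bytes stands in here for a true random source, and delivering the key over a secure channel is assumed rather than shown.

# One-time pad: a random key of the same length as the message, used exactly once.
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(message))          # key length == message length
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key                           # the key must go to the recipient over a secure channel

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"meet at midnight")
print(ct)
print(otp_decrypt(ct, key))                          # b'meet at midnight'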

Historically, encryptors with an external gamma and with an internal gamma are distinguished. In encryptors with an external gamma, a single-use random sequence whose length equals the length of the encrypted message is used as the key. In encryptors with an internal gamma, the key is a reusable random sequence much shorter than the encrypted text, from which the cipher gamma is generated. Encryptors with an internal gamma, which possess practical (conditional) strength, currently prevail in the construction of encrypted communication systems. Their main advantage is simplicity of key management, i.e., the preparation, distribution, delivery and destruction of keys. This advantage makes it possible to build encrypted communication systems of practically any size on the basis of encryptors with an internal gamma, without limiting their geography or the number of subscribers.

The modern development of information technologies makes it possible to concentrate a significant amount of information on small physical media, which also determines the practical applicability of this approach.

The problem of building an encrypted communication system based on encryptors with an external gamma admits several approaches. For example, based on an established limit on the volume of the key document, the optimal number of subscribers of the system and the permissible load can be determined; conversely, based on the required number of subscribers and the load on them, the required volume of the key document can be calculated.

The gamming encryption method. Gamming is understood as the process of superimposing the cipher gamma on the open data according to a certain law. The cipher gamma is a pseudo-random sequence generated according to a given algorithm, used for encrypting the open data and decrypting the received data.

The encryption process consists in generating a cipher gamma and applying the resulting gamma to the original plain text in a reversible way, for example, using the addition operation modulo 2.

Before encryption, the open data is split into blocks of equal length, usually 64 bits each. The cipher gamma is generated as a sequence of the same length. The decryption process reduces to regenerating the cipher gamma and superimposing this gamma on the received data.

The ciphertext obtained by this method is rather difficult to break, since the key is effectively variable: the cipher gamma changes randomly for each encrypted block. If the gamma period exceeds the length of the entire encrypted text and the attacker does not know any part of the original text, then such a cipher can be broken only by direct enumeration of all key variants. In this case, the cryptographic strength of the cipher is determined by the key length.
