
Suppose that all distortions in the channel are strictly deterministic and that only the additive Gaussian noise n(t) is random; we assume at first that it is white, with spectral density N_0. This means that when the signal u_i(t) (symbol b_i, i = 0, 1, ..., m-1) is transmitted, the incoming signal can be described by model (3.38):

z(t) = s_i(t) + n(t), 0 ≤ t ≤ T, (6.17)

where all s_i(t) = k·u_i(t - τ) (i = 0, 1, ..., m-1) are known. Only the realization of the noise and the index i of the actually transmitted signal are unknown; the latter must be determined by the decision circuit.

We will also assume that all s_i(t) are finite signals of duration T. This is the case if the transmitted signals u_i(t) are finite and of the same duration (the system is synchronous), and the channel introduces neither multipath propagation nor linear distortions that stretch the signal (or these are corrected).

In what follows, we will everywhere assume that reliable clock synchronization is ensured in the system, i.e., that the boundaries of the clock interval on which the signal s(t) arrives are known exactly. Synchronization issues are very important in the implementation of optimal demodulators and of synchronous communication systems in general, but they are beyond the scope of this course. The moment at which the sending of s(t) begins is taken as zero.

Under these conditions, let us derive the operation algorithm of the optimal demodulator (i.e., one based on the maximum-likelihood rule) that analyzes the signal on the clock interval (0, T). To do this, it is necessary to find the likelihood ratios for all m possible signals relative to the null hypothesis (s(t) = 0; z(t) = n(t)).

The task is complicated by the fact that, the signal being of finite duration, the width of its spectrum is infinite, and therefore the signal space is the infinite-dimensional space L_2(T). For such signals (or infinite-dimensional vectors), as noted earlier, no probability density exists. However, n-dimensional probability densities do exist for any n cross-sections of the signal (see § 2.1).

First, we replace the white noise with quasi-white noise having the same one-sided power spectral density N_0, but only within a certain frequency band F = n/2T, where n >> 1. Consider first the null hypothesis, i.e., assume that z(t) is noise alone. Take n equidistant cross-sections on the clock interval, spaced Δt = 1/2F = T/n apart. For quasi-white Gaussian noise the samples Z_1, ..., Z_n in these cross-sections are independent, in accordance with (2.49). Therefore the n-dimensional probability density of the samples taken is

w_n(z) = (2πσ²)^(-n/2) exp[-(1/(2σ²)) Σ_{k=1}^{n} z²(t_k)], (6.18)

where σ² = N_0·F is the variance (power) of the quasi-white noise.

Under the hypothesis that the symbol b_i was transmitted, according to (6.17) n(t) = z(t) - s_i(t). Consequently, the conditional n-dimensional probability density of the cross-sections of z(t) is given by the same formula as (6.18), with z(t_k) replaced by the difference z(t_k) - s_i(t_k), which under this hypothesis is the noise:

w_n(z | b_i) = (2πσ²)^(-n/2) exp[-(1/(2σ²)) Σ_{k=1}^{n} (z(t_k) - s_i(t_k))²]. (6.19)

The likelihood ratio for the signal s_i (relative to the null hypothesis), computed for the n cross-sections, is

Λ_i[n] = w_n(z | b_i) / w_n(z) = exp{(1/(2σ²)) Σ_{k=1}^{n} [z²(t_k) - (z(t_k) - s_i(t_k))²]}. (6.20)

Replacing the variance σ² by its expression σ² = N_0·F = N_0/(2Δt), we obtain

Λ_i[n] = exp{(Δt/N_0) Σ_{k=1}^{n} [z²(t_k) - (z(t_k) - s_i(t_k))²]}. (6.21)

According to the maximum-likelihood rule, in the case of quasi-white noise the decision circuit must choose the value of i that maximizes Λ_i[n]. Instead of the maximum of Λ_i one can find the maximum of its logarithm:

ln Λ_i[n] = -(Δt/N_0) Σ_{k=1}^{n} (z(t_k) - s_i(t_k))² + (Δt/N_0) Σ_{k=1}^{n} z²(t_k). (6.22)

The second term in (6.22) does not depend on i and can be ignored when comparing hypotheses. Then the decision rule in favor of the symbol b_i, according to (6.10), can be expressed by the system of inequalities

Σ_{k=1}^{n} (z(t_k) - s_i(t_k))² < Σ_{k=1}^{n} (z(t_k) - s_j(t_k))² for all j ≠ i. (6.23)

Let us now return to the original problem of white noise. To do this we let the band F expand; the number of cross-sections n then tends to infinity and Δt tends to zero. The sums in (6.22) turn into integrals, and the decision rule becomes

∫_0^T [z(t) - s_i(t)]² dt < ∫_0^T [z(t) - s_j(t)]² dt for all j ≠ i. (6.24)

Expression (6.24) defines the operations (the operation algorithm) that the optimal receiver must perform on the input waveform z(t).
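To make (6.24) concrete, here is a minimal numerical sketch in Python, with sampled waveforms standing in for the continuous integrals; all names and parameters are illustrative, not part of the original text:

```python
import numpy as np

def optimal_demodulator(z, signals):
    """Minimum-distance rule (6.24): decide on the index i whose reference
    signal s_i minimizes the integral of [z(t) - s_i(t)]^2 over (0, T),
    approximated here by a sum over samples."""
    distances = [np.sum((z - s) ** 2) for s in signals]
    return int(np.argmin(distances))

# toy example: two antipodal rectangular pulses in additive Gaussian noise
rng = np.random.default_rng(0)
n = 100                                  # samples per clock interval T
s0, s1 = -np.ones(n), np.ones(n)         # reference signals s_0(t), s_1(t)
z = s1 + rng.normal(scale=2.0, size=n)   # received z(t) = s_1(t) + n(t)
print(optimal_demodulator(z, [s0, s1]))  # -> 1 with high probability
```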

Fig. 6.2 shows, for m = 2, the block diagram of a receiving device operating in accordance with algorithm (6.24).

Here "-" are subtractors; Γ 0, Γ 1 - generators of reference signals s 0 (t), s 1 (t); "KB" - squarers; ∫ - integrators; RU is a deciding device that determines, at times that are multiples of T (when the keys are closed), the number of the branch with the minimum signal.

For m > 2, the number of signal-processing branches feeding the RU in the circuit of Fig. 6.2, and in the other diagrams below, grows accordingly.

In Hilbert space the quantity

||z - s|| = [∫_0^T (z(t) - s(t))² dt]^(1/2)

defines the norm of the difference of the vectors z and s, i.e., the distance between them*. Therefore algorithm (6.24) can be written as

||z - s_i|| < ||z - s_j|| for all j ≠ i

and given a simple geometric interpretation: the optimal demodulator must register that signal s_i(t) (i.e., that symbol b_i) which is "closer" to the received waveform z(t). As an example, Fig. 6.3 shows the optimal partitioning of the two-dimensional space of received signals z(t) when the signals s_1(t) and s_0(t) are transmitted. The regions for deciding in favor of the symbols 0 and 1 lie on either side of the line 0-0, which is perpendicular to the segment connecting the signal points and bisects it.

The presence in Fig. 6.2 of squarers, which must perform a square-law transformation of the instantaneous values of the input signal over its entire dynamic range, often complicates implementation. Therefore, starting from (6.24), let us obtain an equivalent reception algorithm that does not require squaring devices.

Expanding the brackets under the integral sign and canceling on both sides of inequalities (6.24) the term ∫_0^T z²(t) dt, we arrive at the reception algorithm

∫_0^T z(t)s_i(t) dt - E_i/2 > ∫_0^T z(t)s_j(t) dt - E_j/2 for all j ≠ i, (6.25)

where E_j is the energy of the expected signal s_j(t):

E_j = ∫_0^T s_j²(t) dt.
For a binary system, algorithm (6.25) reduces to checking the single inequality

∫_0^T z(t)s_1(t) dt - E_1/2 > ∫_0^T z(t)s_0(t) dt - E_0/2. (6.27)
A device that directly computes the dot product

(z, s_i) = ∫_0^T z(t)s_i(t) dt

is called an active filter, or correlator; accordingly, a receiver implementing algorithm (6.25) is called a correlation receiver.
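A sketch of the correlation form (6.25) under the same sampling assumptions as above (no squarers; one correlator and one energy constant per branch; names are illustrative):

```python
import numpy as np

def correlation_receiver(z, signals, dt):
    """Correlation rule (6.25): choose the i maximizing the dot product
    (z, s_i) minus half the signal energy E_i. For equal-energy signals
    the E_i/2 terms cancel and only the correlations remain, as in (6.29)."""
    scores = []
    for s in signals:
        corr = np.sum(z * s) * dt      # (z, s_i) ~ integral of z(t)s_i(t)dt
        energy = np.sum(s ** 2) * dt   # E_i, energy of the expected signal
        scores.append(corr - 0.5 * energy)
    return int(np.argmax(scores))
```

It produces the same decisions as the minimum-distance sketch given earlier, since only the common term ∫_0^T z²(t) dt has been dropped.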

* For n-dimensional Euclidean space this norm equals ||z - s|| = [Σ_{k=1}^{n} (z_k - s_k)²]^(1/2).
Fig. 6.4 shows the block diagram of a receiving device operating in accordance with (6.27). Here the blocks × are multipliers; Γ_0, Γ_1 are generators of the reference signals s_0(t), s_1(t); ∫ are integrators; "-" are subtractors; RU is the decision device, which at instants that are multiples of T (when the key closes) determines the number i = 0, 1 of the branch with the maximum signal.

If the signals u_i(t) are chosen so that all their realizations (and hence all realizations of s_i(t)) have the same energy (E_i = const), the reception algorithm (6.25), and accordingly its implementation, is simplified (no subtractors are needed) and takes the form

∫_0^T z(t)s_i(t) dt > ∫_0^T z(t)s_j(t) dt for all j ≠ i. (6.29)
From (6.29) it follows that the decision rule does not change if the signal z(t) arriving at the demodulator input is multiplied by any number. Therefore a system in which all signal realizations have equal energy is distinguished by the fact that its optimal reception algorithm does not require knowledge of the "scale" of the incoming signal, in other words, of the channel transmission coefficient k. This important feature has led to the widespread use of equal-energy signal systems, commonly referred to as systems with an active pause. It is especially important for fading channels, in which the gain fluctuates (see § 6.7).

Note that for a binary system inequality (6.27) can be put in the simpler form

∫_0^T z(t)s_Δ(t) dt > λ, (6.30)

where s_Δ(t) = s_1(t) - s_0(t) is the difference signal and λ = 0.5(E_1 - E_0) is the threshold level. For a system with an active pause λ = 0, which greatly simplifies the implementation of the optimal circuit.

When inequality (6.30) is satisfied, the symbol 1 is registered; otherwise, 0. To implement (6.30), only one branch of the circuit of Fig. 6.4 is required.

Fig. 6.5, a shows a circuit implementing algorithm (6.30) for a binary transmission system with unipolar pulses (a passive pause): s_1(t) = a, s_0(t) = 0. For these signals s_Δ(t) = s_1(t) = a, E_1 = a²T, E_0 = 0, λ = a²T/2, and (6.30) takes the form

∫_0^T z(t) dt > aT/2.
The considered system of binary signals is used in the simplest wire communication devices. In radio channels, as well as in modern cable channels, high-frequency signals are used. The simplest binary systems with harmonic signals are the amplitude (AM), phase (PM), and frequency (FM) shift-keying systems.

In binary AM, s_1(t) = a·cos(ω_0 t + φ), s_0(t) = 0. All the constants (a, ω_0, φ) are assumed known throughout this section. Since here s_Δ(t) = s_1(t), E_1 = a²T/2 and E_0 = 0, rule (6.30) is written as

∫_0^T z(t) cos(ω_0 t + φ) dt > aT/4.
It is implemented by the circuit of Fig. 6.5, b, which differs from Fig. 6.5, a by a block that multiplies the incoming signal by the reference signal cos(ω_0 t + φ). The threshold level λ̇ in this case equals aT/(4RC).

For a binary PM system, s_1(t) = a·cos(ω_0 t + φ) and s_0(t) = a·cos(ω_0 t + φ + π) = -s_1(t). This is a system with an active pause, and therefore λ = 0 in (6.30). It is easy to verify that the decision rule in this case reduces to

∫_0^T z(t) cos(ω_0 t + φ) dt > 0
and is implemented by the same circuit of Fig. 6.5, b with λ̇ = 0. The RU then acts as a polarity discriminator.
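For the binary PM case, one branch of the coherent circuit can be sketched as follows (the reference must be phase-coherent with the arriving signal; the waveform parameters are illustrative):

```python
import numpy as np

def pm_decision(z, t, w0, phi):
    """Decision rule for binary PM: correlate z(t) with the coherent
    reference cos(w0*t + phi) over the clock interval; with threshold
    zero the decision device is just a polarity discriminator."""
    dt = t[1] - t[0]
    y = np.sum(z * np.cos(w0 * t + phi)) * dt  # ~ correlator output
    return 1 if y > 0 else 0

# toy check: s_1(t) = cos(w0 t) transmitted in moderate noise
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
rng = np.random.default_rng(1)
z = np.cos(2 * np.pi * 5 * t) + rng.normal(scale=0.5, size=t.size)
print(pm_decision(z, t, 2 * np.pi * 5, 0.0))   # -> 1
```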

Fig. 6.6. Optimal demodulator with a whitening filter for "colored" Gaussian noise

Let us briefly consider the case when the Gaussian noise in the channel is not white or quasi-white but "colored", i.e., has a non-uniform power spectral density G(f) within the signal band. Let us pass the sum of signal and noise arriving at the demodulator input through a filter with a transfer function k(i2πf) such that the product G(f)|k(i2πf)|² is a constant N_0. Of all possible filters satisfying this condition, which differ only in their phase-frequency response, one can choose the minimum-phase one, which is invertible. Obviously, the noise at the filter output is quasi-white: G_out(f) = N_0. Such a filter is therefore called whitening.

After passing through the whitening filter, the signal s_i(t) turns into some other signal, which we denote s′_i(t); its form can be determined from s_i(t) and k(i2πf). If the output of the whitening filter is now fed to a demodulator that is optimal for receiving the signals s′_i(t) (i = 0, 1, ..., m-1), we obtain the circuit of Fig. 6.6, which is evidently optimal for the signals s_i(t) in colored noise.
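A frequency-domain sketch of the whitening idea (the grid and the noise density are hypothetical; only the magnitude condition G(f)|k|² = N_0 is enforced, the phase being left to a minimum-phase choice):

```python
import numpy as np

def whitening_filter_gain(G, N0):
    """Magnitude of the whitening filter: G(f) * |k(i2pif)|^2 = N0,
    hence |k(i2pif)| = sqrt(N0 / G(f))."""
    return np.sqrt(N0 / G)

f = np.linspace(1.0, 100.0, 512)     # frequency grid, away from G -> inf
G = 1.0 / f                          # hypothetical colored-noise density
k = whitening_filter_gain(G, N0=1.0)
print(np.allclose(G * k ** 2, 1.0))  # True: output noise is quasi-white
```

The reference signals fed to the subsequent demodulator must then be the filtered versions s′_i(t), computed from s_i(t) and k(i2πf).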

It should be noted that in the circuits of Fig. 6.2, 6.4 and 6.5 the reference signals must have the same initial phases as the expected incoming signals; in other words, they must be coherent with the incoming signals. This requirement usually complicates the implementation of the demodulator and requires adding, besides the blocks shown in the figures, devices that adjust the phases of the reference signals.

All reception methods whose implementation requires exact a priori knowledge of the initial phases of the incoming signals are called coherent. In cases where information about the initial phases of the expected signals is extracted from the received signal itself (for example, if the phase fluctuates, but so slowly that it can be predicted from the preceding signal elements), the reception is called quasi-coherent. If information about the initial phases of the incoming signals is absent or for some reason is not used, the reception is called incoherent (see § 6.6).


Optimization of the algorithm of the developed program
Developing the algorithm for your application is the most difficult stage in the entire program life cycle. The success of its implementation as program code largely depends on how deeply you have thought through all aspects of the task. In general, changes to the structure of the program itself have a far greater effect than fine-tuning the code. There are no ideal solutions, and developing an application algorithm is always accompanied by errors and flaws. What matters is finding the bottlenecks in the algorithm that affect the application's performance the most.

Moreover, as practice shows, it is almost always possible to improve an already developed program algorithm. Of course, it is best to work the algorithm out carefully at the start of the design, to avoid the many unpleasant consequences of patching fragments of program code under time pressure later. Take the time to develop the application's algorithm: it will spare you headaches when debugging and testing the program, and it will save you time.

Bear in mind that an algorithm that is efficient in terms of program performance never meets the requirements of the problem statement 100%, and vice versa. Algorithms with a clean, readable structure are, as a rule, not efficient in terms of the program code that implements them. One reason is the developer's desire to simplify the overall structure of the program by using high-level nested constructs for computation wherever possible. Simplifying the algorithm in that way inevitably reduces performance.

At the beginning of the development of an algorithm, it is rather difficult to estimate what the program code of the application will be. To correctly design a program algorithm, you need to follow a few simple rules:
1. Carefully study the task for which the program will be developed.
2. Determine the basic requirements for the program and present them in a formalized form.
3. Determine the form in which the input and output data are presented, their structure, and any possible restrictions.
4. On the basis of these data, determine the programmatic version (or model) of the task's implementation.
5. Choose a method for implementing the task.
6. Develop an algorithm for implementing the program code. The algorithm for solving the problem should not be confused with the algorithm for implementing the program code; in general, they never coincide. This is the most critical stage in software development!
7. Develop the source code of the program in accordance with the implementation algorithm of the program code.
8. Debug and test the program code of the developed application.

These rules should not be taken literally. In each specific case the programmer chooses his own program-development methodology. Some stages of application development may be further detailed, and some may be absent altogether. For small tasks it is enough to develop an algorithm, adjust it slightly for the program-code implementation, and then debug it.

When creating large applications, it may be necessary to develop and test separate fragments of the program code, which may require additional detailing of the program's algorithm.
Numerous literary sources can help the programmer algorithmize tasks correctly. The principles of constructing efficient algorithms are well developed, and there is a great deal of good literature on the topic, for example D. Knuth's "The Art of Computer Programming".

Optimizing for computer hardware
Typically, the software developer strives to make the application's performance depend as little as possible on the computer hardware, allowing for the worst case, in which the user of the program does not have the latest model of computer. At the same time, a "survey" of the hardware often reveals reserves for improving the application's performance.
The first thing to do is analyze the performance of the peripherals of the computer on which the program is to run. In any case, knowing what is faster and what is slower helps in developing a program. Analyzing system throughput allows you to identify bottlenecks and make the right decisions. Different computer devices have different bandwidth. The fastest are the processor and RAM; comparatively slow are the hard disk and CD drive; the slowest are printers, plotters, and scanners.

In interviews, people are often asked which sort is the fastest. It is a trick question; below we explain why and look for the best option.

In response, you should ask: "For what case is the time-optimal sort being chosen?" Only when the conditions are stated can you safely go through the available options.

There are:

  • O(n²) sorting algorithms, such as insertion, bubble, and selection sort, which are used in special cases;
  • quicksort (general purpose): O(n log n) exchanges on average, but O(n²) worst-case time if the array is already sorted or all elements are equal;
  • O(n log n) algorithms, such as merge sort and heap sort, which are also good general-purpose sorting algorithms;
  • O(n), i.e., linear, sorting algorithms for lists of integers (such as counting sort), which may be appropriate depending on the nature of the integers in your lists.

If all you know is a general ordering relation between elements, optimal algorithms will have complexity O(n log n). Linear algorithms require additional information about the structure of the elements.
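For example, when the elements are known to be small non-negative integers, that extra structure allows a linear-time sort; a minimal counting-sort sketch (the range bound k is an assumption you must know in advance):

```python
def counting_sort(a, k):
    """Sort integers in range(k) in O(n + k) time: the extra structural
    information (bounded integer keys) is what lets us beat O(n log n)."""
    counts = [0] * k
    for x in a:                  # histogram of key values
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)  # emit each key as often as it occurred
    return out

print(counting_sort([3, 1, 4, 1, 5, 2, 6], k=7))  # [1, 1, 2, 3, 4, 5, 6]
```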

Which algorithm is optimal depends closely on the kind of lists or arrays you intend to sort, and even on the machine model. The more information you have, the more precise your choice will be. Under very weak assumptions about these factors, the optimal worst-case complexity may even be O(n!).

This answer addresses only complexity. The actual running time of an algorithm depends on a huge number of factors.

So what's the fastest sorting?

Visualization

A nice visualization of sorts is demonstrated in this video:

It seems to answer the question of which sort is the fastest, but keep in mind that many factors affect speed, and this is just one demonstrated scenario.

FEDERAL EDUCATION AGENCY

State educational institution of higher professional education "Voronezh State Technical University"

Radio engineering faculty

Department of Radio Engineering

Specialty 210302 "Radio engineering"

Optimizing search algorithms

Completed by student gr. RT-041 D.S. Chetkin

Checked by associate professor V.P. Litvinenko

Introduction

1. Development of an optimal dichotomous search algorithm with an equiprobable probability distribution and the number of events M = 16

2. Development of an optimal search algorithm for the exponential law of probability distribution at M = 16

3. Development of an optimal search algorithm for an exponential distribution law with the number of measurements from N = 15 to N = log2 M

4. Development of an optimal search algorithm for the 9th variant of the distribution with the number of measurements from N = 1 to 15

Conclusion

References

Introduction

Secrecy characterizes the cost (time, money) required to identify a re-event with a given reliability (the probability of a correct decision, the confidence probability).

When forming an assessment of the secrecy of a random event, a two-alternative step-by-step search procedure was adopted as justified; its essence is as follows.

The set X, with its corresponding probability distribution, is divided into two subsets (the superscript denotes the number of the partition). A binary meter performs a binary measurement, identifying which subset contains the re-event (its trace). The subset in which the re-event is detected is then again divided into two subsets, and the trace of the re-event is sought in one of them. The procedure ends when the selected subset contains a single event. The search can be sequential or dichotomous. In the first algorithm, the states are enumerated sequentially from the first to the last until the re-event is encountered.

The second search algorithm divides the entire set of states in half, checks for the presence of the re-event in each of the parts, then divides the selected half of the set X into two equal parts, checks them for the re-event, and so on. The search ends when the selected subset contains a single event.
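To make the difference between the two procedures concrete, here is a small Python sketch that estimates the average number of binary measurements for each strategy. It is an illustration under assumed cost conventions (finding the event in position i costs i+1 sequential checks, with the last state needing no check of its own), not the lab's "Poisk" program:

```python
import math

def avg_sequential(p):
    """Average number of binary measurements for a sequential scan,
    under the cost convention described in the lead-in."""
    last = len(p) - 1
    return sum(pi * min(i + 1, last) for i, pi in enumerate(p))

def avg_dichotomous(p):
    """Average measurements for halving an aligned set of M = 2^k events:
    every outcome costs exactly log2(M) measurements."""
    return math.log2(len(p))

M = 16
uniform = [1 / M] * M
c = (1 - math.exp(-1)) / (1 - math.exp(-M))   # normalizing factor
expo = [c * math.exp(-i) for i in range(M)]   # p_i ~ e^(-i), lambda = 1

print(avg_dichotomous(uniform), avg_sequential(uniform))  # 4.0  8.4375
print(avg_dichotomous(expo), avg_sequential(expo))        # 4.0  ~1.58
```

Under these assumptions, halving wins for the flat law, while the sequential scan wins decisively for the steep exponential law, which is consistent with the results reported below.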

There are several ways to minimize binary search procedures; examples are the Zimmermann-Huffman and Shannon-Fano methods. The algorithm can be optimized with respect to various parameters, with or without taking the cost of measurement into account. In this laboratory work, we investigated the optimization of the dichotomous search algorithm for the smallest value of the average secrecy.

1. Development of an optimal dichotomous search algorithm with an equiprobable probability distribution and the number of events M = 16

Turn on the dichotomous search mode. Set the number of events for a uniform probability distribution and set the number of measurements. Develop an optimal search algorithm, set it up on the typesetting field, carry out the simulation, and determine the potential secrecy.

In this case, the optimal search algorithm is the one developed according to the Shannon-Fano principle. This method assumes that the initial set of elements with a given distribution is divided into two subsets, numbered 0 and 1, so that the probabilities of falling into them are as close as possible (ideally equal). Each of the resulting subsets is then separately divided into two subsets under the same condition, numbered 00, 01, 10, 11. The splitting ends when every subset contains exactly one element.
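A minimal sketch of one such split (assuming the events are first ordered by decreasing probability; the function name is illustrative). Applied recursively to each part, it yields the search tree:

```python
def shannon_fano_split(probs):
    """One Shannon-Fano step: sort events by probability (descending) and
    cut the list where the two parts' total probabilities are closest."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    total = sum(probs)
    best_cut, best_diff, acc = 1, float("inf"), 0.0
    for cut in range(1, len(order)):
        acc += probs[order[cut - 1]]
        diff = abs(2 * acc - total)      # |left mass - right mass|
        if diff < best_diff:
            best_cut, best_diff = cut, diff
    return order[:best_cut], order[best_cut:]

left, right = shannon_fano_split([1 / 16] * 16)
print(len(left), len(right))             # -> 8 8 for the equiprobable law
```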

As a result, an optimal search algorithm was developed for an equiprobable probability distribution law.

Let's calculate the potential secrecy for an equiprobable probability distribution law:

(1)

As a result, for this case:

As a result, a simple expression was obtained for the potential secrecy of the uniform distribution law; with a dichotomous search algorithm, it does not depend on the order in which the measurements are enumerated, but only on the shape of the search tree.
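A plausible form of (1), under the assumption that potential secrecy is measured by the mean number of binary measurements over the search tree (a hedged reconstruction, not necessarily the textbook's exact notation):

```latex
S_{\text{pot}} = \sum_{i=1}^{M} p_i N_i ,
\qquad\text{and for } p_i = \tfrac{1}{M},\ N_i = \log_2 M :\qquad
S_{\text{pot}} = \log_2 M = \log_2 16 = 4 .
```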

2. Development of an optimal search algorithm for the exponential law of probability distribution at M = 16

Choose an exponential probability distribution of events of the form p_i = c·e^(-λi), where c is a normalizing factor, with the same M as in point 1. Determine the optimal search algorithm, set it up on the typesetting field, carry out the simulation, and determine the potential secrecy.

Initially, we keep the search tree the same as in the previous point. A "PrintScreen" of the "Poisk" program for this case, for the exponential distribution law, follows.

Examining the course of the uncertainty-removal curve, we conclude that it is not optimal. Applying well-known search optimization methods, we find that in this case the optimal search algorithm is not a dichotomous one at all, for any combination of finding the re-event, but a sequential one. For this case it is optimal, since the first measurement checks the most probable event, then the next most probable, and so on until no uncertainty remains in the decision.

Let us prove that a sequential search algorithm should be used. For this, the Zimmermann-Huffman method is applied. This optimization method consists of two stages: "procurement operations" and "readout". More details are given in the book.

Since the exponent λ of the probability distribution, here equal to 1, satisfies the required inequality, a sequential search algorithm is optimal for this case.

As a result, this point shows that the sequential search algorithm is optimal. Comparing the results of the two points, one concludes that each law of probability distribution has its own optimal search algorithm: sequential, dichotomous, or combined.

3. Development of an optimal search algorithm for an exponential distribution law with the number of measurements from N = 15 to N = log2 M

For the exponential probability distribution from point 2, successively decreasing the maximum number of measurements from N = 15 to N = log2 M, develop optimal search algorithms and, from the simulation results, determine the corresponding values of the average number of measurements.

For N = 15, the sequential search algorithm from the previous point is optimal, and for it the average number of binary measurements is determined in the same way as the potential secrecy. The value of Rcp is presented in Table 1.

Table 1 - Dependence of the average number of measurements on the number of measurements for optimal search algorithms

Let us calculate the potential secrecy for each case according to formula (1):

With the number of measurements equal to 3, it is impossible to develop a search algorithm, because the search feasibility condition 2^N ≥ M is not satisfied (2³ = 8 < 16).

As a result, a graph of the dependence of the average number of measurements on the number of measurements was constructed; it is shown in Figure 8.

Figure 8 - Dependence of the average number of measurements on the number of measurements for the exponential law of probability distribution

4. Development of an optimal search algorithm for the 9th distribution option with the number of measurements from N = 1 to 15

For your variant of the probability distribution, with the given number of events, develop an optimal search algorithm, build the search tree, and explain what determines its form.

On the typesetting field, set up the optimal complete search algorithm. Successively excluding the last measurements (down to N = 1), consider the dependence of the average number of measurements, the probability of an incomplete solution, and the residual secrecy on the duration of the search. The results are shown in Table 2.

Table 2 - Dependence of the average number of measurements, residual secrecy, and the probability of uncertainty on the number of measurements

n:      1  2  3  4  5  6  7  8  9  10  11  12  13  14  15
R:      4  3.775  4.325  4.725  5.1625  5.375  5.5  5.65  5.7  5.7625  5.8  5.8
P_neop: 0.55  0.7625  0.875  0  0  0  0  0  0  0  0  0  0  0  0
S_ost:  0.801  0.785  0.791  0.802  0.814  0.826  0.837  0.848  0.858  0.868  0.877  0.885  0.893  0.901

In this table, S_ost was calculated at a confidence probability of 0.9. "PrintScreen" images of the "Poisk" program for different numbers of measurements are shown in Figures 8-11.

When the number of measurements is less than 4, a probability of an incomplete solution appears, because it is impossible to check all the events. Since not everything can be checked, the best option is to check the most probable events. A "PrintScreen" of the "Poisk" program with the number of measurements less than 3 is shown in Figure 12.

Let us plot the potential secrecy versus the number of measurements; the graph is shown in Figure 13.

Figure 13 - Dependence of the average number of measurements on the number of measurements for the 9th law of probability distribution

Figure 14 - Dependence of the probability of an incomplete solution on the number of measurements for the 9th law of probability distribution


The confidence probability is varied within 0.7 ÷ 0.9. As a result, a graph of the dependence of the residual secrecy on the number of measurements was obtained; it is shown in Figure 15.

Figure 15 - Dependence of the residual secrecy N_ost(P_dov) at confidence probability values 0.7 ÷ 0.9 (the curve shown is for P_dov = 0.9)

From the graph presented above, we can conclude that P_dov should be chosen close to unity; this reduces the residual secrecy, but it is not always possible.

Figure 16 - Dependence of residual secrecy for the numbers of measurements 4, 8, 16

It follows from this graph that with a larger number of measurements the residual secrecy is higher, although, logically, a larger number of measurements should reduce the probability of solution uncertainty.

Conclusion

In this work, the optimization of the dichotomous search algorithm was investigated using the "Poisk" program. A comparison with the sequential algorithm was carried out. The shape of the uncertainty-removal curve was investigated for the uniform, the exponential, and the variant-specified distributions of events. Skills in using the "Poisk" program were acquired.

In the course of the laboratory work, optimal search algorithms were developed for both sequential and dichotomous search.

The uncertainty-removal curve was calculated, and it was found that in some cases it is more correct to use a sequential search algorithm and in others a dichotomous one; which one applies depends only on the original probability distribution.

The correctness of the "Poisk" program was confirmed by calculations carried out in the Mathcad 2001 software package.

Bibliography

1. Fundamentals of the Theory of Secrecy: a textbook for students of specialty 200700 "Radio Engineering", full-time education / Voronezh State Technical University; compiled by Z.M. Kanevsky, V.P. Litvinenko, G.V. Makarov, D.A. Maximov; edited by Z.M. Kanevsky. Voronezh, 2006. 202 p.

2. Methodical instructions for the laboratory work "Research of Search Algorithms" in the discipline "Fundamentals of the Theory of Secrecy" for students of specialty 200700 "Radio Engineering", full-time education / Voronezh State Technical University; compiled by Z.M. Kanevsky, V.P. Litvinenko. Voronezh, 2007. 54 p.

3. STP VGTU 005-2007. Course design. Organization, procedure, and execution of the calculation-and-explanatory note and the graphic part.
