Mathematical models of discrete communication channels

To give a mathematical description of a channel, it is necessary and sufficient to indicate the set of signals that can be fed to its input and, for any admissible input signal, to specify the random process (signal) at the channel output. Specifying the process is understood in the sense defined in § 2.1 and reduces to specifying a probability distribution in one form or another.

An accurate mathematical description of any real channel is usually quite difficult. Instead, simplified mathematical models are used, which make it possible to identify the most important regularities of a real channel, provided that the model takes into account the channel's most significant features and discards minor details that have little effect on communication.

Let us consider the simplest and most widely used mathematical models of channels, starting with continuous channels, since they largely predetermine the nature of discrete channels.

An ideal noiseless channel is a linear circuit with a constant transfer function, usually concentrated in a limited frequency band. Any input signal with a spectrum within a certain frequency band and with limited average power (or peak power P_peak) is admissible. These restrictions are typical for all continuous channels and will not be discussed further. Note that if the signal power is not limited but is assumed finite, the set of admissible signals forms a vector space, either finite-dimensional (with certain restrictions on the duration and spectrum width) or infinite-dimensional (with weaker restrictions). In an ideal channel, the output signal for a given input signal is deterministic. This model is sometimes used to describe cable channels. Strictly speaking, however, it is unsuitable for real channels, which inevitably contain additive interference, however weak.

A channel with additive Gaussian noise, in which the output signal is

z(t) = k s(t − τ) + n(t),

where s(t) is the input signal; k and τ are constants; n(t) is additive Gaussian noise with zero mathematical expectation and a given correlation function. Most often the noise is considered white, or quasi-white (with uniform spectral density within the signal spectrum band).

Usually the delay τ is not taken into account, which corresponds to a shift of the time reference at the channel output.

A slight complication of this model is obtained if the transmission coefficient and delay are considered known functions of time:

z(t) = k(t) s(t − τ(t)) + n(t).

This model satisfactorily describes many wired channels, line-of-sight radio channels, and also radio channels with slow general fading, in which the values of k(t) and τ(t) change so slowly that they can be considered known.

A channel with an undefined signal phase differs from the previous one in that the delay in it is a random variable. For narrowband signals, taking into account (2.69) and (3.2), expression (3.29) with constant k and a random phase can be represented in the form

z(t) = k[s(t) cos θ + ŝ(t) sin θ] + n(t),   (3.30)

where ŝ(t) is the Hilbert transform of s(t) and θ is a random initial phase. The probability distribution of θ is assumed to be specified; most often it is taken uniform over the interval from 0 to 2π. This model satisfactorily describes the same channels as the previous one when the signal phase in them fluctuates. Such fluctuation is caused by small changes in the channel length and in the properties of the medium through which the signal passes, as well as by phase instability of the reference oscillators.

A single-path Gaussian channel with general fading (fluctuations of the signal amplitude and phase) is also described by formula (3.30), but the factor K, like the phase θ, is considered a random process. In other words, the quadrature components K cos θ and K sin θ are random processes. When the quadrature components change in time, the received oscillation fluctuates in both amplitude and phase.

As noted on p. 94, the one-dimensional distribution of the transmission coefficient K can be Rayleigh (3.25) or generalized Rayleigh (3.26). Such channels are called channels with Rayleigh or generalized Rayleigh fading, respectively. In the more general case, K has a four-parameter distribution, and the model is called generalized Gaussian. The single-path fading channel model describes quite well many radio communication channels in various wavebands, as well as some other channels.

A linear channel with a random transfer function and Gaussian noise is a further generalization. In this channel the output oscillation is expressed in terms of the input signal and the random impulse response g(t, τ) of the channel:

z(t) = ∫ g(t, τ) s(t − τ) dτ + n(t).   (3.33)

This model is quite universal for both wired and radio communication and describes channels with time and frequency scattering. The time scattering of the channel can often be considered discrete (the multipath channel model), and instead of (3.33) one can use a representation as a sum over the paths, where N is the number of paths (rays) in the channel and the quadrature components of the channel transfer function for each path are practically independent of ω within the spectrum of the narrowband signal.

A channel with time and frequency scattering is completely specified if, in addition to the noise correlation functions, the statistics of the random impulse response of the channel (or of the transfer function, or of the quadrature components for all paths) are specified.

Channels with complex additive noise (fluctuation, lumped, impulse) are described by any of the previous models with additional components of additive noise. Their complete description requires specifying the probabilistic characteristics of all components of the additive noise, as well as the channel parameters. These models reflect real communication channels most fully but are rarely used in analysis because of their complexity.

Moving on to discrete channel models, it is useful to recall that a discrete channel always contains a continuous channel together with a modem. The modem can be regarded as a device that converts a continuous channel into a discrete one. Therefore, in principle, a mathematical model of a discrete channel can be derived from the models of the continuous channel and the modem. This approach is often fruitful, but it leads to rather complex models.

Let us consider simple models of a discrete channel, in the construction of which the properties of a continuous channel and a modem were not taken into account. However, it should be remembered that when designing a communication system, it is possible to vary within a fairly wide range the model of a discrete channel for a given model of a continuous channel by changing the modem.

A discrete channel model requires specifying the set of possible signals at its input and the distribution of conditional probabilities of the output signal for a given input. Here the input and output signals are sequences of code symbols. Therefore, to determine the possible input signals it is sufficient to indicate the number m of different symbols (the code base) and the duration T of the transmission of each symbol. We will assume that T is the same for all symbols, as is the case in most modern channels. The quantity 1/T determines the number of symbols transmitted per unit time. As stated in § 1.5, it is called the technical speed and is measured in baud. Each symbol arriving at the channel input causes the appearance of one symbol at the output, so the technical speed at the input and output of the channel is the same.

In the general case, for any n, one must specify the probability that, when a given sequence of n code symbols is fed to the channel input, a particular realization of a random n-symbol sequence appears at the output. All n-sequences (vectors), the number of which equals m^n, form an n-dimensional finite vector space if "addition" is understood as bitwise summation modulo m and multiplication by a scalar (integer) is defined similarly. For the particular case m = 2, such a space was considered in § 2.6.

Let us introduce another useful definition. We will call the bitwise difference (modulo m) between the received and transmitted vectors the error vector. The passage of a discrete signal through the channel can then be considered as the addition of the input vector with an error vector. The error vector plays approximately the same role in a discrete channel as noise does in a continuous one. Thus, for any model of a discrete channel we can write, using addition in the vector space (bitwise, modulo m):

B = A ⊕ E,

where A and B are the random sequences of symbols at the channel input and output, and E is the random error vector, which in general depends on A. Different models differ in the probability distribution of the vector E. The meaning of the error vector is especially simple in the case of binary channels, when its components take the values 0 and 1. Any one in the error vector means that the symbol at the corresponding place in the transmitted sequence is received erroneously, and any zero means error-free reception of the symbol. The number of nonzero symbols in the error vector is called its weight. Put simply, the modem, which makes the transition from a continuous channel to a discrete one, converts the interference and distortions of the continuous channel into a stream of errors.
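For a binary channel this relation is easy to sketch numerically; the sequences below are purely illustrative, not taken from the text:

```python
# Illustrative binary sequences: the received vector b is the bitwise
# modulo-2 sum of the transmitted vector a and the error vector e,
# so e is recovered as the positionwise XOR of a and b.
a = [1, 0, 1, 1, 0, 0, 1, 0]   # transmitted sequence A
b = [1, 1, 1, 0, 0, 0, 1, 0]   # received sequence B
e = [ai ^ bi for ai, bi in zip(a, b)]   # error vector E = A xor B
weight = sum(e)                          # number of erroneous symbols
print(e, weight)   # ones mark the erroneously received positions
```

Here two ones appear in the error vector, so the weight is 2: two of the eight symbols were received in error.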

Let's list the most important and fairly simple models of discrete channels.

A symmetric memoryless channel is a discrete channel in which each transmitted code symbol can be received erroneously with a fixed probability p and correctly with probability 1 − p, and in the event of an error any other symbol can be received instead of the transmitted one with equal probability. Thus, the probability that symbol b is received when symbol a was transmitted equals

p(b | a) = 1 − p for b = a, and p/(m − 1) for b ≠ a.

The term "memoryless" means that the probability of erroneous reception of a symbol does not depend on the history, that is, on which symbols were transmitted before it and how they were received. In what follows, for brevity, instead of "the probability of erroneous reception of a symbol" we will say "the probability of error".

Obviously, the probability of any n-dimensional error vector in such a channel is

p(E) = [p/(m − 1)]^l (1 − p)^(n − l),

where l is the number of nonzero symbols in the error vector (its weight). The probability that l errors occurred, located arbitrarily throughout a sequence of length n, is determined by the Bernoulli formula

P(l) = C(n, l) p^l (1 − p)^(n − l),

where C(n, l) is the binomial coefficient, equal to the number of different arrangements of l errors in a block of length n.

This model is also called the binomial channel. It satisfactorily describes the channel that arises with a suitable choice of modem when there is no fading in the continuous channel and the additive noise is white (or at least quasi-white). The transition probabilities in a binary symmetric channel are shown schematically as a graph in Fig. 3.3.
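The Bernoulli formula is easy to evaluate numerically. In the sketch below the block length and error probability are chosen purely for illustration:

```python
from math import comb

def prob_l_errors(n: int, l: int, p: float) -> float:
    """Bernoulli formula: probability of exactly l errors, arbitrarily
    located, in a block of n symbols of a binary symmetric channel."""
    return comb(n, l) * p**l * (1 - p)**(n - l)

# Example: block of 7 symbols, symbol error probability 0.01
probs = [prob_l_errors(7, l, 0.01) for l in range(8)]
assert abs(sum(probs) - 1.0) < 1e-12   # the probabilities sum to one
print(round(probs[0], 4))  # 0.9321 — probability of an error-free block
```

For small p the block is overwhelmingly likely to be error-free or contain a single error, which is why the binomial channel is a natural baseline for coding calculations.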

Fig. 3.3. Transition probabilities in a binary symmetric channel

Fig. 3.4. Transition probabilities in a binary symmetric erasure channel

Fig. 3.5. Transition probabilities in a binary asymmetric channel

A symmetric memoryless channel with erasure differs from the previous one in that the alphabet at the channel output contains an additional symbol, denoted by the sign "?". This symbol appears when the decision circuit (demodulator) cannot reliably identify the transmitted symbol. The probability of such a refusal to decide (erasure of a symbol) is constant in this model and does not depend on the transmitted symbol. By introducing erasure it is possible to reduce the probability of error significantly; sometimes it is even taken equal to zero. Fig. 3.4 schematically shows the transition probabilities in this model.
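As a rough sketch of the erasure channel of Fig. 3.4 (the parameter names p for the error probability and p_s for the erasure probability are assumed here, not taken from the text), the transition probabilities can be tabulated and checked to form a distribution for each transmitted symbol:

```python
# Binary symmetric erasure channel: each row gives the probabilities of
# receiving 0, 1, or '?' for a transmitted 0 (row 0) or 1 (row 1).
def transition_matrix(p: float, p_s: float):
    """p: error probability, p_s: erasure probability (assumed names);
    correct reception then has probability 1 - p - p_s."""
    ok = 1.0 - p - p_s
    return [[ok, p, p_s],
            [p, ok, p_s]]

m = transition_matrix(p=0.001, p_s=0.01)
for row in m:
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a probability distribution
```

Note that the erasure probability is the same in both rows, reflecting the symmetry of the model.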

An asymmetric memoryless channel is characterized, like the previous models, by errors occurring independently of one another, but the error probabilities depend on which symbol is transmitted. Thus, in a binary asymmetric channel the probability of receiving the symbol "1" when "0" is transmitted is not equal to the probability of receiving "0" when "1" is transmitted (Fig. 3.5). In this model the probability of an error vector depends on which sequence of symbols is transmitted.

A Markov channel is the simplest model of a discrete channel with memory. In it the errors form a simple Markov chain: the probability of error depends on whether the previous symbol was received correctly or erroneously, but not on which symbol is transmitted.

Such a channel arises, for example, when differential phase modulation is used over a continuous channel with Gaussian noise (with a definite or undefined phase) (see § 4.5 below).

A channel with additive discrete noise is a generalization of the symmetric channel models. In this model the probability of the error vector does not depend on the transmitted sequence. The probability of each error vector is assumed given and, generally speaking, is not determined by its weight. In many channels, of two vectors with the same weight, the one in which the ones lie close to each other is more probable; that is, there is a tendency for errors to group.

A special case of such a channel is a channel with a variable parameter (VPC). In this model the error probability for each symbol is a function of some parameter, a random sequence, discrete or continuous, with known probability distributions and, in particular, a known correlation function. The parameter can be scalar or vector; one can say that it determines the state of the channel. This model has many variants. One of them is the Gilbert model, in which the parameter takes only two values; in one state the error probability is zero, and in the other it equals 0.5. The probabilities of transitions from one state to the other are given. In such a channel all errors occur in the second state and are therefore very closely grouped. There are also more complex VPC models, for example the Popov–Turin model; they are studied in special courses. The memory of a VPC is determined by the correlation interval of the parameter.

A channel with non-additive noise and memory (ISI channel). The error probability in it depends on the transmitted symbols, as in the asymmetric memoryless channel model, but not only on the symbol for which the error probability is determined: it also depends on the symbols transmitted before it.


UDC 621.397

Discrete communication channel models

Mikhail Vladimirovich Markov, undergraduate student, mmarkov [email protected] mail . ru ,

FGOUVPO "Russian State University of Tourism and Service",

Moscow
The basic models of discrete communication channels used for information transmission in wireless systems of access to information resources are described. The main advantages and disadvantages of various communication channels are considered and their general characteristics are given. The mathematical apparatus required to describe the pulsating nature of traffic in real transmission channels is presented. Mathematical calculations are given that are used to determine the probability density functions. Models of channels with memory, characterized by error bursting under conditions of frequency-selective fading and multipath propagation of signals, are considered.
Keywords: models of communication channels, discrete channels without memory, channels with erasure, asymmetric channels without memory, channels with memory.
Formulation of the problem

To describe information transmission channels, it is customary to use mathematical models that take into account the peculiarities of radio wave propagation in the environment. Among such features one can note, for example, frequency-selective fading, which leads to intersymbol interference (ISI). These phenomena significantly affect the quality of the received information, since in some cases they cause single errors to group into bursts. Many models of communication channels with memory have been developed to describe this bursting. The article describes the main models, whose various characteristics are described using polygeometric distributions of the lengths of error-free gaps and error bursts.

Communication channels are called discrete in time if the input and output signals are available for observation and further processing only at strictly fixed moments of time. To define a model of a discrete communication channel it is sufficient to describe the random processes occurring in it and to know the error probabilities. To do this, one must specify the input alphabet A and the output alphabet B of symbols, together with a set of transition probabilities p(b | a), which depends on the following quantities:
- a, a random sequence of symbols from the input alphabet, where a_i is the symbol at the channel input at the i-th moment of time;
- b, the sequence of received symbols taken from the output alphabet, where b_i is the symbol at the channel output at the i-th moment.

Mathematically, p(b | a) is the conditional probability of receiving the sequence b given that the sequence a is transmitted. The number of transition probabilities grows rapidly with the length of the input and output sequences: for a binary code and sequences of length n it equals 2^n · 2^n = 2^(2n). Below, mathematical models of discrete channels containing errors are described; with their help one can quite simply determine the transition probabilities for a given sequence of length n.


Discrete channel without memory

This type of channel is characterized by the fact that the probability of a symbol appearing at its output is determined only by the corresponding symbol at its input. This holds for all pairs of symbols transmitted through the channel. The most striking example of a memoryless channel is the binary symmetric channel. Its operation can be described by the graph shown in Fig. 1.

An arbitrary symbol from the sequence a is reproduced correctly on the receiving side with a constant probability q, or erroneously with probability p = 1 − q.

The transition diagram for the binary symmetric channel (BSC) is shown in Fig. 1.

Fig. 1. Discrete channel without memory
For the BSC one can easily determine the probability of receiving any sequence of symbols at the output, provided a certain input sequence of fixed length is given. Suppose, for instance, that such a sequence has length 3.

For convenience of analysis we represent the BSC as a channel to which an error generator is connected. The generator produces a random sequence of errors e, each symbol e_i of which is added modulo 2 to the corresponding symbol of the binary channel. Addition is performed only between symbols occupying the same position. Thus, if the error symbol e_i equals one, the transmitted symbol is inverted; that is, the received sequence contains an error at that position.

The transition probabilities describing a stationary symmetric channel show that the channel can be fully described by the statistics of the error sequence e, where e_i ∈ {0, 1}. Such a sequence of length n is customarily called an error vector. The components of this vector take the value one only at the positions corresponding to incorrectly received symbols. The number of ones in the vector determines its weight.


Symmetric memoryless channel with erasure

This kind of channel is in many ways similar to the memoryless channel, except that the output alphabet contains an additional, (m + 1)-th symbol "?". This symbol is used when the detector cannot reliably recognize the transmitted symbol a_i. The probability of such an event (the erasure probability) is always fixed and does not depend on the transmitted information. The graph of transition probabilities for this model is shown in Fig. 2.

Fig. 2. Symmetric channel without memory with erasure
Asymmetric channel without memory

This communication channel is characterized by the fact that errors occur independently of one another, but the error probabilities themselves are determined by the symbol being transmitted at the current time. Thus, for a binary channel, the probability of error when "0" is transmitted differs from that when "1" is transmitted. The transition probabilities describing this model are shown in Fig. 3.


Fig. 3. Asymmetric channel without memory
Discrete channel with memory

This channel can be described by the relationship between the characters of the input and output sequences. Each character received depends on both the corresponding transmitted and the previous input and output bits. Most of the actually functioning communication systems contain just such channels. The most significant reason for the presence of memory in the channel is intersymbol interference, which manifests itself due to the restrictions imposed on the bandwidth of the communication channel. Each output character is dependent on several consecutive input characters. The type of this dependence is determined by the impulse response of the communication channel.

The second, no less important, cause of the "memory" effect is pauses in the transmission of data to the channel. The duration of such pauses can significantly exceed the duration of one data bit. During an interruption in transmission, the probability of incorrect reception of information increases sharply, as a result, the appearance of groups of errors, called packets, is possible.

For this reason, many researchers recommend using the concept of a "channel state". Each symbol of the received sequence then depends statistically both on the input symbols and on the channel state at the current time. The term "channel state" usually denotes some sequence of input and output symbols up to the given moment in time. The channel state is also strongly influenced by intersymbol interference. Memory in communication channels is divided into two types: memory with respect to the input and with respect to the output. If there is a relationship between an output symbol and the preceding input symbols, the channel has input memory; it can be described by transition probabilities conditioned on the current and preceding input symbols. From the point of view of mathematical analysis the channel memory is infinite; in practice, the number of symbols influencing the probability of correct or incorrect reception of information is finite.

The channel memory is defined as the number of symbols N starting from which the equality of conditional probabilities (4) holds for all longer histories.

A sequence of the N input symbols preceding the current one can be thought of as the channel state at the (i − 1)-th moment. In this case the channel can be characterized by a set of transition probabilities conditioned on the current input symbol and this state.

If the received data bit depends on the previous output symbols, the communication channel is usually called a channel with output memory. Its transition probabilities can be represented as an expression in which the preceding output symbols determine the state of the channel at the (i − 1)-th moment.

Using transition probabilities to describe channels with memory is very inefficient because of the cumbersome mathematical calculations. For example, for a channel with ISI whose memory is limited to five symbols, the number of possible channel states is 2^5 = 32.

If the memory, whether on the input only or on the output only, of a binary channel is limited to N symbols, then the number of states equals 2^N; that is, it grows exponentially with the number of memory symbols N. In practice one most often deals with channels whose memory extends over tens, hundreds, or even thousands of symbols.


Discrete-continuous channel

Consider a discrete-continuous channel at whose input there are independent symbols a_i and at whose output there is a continuous signal z(t). To describe it we use the transition (conditional) densities w(z | a_i) of the decoded realization z(t) given that the symbol a_i is transmitted, as well as the prior probabilities P(a_i) of the transmitted symbols. Transition densities are also called likelihood functions. Alternatively, a discrete-continuous channel can be described by the posterior probabilities P(a_i | z) of the transmitted symbol given the received oscillation z(t). Using Bayes' formula, we get

P(a_i | z) = P(a_i) w(z | a_i) / w(z),   (6)

where the density of the decoded waveform is defined as

w(z) = Σ_k P(a_k) w(z | a_k).   (7)

The continuous-discrete channel is described in a similar way.
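Formulas (6) and (7) can be sketched numerically; the prior and likelihood values below are purely illustrative:

```python
# Bayes' formula for a discrete-continuous channel: posterior probabilities
# of the transmitted symbols a_i given likelihoods w(z|a_i) and priors P(a_i).
def posteriors(priors, likelihoods):
    w_z = sum(p * w for p, w in zip(priors, likelihoods))       # formula (7)
    return [p * w / w_z for p, w in zip(priors, likelihoods)]   # formula (6)

# Binary example with equal priors and illustrative likelihood values
post = posteriors(priors=[0.5, 0.5], likelihoods=[0.8, 0.2])
assert abs(sum(post) - 1.0) < 1e-12   # posteriors form a distribution
print(post)  # [0.8, 0.2] — with equal priors, posteriors follow the likelihoods
```

Choosing the symbol with the largest posterior is the usual decision rule built on this description of the channel.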


Discrete channel with memory, characterized by correlated fading

Fading occurs when the amplitude or phase of a signal transmitted through a channel changes in a random fashion. It is clear that fading leads to a significant deterioration in the quality of the received information. Multipath propagation of signals is considered to be one of the most significant causes of fading.

Here E and T denote the signal energy and duration, and the l_k are integers, l_k > 1 (9).

On the receiving side there is a random process y(t), in whose expression the following parameters are used:

µ, the channel transmission coefficient, selected at random;

θ, a random phase shift;

n(t), white Gaussian noise (AWGN) with power spectral density N_0/2.

If some sequence a is transmitted, the output signal of the coherent demodulator is fed to the input of the decoder. The resulting sequence can be represented as a vector whose components are calculated using expressions (11) and (12):


where µ_c and µ_s are the quadrature components that together give the channel gain; the remaining terms are random variables associated with the influence of the white Gaussian noise and with the signal-to-noise ratio.

These expressions are valid only when one of the two symbols is transmitted; when the other symbol is transmitted, the right-hand sides of equalities (11) and (12) are interchanged. The random variables obey a Gaussian distribution with the parameters given in (15).

Analyzing these expressions, we conclude that the channel transmission coefficient µ obeys a Rayleigh distribution.

A fading channel is characterized by the presence of memory between the elements of a character sequence. This memory depends on the nature of the connections between the members of the series.

Assume that the quadrature components satisfy relation (18). In this case µ_c and µ_s form independent Markov sequences, and the probability density function w(µ) of the sequence µ for N > 1 is given by expressions (20) and (21).

In the above expression, I_0(x) is the modified Bessel function of the first kind of order zero. One parameter equals the average S/N ratio for the Rayleigh channel; the parameter r characterizes the time dependence of the random channel transmission coefficients and typically lies in the range 0.99-0.999.
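A minimal sketch of such correlated Rayleigh fading, assuming (as one common variant, not spelled out in the text) that the quadrature components follow first-order Gauss-Markov sequences with correlation coefficient r:

```python
import math
import random

def gauss_markov_fading(n: int, r: float, sigma: float = 1.0, seed: int = 1):
    """Each quadrature component evolves as mu[i] = r*mu[i-1] + sqrt(1-r^2)*xi
    with xi ~ N(0, sigma^2), so the magnitude |mu| is Rayleigh-distributed
    and adjacent samples have correlation r (assumed model)."""
    rng = random.Random(seed)
    mu_c = rng.gauss(0, sigma)
    mu_s = rng.gauss(0, sigma)
    w = math.sqrt(1 - r * r)
    gains = []
    for _ in range(n):
        gains.append(math.hypot(mu_c, mu_s))        # channel gain magnitude
        mu_c = r * mu_c + w * rng.gauss(0, sigma)   # correlated update
        mu_s = r * mu_s + w * rng.gauss(0, sigma)
    return gains

gains = gauss_markov_fading(1000, r=0.99)
assert all(g >= 0 for g in gains)   # magnitudes are nonnegative
```

With r close to 1 the gain changes slowly from sample to sample, which is exactly the mechanism that groups symbol errors into bursts.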

Knowing all of the above parameters, one can determine the conditional probability density function w(z | a). Taking into account the above equations, we get expression (23). Thus, the conditional probability density functions are products of the probability density functions of the central and noncentral χ² distributions with two degrees of freedom.

Gilbert's model

Unfortunately, none of the channel models described above can describe the bursty nature of real transmission channels. Gilbert therefore proposed the following model of a channel with errors. The probability of an error at the current moment depends on the state of the channel at the previous moment in time; that is, a correlation between two successive events is assumed. In this way the memory of the channel and its bursty nature are manifested. Gilbert's model is essentially a first-order Markov model with two states, "good" and "bad". In the "good" state no errors occur in the received data; in the "bad" state the error probability takes a certain value greater than 0. Fig. 4 shows the Gilbert model.

Fig. 4. Schematic illustration of the Gilbert model

Fig. 5. Schematic illustration of the Gilbert–Elliott model
The probability that the channel is in a "bad" state is

(24),

and thus the total probability of error is

The Gilbert model is a renewal model, which means that the lengths of error bursts and the lengths of error-free gaps do not depend on the previous bursts and gaps. It is a so-called hidden Markov model (HMM): the current state of the model ("good" or "bad") cannot be determined until the output of the model is observed. In addition, the model parameters (p, q, P(1|B)) cannot be obtained directly from measurement. They can be estimated only with the help of special trigram statistics or by curve fitting, as suggested in Gilbert's work.

Because its parameters can then be estimated directly, a simplified version of the Gilbert model is most often used, in which the probability of error in the "bad" state always equals 1. This model can be represented as a first-order Markov chain with two states. The two parameters of the simplified Gilbert model (p, q) can be calculated directly from measured error traces, using the average length of the error bursts

(26)

and the average value of the lengths of the intervals

or the full probability of error
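Assuming the usual notation, with p the good-to-bad and q the bad-to-good transition probability, the stationary bad-state probability and the mean burst and gap lengths of the simplified Gilbert model can be sketched as:

```python
# Simplified Gilbert model: no errors in the "good" state, every symbol
# in error in the "bad" state. Burst and gap lengths are geometric,
# so their means are 1/q and 1/p respectively (assumed notation).
def gilbert_stats(p: float, q: float):
    p_bad = p / (p + q)      # stationary probability of the "bad" state
    mean_burst = 1.0 / q     # average length of an error burst
    mean_gap = 1.0 / p       # average length of an error-free gap
    return p_bad, mean_burst, mean_gap

p_bad, burst, gap = gilbert_stats(p=0.01, q=0.2)
print(round(p_bad, 4), burst, gap)  # 0.0476 5.0 100.0
```

In the simplified model p_bad is also the total error probability, since every symbol sent in the "bad" state is in error.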

Improvements to the Gilbert model were first described in the work of Elliott, in which errors can occur in the "good" state as well, as shown in Fig. 5.

This model, also known as the Gilbert–Elliott channel (GEC), overcomes the Gilbert model's restriction to geometric distributions of burst lengths. Besides fitting the HMM framework, it is non-renewal: the lengths of error bursts are no longer statistically independent of the lengths of the gaps. This opens new possibilities for modeling a radio channel but also complicates the parameter estimation procedure. The parameters of a non-renewal HMM and of the GEC model can be estimated using the Baum–Welch algorithm.

Fig. 6. Split Markov chains
In the 1960s, the researchers Berger, Mandelbrot, Sussman, and Elliott proposed using renewal processes to model the error characteristics of communication channels. For this purpose, Berger and Mandelbrot used an independent Pareto distribution of the form

for intervals between successive errors.

Fig. 7. Split Markov chains with two error-free and three error states

Further improvements to the Gilbert model were published by Fritchman (1967), who proposed splitting the Markov chain into several groups of error and error-free states (Fig. 6). A restriction forbidding certain transitions between the error and error-free states was introduced. The parameters of this model can be somewhat improved by selectively fitting the polygeometric distributions of the gap lengths and the error-burst lengths. The polygeometric distribution is calculated as

under the following restrictions

0 ≤ μ_i ≤ 1 and 0 ≤ λ_i ≤ 1.

The parameters μ_i and λ_i correspond to the probabilities of transition into a new state and of remaining within that state, K is the number of error-free states, and N is the total number of states.

The configuration of this model is shown in Fig. 7: it includes two error-free states and three error states. In general there is still a statistical relationship between the current gap and the previous error burst, as well as between the current gap (or error burst) and the previous gap (or error burst), so a complete description of the model would have to account for these dependencies. Fritchman therefore imposed a restriction: the proportions of the transition probabilities from one group of states to the other are kept fixed, which makes the model a renewal one. For example, for the 2/3 configuration the ratios between the probabilities are p13 : p14 : p15 = p23 : p24 : p25 and p31 : p32 = p41 : p42 = p51 : p52. Thus the Fritchman model shown in Fig. 8 is a special case of a partitioned Markov chain; the figure shows only one of its error states. The distribution of the intervals between errors uniquely characterizes this model, and its parameters can be found by fitting the corresponding curve. Each state of the Fritchman model is a memoryless error model, and therefore the Fritchman model is limited to polygeometric distributions of gap lengths and error-burst lengths.

Fig. 8. Fritchman's model
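The polygeometric distribution above is a weighted mixture of geometric distributions. Since the document's own formula is not reproduced here, the sketch below assumes the common parameterisation P(m) = Σ_i μ_i (1 − λ_i) λ_i^(m−1), with the weights μ_i summing to 1:

```python
def polygeometric_pmf(m, mus, lams):
    """P(m) for a 'polygeometric' (mixture-of-geometrics) distribution:
    P(m) = sum_i mu_i * (1 - lambda_i) * lambda_i**(m - 1), m = 1, 2, ...
    Assumed parameterisation: weights mu_i sum to 1, 0 < lambda_i < 1."""
    return sum(mu * (1 - lam) * lam ** (m - 1)
               for mu, lam in zip(mus, lams))
```

Each mixture component corresponds to one memoryless state of the Fritchman model, which is exactly why the model is limited to this family of distributions.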

The article has considered the main models of communication channels used for transferring discrete information and for providing access to shared information resources. For most of the models, the corresponding mathematical relations are given, and on the basis of their analysis, conclusions are drawn about the main advantages and limitations of these models. It was shown that the models under consideration differ significantly in their error characteristics.
Literature


  1. Adoul J.-P. A., Fritchman B. D., Kanal L. N. A critical statistic for channels with memory // IEEE Trans. on Information Theory. 1972. No. 18.

  2. Aldridge R. P., Ghanbari M. Bursty error model for digital transmission channels // IEEE Letters. 1995. No. 31.

  3. Murthy D. N. P., Xie M., Jiang R. Weibull Models. John Wiley & Sons Ltd., 2007.

  4. Pimentel C., Blake I. F. Modeling burst channels using partitioned Fritchman's Markov models // IEEE Trans. on Vehicular Technology. 1998. No. 47.

  5. McDougall J., Yi Y., Miller S. A statistical approach to developing channel models for network simulations // Proceedings of the IEEE Wireless Communications and Networking Conference. 2004. Vol. 3. P. 1660-1665.

Ministry of Education and Science of the Republic of Kazakhstan

Non-profit joint stock company

"Almaty University of Energy and Communications"

Department of Infocommunication Technologies

COURSE WORK

in the discipline "Digital Communication Technologies"

Performed:

Alieva D.A.

Introduction

2. System with ROS and continuous information transfer (ROS - np) and blocking

3. Determination of n, k, r, with the highest throughput R

4. Construction of encoder and decoder circuits for the selected g (x) polynomial

8. Calculations of the reliability indicators of the main and bypass channels

9. Choosing a highway on the map

Conclusion

Bibliography

Introduction


Recently, digital data transmission systems have become increasingly widespread. In this regard, special attention is paid to the study of the principles of transmission of discrete messages in the discipline "Digital Communication Technologies", which builds on the previously studied disciplines "Theory of Electrical Communication", "Theory of Electrical Circuits", "Fundamentals of Construction and CAD of Telecommunication Systems and Networks", "Digital Devices and Fundamentals of Computer Technology", and others. As a result of studying this discipline, one must know the principles of building systems for the transmission and processing of digital signals, hardware and software methods for increasing the noise immunity and transmission speed of digital communication systems, and methods for increasing the effective use of communication channels. It is also necessary to be able to calculate the main functional units, to analyze the influence of external factors on the performance of communication facilities, and to have the skills to use computer equipment for the calculation and design of software and hardware for communications.

Completion of course work contributes to the acquisition of skills in solving problems and a more thorough examination of the sections of the course "Digital Communication Technologies".

The purpose of this work is to design a data transmission path between a source and a receiver of information using a cyclic code and decision feedback, continuous transmission and blocking of the receiver. In the course work it is necessary to consider the principle of operation of the encoding and decoding device of the cyclic code. Software tools are widely used to simulate telecommunication systems. Using the "System View" package, in accordance with the given option, the circuits of the encoder and decoder of the cyclic code should be assembled.

1. Models of partial description of a discrete channel

In real communication channels, errors arise for many reasons. In wired channels, the largest number of errors is caused by short interruptions and impulse noise. In radio channels, fluctuation noise has a noticeable effect. In shortwave radio channels, most errors occur when the signal level changes under the influence of fading. In all real channels, errors are distributed very unevenly in time, so the error stream is also nonuniform.

There are many mathematical models of a discrete channel. In addition to general schemes and particular models, there are numerous models that give a partial description of the channel. Let us dwell on one of them, the model of L. P. Purtov.

The formula of the discrete channel model with independent errors:

Errors are bursty in nature; therefore, an error-grouping coefficient α is introduced:

Using this model, one can determine the dependence of the probability of the appearance of a distorted combination on its length n, and the probability of the appearance of combinations of length n with t errors (t < n).

The probability P(≥1, n) is a non-decreasing function of n.

For n = 1, P(≥1, n) = p_err.

The probability of distortion of a codeword of length n:

where α is the error grouping indicator.

At α = 0 we have the case of independent errors; as α → 1, errors become grouped (at α = 1 the probability of distortion of the code combination does not depend on n, since in every erroneous combination all elements are received in error). The largest values of α (0.5 to 0.7) are observed on cable communication lines, since a short interruption leads to the appearance of groups with a high error density. In microwave links, where intervals of high error density alternate with intervals of rare errors, α lies in the range from 0.3 to 0.5. In HF radiotelegraph channels, the error grouping indicator is the smallest (0.3-0.4).

The distribution of errors in combinations of different lengths:

estimates not only the probability of the appearance of distorted combinations (at least one error), but also the probability of combinations of length n with no fewer than t errors, P(≥t, n).

Consequently, the grouping of errors increases the number of code combinations affected by errors of high multiplicity. Analyzing the above, we can conclude that the grouping of errors reduces the number of distorted code combinations of a given length n. This is also understandable from purely physical considerations: with the same total number of errors, grouping concentrates them in individual combinations (the error multiplicity grows), and the number of distorted code combinations decreases.
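The effect of the grouping indicator can be illustrated numerically with the distortion-probability expression used later in this document (formula (17)). The function below is a sketch of that formula; the values d_0 = 6 and α = 0.5 are taken from the worked example in section 8:

```python
def p_distorted(n, d0, alpha, p_err):
    """Probability that a combination of length n is distorted, in the
    form used in this document (formula (17)):
        P(>=1, n) = (n / (d0 - 1)) ** (1 - alpha) * p_err
    alpha is the error-grouping indicator; alpha = 0 corresponds to
    independent errors."""
    return (n / (d0 - 1)) ** (1 - alpha) * p_err
```

For fixed n and p_err, a larger α gives a smaller probability of a distorted combination, which is the physical effect described above: grouping concentrates errors in fewer combinations.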

2. System with ROS and continuous information transfer (ROS-np) and blocking.

In ROS-np systems, the transmitter sends a continuous sequence of combinations without waiting for acknowledgment signals. The receiver erases only those combinations in which the decision circuit detects errors, and sends a re-request signal for them; the remaining combinations are delivered to the information receiver (PI) as they arrive. When implementing such a system, difficulties arise because of the finite time of transmission and propagation of signals. Suppose that at some time t' the reception of a codeword in which an error is detected is completed; by this instant, the next codeword is already being transmitted over the forward channel. If the propagation time of the signal in the channel t_c exceeds the duration of the codeword n·t_0, then by the time t' the transmission of one or more combinations following the erroneous one may be completed, and several more codewords will be transmitted until the moment t'' when the transmitter receives and analyzes the re-request signal for the erroneous combination.

Thus, with continuous transmission, between the moment of error detection (t') and the arrival of the repeated codeword (t''), h more combinations will be received, where the symbol [x] means the smallest integer greater than or equal to x.

Since the transmitter repeats only the combinations for which a re-request signal is received, a repetition delayed by h combinations means that the order of the combinations delivered by the system to the PI will differ from the order in which the code combinations arrive at the system. But the codewords must reach the recipient in the same order in which they were transmitted. Therefore, to restore the order of the combinations, the receiver must contain a special device and a buffer storage of considerable capacity (at least i·h, where i is the number of repetitions), since multiple repetitions are possible.

To avoid complicating and increasing the cost of receivers, ROS-np systems are usually built so that, after detecting an error, the receiver erases the erroneous combination and is blocked for h combinations (i.e., does not accept the h subsequent combinations), while the transmitter repeats the last h combinations (the erroneous one and the h − 1 following it). Such systems are called systems with blocking, ROS-npbl. They make it possible to organize continuous transmission of code combinations while preserving their order.
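The blocking mechanism can be sketched as a toy model: every detected error costs the h erased combinations, which the transmitter then repeats, and the delivered order is preserved without any reordering buffer. This is an illustration of the mechanism under our own simplifying assumptions (a combination fails only on its first reception), not the document's algorithm:

```python
def ros_npbl(n_comb, fail_first_try, h):
    """Toy model of ROS-npbl.  When combination i is received with a
    detected error, the receiver erases it together with the h - 1
    combinations that follow, and the transmitter repeats those h
    combinations.  Combinations listed in fail_first_try fail only on
    their first reception.  Returns the delivered sequence and the
    total number of combination transmissions."""
    delivered, transmitted = [], 0
    already_failed = set()
    i = 1
    while i <= n_comb:
        if i in fail_first_try and i not in already_failed:
            already_failed.add(i)
            transmitted += h        # the erased block of h combinations
            continue                # transmission resumes from combination i
        transmitted += 1
        delivered.append(i)
        i += 1
    return delivered, transmitted
```

Each detected error adds exactly h extra transmissions, but the delivered sequence stays in order, which is the point of the blocking scheme.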

Figure 1 - Block diagram of the system with ROS

3. Determination of n, k, r, at the highest throughput R.

The length of the codeword n should be chosen so as to provide the maximum throughput of the communication channel. When a correction code is used, the code combination contains n bits, of which k bits are informational and r bits are check bits:

Figure 2 - Block diagram of the system algorithm with ROS-npbl

If the communication system uses binary signals (signals of the "1" and "0" types) and each unit element carries no more than one bit of information, then there is a relationship between the information transfer rate and the modulation rate:

C = (k / n) * B, (1)

where C is the information transfer rate, bit / s;

B - modulation rate, Baud.

Obviously, the smaller r is, the closer the ratio k/n comes to 1, the less C and B differ, and hence the higher the throughput of the communication system.

It is also known that for cyclic codes with the minimum code distance d 0 = 3 the following relation is true:

The above statement also holds for larger d_0, although there are no exact relations between r and n; only upper and lower bounds are known.
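For d_0 = 3 the relation between r and n referred to above is commonly written as 2^r ≥ n + 1; this exact form is an assumption here, since the document's own formula is not reproduced. A quick check that the relative throughput k/n grows toward 1 with n:

```python
import math

def min_check_bits_d3(n):
    """Minimum number of check bits r for a cyclic (Hamming-type) code
    of length n with d0 = 3, using the classical relation 2**r >= n + 1
    (stated here as an assumption)."""
    return math.ceil(math.log2(n + 1))

def relative_throughput(n):
    """k/n with k = n - r information bits."""
    return (n - min_check_bits_d3(n)) / n
```

For n = 15, 63, 511 this gives roughly 0.73, 0.90, and 0.98, illustrating why long codewords are advantageous from the standpoint of constant redundancy.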

From the above, we can conclude that from the point of view of introducing constant redundancy into the codeword, it is advantageous to choose long codewords, since with increasing n the relative throughput increases, tending to the limit equal to 1:

In real communication channels, interference leads to the appearance of errors in the code combinations. When the decoder in a ROS system detects an error, a group of code combinations is re-requested; during the re-request, the flow of useful information decreases.

It can be shown that in this case:

where P_00 is the probability of error detection by the decoder (the probability of a re-request);

P_pp is the probability of correct (error-free) reception of the code combination;

M is the storage capacity of the transmitter, in number of code combinations.

At low error probabilities in the communication channel (p_err < 10^-3), the probability P_00 is also small, so the denominator differs little from 1 and we can assume:

With independent errors in the communication channel:

Storage capacity:

The sign < > means that in calculating M the nearest larger integer value should be taken.

where L is the distance between terminal stations, km;

v is the speed of signal propagation through the communication channel, km / s;

B - modulation rate, Baud.

After the simplest substitutions, we finally have

It is easy to see that at p_err = 0 formula (8) turns into formula (3).

In the presence of errors in the communication channel, R is a function of p_err, n, k, B, L, and v. Therefore, there exists an optimal n (for given p_err, B, L, v) at which the relative throughput is maximum.

Formula (8) becomes even more complicated in the case of dependent errors in the communication channel (error grouping).

Let us derive this formula for the Purtov error model.

As has been shown, the number of errors t_0 in a combination of length n bits is determined by equation (7.38). To detect this number of errors, we find a cyclic code with a code distance d_0 of at least t_0 + 1. Then, according to formula (7.38), it is necessary to determine the probability:

With some approximation, this probability can be related to the probability P_nd of the decoder failing to detect an error and to the number of check bits in the code combination:

Substituting this value into (9) and replacing t_0 by d_0 − 1, we have:

When calculating on microcalculators, it is more convenient to use decimal logarithms.

After transformations:

Returning to formulas (6) and (8) and replacing k with n − r, taking into account the value of r from formula (11), we obtain:

The second term of formula (8), taking into account the grouping of errors according to relation (7.37), takes the form:

Let us determine the optimal codeword length n that provides the largest relative throughput R, and the number of check bits r that provides the given probability of an undetected error.

Table 1 - Given probability of an undetected error

Table 1 shows that the highest throughput, R = 0.9127649, is provided by a cyclic code with parameters n = 511, r = 7, k = 504.

The generating polynomial of degree r is found from the table of irreducible polynomials (Appendix A).

For r = 7 we choose the polynomial g(x) = x^7 + x^4 + x^3 + x^2 + 1.
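The division of x^r·m(x) by g(x) performed by the encoder, and the syndrome check performed by the decoder, can be sketched as bit-level long division over GF(2). The helper names are ours; the generating polynomial is the one chosen above:

```python
G = 0b10011101  # g(x) = x^7 + x^4 + x^3 + x^2 + 1, degree r = 7
R = 7

def crc_remainder(msg_bits, gen=G, r=R):
    """Check bits of the cyclic code: remainder of x^r * m(x) divided
    by g(x) over GF(2), as produced by the encoder's division circuit."""
    reg = 0
    for bit in msg_bits + [0] * r:   # appending r zeros = multiplying by x^r
        reg = (reg << 1) | bit
        if reg & (1 << r):           # degree reached r: subtract g(x)
            reg ^= gen
    return reg

def syndrome(codeword_bits, gen=G, r=R):
    """Remainder of the received polynomial P(x) divided by g(x);
    a zero syndrome means no error was detected."""
    reg = 0
    for bit in codeword_bits:
        reg = (reg << 1) | bit
        if reg & (1 << r):
            reg ^= gen
    return reg
```

A codeword formed as the message followed by its check bits always yields a zero syndrome, while any single-bit distortion yields a nonzero one, which is exactly the decision the decoder's division register makes.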

4. Construction of encoder and decoder circuits for the selected g (x) polynomial

a) Let's build a cyclic code encoder.

The operation of the encoder is characterized by the following modes:

1. Formation of the k elements of the information group and, simultaneously, division of the polynomial x^r·m(x), representing the information part, by the generating polynomial g(x) in order to obtain the remainder of the division, r(x).

2. Formation of the r check elements by reading them from the cells of the dividing circuit to the output of the encoder.

The block diagram of the encoder is shown in Figure 2.

The encoder cycle for transmitting n = 511 unit elements takes n clock cycles. The clock signals are produced by a transmit clock generator, not shown in the diagram.

The first mode of encoder operation lasts k = 504 clock cycles. With the first clock pulse, the trigger T assumes a state in which signal "1" appears at its direct output and signal "0" at its inverse output. Signal "1" opens keys (AND gates) 1 and 3; signal "0" keeps key 2 closed. The trigger and the keys remain in this state for k + 1 = 505 clock cycles. During this time, the 504 unit elements of the information group (k = 504) pass through the open key 1 to the encoder output.

At the same time, through the open key 3, the information elements are fed to the device that divides the polynomial x^r·m(x) by g(x).

Division is carried out by a multi-cycle filter (a shift register with feedback) whose number of cells equals the number of check bits, i.e. the degree of the generating polynomial; in our case, r = 7 cells. The number of adders in the device is equal to the number of nonzero terms of g(x) minus one; in our case, four. The adders are placed after the cells corresponding to the nonzero terms of g(x). Since every irreducible polynomial has the term x^0 = 1, the adder corresponding to this term is installed before key 3 (AND gate).

After k = 504 clock cycles, the remainder of the division r(x) is written in the cells of the division device.

With the (k + 1)-th = 505th clock pulse, the trigger T changes state: signal "1" appears at the inverse output and "0" at the direct output. Keys 1 and 3 close and key 2 opens. During the remaining r = 7 clock cycles, the elements of the remainder (the check group) are fed through key 2 to the encoder output, also starting from the most significant bit.

Figure 3 - Block diagram of the encoder

b) Let's construct a decoding device for a cyclic code.

The operation of the decoder circuit (Figure 3) is as follows. The received code combination, represented by the polynomial P(x), enters the decoding register and, simultaneously, the cells of the buffer register, which contains k cells. The cells of the buffer register are connected through inhibit gates, which pass a signal only when there is a "1" at the first input and a "0" at the second (the inhibiting input is marked with a circle). The code combination enters the input of the buffer register through gate AND 1. This key is opened from the output of trigger T by the first clock pulse and closed by the (k + 1)-th clock pulse (completely analogous to the operation of trigger T in the encoder circuit). Thus, after k = 504 clock cycles, the information group of elements is written into the buffer register. In the register-filling mode the inhibit gates are open, because no voltage is applied to their second inputs from the side of key AND 2.

At the same time, during all n = 511 clock cycles, the received combination (the polynomial P(x)) is divided by the generating polynomial g(x) in the decoding register. The decoding-register circuit is completely analogous to the division circuit of the encoder, discussed in detail above. If the division yields a zero remainder (the syndrome S(x) = 0), then the subsequent clock pulses transfer the information elements to the decoder output.

If the received combination contains errors, the syndrome S(x) ≠ 0. This means that after the n-th (511th) clock cycle a "1" is written in at least one cell of the decoding register. A signal then appears at the output of the OR circuit, key 2 (gate AND 2) operates, the inhibit gates of the buffer register close, and the next clock pulse resets all register cells to "0". The incorrectly received information is erased. At the same time, the erasure signal serves as a command to block the receiver and to send a re-request.

5. Determination of the amount of transmitted information W

Let it be required to transmit information over a time interval T_per, called the information transmission interval. The failure criterion t_fail is the total duration of all failures permissible during the time T_per. If the total failure time during the interval T_per exceeds t_fail, the data transmission system is considered to be in a failed state.

Therefore, during the time T_per − t_fail it is possible to transmit W bits of useful information. Let us determine W for the previously calculated R = 0.9281713, B = 1200 baud, T_per = 460 s, t_fail = 60 s:

W = R · B · (T_per − t_fail) = 445,522 bits
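A quick check of this arithmetic, using the values given above:

```python
# Check of W = R * B * (T_per - t_fail) with the document's values.
R = 0.9281713      # relative throughput
B = 1200           # modulation rate, baud
T_per = 460        # transmission interval, s
t_fail = 60        # permissible total failure time, s

W = R * B * (T_per - t_fail)
print(round(W))    # 445522
```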

6. Building circuits of the encoder and decoder of the cyclic code in the System View environment

Figure 4 - Cyclic code encoder

Figure 5 - Output and input signal of the encoder

Figure 7 - Decoder input signal, bit error and output syndrome

7. Finding capacity and building a timing diagram

Let us find the capacity of the storage:

M = <3 + (2·t_p / t_k)> (13)

where t_p is the propagation time of the signal in the communication channel, s;

t_k is the duration of a code combination of n bits, s.

These parameters are found from the following formulas:

t_p = L / v = 4700 / 80000 = 0.05875 s (14)

h = 1 + <t_wait / (n·t_0)> (16)

where t_wait = 3·t_k + 2·t_p + t_ac + t_az = 0.6388 + 0.1175 + 0.2129 + 0.2129 = 1.1821 s,

t_ac and t_az being the analysis times in the receiver, and t_0 the duration of a unit element:

h = 1 + <1.1821 / (511 · 8.333·10^-4)> = 3

8. Calculation of reliability indicators of the main and bypass channels

The error probability is known (p_err = 0.5·10^-3); the total probability is the sum of the following components: p_pr - correct reception, p_nd - non-detection of an error, and p_det - detection of an error by the decoder (the probability of a re-request).

The dependence of the probability of the appearance of a distorted combination on its length is characterized by the ratio of the number of distorted code combinations N_err(n) to the total number of transmitted combinations N(n):

The probability P(≥1, n) is a non-decreasing function of n. For n = 1, P(≥1, n) = p_err, and as n → ∞ the probability P(≥1, n) → 1:

P(≥1, n) = (n / (d_0 − 1))^(1−α) · p_err, (17)

P(≥1, n) = (511/5)^(1−0.5) · 0.5·10^-3 = 5.05·10^-3

For independent errors in the communication channel, when n·p_err << 1:

p_det ≈ n·p_err (18)

p_det = 511 · 0.5·10^-3 = 255.5·10^-3

The sum of the probabilities must be equal to 1, i.e. we have:

p_pr + p_nd + p_det = 1 (19)

p_pr + 5.05·10^-3 + 255.5·10^-3 = 1
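These three probabilities can be recomputed directly; the labels p_nd, p_det, p_ok below are ours for the document's non-detection, detection, and correct-reception probabilities:

```python
n, p_err = 511, 0.5e-3
alpha, d0 = 0.5, 6                             # grouping indicator; d0 - 1 = 5

p_nd = (n / (d0 - 1)) ** (1 - alpha) * p_err   # formula (17), ~5.05e-3
p_det = n * p_err                              # formula (18), 0.2555
p_ok = 1 - p_nd - p_det                        # from formula (19)
```

The correct-reception probability comes out to about 0.739, i.e. roughly one combination in four triggers a re-request at this error rate.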

The timing diagram (Figure 8) illustrates the operation of the ROS-npbl system when an error is detected in the second combination, for the case h = 3. As can be seen from the diagram, combinations from the information source (AI) are transmitted continuously until the transmitter receives the re-request signal. After that, the transfer of new information from the source stops for the waiting time, and the combinations starting from the second are repeated. During this time, the receiver erases h combinations: the second combination, in which an error was detected (marked with an asterisk), and the subsequent ones (shaded). Having received the combinations resent from the storage device (from the second to the fifth inclusive), the receiver delivers them to the information receiver (PI), and the transmitter continues transmitting the sixth and subsequent combinations.

Figure 8 - Timing diagrams of the system with ROS-npbl

9. Choosing a highway on the map

Figure 9 - Highway Aktyubinsk - Almaty - Astana

Conclusion

During the course work, the essence of the model of partial description of a discrete channel (L. P. Purtov's model) was considered, as well as a system with decision feedback, continuous transmission, and blocking of the receiver.

The given values ​​were used to calculate the basic parameters of the cyclic code. In accordance with them, the type of the generating polynomial was chosen. For this polynomial, encoder and decoder circuits are constructed with an explanation of the principles of their operation. The same schemes were implemented using the "System View" package. All the results of the experiments are presented in the form of figures confirming the correct operation of the assembled encoder and decoder circuits.

For the forward and reverse discrete data transmission channels, the main characteristics were calculated: the probabilities of errors undetected and detected by the cyclic code, and others. For the ROS-npbl system, timing diagrams explaining the principle of its operation were constructed from the calculated parameters.

The route Aktyubinsk - Almaty - Astana was chosen on the geographic map of Kazakhstan. The selected 4700 km highway was divided into sections 200-700 km long. For a visual representation, the map is presented in the work.

Analyzing the calculated error grouping indicator, we can say that the main calculation in this work corresponds to the design of cable communication lines, since α lies in the range 0.4-0.7.

Bibliography

1 Sklar B. Digital Communication: Theoretical Foundations and Practical Applications. 2nd ed. / Transl. from English. M.: Williams, 2003. 1104 p.

2 Proakis J. Digital Communication. M.: Radio and Communication, 2000. 797 p.

3 Sergienko A. B. Digital Signal Processing: A Textbook for Universities. M.: 2002.

4 Brand standard. Educational work. General requirements for construction, presentation, design and content. FS RK 10352-1910-U-e-001-2002. Almaty: AIPET, 2002.

5 Shvartsman V. O., Emelyanov G. A. The Theory of Transmission of Discrete Information. M.: Communication, 1979. 424 p.

6 Transfer of Discrete Messages / Ed. V. P. Shuvalov. M.: Radio and Communication, 1990. 464 p.

7 Emelyanov G. A., Shvartsman V. O. Transfer of Discrete Information. M.: Radio and Communication, 1982. 240 p.

8 Purtov L. P. et al. Elements of the Theory of Transmission of Discrete Information. M.: Communication, 1972. 232 p.

9 Kolesnik V. D., Mironchikov E. T. Decoding of Cyclic Codes. M.: Communication, 1968.


A discrete communication channel (DCC) has at its input a set of code symbols X with source entropy H(X), and at its output a set of symbols Y with entropy H(Y) (Fig. 42).

If the generated symbols from the set X and the identified symbols from the set Y are placed at the nodes of a graph, and the nodes are connected by arcs reflecting the probabilities of transition of one symbol into another, then we obtain the model of a discrete communication channel shown in Fig. 43.

The set of symbols X is finite and is determined by the base K_x of the code number system at the channel input. The number system of the symbols identified at the output is also finite, with base K_y. The transition probabilities connecting the input and output symbols can be written as the matrix

In this matrix, the i-th column determines the probabilities of identifying the symbol y_i at the output of the discrete communication channel. The probabilities on the main diagonal are called the probabilities of correct symbol passage; the remaining probabilities are transformation probabilities. Analysis of the model of a discrete communication channel is possible if the statistics of symbol appearance at the channel input are known; then the entropy H(X) can be determined. If the statistics of the symbols at the channel output are known, it is easy to establish the entropy H(Y). Loss of information can be caused by interference, which manifests itself in the discrete channel as a stream of errors. The error stream is specified by a particular error model, on the basis of which the matrix P can be established. Knowing this matrix, one finds the conditional entropy, which, as shown above, reflects the loss of information as it passes through the communication channel; in this case, the loss due to the action of errors in the discrete communication channel. On the basis of this model, discrete channels can be classified.

According to the number system of the code at the DCC input, one distinguishes binary, ternary, quaternary, and other communication channels.

According to the ratio of the number systems at the output and input of the DCC, channels with erasure are distinguished if K_y > K_x, and channels without erasure if K_y = K_x.

Depending on whether the symbol transition probabilities in the DCC depend on time, one distinguishes non-stationary channels, for which such a dependence exists, and stationary channels, in which the transition probabilities are constant. Channels are also classified by the dependence of the transition probabilities on previously transmitted symbols: discrete channels with memory, in which such a dependence exists, and discrete channels without memory, in which it does not.

At certain relations between the transition probabilities in the matrix P, one distinguishes: channels symmetric with respect to the input, for which the probabilities in each row of the matrix are permutations of the same set of numbers; channels symmetric with respect to the output, for which the same holds for the columns of the matrix; and channels symmetric with respect to both input and output, when both conditions are satisfied. According to this classification, the transition matrix of the binary symmetric channel has the form

P = | 1-P   P  |
    |  P   1-P |

where P is the probability of symbol distortion.

Correspondingly, the transition matrix of a binary symmetric channel with erasure is

P = | 1-P-q    P     q |
    |   P    1-P-q   q |

where P is the probability of transformation; 1-P-q is the probability of correct transmission of a symbol; q is the probability of erasure of a symbol.

For the limiting case of a binary symmetric channel without noise, the transition matrix reduces to the identity matrix:

P = | 1  0 |
    | 0  1 |
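As a small numerical sketch, the three transition matrices just introduced can be written out for assumed values of P and q (the numbers below are illustrative) and checked for stochasticity, since every row of a transition matrix must sum to one:

```python
import numpy as np

p, q = 0.05, 0.10  # assumed distortion and erasure probabilities

# Binary symmetric channel (rows: inputs 0/1, columns: outputs 0/1)
bsc = np.array([[1 - p, p],
                [p, 1 - p]])

# Binary symmetric channel with erasure (third column = erasure symbol)
bsc_erase = np.array([[1 - p - q, p, q],
                      [p, 1 - p - q, q]])

# Limiting case: noiseless binary channel is the identity matrix
noiseless = np.eye(2)

# Each row of every transition matrix must sum to one
for m in (bsc, bsc_erase, noiseless):
    assert np.allclose(m.sum(axis=1), 1.0)
```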

The graph of the K-ary channel without noise is shown in Fig. 44.

Using the model of a discrete communication channel, the basic transmission problems can be solved. For a channel without noise, this is the choice of the optimal code, one that is matched to the source by its properties, i.e., has the smallest average length. For a noisy channel, this is the choice of a code that provides a given transmission fidelity at the highest possible rate. To solve these problems, let us consider the main characteristics of the discrete communication channel (DCS).

The main characteristic of a discrete channel is its bandwidth, understood as the upper limit of the amount of information that can be transmitted through the communication channel described by the given model. Let us estimate the bandwidth of a discrete communication channel. The mutual information linking the sets of symbols X and Y is I(X, Y) = H(Y) - H(Y/X). The bandwidth is C = max I(X, Y).

Let us expand this expression for individual variants of the discrete communication channel.

Bandwidth of a noise-free discrete communication channel. In the absence of noise there is no loss of information in the channel, so H(Y/X) = 0 and C = I max = H max (Y). As is known, the maximum entropy of discrete events is achieved when they are equiprobable. Considering that K y symbols may appear at the output of the communication channel, we get H max (Y) = log 2 K y. Hence C = log 2 K y.

Thus, the bandwidth of a noiseless discrete channel depends only on the code base: the larger it is, the higher the "information content" of each symbol and the greater the bandwidth. In this representation, bandwidth is measured in binary units per symbol and is not related to time. With the transition from a binary code to a quaternary code, the bandwidth of a noiseless DCS doubles.

Bandwidth of a discrete symmetric communication channel with noise. Consider a channel without erasure, for which K x = K y = K. In the presence of noise in the DCS, the input symbol x j is converted into the symbol y i with probability p(y i / x j). The total probability of symbol transformation is P, the sum of p(y i / x j) over all i not equal to j. If the channel is symmetric, the probabilities in this sum are identical, and therefore p(y i / x j) = P/(K-1) for i not equal to j. The probability of correct transmission of a symbol is 1-P (Fig. 45). The bandwidth of the channel in question is C = H max (Y) - H(Y/X). It was shown earlier that H max (Y) = log 2 K,

Assuming that the symbols at the input of the DCS are equiprobable, i.e., p(x j) = 1/K, we find H(Y/X) = -(1-P) log 2 (1-P) - P log 2 [P/(K-1)].

The minimum of the conditional entropy is achieved by an appropriate choice of the decision threshold of the receiving circuit, which ensures the minimum value of the transformation probability P. Hence the bandwidth

C = log 2 K + (1-P) log 2 (1-P) + P log 2 [P/(K-1)].

It can be seen that the bandwidth increases with an increase in the code base and with a decrease in the probability of symbol transformation.

In the case of a binary symmetric channel with noise, the bandwidth is found by setting K = 2, i.e. C = 1 + (1-P) log 2 (1-P) + P log 2 P. The dependence of the bandwidth of the binary symmetric channel on the probability of symbol distortion is shown in Fig. 46. At P = 0 we get C = 1. As the probability of distortion rises to 0.5, the bandwidth drops to zero.
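The dependence C(P) plotted in Fig. 46 can be reproduced with a short sketch; the formula is the one given in the text, and the probe values of P are arbitrary:

```python
from math import log2

def bsc_capacity(P):
    """Bandwidth of the binary symmetric channel in binary units per
    symbol: C = 1 + (1 - P)*log2(1 - P) + P*log2(P)."""
    if P in (0.0, 1.0):
        return 1.0  # the entropy terms vanish at the endpoints
    return 1 + (1 - P) * log2(1 - P) + P * log2(P)

print(bsc_capacity(0.0))             # 1.0: no distortion, full bandwidth
print(bsc_capacity(0.5))             # 0.0: the channel output carries nothing
print(round(bsc_capacity(0.01), 4))  # 0.9192: close to one in the working range
```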

The working range of the discrete channel corresponds to the probability P < 0.1, for which the bandwidth is close to unity.

Bandwidth of a binary symmetric channel with erasure. If the symbols x 1, x 2 arrive at the input of the binary channel, then in the presence of erasure the symbols y 1, y 2 and the erasure symbol y 3 appear at the channel output. The erasure symbol is formed when the receiving device has a special erasure zone; a signal falling into this zone produces the uncertainty (erasure) symbol. The introduction of an erasure zone into the receiver reduces the probability of symbol transformation P at the cost of introducing the probability of symbol erasure q (Fig. 47). The probability of correct transmission of a symbol is then 1-P-q. The bandwidth is C = H(Y) - H(Y/X). In the presence of the erasure symbol, striving for equiprobability of the symbols at the channel output makes no sense, so the entropy at the output H(Y) is defined as

H(Y) = -P(y 1) log 2 P(y 1) - P(y 2) log 2 P(y 2) - P(y 3) log 2 P(y 3),

where P(y i) is the probability of the symbol y i at the output of the discrete channel.

Let us find the probabilities of the occurrence of symbols at the output under the condition that the symbols at the input are equiprobable; then

P(y 1) = P(y 2) = (1-q)/2, P(y 3) = q.

Accordingly, the conditional entropy

H(Y/X) = -(1-P-q) log 2 (1-P-q) - P log 2 P - q log 2 q.

Hence the bandwidth

C = H(Y) - H(Y/X) = (1-q) - (1-q) log 2 (1-q) + (1-P-q) log 2 (1-P-q) + P log 2 P.
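Combining the expressions for H(Y) and H(Y/X) above, and keeping the assumption of equiprobable input symbols, the bandwidth of the binary symmetric channel with erasure can be evaluated numerically; the values of P and q below are illustrative:

```python
from math import log2

def h(p):
    """One entropy term, -p*log2(p), with h(0) taken as 0."""
    return 0.0 if p == 0 else -p * log2(p)

def erasure_bandwidth(P, q):
    """C = H(Y) - H(Y/X) for the binary symmetric channel with erasure,
    assuming equiprobable input symbols; P is the transformation
    probability and q the erasure probability."""
    H_Y = 2 * h((1 - q) / 2) + h(q)     # P(y1) = P(y2) = (1-q)/2, P(y3) = q
    H_YX = h(1 - P - q) + h(P) + h(q)
    return H_Y - H_YX

# With no transformations (P = 0) the result reduces to C = 1 - q
print(round(erasure_bandwidth(0.0, 0.2), 6))  # 0.8
```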

The experience of using an erasure channel has shown that the introduction of an erasure zone is effective only in the presence of interference. It is then possible to obtain P much smaller than q and thereby increase the bandwidth of the communication channel.

In the general case, under interference, an increase in the bandwidth of a discrete channel is achieved by making the symbols at the output equiprobable and by reducing the probability of symbol distortion. In the case of a symmetric communication channel, equiprobability of the symbols at the output requires equiprobability of the symbols at the input of the channel. This condition coincides with the previously obtained requirement for constructing an optimal code. Reducing the probability of symbol distortion in a discrete channel depends on the design of the receiving circuit at the physical layer. The distribution law of the interference at the output of the continuous communication channel makes it possible to find the optimal value of the decision threshold of the receiving circuit and, on its basis, to estimate and minimize the probability of symbol distortion. Thus, on the basis of the model of a discrete communication channel, it is possible to set an upper limit on the information transfer rate and to match the performance of the source with the bandwidth of the communication channel. The conditional entropy makes it possible to estimate the minimum necessary redundancy per code symbol, which gives the lower limit of redundancy when constructing error-detecting and error-correcting codes for noisy communication channels. The specific value of the redundancy is established from the requirements for the probabilistic-temporal characteristics of the transmission process; these characteristics can be calculated on the basis of a model of the functioning of the data transmission system.

A discrete channel is a set of means for transmitting discrete signals. Such channels are widely used, for example, in data transmission, telegraphy, and radar.

Discrete messages, consisting of a sequence of signs of the message-source alphabet (the primary alphabet), are converted in the encoder into a sequence of symbols. The volume m of the symbol alphabet (the secondary alphabet) is, as a rule, smaller than the volume l of the sign alphabet, but the two may coincide.

The material embodiment of a symbol is an elementary signal obtained in the process of manipulation, a discrete change of a certain parameter of the information carrier. Elementary signals are generated taking into account the physical constraints imposed by the particular communication line. As a result of manipulation, a complex signal is assigned to each sequence of symbols. The set of complex signals is finite. They differ in the number, composition and mutual arrangement of elementary signals.

The terms "elementary signal" and "symbol", as well as "complex signal" and "sequence of symbols", will be used as synonyms hereinafter.

The information model of a noisy channel is specified by the set of symbols at its input and output and by a description of the probabilistic properties of the transmission of individual symbols. In general, a channel can have many states and pass from one state to another both over time and depending on the sequence of transmitted symbols.

In each state the channel is characterized by the matrix of conditional probabilities p(v j / u i) that the transmitted symbol u i will be perceived at the output as the symbol v j. The values of these probabilities in real channels depend on many factors: the properties of the signals that are the physical carriers of the symbols (energy, type of modulation, etc.), the nature and intensity of the interference acting in the channel, and the method of detecting the signal on the receiving side.

If the transition probabilities of the channel depend on time, which is typical for almost all real channels, it is called a non-stationary communication channel. If this dependence is insignificant, a model in the form of a stationary channel is used, whose transition probabilities do not depend on time. A non-stationary channel can be represented by a set of stationary channels corresponding to different time intervals.

A channel is called a channel with memory (with aftereffect) if the transition probabilities in a given channel state depend on its previous states. If the transition probabilities are constant, i.e. the channel has only one state, it is called a stationary channel without memory. A k-ary channel is a communication channel in which the number of different symbols at the input and output is the same and equal to k.

A stationary discrete binary channel without memory is uniquely determined by four conditional probabilities: p(0/0), p(1/0), p(0/1), p(1/1). It is customary to depict such a channel model in the form of the graph shown in Fig. 4.2, where p(0/0) and p(1/1) are the probabilities of undistorted transmission of symbols, and p(0/1) and p(1/0) are the probabilities of distortion (transformation) of the symbols 0 and 1, respectively.

If the probabilities of symbol distortion can be taken as equal, i.e. p(0/1) = p(1/0), such a channel is called a binary symmetric channel [for p(0/1) not equal to p(1/0) the channel is called asymmetric]. The symbols at its output are received correctly with probability p and incorrectly with probability 1-p = q. The mathematical model is thereby simplified.

It was this channel that was studied most intensively, not so much because of its practical significance (many real channels are described by it only very approximately) as because of the simplicity of its mathematical description.

The most important results obtained for a binary symmetric channel are extended to broader classes of channels.
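A stationary memoryless binary symmetric channel is straightforward to simulate: every transmitted bit is flipped independently with the same probability, regardless of time or of previously sent symbols. In the sketch below the flip probability and the seed are arbitrary assumptions; it illustrates that the empirical error rate approaches the channel's transformation probability:

```python
import random

def transmit(bits, p, seed=1):
    """Stationary memoryless binary symmetric channel: each bit is
    flipped independently with the same transformation probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

bits = [0, 1] * 5000
received = transmit(bits, p=0.1)
error_rate = sum(a != b for a, b in zip(bits, received)) / len(bits)
# error_rate should lie close to the channel parameter p = 0.1
```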


One more channel model, which has recently been gaining importance, should be noted: the discrete erasure channel. Its characteristic feature is that the alphabet of the output symbols differs from the alphabet of the input symbols. At the input, as before, the symbols are 0 and 1, while at the output of the channel states are fixed in which the signal can with equal justification be assigned to either a one or a zero. In place of such a symbol neither a zero nor a one is recorded: the state is marked with an additional erasure symbol S. During decoding it is much easier to correct such symbols than incorrectly identified ones.

Fig. 4.3 shows models of the erasure channel in the absence (Fig. 4.3, a) and in the presence (Fig. 4.3, b) of symbol transformation.
