
Signal recovery. Informational calculation of the system

For the informational calculation we take, as the initial criterion, the permissible root-mean-square error of the system, which is determined through the errors of the individual nodes. In our case it is given by the following formula:

where the first term is the rms ADC error arising from quantization noise (the ADC quantization error), and the second is the signal recovery error.

To simplify the calculations, all of these errors are initially assumed to be equal. It then follows from formula (1) that

In accordance with the terms of reference, the conversion error is 1%; therefore

Calculation of the ADC bit depth

ADCs convert analog signals into digital form and are terminal devices in the interface for entering information into a computer. The main characteristics of an ADC are: resolution, accuracy and speed. The resolution is determined by the bit depth and the maximum range of the analog input voltage.

The relative root-mean-square error introduced by ADC quantization is calculated by the formula

where the quantity in the numerator is the root-mean-square value of the quantization noise.

The ADC quantization step is determined by the signal range U_s and the number of ADC bits n.

Thus, the ADC quantization error is

From this expression, you can determine the minimum required ADC bit depth:

Based on this,

Therefore, the minimum ADC bit depth for this problem is 6 bits. But since the ADC in the ADAM-6024 module has 16 bits, its real conversion error will be
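The bit-depth arithmetic above can be checked with a short script. This is a sketch: it assumes the 1% total budget is split equally among three error sources, and that the relative rms quantization noise of an n-bit converter is 1/(2^n·sqrt(12)) of full scale.

```python
import math

# Assumed error model: total rms budget of 1% split equally among
# three sources, so each may contribute at most 0.01 / sqrt(3).
per_source = 0.01 / math.sqrt(3)

# Relative rms quantization noise of an n-bit ADC: 1 / (2**n * sqrt(12)).
# Minimum bit depth satisfying 1 / (2**n * sqrt(12)) <= per_source:
n_min = math.ceil(math.log2(1.0 / (per_source * math.sqrt(12))))

# Actual quantization error of the 16-bit ADC in the ADAM-6024 module.
err_16bit = 1.0 / (2 ** 16 * math.sqrt(12))
```

With these assumptions the script reproduces both the 6-bit minimum and the far smaller error of the 16-bit converter.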

Calculation of the maximum possible recovery error

Since the assignment specifies a maximum conversion error of 1%, to satisfy this condition the recovery error must be less than or equal to

Reconstruction of a continuous signal U (t) using the interpolation method

The interpolation recovery method is very widespread today and is best suited for processing signals by computer. It is based on the Lagrange interpolation polynomial. For simplicity of implementation of the interpolating devices, a polynomial of order no higher than two is usually used, mainly zero- and first-order interpolation (stepwise and linear). Reconstruction of signals using step (a) and linear (b) interpolation is illustrated in Figure 13.

Figure 13. Reconstruction of signals using step (a) and linear (b) interpolation

With stepwise interpolation, the instantaneous values U(kT) of the discrete signal U(t) are held constant over the entire sampling interval T (Figure 13, a).

Linear interpolation consists in connecting the instantaneous values U(kT) by straight-line segments, as shown in Figure 13, b.

The interpolation method of reconstruction has an error, which in practice is often expressed through its maximum relative value

where the numerator is the deviation of the signal reconstructed by the interpolation method (stepwise or linear), and the denominator is the range of variation of the discrete signal U(t).

The sampling period is selected, taking into account the permissible error, from the following formulas.

For the step interpolator

For linear interpolation

For parabolic interpolation

Let's define the sampling period for one channel according to Kotelnikov:

According to the assignment of the diploma project, the frequency of the processes must be below 0.1 Hz. The ADAM-6024 analog I/O module has fmax = 10 Hz. Since the system under development uses 4 analog input channels, the limiting sampling rate for each channel is fmax = 2.5 Hz. The required sampling rate for step interpolation is then:

Consequently, step interpolation cannot meet the requirements for the system being developed, since the sampling rate it demands is significantly higher than 2.5 Hz.

The sampling rate for linear interpolation is

The sampling rate for parabolic interpolation is

Note that the sampling rates for linear and parabolic interpolation are below the module's per-channel limit. However, interpolation of second and higher order is rarely used in practice, since its implementation is more complicated; we will therefore use linear interpolation to restore the signals.
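The gap between step and linear interpolation can be illustrated numerically. This is a sketch with assumed illustrative values: a pure 0.1 Hz sinusoid sampled at the 2.5 Hz per-channel limit and reconstructed both ways on a dense grid.

```python
import numpy as np

f, fs = 0.1, 2.5                                 # process frequency, sample rate (Hz)
t_fine = np.linspace(0.0, 10.0, 10001)           # one full period, dense grid
x_fine = np.sin(2 * np.pi * f * t_fine)

t_s = np.arange(0.0, 10.0 + 1e-9, 1.0 / fs)      # sample instants
x_s = np.sin(2 * np.pi * f * t_s)

# Step interpolation: hold the previous sample over the interval.
idx = np.minimum(np.searchsorted(t_s, t_fine, side="right") - 1, len(t_s) - 1)
x_step = x_s[idx]
# Linear interpolation: connect samples with straight segments.
x_lin = np.interp(t_fine, t_s, x_s)

err_step = np.max(np.abs(x_fine - x_step))       # roughly w*T of the amplitude
err_lin = np.max(np.abs(x_fine - x_lin))         # roughly (w*T)**2 / 8
```

At the same sample rate the linear reconstruction is more than an order of magnitude more accurate, which is why it is retained here.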

Reconstruction of signals reduces to the estimation of a certain number of unknown parameters of the useful signal. We restrict ourselves to the case of estimating one signal parameter, for example the amplitude V, for a given signal shape. The noise is assumed to be additive white Gaussian noise. We represent the useful signal in the form

where f(t) is a known function of time and V is the signal parameter.

The task is to use the received sample Y to determine the value of the parameter V in the useful signal X.

In contrast to the problems of detecting and distinguishing signals, here there is an infinite set of possible values of the parameter V and, accordingly, an infinite number of hypotheses. The methods considered for the two-alternative and multi-alternative situations are also applicable to the signal-restoration problem.

Let us estimate the parameter V by the maximum likelihood method. If the received signal is sampled at discrete times, the likelihood function for the parameter V is

(2.38)

The task is to find the value of the parameter V for which the likelihood function is maximal. The maximum of the likelihood function corresponds to the minimum of the exponent in expression (2.38).

From the minimum condition

whence we obtain the estimated value of the parameter

(2.39)

Passing to the continuous case, we obtain

(2.40)

Fig. 2.3 shows a diagram of a decision device performing the signal-parameter estimation. The device contains a generator of the signal f(t), a multiplier that multiplies y(t) by f(t), and an integrator that integrates the product y(t)f(t).

To assess the accuracy of signal reconstruction, we use the standard-deviation criterion. For this purpose, the received signal in (2.40) is expressed as the sum of the useful signal Vf(t) and the noise; then (2.40) gives

Fig. 2.3. Unknown-parameter estimator

Recovery error

Variance of the error

The mean of the product represents the interference correlation function

where G0 is the spectral density of the interference and the second factor is the delta function.

Therefore, the root-mean-square value of the recovery error is
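A minimal numerical sketch of the estimator (2.39) is shown below; the signal shape, the amplitude, and the noise level are assumed illustrative values, not taken from the project.

```python
import numpy as np

# Estimator (2.39): with y_k = V * f_k + noise_k, the maximum-likelihood
# estimate of the amplitude is V_hat = sum(y_k * f_k) / sum(f_k**2).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
f = np.sin(2 * np.pi * 5 * t)            # known signal shape f(t)
V_true = 2.5                              # unknown amplitude to be estimated
noise = rng.normal(0.0, 0.5, t.size)      # additive white Gaussian noise
y = V_true * f + noise

V_hat = np.sum(y * f) / np.sum(f * f)     # correlator estimate
```

The estimate's scatter shrinks as the energy of f(t) grows, in line with the rms recovery error discussed above.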

The signal reconstruction problem can also be solved by the optimal filtering method. In general, the formulation is as follows. Let the oscillation received over a certain time interval be a function of the signal and noise:

(2.42)

A signal can depend not on one, but on several parameters, and either the signal itself or its parameter are random processes. Function type, i.e. a method for combining signal and noise, and some of their statistical characteristics are assumed to be known a priori. Proceeding from them, it is necessary to determine the structure of the device (Fig. 1), which decides in an optimal way which realization of the signal itself or its parameter is contained in the received oscillation.

Fig. 2.4. Decision device

Due to the presence of noise and the random nature of the signal, the estimate of the signal realization or of its parameter will not coincide with the true one, i.e. filtering errors will occur. For a quantitative assessment of filtering quality, the minimum root-mean-square error criterion, the maximum signal-to-noise ratio criterion, and the maximum a posteriori probability criterion are often used. Consider the linear filtering problem; we also assume that the signal and noise interact additively, i.e.

Let us dwell first on the minimum mean-square error criterion. We consider the signal and noise to be stationary normal random processes with known correlation functions.

It is necessary to determine the system which extracts the useful signal from the received mixture

with the minimum root-mean-square error, i.e. the sought optimal system must minimize the value

(2.43)

It is necessary to determine the structure of the filter (Fig. 2.4)

Depending on the problem, the system output must either predict the value of the input signal ahead in time (prediction) or separate (smooth) the signal from the received oscillation.

A rigorous solution to this problem was obtained by A. N. Kolmogorov and N. Wiener.

They showed that the optimal device belongs to the class of linear filters with constant parameters. Let us illustrate their results. Suppose that at the input of a physically realizable linear system (Fig. 2.4) with an impulse response

(2.44)

a stationary random process is applied. Then the stationary random process at its output is determined by the relation

(2.45)

Substituting (2.45) into (2.43), we obtain the following expression for the rms filtering error:

which, after simple transformations, reduces to the form:

Here the first function is the cross-correlation function of the input and desired processes,

and the second is the autocorrelation function of the input random process.

To determine the impulse response of the optimal filter that minimizes the root-mean-square error, we use the following technique from the calculus of variations. Let:

where the first factor is a parameter independent of time and the second is an arbitrary function. In this case, the condition for the minimum of the root-mean-square error takes the form

After substituting (8) into (5), condition (9) takes the form:

Since the last relation must be satisfied for an arbitrary function, it follows that the impulse response must satisfy a Fredholm integral equation of the first kind

(10)

This equation is the basic equation of the theory of linear filtering and is called the Wiener-Hopf equation.

Thus, the problem of finding the optimal smoothing or predicting physically realizable filter reduces to solving the integral equation (10). This solution presents certain difficulties, due mainly to the physical-realizability requirement on the optimal filter. In the particular, but practically important, case of a rational spectral density of the input process, the following expression for the transfer function can be obtained from (10):

(12)

In this case, the minimum root-mean-square filtering error is

(13)

where, (14)

For the particular case of smoothing an additive mixture of a mutually independent stationary random process and white noise with correlation function

formula (11) simplifies:

where the "+" index means that, when the expression in square brackets is decomposed into partial fractions, only those corresponding to poles in the upper half-plane are retained; all partial fractions corresponding to poles in the lower half-plane, as well as the polynomial part, are discarded. The minimum root-mean-square error for this case can be calculated by the formula

Even so, practical calculations using the above formulas turn out to be cumbersome. A significant simplification is obtained if the physical-realizability requirement (3) is not imposed on the optimal filter, i.e. if the lower integration limit in (4) and the subsequent formulas is taken to be infinite. In this case, instead of equation (10), we obtain the integral equation:

(15)

whose solution leads to the following expression for the transfer function of a physically unrealizable filter:

(16)

The minimum root-mean-square error in this case is calculated by formula (13). For the particular case of statistically independent signal and noise with zero mean values, formula (16) reduces to the form:

Although the latter relations correspond to physically unrealizable optimal filters, they are useful, since no physically realizable filter can give a smaller root-mean-square error than the filters defined by expression (16). This is because imposing the physical-realizability condition (3) on the filter narrows the choice of the optimal filter characteristic and can therefore only worsen the final result.
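For the independent-signal-and-noise case, the reduced form of (16) is H(w) = Gs(w) / (Gs(w) + Gn(w)), which can be sketched with an FFT. This is only an illustration under assumed toy spectra: the filter here is built from the known clean signal, which a real device would not have.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n) / n
s = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # "useful" signal
noise = rng.normal(0.0, 1.0, n)                                   # white noise
y = s + noise                                                     # received mixture

S = np.abs(np.fft.fft(s)) ** 2                       # signal spectrum Gs
N = np.full(n, np.mean(np.abs(np.fft.fft(noise)) ** 2))  # flat noise level Gn
H = S / (S + N)                                      # non-causal Wiener filter
s_hat = np.real(np.fft.ifft(H * np.fft.fft(y)))      # smoothed estimate

mse_filtered = np.mean((s_hat - s) ** 2)
mse_raw = np.mean((y - s) ** 2)
```

Because the signal and noise spectra barely overlap here, the smoothed estimate is far closer to the clean signal than the raw mixture, consistent with the error bound property stated above.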

In conclusion, we note that the expression for the mean square reproduction error will have the form

from which it follows that ideal filtering is possible only when the signal and interference spectra do not overlap.

If a function x(t), satisfying the Dirichlet conditions and having a spectrum with a finite cutoff frequency, is sampled cyclically with a sufficiently small period, it can be reconstructed from this set of its instantaneous values without error.

Representation of a signal by means of samples. V. A. Kotelnikov's theorem

As already noted, when a signal is digitized, samples are taken, and sampling and quantization are used to obtain the signal value. In some cases the sampling times are placed randomly on the time axis, and information about the waveform is lost. From random samples we can only determine the probability density of the signal values. Thus random samples give us statistical information about the magnitude of the input signal: we can measure the rms and peak values of the input signal and determine the range of values it takes, but we cannot determine the shape of the signal or its spectrum.

In many cases the signal is sampled at equidistant points in time. It is then important to decide how many samples per unit time must be taken in order to describe a time-continuous signal adequately. The answer is given by the theorem of V. A. Kotelnikov. In foreign technical literature this theorem is known as Shannon's sampling theorem.

This theorem states that in order to recover the original signal without errors from its sampled values ​​taken at regular intervals, the sampling rate must be more than double the frequency of the highest frequency component present in the continuous input signal. Strictly speaking, the text of V.A. Kotelnikov's theorem reads as follows:

The Dirichlet condition means that the function is bounded, piecewise continuous, and has a limited number of extrema.

A feature of a signal sampled in accordance with the Kotelnikov theorem is that it can be reconstructed using a low-pass filter. Therefore, if the signal x(t), sampled with the appropriate step, is applied to the input of an ideal filter with the corresponding upper cutoff frequency, a continuous signal x(t) reconstructed without error is obtained at the output (Fig.).

Fig. Sampling and signal recovery circuit

Consider the transmission of several signals over one communication line; for this they must be sampled. This operation is implemented with a switch, the information is transmitted over the communication line, and then, knowing the switching frequency, the signals can be restored at the other end of the line (Fig.). The sampling rate of the switch must be n times the rate required for a single channel, where n is the number of measuring transducers.



Kotelnikov's theorem allows an analog signal to be converted into a digital one, which is necessary for its further processing by computer. Choosing the sampling step according to Kotelnikov guarantees that the discrete representation preserves all information about the signal's spectral composition. An ADC is used to convert the analog signal to digital form; its sampling rate is chosen in accordance with the Kotelnikov theorem from the upper cutoff frequency of the signal.
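Reconstruction according to the Kotelnikov theorem can be sketched directly with the interpolation series x(t) = sum_k x(kT)·sinc((t − kT)/T); the frequencies below are assumed illustrative values.

```python
import numpy as np

# A 1 Hz sine sampled at fs = 5 Hz (> 2 * f) is rebuilt from its samples
# with the sinc interpolation kernel.  The sum is truncated to a wide,
# two-sided window of samples, so a small truncation error remains.
f, fs = 1.0, 5.0
T = 1.0 / fs
k = np.arange(-200, 201)                  # generous two-sided sample window
x_k = np.sin(2 * np.pi * f * k * T)       # the stored samples x(kT)

t = np.linspace(-1.0, 1.0, 401)           # reconstruct well inside the window
x_rec = np.array([np.sum(x_k * np.sinc((ti - k * T) / T)) for ti in t])
max_err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f * t)))
```

In practice the ideal sinc kernel is replaced by a realizable low-pass filter, which is exactly the source of the recovery errors discussed later.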

Fig. Information transmission over one communication line

In the reverse, digital-to-analog conversion, the DAC chip acts as a low-pass filter. The number of ADC and DAC conversion bits determines the accuracy of signal-amplitude transmission, since it determines the quantization levels of the signal amplitude. Thus, the computer receives information about the signal in the form of discrete points.

Fig. Signal sampling after ADC

Typically, ADC microcircuits are produced in one package with switches for n channels. The polling rate is specified in the data sheet and may be used either for polling all n channels or for polling a single channel. Information is entered into the computer through a serial port, for example in the RS-232 standard.

In this regard, the designer decides in each specific case which microcircuit to use, with the required number of channels, sampling rate, and number of ADC conversion bits.

It should be noted that it is not always convenient to supplement the measuring circuit with a low-pass filter; moreover, the presence of such a filter leads to phase distortion of the signal. Signal recovery by the simplest interpolation method is free of these shortcomings.

With this method, the points obtained are simply connected to each other by straight line segments. Obviously, in this case, smooth sections close to straight lines are restored with small errors, and the maximum restoration error is obtained in sections with maximum curvature (Fig.).


It is known that any curve x(t) can, in some region, be expanded in powers of t, that is, described by a polynomial. In the simplest case, using only the first terms of the expansion, the section of the curve between samples can be represented as a parabola; the linear-interpolation error is then the difference between this parabola and the chord connecting adjacent samples. As is known, the parabola deviates most from the chord in the middle of the interpolation interval, with absolute value Δm (shown in the figure)

where the factor is the value of the second derivative of the process x(t), i.e. an estimate of its curvature. Hence the maximum reconstruction error occurs in the portions of the curve with the greatest curvature (near the maxima and minima of the process shown in the figure).

If we are interested not in the absolute error Δm but in its reduced value, where x_k is the measurement limit, then we can determine the maximum allowable sampling period t_c at which the recovery error does not exceed γm:

Since any complex curve can be decomposed into a number of harmonic components, we determine the required sampling period for a sinusoidal process. For x(t) = x_k·sin(ωt), the current curvature estimate and its maximum value follow directly. Hence the necessary sampling period for a sinusoidal process is

(3)

Relation (3) is perceived more clearly if it is used to calculate the number of samples n per period T of the sinusoidal process:

(4)

This relation gives:

γm    20%    1%    0.1%
n      5     22     70
Thus, to restore a sinusoidal process with a maximum error of 1% under uniform sampling, 22 samples per period are needed; to represent it with an error of 0.1%, at least 70 samples per period are needed; and for γm = 20%, five readings per period are enough.
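These samples-per-period figures are consistent with taking the peak linear-interpolation error of a sinusoid as (ω·t_c)²/8 of the amplitude, which gives n = T/t_c = π/√(2·γm); the short check below states this form as an assumption about the elided relation (4).

```python
import math

def samples_per_period(g_m):
    """Samples per period of a sinusoid for allowed relative error g_m,
    assuming the peak linear-interpolation error is (w * tc)**2 / 8."""
    return math.pi / math.sqrt(2.0 * g_m)

n_20pct = samples_per_period(0.20)    # about 5 samples per period
n_1pct = samples_per_period(0.01)     # about 22 samples per period
n_01pct = samples_per_period(0.001)   # about 70 samples per period
```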

Based on relation (4), one can calculate the minimum period, or maximum frequency, of a process that can be recorded with a given maximum error γm. Data on the maximum errors of some techniques and tools are given in the table and show that, without special means, only very slow processes (with a period of 0.2-2 s) can be recorded.

Expressing γm from (3) or (4), we obtain

(5)

i.e. the dynamic recovery error γm grows as the square of the frequency of the restored process.

In practice it is most often necessary to measure substantially non-sinusoidal processes containing higher harmonics or high-frequency components of noise and interference. In these cases the dynamic error of recovering the process from discrete readings increases sharply, which the researcher must always keep in mind.

Let us consider this property of the reconstruction error using a specific example. The table indicates that, when an ADC with a sampling period t_c = 30 μs is used, a process with frequency f_1 = 500 Hz is recovered with γm1 ≈ 0.1%. Indeed, calculating γm1 by formula (5), we obtain

which can often be considered sufficiently high recovery accuracy. However, if the curve of this process also contains a 10th harmonic with frequency f_10 = 5000 Hz and an amplitude of 0.1 of the fundamental, that harmonic is recovered with a relative error γm10 that is 100 times larger than γm1, i.e. equal to 10%. True, since the amplitude of this harmonic is 10 times smaller than that of the fundamental, the reduced value of this error is only γm10 = 1%. Nevertheless, the resulting error in restoring the whole process is 10 times (!) greater than the γm1 = 0.1% error of a process without this high-frequency component.
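The numbers in this example are consistent with assuming relation (5) in the form γm = (π·f·t_c)²/2; the sketch below states that form as an assumption and checks the frequency-squared growth.

```python
import math

tc = 30e-6                       # ADC sampling period, s

def g_m(f):
    # Assumed form of relation (5): the dynamic recovery error grows
    # as the square of the process frequency.
    return (math.pi * f * tc) ** 2 / 2.0

err_500 = g_m(500.0)             # about 0.1% for the fundamental
err_5000 = g_m(5000.0)           # 100 times larger for the 10th harmonic
ratio = err_5000 / err_500
```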

The reconstruction error for the fundamental and its harmonics is systematic (it is always negative, see the figure, and reduces the reconstructed amplitude of the curve); however, if the high-frequency component is caused by noise or other interference and is not synchronous with the fundamental, the reconstruction error is random and appears as a random scatter of readings.

With manual registration of observations such a scatter of data is immediately noticed by the experimenter, who then takes an appropriate decision on the course of the experiment. The phenomenon is especially dangerous when data are entered into a computer automatically, which underlines the extreme importance of metrological analysis of dynamic errors in this case.

However, due to the ever increasing speed of computers, this method of sampling and recovery is becoming very attractive.

5.5 Signal filtering

The operation of separating a certain frequency band from the signal spectrum is called filtering. Filters are subdivided into low-pass filters (a), high-pass filters (b), and band-pass filters (c).

Fig. Types of filters: low-pass filters (a), high-pass filters (b), band-pass filters (c)

The simplest analog filters consist of R-C chains; to increase the slope, the filters are made multi-stage.

Digital filtering means that the signal x(t) is passed through a mathematical filter that realizes the required characteristic.
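As a minimal sketch of digital filtering, the discrete counterpart of a single R-C chain is a first-order recursive filter; the sampling rate and cutoff below are assumed illustrative values.

```python
import math

def rc_lowpass(x, fs, fc):
    """First-order digital low-pass (discrete R-C chain):
    y[n] = a * x[n] + (1 - a) * y[n-1], with a set by the cutoff fc."""
    rc = 1.0 / (2.0 * math.pi * fc)
    a = 1.0 / (1.0 + rc * fs)          # a = dt / (rc + dt), dt = 1 / fs
    y, y_prev = [], 0.0
    for xn in x:
        y_prev = a * xn + (1.0 - a) * y_prev
        y.append(y_prev)
    return y

fs = 1000.0
t = [n / fs for n in range(2000)]
slow = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]    # 1 Hz: passed
fast = [math.sin(2 * math.pi * 200.0 * ti) for ti in t]  # 200 Hz: attenuated
out_slow = rc_lowpass(slow, fs, fc=10.0)
out_fast = rc_lowpass(fast, fs, fc=10.0)

# Compare steady-state peaks (skip the initial transient).
gain_slow = max(out_slow[1000:]) / max(slow)
gain_fast = max(out_fast[1000:]) / max(fast)
```

Cascading several such stages increases the slope, just as multi-stage R-C chains do in the analog case.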

5.6 Modulation and detection

The action of a measuring signal x(t) on some stationary signal is called modulation.

As the stationary signal, called the carrier, either a sinusoidal oscillation

or a pulse train is selected.

Separation of a component proportional to the measured signal from a modulated signal is called detection.

The sine wave (6) is determined by its amplitude, frequency, and phase. All of these quantities can be modulated, giving amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM).

Fig. Types of modulation

Modulation can be characterized as multiplication of the modulated quantity y(t) by a factor 1 + m·x(t), where x(t) is a normalized modulating function and m is the modulation depth, with 0 < m ≤ 1.

With amplitude modulation

If the modulating signal is sinusoidal, the expression transforms into

Hence it follows that the modulated oscillation consists of three oscillations: one at the carrier frequency and two at the side frequencies.

The central frequency is called the carrier frequency, and the other two are called the side frequencies. If the modulating signal is a periodic function,

then the modulated signal y (t) will be

It can be seen that the modulated waveform consists of the carrier frequency and two groups of frequencies called sidebands.
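The three-line spectrum of single-tone AM is easy to confirm numerically (a sketch with assumed illustrative values: carrier 100 Hz, modulating tone 10 Hz, depth m = 0.5).

```python
import numpy as np

# y(t) = (1 + m*cos(2*pi*F*t)) * cos(2*pi*f0*t) has spectral lines at
# the carrier f0 and the side frequencies f0 - F and f0 + F.
fs, n = 1000.0, 1000                   # 1 s of data -> 1 Hz bin spacing
t = np.arange(n) / fs
f0, F, m = 100.0, 10.0, 0.5
y = (1.0 + m * np.cos(2 * np.pi * F * t)) * np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(y)) / n      # carrier line 0.5, sidebands m/4
freqs = np.fft.rfftfreq(n, 1.0 / fs)
lines = freqs[spec > 0.1]              # pick out the significant lines
```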

For detection, reverse manipulations are performed by expanding the function in a series.

With frequency modulation, the frequency of the modulated signal changes according to the law

or, if the modulating signal is sinusoidal, then

Substituting (7) into (6) and taking into account that the instantaneous phase is the integral of the frequency in expression (6), we obtain

In this expression the coefficient is the frequency-modulation index, which depends on the amplitude of the modulating signal.

We represent this expression in the form

At large values of the coefficient mg this expression is very complicated, and it can be expanded in series of Bessel functions. For simplicity we assume that mg << 1; then

In this regard, expression (8) takes the form

Thus, for mg << 1 the spectrum of a frequency-modulated signal does not differ from the spectrum of an AM signal. If the condition mg << 1 is not satisfied, i.e. deep frequency modulation takes place, the spectrum of the modulated signal will contain not two side frequencies but a multitude of frequencies. Therefore the spectrum of an FM signal is in general wider than that of an AM signal.

Detection is performed in the same way as for an AM signal.

With phase modulation, the modulating signal acts on the phase of the carrier wave

If the modulating signal is sinusoidal, then

where the coefficient is the phase-modulation index, which depends on the amplitude of the modulating signal.

In signal (10) the informative parameter is the phase; let us transform signal (10):

Comparing the last expression with expression (9), we conclude that the FM and PM signals coincide in form. The difference is that the FM index depends on the frequency of the modulating signal, while the PM index does not.

This circumstance requires the introduction of an appropriate correction of the signal after detection.

Detection is carried out similarly to AM and FM signals; to obtain the phase, integration is required.

If a periodic sequence of pulses is used as the carrier, we obtain pulse modulation (Fig.).

In this case we have pulse-amplitude modulation (PAM), pulse-frequency modulation (PFM), pulse-phase modulation (PPM), and pulse-width modulation (PWM).

While AM, FM, and PM are used mainly for analog signals (although AM is also used for digital ones), pulse modulation is used mainly for digital signals.

Fig. Pulse types of modulation

Kotelnikov's theorem is exactly valid only for signals with a finite spectrum. Fig. 4.15 shows some variants of finite spectra.

However, the spectra of real information signals are infinite (Fig. 4.16). In this case, Kotel'nikov's theorem is valid with an error.

The sampling error is determined by the energy of the spectral components of the signal lying outside the cutoff frequency (Fig. 4.16).

The second reason for the occurrence of errors is the imperfection of the restoring low-pass filter.

Thus, the discretization and recovery error of a continuous signal is determined by the following causes:

    The spectra of real signals are not finite.

    The frequency response of real LPFs is imperfect.

Figure 4.17. Block diagram of an RC filter

For example, if you use an RC filter as a low-pass filter (Figure 4.17), then the reconstructed signal at its output will have the form shown in Figure 4.18.

The impulse response of the RC filter is h(t) = (1/RC)·e^(−t/RC) for t ≥ 0.

Conclusion: the higher the sampling rate and the closer the low-pass filter characteristic is to the ideal, the closer the reconstructed signal is to the original.

4.6. Quantization of messages. Quantization errors

So far it has been shown that the transmission of almost any message can be reduced to the transmission of its samples, i.e. of numbers following one another with a certain discreteness interval. Thus the continuous (infinite) set of possible message values is replaced by a finite number of its discrete samples. However, these numbers themselves have a continuous scale of levels (values), that is, they again belong to a continuous set. To represent such numbers absolutely accurately, for example in decimal (or binary) form, a theoretically infinite number of digits would be required. In practice, however, there is no need for an absolutely accurate representation of these values, or of any numbers in general.

First, the message sources themselves have a limited dynamic range and generate the original messages with a certain level of distortion and errors. This level can be higher or lower, but absolute fidelity cannot be achieved.

Secondly, the transmission of messages over communication channels always takes place in the presence of various kinds of interference. Therefore the received (reproduced) message, i.e. the estimate of the message, always differs to some extent from the transmitted one; in practice, absolutely accurate transmission of messages is impossible when there is interference in the communication channel.

Finally, messages are transmitted to be perceived and used by a recipient. The recipients of information (human senses, actuating mechanisms, and so on) also have finite resolution, that is, they do not notice an insignificant difference between the absolutely accurate and the approximate values of the reproduced message. The threshold of sensitivity to distortion may vary, but it always exists.

Taking these remarks into account, the message-sampling procedure can be continued by also quantizing the samples.

The quantization process consists in replacing the continuous set of sample values by a discrete set. Thus the exact values of the numbers are replaced by approximate values rounded to the nearest permitted level. The spacing between adjacent permitted levels, or quantization levels, is called the quantization step.

A distinction is made between uniform and non-uniform quantization. In most cases uniform quantization, in which the quantization step is constant, is used and is considered in detail below (Fig. 4.19); however, non-uniform quantization, in which the quantization step differs for different levels, sometimes gives a certain advantage (Fig. 4.20).

Quantization leads to distortion of messages. If the quantized message obtained by quantizing a sample is denoted accordingly, then the difference between the true value of the elementary message and the quantized message (the closest permitted level) is called the quantization error, or quantization noise. Quantization noise affects the information-transfer process in essentially the same way as interference in the communication channel. Interference, like quantization, causes the estimates obtained at the receiving side of the communication system to differ by some amount from the true value.

Since message quantization leads to errors and to the loss of some information, one can introduce a cost of such losses and the average quantization error:

Most often, a quadratic cost function of the following form is used

In this case the variance of the errors serves as the measure of quantization error. For uniform multi-level quantization with a constant step, the variance of the quantization error is defined as follows:

The absolute value of the quantization error does not exceed half the quantization step; hence, for a sufficiently large number of quantization levels and a small step, the probability density of the quantization error can be considered uniform on an interval of one quantization step centered at zero:

As a result, the magnitude of the quantization error is determined by the relation

and, by an appropriate choice of the quantization step, it can be made arbitrarily small or reduced to any predetermined value.
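The familiar q²/12 variance of uniform quantization noise can be verified with a short simulation (a sketch; the step q = 0.1 and the uniform test signal are assumed illustrative choices).

```python
import numpy as np

# Uniform quantization with step q: round each sample to the nearest
# level.  The error is roughly uniform on [-q/2, q/2], so its variance
# approaches q**2 / 12.
rng = np.random.default_rng(2)
q = 0.1
x = rng.uniform(-1.0, 1.0, 200_000)     # many samples across many levels
x_q = np.round(x / q) * q               # quantize to the nearest level
err = x_q - x                           # quantization noise

var_measured = np.var(err)
var_theory = q * q / 12.0
```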

Regarding the required accuracy of transmitting the message samples, the same considerations apply as for time-sampling errors: quantization noise and the distortions caused by quantization are insignificant if they are smaller than the errors caused by interference and are admissible under the technical conditions.

If the sampling period is small enough, the neighboring components of the spectrum of the sampled oscillation do not overlap, as shown in Fig. 2.5, a. In this case it is easy to indicate a method for restoring the continuous oscillation from the discrete one: the discrete signal should be passed through an ideal low-pass filter with the appropriate passband (Fig. 2.5, b).

Fig. 2.5. The spectrum of a discrete oscillation in the form of a sequence of modulated pulses; the frequency response of the low-pass filter; and the spectrum of the reconstructed signal

In this case, the middle part will be selected from the spectrum of the sampled signal (Fig. 2.5, c), which, up to a constant factor, coincides with the spectrum of the original continuous oscillation

However, if the initial continuous oscillation is such that its spectrum does not vanish strictly with increasing frequency, then for any choice of the sampling interval the adjacent components of the spectrum of the sampled oscillation will partially overlap (Fig. 2.6, a). If a signal with such a spectrum is passed through an ideal low-pass filter, the filter output will produce an oscillation that differs from the original continuous signal, because the spectrum of this oscillation has the "tails" of the neighboring spectral components superimposed on it (Fig. 2.6, b).

The simplest and most obvious way to reduce sampling error is to increase the sampling rate. However, to obtain a sufficiently small error, the sampling rate has to be taken very high, especially if the signal spectrum decreases slowly, which in some cases is undesirable.

Fig. 2.6. Sampling errors of a signal with an asymptotically decreasing spectrum: a - spectrum of the sampled signal; b - signal spectrum after passing through an ideal low-pass filter; c - spectrum of the error signal

To reduce the sampling error, you can pass the signal through a low-pass filter with a frequency response close to rectangular before sampling. In this case, the signal spectrum becomes rapidly decreasing, almost limited, and further sampling occurs practically without errors. The resulting error in this case is determined by the distortion of the spectrum when the signal passes through the low-pass filter. Due to the fact that “tails” from neighboring components are not superimposed on the signal spectrum in the frequency domain, this error is approximately 2 times less than with direct sampling of the signal.

Passing the signal through a low-pass filter before sampling is a very useful measure to reduce error when sampling the signal with wideband noise at the input. When passing through a low-pass filter, the variance of the noise is reduced and, accordingly, the sampling error is reduced.

Fig. 2.7. Signal recovery errors with a non-ideal low-pass filter characteristic: a - spectrum of the sampled signal; b - low-pass filter characteristic; c - spectrum of the signal at the output of the low-pass filter

Another source of error is imperfect filtering in the process of recovering the continuous signal from the discrete one. An ideal rectangular frequency response of the low-pass filter is practically impossible to realize; to smooth the signal, filters with a monotonically falling characteristic are usually used (Fig. 2.7, b). If a sampled signal with the spectrum shown in Fig. 2.7, a is applied to such a filter, then at its output, in addition to the main signal corresponding to the central part of the spectrum, additional components appear, caused by incomplete suppression of the side parts of the spectrum (Fig. 2.7, c). As a consequence, the reconstructed signal will differ in shape from the original continuous signal. The main way to combat this inaccuracy is to increase the sampling rate. However, increasing the sampling rate increases the complexity and cost of the signal-processing device. Therefore, in each specific case a compromise must be sought, based on the nature of the signal, the required accuracy of its reconstruction, the characteristics of the smoothing filter used, and other factors. As a result, in real devices the sampling rate is chosen not as follows from the Kotelnikov theorem, but 2-5 times higher.

Fig. 2.8. Finite signal and its spectrum
