What is the difference between analog communication and digital communication. Differences between analog and digital audio

Every day, people use electronic devices; modern life is impossible without them. We are talking about TVs, radios, computers, telephones, multicookers, and more. Until a few years ago, hardly anyone thought about what kind of signal each working device uses. Now the words "analog", "digital", and "discrete" are heard everywhere, and some of these signal types are of higher quality and more reliable than others.

Digital transmission came into use much later than analog. This is because an analog signal is much easier to produce and maintain, and technology at the time was not advanced enough for digital.

Everyone constantly encounters the concept of "discreteness". Translated from Latin, the word means "discontinuity". In scientific terms, a discrete signal is a method of transmitting information in which the carrier medium changes in time, taking any one of a set of possible values. Today discreteness is fading into the background, following the decision to produce systems on a chip: their components are integrated and interact closely with one another. In a discrete design it is exactly the opposite - each part is complete in itself and connected to the others through special communication lines.

Signal

A signal is a special code transmitted into space by one or more systems. This definition is deliberately general.

In information and communication technology, a signal is a carrier of data used to transmit messages. It can be created without being received; reception is optional. But if the signal is a message, then "catching" it is considered necessary.

The code described is defined by a mathematical function that characterizes all possible changes of its parameters. In radio engineering theory this model is considered basic. Its counterpart is noise: a function of time that interacts with the transmitted code and distorts it.

This article describes the types of signals - discrete, analog, and digital - and briefly covers the main theory on the topic.

Signal types

There are several kinds of signals. Let's take a look at the types.

  1. By the physical medium of the data carrier, signals are divided into electrical, optical, acoustic, and electromagnetic. There are several other types, but they are little known.
  2. By the method of setting, signals are divided into deterministic and random. The former are specified by an analytic function; the latter are described by probability theory and can take any value at different moments in time.
  3. Depending on the functions that describe the signal parameters, data transmission methods can be analog, discrete, or digital (a method quantized by level). They are used to ensure the operation of many electrical appliances.

The reader is now familiar with the main kinds of signals. Understanding them is not difficult; it only takes a little thought and a recollection of the school physics course.

Why is the signal being processed?

The signal is processed in order to transmit and receive the information encoded in it. Once extracted, that information can be used in a variety of ways; in some situations it is reformatted.

There is another reason for processing signals: compressing the signal bandwidth (without significant loss of information), after which the information is formatted and transmitted at reduced speed.

Analog and digital signal processing uses special techniques - in particular filtering, convolution, and correlation - which are needed to restore a signal that has been damaged or is noisy.

Creation and formation

In most situations involving DSP technologies, signal generation requires both an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC); in other cases a DAC alone is sufficient.

When creating physical analog signals by purely digital methods, designers rely on information previously obtained from devices that captured similar analog signals.

Dynamic Range

Dynamic range is calculated as the difference between the highest and lowest volume levels, expressed in decibels. It depends entirely on the material and the character of the performance - this applies both to music tracks and to ordinary dialogue. For example, an announcer reading the news has a dynamic range of around 25-30 dB, while a reading of a literary work can reach 50 dB.
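As a rough illustration, the decibel figure comes from the ratio of the loudest to the quietest amplitude. This is a minimal Python sketch; the function name and the sample amplitudes are illustrative, not from the article:

```python
import math

def dynamic_range_db(loudest, quietest):
    """Dynamic range in decibels between two amplitude levels:
    dB = 20 * log10(A_max / A_min)."""
    return 20 * math.log10(loudest / quietest)

# An amplitude ratio of about 31.6:1 corresponds to ~30 dB,
# the upper end of a news announcer's range:
print(round(dynamic_range_db(31.6, 1.0)))  # 30
```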

Analog signal

An analog signal is a data transmission method continuous in time. Its disadvantage is noise, which can sometimes lead to a complete loss of information: it is often impossible to tell where the code carries important data and where there is only distortion.

It is for this reason that digital signal processing has gained great popularity and is gradually displacing analog.

Digital signal

A digital signal is described by discrete functions: its amplitude can take only certain predefined values. While an analog signal can pick up a huge amount of noise, a digital one filters out most of the interference it receives.

In addition, this type of transmission carries information without unnecessary overhead, and several codes can be sent over one physical channel at once.

There are no subtypes of digital signal: it stands as a separate, independent method of data transmission in the form of a binary stream. Today such a signal is considered the most popular, largely because of its ease of use.

Digital signal application

How does a digital electrical signal differ from the others? In that it can be completely regenerated in a repeater. When a signal with slight interference enters the communication equipment, it is immediately restored to its digital form. This allows, for example, a TV tower to form the signal anew, without the noise.

If the code arrives with large distortions, however, it unfortunately cannot be restored. By comparison, in a similar situation analog equipment can still extract part of the data, though at the cost of considerable effort.

In cellular communication, strong distortion on a digital line makes conversation almost impossible, since words or whole phrases drop out. Analog communication is more forgiving here, because the dialogue can still be carried on.

It is because of such problems that repeaters regenerate the digital signal frequently, in order to close the gaps in the communication line.

Discrete signal

Nowadays everyone uses a mobile phone or some kind of "dialer" software on a computer. One of the tasks of these devices and programs is to transmit a signal - in this case, a voice stream. Carrying a continuous wave requires a channel with higher bandwidth, which is why the decision was made to use a discrete signal: it transmits not the wave itself but its digital form, since the transmission originates in a device (a phone or a computer). The advantages of this kind of transfer are that the total amount of transmitted data is reduced and batch sending is easier to organize.

The concept of "discretization" has long been firmly established in computer technology. With such a signal, information is transmitted not as a continuous stream encoded with symbols and letters, but as data collected into separate, complete blocks. This encoding method has long since receded into the background, but it has not disappeared completely: it is still convenient for transferring small pieces of information.

Comparison of digital and analog signals

When buying equipment, hardly anyone thinks about what types of signals a given device uses, let alone their nature. But sometimes these concepts still have to be dealt with.

It has long been clear that analog technologies are losing demand because their use is becoming irrational; digital communication is taking their place. It is worth understanding what is at stake and what humanity is giving up.

In short, an analog signal is a method of transmitting information in which the data are described by continuous functions of time: the oscillation amplitude can take any value within certain limits.

A digital signal is described by discrete functions of time: the oscillation amplitude can take only strictly specified values.

Moving from theory to practice: an analog signal is characterized by interference, whereas a digital one "smooths" interference out. Thanks to new technologies, this method of data transmission can restore the original information on its own, without human intervention.

Speaking of television, we can say with confidence that analog transmission has long outlived its usefulness, and most consumers are moving to digital. Its drawback is that while almost any device can receive an analog transmission, the more modern method requires special equipment. Although demand for the outdated method has long since fallen, these types of signals are still unable to disappear from everyday life completely.

We often hear the definitions "digital" or "discrete" signal - how do they differ from an "analog" one?

The essence of the difference is that an analog signal is continuous in time, while a digital signal consists of a limited set of coordinates (samples). Reduced to coordinates, any segment of an analog signal contains an infinite number of them.

For a digital signal, the coordinates along the horizontal axis are spaced at regular intervals according to the sampling frequency; in the common Audio-CD format this is 44,100 points per second. Vertically, the precision of each coordinate's height corresponds to the bit depth of the digital signal: 8 bits give 256 levels, 16 bits give 65,536, and 24 bits give 16,777,216 levels. The higher the bit depth (the number of levels), the closer the vertical coordinates are to the original wave.
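The level counts above follow directly from the bit depth (2^bits). The Python sketch below, with illustrative function names, also shows how a sample is snapped to the nearest available level and why more bits mean a smaller error:

```python
def quantization_levels(bits):
    """Number of discrete amplitude levels for a given bit depth."""
    return 2 ** bits

def quantize(x, bits):
    """Snap a sample x in [-1.0, 1.0] to the nearest of 2**bits levels."""
    step = 2.0 / (quantization_levels(bits) - 1)
    return round(x / step) * step

print([quantization_levels(b) for b in (8, 16, 24)])
# [256, 65536, 16777216]

# Higher bit depth leaves a smaller quantization error:
print(abs(quantize(0.3, 16) - 0.3) < abs(quantize(0.3, 8) - 0.3))  # True
```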

Analog sources include vinyl records and audio cassettes. Digital sources include CD-Audio, DVD-Audio, SA-CD (DSD), and files in WAVE and DSD formats (as well as derived formats such as APE, FLAC, MP3, and Ogg).

Advantages and disadvantages of analog signal

The advantage of an analog signal is that it is in analog form that we perceive sound with our ears. And although our auditory system converts the perceived sound stream into a kind of digital form to pass it to the brain, science and technology cannot yet connect players and other sound sources to us directly in that form. Such research is actively under way for people with disabilities; for now, we enjoy sound exclusively in analog form.

The disadvantage of an analog signal lies in storing, transmitting, and replicating it. When recording to tape or vinyl, the signal quality depends on the properties of the medium. Over time the tape demagnetizes and the recorded signal degrades. Each read gradually wears out the medium, and re-recording introduces additional distortion, with further deviations added by the next medium (tape or vinyl) and by the reading, recording, and transmission devices.

Making a copy of an analog signal is like taking a picture of a photograph to copy it again.

Advantages and disadvantages of a digital signal

The advantages of a digital signal include accuracy in copying and transmission of the audio stream, where the original is no different from the copy.

The main disadvantage is that the signal in digital form is an intermediate stage, and the accuracy of the final analog signal depends on how thoroughly and precisely the sound wave is described by its coordinates. It is quite logical that the more points there are and the more accurate they are, the more accurate the wave will be. But there is still no consensus on how many coordinates and what data accuracy are sufficient for the digital representation to allow restoring an analog signal indistinguishable from the original by our ears.

In terms of data volume, the capacity of a conventional analog audio cassette is only about 700-1.1 MB, while a conventional CD holds 700 MB. This gives an idea of the need for high-capacity media, and it gives rise to a separate war of compromises between the number of describing points and the accuracy of the coordinates.
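The 700 MB figure for a CD can be sanity-checked against the uncompressed PCM stream it must hold. This is a rough Python sketch assuming 44.1 kHz, 16-bit, stereo PCM; real Audio-CDs use different sector formatting, so their actual playing time is somewhat longer:

```python
def cd_audio_minutes(capacity_mb=700):
    """Minutes of uncompressed 16-bit / 44.1 kHz stereo PCM
    that fit in the given raw data capacity."""
    bytes_per_second = 44_100 * 2 * 2  # samples/s * bytes/sample * channels
    total_bytes = capacity_mb * 1024 * 1024
    return total_bytes / bytes_per_second / 60

print(round(cd_audio_minutes()))  # roughly 69 minutes
```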

To date, representing a sound wave with a sampling frequency of 44.1 kHz and a bit depth of 16 bits is considered quite sufficient. With a 44.1 kHz sampling rate, signals up to 22 kHz can be recovered. As psychoacoustic studies show, further increasing the sampling rate is hardly noticeable, while increasing the bit depth gives a subjective improvement.
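The 22 kHz figure follows from the sampling theorem: a sampled signal can represent frequencies only up to half the sampling rate. A one-line Python check (the function name is illustrative):

```python
def max_recoverable_frequency_hz(sample_rate_hz):
    """Nyquist limit: the highest frequency representable at a given rate."""
    return sample_rate_hz / 2

print(max_recoverable_frequency_hz(44_100))  # 22050.0
```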

How DACs build a wave

A DAC is a digital-to-analog converter - the element that converts digital sound into analog. We will take only a superficial look at the basic principles; if the comments show interest in a more detailed treatment of certain points, a separate article will follow.

Multibit DACs

The wave is very often depicted as steps, which stems from the architecture of the first generation of multi-bit R-2R DACs, which switch between levels much like a bank of relays.

The DAC input receives the value of the next vertical coordinate, and on each clock cycle it switches the current (voltage) to the corresponding level, holding it until the next change.

Although it is believed that the human ear hears nothing above 20 kHz, and by the Nyquist theorem a signal up to 22 kHz can be restored, the quality of the restored signal remains a question. In the high-frequency region, the shape of the resulting "stepped" wave is usually far from the original. The easiest way out is to increase the sampling rate at recording time, but that significantly and undesirably increases the file size.

An alternative is to artificially raise the sampling rate during playback in the DAC by adding intermediate values: we imagine a continuous wave path smoothly connecting the original coordinates and add intermediate points along that line.

As the sampling frequency increases, it is usually necessary to increase the bit depth as well, so that the coordinates are closer to the approximated wave.

Thanks to intermediate coordinates, it is possible to reduce the "steps" and build a wave closer to the original.

When you see an upsampling function from 44.1 to 192 kHz in a player or an external DAC, it is a function that adds intermediate coordinates - it does not restore or create sound above 20 kHz.
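Adding intermediate coordinates can be illustrated with the simplest possible interpolator. Real DACs use much smoother interpolation filters; this linear version, with illustrative names, only shows the principle:

```python
def upsample_linear(samples, factor):
    """Insert factor - 1 linearly interpolated points between each
    pair of original samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

# Doubling the rate halves the "steps" between original points:
print(upsample_linear([0.0, 1.0, 0.0], 2))
# [0.0, 0.5, 1.0, 0.5, 0.0]
```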

Initially these were separate sample-rate-converter (SRC) chips placed before the DAC; later the function migrated into the DAC chips themselves. Today you can still find designs where such a chip is added in front of a modern DAC, to provide an alternative to the DAC's built-in algorithms and sometimes obtain even better sound (as was done, for example, in the Hidizs AP100).

The industry largely abandoned multi-bit DACs because, with current production technologies, their quality could not be improved further, and they cost more than "switching" DACs of comparable characteristics. Nevertheless, Hi-End products often still favor old multi-bit DACs over newer solutions with technically better specifications.

Switching DACs

In the late 1970s an alternative DAC architecture became widespread: the "pulse", or "delta-sigma", DAC. The advent of ultra-fast switches made pulse DAC technology practical and allowed the use of a high carrier frequency.

The amplitude of the signal is the average of the pulse amplitudes: a stream of equal-amplitude pulses averages out into the final sound wave.

For example, a sequence of eight cycles containing five pulses gives an average amplitude of (1+1+1+0+0+1+1+0)/8 = 0.625. The higher the carrier frequency, the more pulses are averaged and the more accurate the resulting amplitude. This made it possible to represent the audio stream in one-bit form with a wide dynamic range.
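The averaging in this example is literally the mean of the pulse stream; a tiny Python sketch reproducing the arithmetic:

```python
def average_amplitude(pulses):
    """A one-bit pulse stream encodes amplitude as its pulse density."""
    return sum(pulses) / len(pulses)

print(average_amplitude([1, 1, 1, 0, 0, 1, 1, 0]))  # 0.625
```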

The averaging can be done with an ordinary analog filter; indeed, if such a pulse train is applied directly to a speaker, we get sound at the output, with the ultra-high frequencies left unreproduced because of the emitter's inertia. Class-D PWM amplifiers work on this principle, except that the energy density of the pulses is set not by their number but by the duration of each pulse (which is easier to implement but cannot be described by a simple binary code).

A multi-bit DAC can be thought of as a printer that applies premixed Pantone colors. A delta-sigma DAC is an inkjet printer with a limited palette which, by laying down very small dots at varying density per unit of surface, produces more shades.

In the image, we usually do not see individual points due to the low resolution of the eye, but only the average tone. Similarly, the ear does not hear impulses separately.

Ultimately, with current technologies in pulse DACs, you can get a wave close to the one that theoretically should be obtained by approximating intermediate coordinates.

It should be noted that since the advent of delta-sigma DACs, drawing the "digital wave" as steps has lost its relevance, because modern DACs do not build the wave from steps. A discrete signal is correctly drawn as points connected by a smooth line.

Are switching DACs ideal?

But in practice, not everything is rosy, and there are a number of problems and limitations.

Since the vast majority of recordings are stored as multi-bit signals, converting them to a pulse signal bit-for-bit would require an unnecessarily high carrier frequency, which modern DACs do not support.

Therefore, the main job of modern pulse DACs is converting the multi-bit signal into a single-bit one at a relatively low carrier frequency, with data decimation. It is essentially these algorithms that determine the final sound quality of pulse DACs.

To ease the carrier-frequency problem, the audio stream is split into several one-bit streams, each responsible for its own group of bits - equivalent to multiplying the carrier frequency by the number of streams. Such DACs are called multi-bit delta-sigma DACs.

Today, switching DACs have gained a new lease of life in the high-speed general-purpose programmable chips used by NAD and Chord, thanks to the ability to flexibly program the conversion algorithms.

DSD format

After delta-sigma DACs came into widespread use, it was only logical for a binary format encoded directly as delta-sigma to appear. It is called DSD (Direct Stream Digital).

The format was not widely adopted, for several reasons. Editing files in this format turned out to be excessively limited: you cannot mix streams, adjust the volume, or apply equalization. In practice that means you can only archive analog recordings losslessly or make two-microphone recordings of live performances without further processing - in a word, there is no real money to be made.

To fight piracy, SA-CD discs were not (and still are not) readable by computers, making copies impossible. No copies means no wide audience. DSD audio content could be played only from a dedicated SA-CD player with a proprietary disc. While the PCM format has the SPDIF standard for digital transfer from a source to a separate DAC, no such standard existed for DSD, so the first pirated copies of SA-CD discs were digitized from the analog outputs of SA-CD players. (The situation sounds absurd, but some recordings were released only on SA-CD, and sometimes the Audio-CD version of the same recording was deliberately made worse to promote SA-CD.)

The turning point came with the release of Sony game consoles on which an SA-CD disc was automatically copied to the hard drive before playback. Fans of the DSD format took advantage of this, and the appearance of pirated recordings stimulated a market for separate DACs that play DSD streams. Most DSD-capable external DACs today accept data over USB using the DoP format, which encodes the DSD stream as a separate digital signal over SPDIF.

Carrier frequencies for DSD are relatively low, 2.8 and 5.6 MHz, but this audio stream requires no decimation conversion and competes comfortably with high-resolution formats such as DVD-Audio.

There is no clear answer to the question of which is better, DSD or PCM. Everything depends on the quality of a particular DAC's implementation and on the talent of the sound engineer who produced the final file.

General conclusion

Analog sound is what we hear and perceive directly, just as we see the world around us with our eyes. Digital sound is a set of coordinates describing a sound wave, which we cannot hear without conversion to an analog signal.

An analog signal recorded directly on an audio cassette or vinyl cannot be re-recorded without loss of quality, while a wave in a digital representation can be copied bit by bit.

Digital recording formats are a constant trade-off between coordinate accuracy and file size, and any digital signal is only an approximation of the original analog signal. However, the current state of technology for recording, reproducing, and storing digital signals gives the digital representation the advantage, much as a digital camera compares with a film camera.

A signal is defined as a voltage or current that can be transmitted as a message or as information. By their nature, all signals are analog, whether DC or AC, digital or pulsed. Nevertheless, it is customary to distinguish between analog and digital signals.

A digital signal is a signal that has been processed in a certain way and converted into numbers. Usually these digital signals are connected to real analog signals, but sometimes there is no connection between them. An example is the transmission of data in local area networks (LANs) or other high-speed networks.

In the case of digital signal processing (DSP), an analog signal is converted into binary form by a device called an analog-to-digital converter (ADC). The output of the ADC is a binary representation of the analog signal, which is then processed by an arithmetic digital signal processor (DSP). After processing, the information contained in the signal can be converted back to analog form using a digital-to-analog converter (DAC).

Another key concept in defining a signal is the fact that a signal always carries some information. This brings us to the key problem of processing physical analog signals - the problem of information extraction.

Purposes of signal processing

The main purpose of signal processing is the need to obtain the information contained in them. This information is usually present in the amplitude of the signal (absolute or relative), in frequency or spectral content, in phase or in the relative time dependences of several signals.

Once the desired information has been extracted from the signal, it can be used in a variety of ways. In some cases, it is desirable to reformat the information contained in the signal.

In particular, the change in signal format occurs when an audio signal is transmitted in a frequency division multiple access (FDMA) telephone system. In this case, analog methods are used to accommodate multiple voice channels in the frequency spectrum for transmission via microwave radio relay, coaxial or fiber optic cable.

In the case of digital communication, analog audio information is first converted to digital using an ADC. The digital information representing the individual audio channels is time multiplexed (Time Division Multiple Access, TDMA) and transmitted over a serial digital link (as in a PCM system).

Another reason for signal processing is to compress the signal bandwidth (without significant loss of information), followed by formatting and transmitting the information at reduced speeds, which narrows the required channel bandwidth. High-speed modems and adaptive differential pulse code modulation (ADPCM) systems make extensive use of redundancy-elimination (compression) algorithms, as do digital mobile communication systems, MPEG audio recording systems, and high-definition television (HDTV).

Industrial data acquisition and control systems use information received from sensors to generate appropriate feedback signals, which in turn directly control the process. Note that these systems require both ADCs and DACs, as well as sensors, signal conditioners, and DSPs (or microcontrollers).

In some cases, there is noise in the signal containing information, and the main goal is to restore the signal. Techniques such as filtering, autocorrelation, convolution, etc. are often used to accomplish this task in both the analog and digital domains.

PURPOSE OF SIGNAL PROCESSING
  • Extraction of signal information (amplitude, phase, frequency, spectral components, timing)
  • Signal format conversion (telephony with channel division FDMA, TDMA, CDMA)
  • Data compression (modems, cell phones, HDTV, MPEG compression)
  • Formation of feedback signals (industrial process control)
  • Extraction of signal from noise (filtering, autocorrelation, convolution)
  • Extraction and storage of a signal in digital form for further processing (FFT)

Signal conditioning

In most of the above situations (those involving DSP technologies), both an ADC and a DAC are needed. However, in some cases only a DAC is required, when analog signals can be generated directly from a DSP and DAC. A good example is video scan displays, in which the digitally generated signal drives the video image via a RAMDAC (random-access-memory digital-to-analog converter) block.

Another example is artificially synthesized music and speech. In fact, when generating physical analog signals using only digital methods, designers rely on information previously obtained from sources of similar physical analog signals. In display systems, the data on the display must convey relevant information to the operator. When designing sound systems, the statistical properties of the generated sounds are specified, having previously been determined through extensive use of DSP methods (sound source, microphone, preamplifier, ADC, etc.).

Signal processing methods and technologies

Signals can be processed using analog techniques (analog signal processing, or ASP), digital techniques (digital signal processing, or DSP), or a combination of the two (mixed-signal processing, or MSP). In some cases the choice of method is clear; in others there is no obvious choice, and the final decision rests on particular considerations.

As for the DSP, its main difference from traditional computer data analysis lies in the high speed and efficiency of complex digital processing functions, such as filtering, real-time data analysis and compression.

The term "mixed-signal processing" implies that the system performs both analog and digital processing. Such a system can be implemented as a printed circuit board, a hybrid integrated circuit (IC), or a single chip with integrated elements. ADCs and DACs are themselves considered mixed-signal devices, since each implements both analog and digital functions.

Recent advances in very large scale integration (VLSI) technology enable complex (digital and analog) processing on a single chip. The very nature of DSP implies that these functions can be performed in real time.

Comparison of analog and digital signal processing

Today's engineer is faced with the choice of the proper combination of analog and digital methods to solve a signal processing problem. It is not possible to process physical analog signals using only digital methods, since all sensors (microphones, thermocouples, piezoelectric crystals, magnetic disk drive heads, etc.) are analog devices.

Some types of signals require conditioning (normalization) circuits before further processing by either analog or digital methods. Signal conditioning circuits are analog processors that perform functions such as amplification (instrumentation and buffer preamplifiers), detection of the signal against background noise (high-precision common-mode amplifiers, equalizers, and linear receivers), dynamic range compression (logarithmic amplifiers, logarithmic DACs, and programmable gain amplifiers), and filtering (passive or active).

Several methods of implementing signal processing are shown in Figure 1. The upper area of the figure depicts the purely analog approach; the remaining areas show DSP implementations. Note that once DSP technology is chosen, the next decision is where to place the ADC in the signal processing path.

ANALOG AND DIGITAL SIGNAL PROCESSING

Figure 1. Signal processing methods

In general, as the ADC is moved closer to the sensor, more of the analog signal processing is taken over by the ADC. Growth in ADC capability can take the form of higher sampling rates, wider dynamic range, higher resolution, input noise rejection, input filtering, programmable gain amplifiers (PGAs), on-chip voltage references, and so on. All of these additions raise the functional level and simplify the system.

With the availability of modern DAC and ADC manufacturing technologies with high sampling rates and resolutions, significant progress has been made in integrating more and more circuits directly into the ADC/DAC.

In the field of measurement, for example, there are 24-bit ADCs with built-in programmable amplifiers (PGAs) that allow you to digitize full-scale 10 mV bridge signals directly, without subsequent normalization (for example, the AD773x series).

At voice and audio frequencies, complex encoding-decoding devices are common - codecs (Analog Front End, AFE), which have an analog circuit built into the chip that meets the minimum requirements for external normalization components (AD1819B and AD73322).

There are also video codecs (AFEs) for applications such as CCD image processing (for example, the AD9814, AD9816, and AD984X series).

Implementation example

As an example of using DSP, let's compare analog and digital low-pass filters (LPF), each with a cutoff frequency of 1 kHz.

The digital filter is implemented as the typical digital system shown in Figure 2. Note that the diagram makes several implicit assumptions. First, to process the signal accurately, the ADC/DAC path must have sufficient sampling rate, resolution, and dynamic range. Second, the DSP device must be fast enough to complete all its calculations within the sampling interval (1/fs). Third, analog filters are still needed at the ADC input and DAC output to limit and restore the signal spectrum (an anti-aliasing filter and an anti-imaging filter), although the performance requirements on them are low. With these assumptions in mind, the digital and analog filters can be compared.



Figure 2. Block diagram of a digital filter

The required cutoff frequency for both filters is 1 kHz. The analog filter is a sixth-order Chebyshev filter of the first kind (gain ripple in the passband, no ripple in the stopband); its response is shown in Figure 3. In practice, this filter can be built from three second-order sections, each using an operational amplifier and several resistors and capacitors. With modern computer-aided design (CAD) tools for filters, designing a sixth-order filter is straightforward, but meeting the 0.5 dB flatness specification requires precise component selection.

The 129-coefficient digital FIR filter shown in Figure 3 has a ripple of only 0.002 dB in the passband, a linear phase response, and a much steeper rolloff. Such characteristics cannot be achieved in practice with analog methods. Another obvious advantage is that the digital filter requires no component matching and is not subject to parameter drift, since its clock frequency is stabilized by a quartz resonator. A filter with 129 coefficients requires 129 multiply-accumulate (MAC) operations to compute each output sample, and these must be completed within one sampling interval (1/fs) to ensure real-time operation. In this example the sampling rate is 10 kHz, so 100 µs is available for processing if no significant additional calculations are required. DSPs of the ADSP-21xx family can complete an entire multiply-accumulate operation (and the other functions required to implement the filter) in a single instruction cycle, so a 129-coefficient filter requires a speed of more than 129 operations / 100 µs = 1.3 million instructions per second (MIPS). Existing DSPs are far faster and are therefore not the limiting factor in such applications: the 16-bit fixed-point ADSP-218x series reaches 75 MIPS. Listing 1 shows the assembler code that implements the filter on DSP processors of the ADSP-21xx family; everything outside the comment block is executable code.
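To make the design step concrete, here is a minimal Python sketch that produces a 129-tap linear-phase low-pass FIR by the windowed-sinc method (a Hamming window is assumed here; the filter in the article was designed with CAD tools, so its exact coefficients differ):

```python
import math

def design_fir_lowpass(num_taps, cutoff_hz, sample_rate_hz):
    """Design a linear-phase FIR low-pass filter (Hamming-windowed sinc)."""
    fc = cutoff_hz / sample_rate_hz          # normalized cutoff, cycles/sample
    m = num_taps - 1                         # filter order
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        # ideal low-pass impulse response (sinc); handle the center tap
        h = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        # Hamming window tames passband ripple and stopband sidelobes
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(h * w)
    s = sum(taps)                            # normalize for unity gain at DC
    return [t / s for t in taps]

# the article's parameters: 129 taps, 1 kHz cutoff, 10 kHz sampling rate
taps = design_fir_lowpass(129, 1000.0, 10000.0)
print(len(taps), round(sum(taps), 6))  # 129 taps, DC gain 1.0
```

The symmetric coefficients are what give the filter its exactly linear phase, something no practical analog filter provides.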


Figure 3. Analog and digital filters

Of course, in practice, there are many other factors that are considered when comparing analog and digital filters or analog and digital signal processing methods in general. Modern signal processing systems combine analog and digital methods to achieve a desired function and take advantage of the best methods, both analog and digital.

Listing 1. Assembly program: FIR filter for the ADSP-21xx (single precision)

.MODULE fir_sub;

{   FIR filter subroutine

    Call parameters:
        I0 --> oldest data in the delay line
        I4 --> start of the filter coefficient table
        L0 = filter length (N)
        L4 = filter length (N)
        M1, M5 = 1
        CNTR = filter length - 1 (N-1)

    Return values:
        MR1 = result of the summation (rounded and saturated)
        I0 --> oldest data in the delay line
        I4 --> start of the filter coefficient table

    Altered registers: MX0, MY0, MR
    Run time: (N - 1) + 6 cycles = N + 5 cycles
    All coefficients are in 1.15 format                  }

.ENTRY fir;

fir:         MR=0, MX0=DM(I0,M1), MY0=PM(I4,M5);
             CNTR=N-1;
             DO convolution UNTIL CE;
convolution: MR=MR+MX0*MY0(SS), MX0=DM(I0,M1), MY0=PM(I4,M5);
             MR=MR+MX0*MY0(RND);
             IF MV SAT MR;
             RTS;
.ENDMOD;

REAL-TIME SIGNAL PROCESSING

  • Digital signal processing:
    • The width of the spectrum of the processed signal is limited by the ADC/DAC sampling rate
      • Remember the Nyquist criterion and the Kotelnikov (sampling) theorem
    • The dynamic range is limited by the ADC/DAC bit depth
    • The performance of the DSP limits the amount of signal processing, because:
      • For real-time operation, all calculations performed by the signal processor must be completed within one sampling interval, 1/fs
  • Don't forget analog signal processing
    • IF/RF filtering, modulation, demodulation
    • analog anti-aliasing and reconstruction filters (usually low-pass) for ADCs and DACs
    • wherever common sense and implementation cost dictate
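The real-time constraint in the list above reduces to simple arithmetic, sketched below in Python with the figures from the example (the function name is illustrative):

```python
def realtime_budget(sample_rate_hz, taps, dsp_mips):
    """Check whether a DSP can compute an N-tap FIR within one sample interval.

    Assumes one MAC instruction per tap per sample (as on the ADSP-21xx,
    where a single instruction performs the whole multiply-accumulate);
    loop overhead is ignored.
    """
    interval_us = 1e6 / sample_rate_hz           # time budget per sample, µs
    required_mips = taps * sample_rate_hz / 1e6  # millions of MACs per second
    return interval_us, required_mips, required_mips <= dsp_mips

# 10 kHz sampling, 129 taps, a 75 MIPS ADSP-218x-class processor
interval, mips, ok = realtime_budget(10_000, 129, 75)
print(interval, round(mips, 2), ok)  # 100.0 µs budget, ~1.29 MIPS needed, feasible
```
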


Digital circuit design is a key discipline studied in every higher and secondary educational institution that trains electronics specialists. A serious radio amateur should be well versed in it too. But most books and manuals are written in language that is hard to follow, and a novice electronics hobbyist (perhaps a school student) will struggle to absorb the material. A series of new educational articles from Master Kit aims to fill this gap: they describe complex concepts in the simplest possible terms.


8.1. Analog and digital signals

First you need to figure out how analog circuitry differs from digital circuitry in general. And the main difference is in the signals with which these circuits work.
All signals can be divided into two main types: analog and digital.

Analog Signals

Analog signals are the most familiar to us; one could say the entire natural world around us is analog. Our sight and hearing, like all our other senses, perceive incoming information in analog form, that is, continuously in time. Sound information (human speech, musical instruments, animal calls, the sounds of nature, and so on) also reaches us in analog form.
To understand this issue even better, let's draw an analog signal (Fig. 1.):

Fig.1. analog signal

We see that the analog signal is continuous in time and amplitude. For any point in time, you can determine the exact value of the amplitude of the analog signal.

Digital Signals

Now let's read the signal amplitude not continuously but discretely, at fixed intervals: for example, once a second, or more often, say ten times a second. How often we do this is called the sampling rate: once per second is 1 Hz, a thousand times per second is 1000 Hz, or 1 kHz.
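This "checking at fixed intervals" can be sketched in a few lines of Python (the function names are illustrative):

```python
import math

def sample_signal(analog, sample_rate_hz, duration_s):
    """Sample a continuous (analog) function at fixed intervals.

    `analog` is any function of time in seconds;
    returns a list of (time, value) pairs.
    """
    n = int(duration_s * sample_rate_hz)   # number of samples taken
    dt = 1.0 / sample_rate_hz              # sampling interval, seconds
    return [(k * dt, analog(k * dt)) for k in range(n)]

# a 1 Hz sine wave, "checked" ten times per second (sampling rate 10 Hz)
samples = sample_signal(lambda t: math.sin(2 * math.pi * 1.0 * t), 10, 1.0)
print(len(samples), round(samples[0][1], 3))  # 10 samples, first value 0.0
```
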

For clarity, let's draw graphs of analog (top) and digital (bottom) signals (Fig. 2.):

Fig.2. Analog signal (top) and its digital copy (bottom)

We see that at each sampling instant we can read off the instantaneous value of the signal amplitude. What happens to the signal between these "checks" (how it changes, what amplitude it takes) we do not know; that information is lost to us. The less often we check the signal level (the lower the sampling rate), the less information about the signal we have. The opposite is also true: the higher the sampling rate, the better the quality of the signal representation. In the limit, as the sampling rate increases toward infinity, we get practically the same analog signal.
Does this mean that an analog signal is always better than a digital one? In theory, perhaps. But in practice, modern analog-to-digital converters (ADCs) operate at such high sampling rates (up to several million samples per second) and describe the analog signal so accurately in digital form that the human senses (eyes, ears) can no longer tell the difference between the original signal and its digital model. And a digital signal has a very significant advantage: it is easier to transmit over wires or radio waves, and interference has little effect on it. That is why all modern mobile communications, television, and radio broadcasting are digital.
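A short Python sketch shows what "information lost between the checks" means in the worst case: a sine well above half the sampling rate becomes indistinguishable from a low-frequency one (aliasing), which is why the Nyquist/Kotelnikov limit matters:

```python
import math

def sample(freq_hz, sample_rate_hz, n):
    """Take n samples of a sine of the given frequency at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * k / sample_rate_hz)
            for k in range(n)]

# A 9 Hz sine sampled at only 10 Hz yields exactly the same readings
# as a 1 Hz sine (mirrored): the two signals cannot be told apart.
fast = sample(9.0, 10.0, 10)
slow = sample(1.0, 10.0, 10)
print(all(abs(f + s) < 1e-9 for f, s in zip(fast, slow)))  # True
```
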

The bottom graph in Fig. 2 can also easily be represented in another form: as a long sequence of number pairs, time/amplitude. And numbers are exactly what digital circuits need. True, digital circuits prefer to work with numbers in a special representation, but that is a topic for the next lesson.

Now we can draw important conclusions:

- a digital signal is discrete: it is defined only at individual moments in time;
- the higher the sampling rate, the more accurately the digital signal represents the original.

An ordinary consumer does not strictly need to know the nature of these signals. But sometimes it helps to understand the difference between the analog and digital formats in order to choose between them with open eyes, since it is often said these days that the time of analog technology has passed and digital is replacing it. It is worth understanding the difference, to know what we are leaving behind and what to expect.

An analog signal is a continuous signal that can take an infinite number of closely spaced values within its range; its parameters are described by a continuous function of time.

A digital signal is a discrete signal described by a discrete function of time: at each moment in time, the signal amplitude takes one strictly defined value.

Practice has shown that analog signals are susceptible to interference, a problem that digital signals largely eliminate; moreover, digital transmission allows the original data to be restored. A continuous analog signal also carries a great deal of information, much of it redundant, so several digital signals can be transmitted in place of one analog signal.

Today the consumer most often encounters this question with television, since it is in that context that the phrase "transition to a digital signal" is heard. Analog can be considered a relic of the past, yet it is what existing sets receive directly, while special equipment is needed to receive digital. Naturally, with the arrival and spread of "digital", analog broadcasting is losing its former popularity.

Advantages and disadvantages of signal types

Noise immunity plays an important role in assessing the qualities of a particular signal. Interference and outside intrusions of various kinds leave an analog signal defenseless, whereas with digital this is largely ruled out, because the signal is transmitted as encoded pulses. Over long distances, however, digital transmission is more involved: modulation-demodulation (modem) schemes must be used.

Summing up, we can say that the differences between an analog and a digital signal consist in:

  • the continuity of analog and the discreteness of digital;
  • the greater susceptibility of analog transmission to interference;
  • the redundancy of the analog signal;
  • the ability of digital to filter out interference and restore the original information;
  • the transmission of a digital signal in encoded form: one analog signal can be replaced by several digital ones.
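The interference-filtering point can be demonstrated with a toy Python model: bits sent as two widely spaced levels survive moderate channel noise intact, because the receiver only has to decide which level was sent (the encoding and the noise bound are illustrative):

```python
import random

def transmit_analog(levels, noise):
    """Model a noisy channel: add a noise sample to each transmitted level."""
    return [v + n for v, n in zip(levels, noise)]

def restore_digital(received):
    """A digital receiver only decides between two levels, so noise smaller
    than half the level spacing is removed completely by thresholding."""
    return [1 if v > 0 else 0 for v in received]

random.seed(42)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
levels = [1.0 if b else -1.0 for b in bits]        # encode bits as +/-1 V pulses
noise = [random.uniform(-0.9, 0.9) for _ in bits]  # noise below the decision margin
print(restore_digital(transmit_analog(levels, noise)) == bits)  # True
```

An analog receiver, by contrast, has no way to distinguish the added noise from the signal itself, so the distortion is passed straight through.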
