
Composition of technical means of information processing. Technical means of information processing

The technological process of data processing in information systems is carried out using:

    technical means for collecting and recording data;

    telecommunication facilities;

    data storage, search and retrieval systems;

    computing facilities for data processing;

    technical means of office equipment.

In modern information systems, technical means of data processing are used in an integrated manner, based on a technical and economic assessment of the feasibility of their use that takes into account the price/quality ratio and the reliability of the technical means.

Information Technology

Information technology can be defined as a set of methods (techniques and algorithms for data processing) and tools (software and hardware for data processing).

Information technology can be roughly divided into categories:

    Basic information technologies are universal technological operations of data processing that are, as a rule, independent of the content of the information being processed: for example, launching programs for execution, and copying, deleting, moving, and searching for files. They are based on widely used software and hardware for data processing.

    Special information technologies are complexes of interrelated basic information technologies designed to perform special operations that take into account the content and/or form of data presentation.

Information technology is a necessary basis for the creation of information systems.

Information Systems

An information system (IS) is a communication system for collecting, transferring, and processing information about an object; it supplies workers of various ranks with the information they need to carry out the management function.

The users of the IS are organizational units of management - structural divisions, management personnel, performers. The content basis of the IS is made up of functional components - models, methods and algorithms for the formation of control information. The functional structure of an IS is a set of functional components: subsystems, task complexes, information processing procedures that determine the sequence and conditions for their implementation.

Information systems are introduced in order to increase the efficiency of the production and economic activities of the managed object, not only by processing and storing routine information and automating office work, but also through fundamentally new management methods. These methods are based on modeling the actions of the organization's specialists when making decisions (artificial intelligence methods, expert systems, etc.), on modern telecommunications (e-mail, teleconferences), on global and local computer networks, and so on.

IS classification is carried out according to the following criteria:

    the nature of information processing;

    scale and integration of IS components;

    information technology architecture of IS.

According to the nature of information processing and the complexity of the processing algorithms, ISs are customarily divided into two large classes:

    IS for operational data processing. These are traditional ISs for accounting and processing of primary data of large volume using strictly regulated algorithms, a fixed structure of the database (DB), etc.

    Decision-support ISs. They are focused on the analytical processing of large amounts of information, the integration of heterogeneous data sources, and the use of methods and tools for analytical processing.

Currently, the following main IS architectures have emerged:

    IS with centralized data processing;

    file-server architecture;

    client-server architecture.

Centralized processing assumes that the user interface, applications, and databases are combined on one computer.

In the file-server architecture, files are provided to many network users by the main computer of the network, called the file server. These can be individual user files, database files, or application programs. All data processing is performed on the users' computers; such a computer is called a workstation (WS). The workstation holds the user-interface software and applications, which can be entered both from the workstation's input devices and over the network from the file server. The file server can also be used for centralized storage of individual users' files, sent to it over the network from the workstations. The file-server architecture is used mainly in local computer networks.
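As a minimal sketch of this arrangement (the shared directory stands in for the file server, and the file name and contents are invented for the example): the whole file travels to the workstation, and all computation happens there.

```python
# File-server sketch: the server only hands out whole files; all the
# processing happens on the workstation.
import pathlib

SHARE = pathlib.Path("share")                 # stand-in for the file server's disk
SHARE.mkdir(exist_ok=True)
(SHARE / "numbers.txt").write_text("1 2 3")   # a file stored centrally

# Workstation side: the whole file is transferred to the user's computer,
# and the computation is performed locally.
data = (SHARE / "numbers.txt").read_text()
print(sum(int(x) for x in data.split()))      # -> 6, computed locally
```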

In the client-server architecture, software is focused not only on the collective use of resources but also on processing them at the location of the resource, at users' requests. Client-server software systems consist of two parts: server software and user (client) software. These systems work as follows: client programs run on users' computers and send requests to a server program running on a shared computer. The main data processing is performed by the powerful server, and only the results of the query are sent back to the user's computer. For example, database servers are used in powerful DBMSs, such as Microsoft SQL Server and Oracle, that work with distributed databases. Database servers are designed to handle large amounts of data (tens of gigabytes or more) and large numbers of users while providing high performance, reliability, and security. The client-server architecture is, in a sense, the principal architecture of applications in global computer networks.
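By contrast, here is a minimal client-server sketch using Python's standard socket module; the address, port, and query format are assumptions invented for the example. Only the short request and the computed result cross the network; the processing stays on the server.

```python
# Minimal client-server sketch: the client sends a request, the server
# performs the processing and returns only the result over the network.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050    # illustrative address and port

def serve_once():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            query = conn.recv(1024).decode()           # e.g. "SUM 1 2 3"
            numbers = [int(x) for x in query.split()[1:]]
            conn.sendall(str(sum(numbers)).encode())   # only the result is sent back

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.2)                   # give the server a moment to start listening

# Client side: a short request travels to the server, which does the work.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"SUM 1 2 3")
    print(cli.recv(1024).decode())   # -> 6
```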

1.1 Modes of data processing

When designing technological processes, one is guided by the modes in which they will be implemented. The mode of implementation depends on the volume and time characteristics of the tasks being solved (their frequency and urgency, and the required speed of message processing), as well as on the operational capabilities of the technical means, primarily the computers. The following modes are distinguished: batch; real-time; time-sharing; routine; request; dialogue; teleprocessing; interactive; single-program; and multiprogram (multiprocessing).

Batch mode. In this mode, the user has no direct communication with the computer. The collection and registration of information and its input and processing do not coincide in time. First, the user collects information, forming it into packets according to the type of task or some other feature (as a rule, these are tasks of a non-operational nature, whose results remain valid for a long time). After the collection of information is complete, it is input and processed; in other words, processing is delayed. This mode is used, as a rule, with the centralized method of information processing.
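The idea can be sketched as follows (the record format and the processing step are invented for the illustration):

```python
# Batch-mode sketch: records are only collected and registered first;
# input and processing happen later, in one deferred run.
batch = []

def register(record):
    """Collection/registration phase: nothing is processed yet."""
    batch.append(record)

def run_batch():
    """Processing phase: the whole packet is handled at once."""
    results = [r["amount"] * 2 for r in batch]   # stand-in for real processing
    batch.clear()
    return results

register({"id": 1, "amount": 100})
register({"id": 2, "amount": 250})
# ... time passes; there is no user interaction in between ...
print(run_batch())   # -> [200, 500]
```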

Dialogue (request) mode is a mode in which the user can interact directly with the computing system in the course of work. Data-processing programs reside permanently in the computer's memory if the computer is accessible at any time, or for the period during which the computer is available to the user. User interaction with the computing system in the form of a dialogue can take many forms, determined by various factors: the language of communication; the active or passive role of the user; which side initiates the dialogue, the user or the computer; the response time; the structure of the dialogue; and so on. If the user initiates the dialogue, he must know the working procedures, data formats, etc. If the computer initiates it, the machine itself indicates at every step what to do, offering a variety of choices. This method of operation is called "menu selection". It supports the user's actions, prescribes their sequence, and requires less preparation from the user.
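A menu-selection dialogue can be sketched as follows (the menu entries and actions are invented for the illustration); the computer initiates each step and prescribes the available choices:

```python
# "Menu selection" sketch: the machine offers a fixed set of actions at
# every step, so the user needs little special preparation.
def show_balance():
    print("Balance: 100")

def make_payment():
    print("Payment accepted")

MENU = {"1": ("Show balance", show_balance),
        "2": ("Make payment", make_payment),
        "0": ("Quit", None)}

while True:
    for key, (label, _) in MENU.items():
        print(f"{key}. {label}")
    choice = input("Select an option: ")
    if choice == "0":
        break
    if choice in MENU:
        MENU[choice][1]()                 # the system responds, then waits again
    else:
        print("Unknown option, try again")
```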

Dialogue mode requires a certain level of technical equipment on the user's side, i.e. a terminal or a PC connected to the central computing system by communication channels. This mode is used to access information, computing, or software resources. The ability to work in dialogue mode may be limited to particular start and end times, or it may be unlimited.

Sometimes a distinction is made between dialogue and request modes: a request is understood as a one-time call to the system, after which it issues an answer and disconnects, while in dialogue mode the system issues a response to a request and then waits for further user actions.

Real-time mode. This means the ability of a computing system to interact with managed or monitored processes at the rate at which those processes occur. The computer's response time must match the pace of the controlled process or the user's requirements, with minimal delay. Typically, this mode is used in decentralized and distributed data processing.

Teleprocessing mode enables a remote user to interact with a computing system.

Interactive mode assumes the possibility of two-way interaction between the user and the system, i.e. the user has the ability to influence the data processing process.

The time-sharing mode assumes the ability of the system to allocate its resources to a group of users in turn. The computing system serves each user so quickly that it seems as if several users are working simultaneously. This capability is achieved through appropriate software.
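The effect can be sketched with a simple round-robin scheduler (the task names and quantum are invented for the illustration):

```python
# Time-sharing sketch: each user's task gets one quantum in turn; switching
# is fast enough that the users appear to be served simultaneously.
from collections import deque

QUANTUM = 1                                    # work units per turn
tasks = deque([("alice", 3), ("bob", 2), ("carol", 1)])

while tasks:
    user, remaining = tasks.popleft()
    done = min(QUANTUM, remaining)
    print(f"{user}: executed {done} unit(s)")
    if remaining > done:
        tasks.append((user, remaining - done))  # back of the queue
```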

Single-program and multiprogram modes characterize the ability of the system to work on one program or on several programs simultaneously.

The routine mode is characterized by fixed timing of the user's individual tasks: for example, producing summary reports at the end of the month, or calculating payroll sheets by specific dates. The timing is set in advance by regulation, as opposed to arbitrary requests.

1.2 Methods of data processing

The following methods of data processing are distinguished: centralized, decentralized, distributed and integrated.

The centralized method assumes the existence of a computing center (CC). With this method, the user delivers the initial information to the CC and receives the processing results in the form of resulting documents. The peculiarities of this method are the complexity and laboriousness of establishing fast, uninterrupted communication, the heavy information workload of the CC (since the volumes involved are large), the regulation of processing schedules, and the need to protect the system from possible unauthorized access.

Decentralized processing. This method is associated with the emergence of personal computers, which make it possible to automate a specific workplace.

The distributed method of data processing is based on distributing processing functions among different computers included in a network. It can be implemented in two ways. The first involves installing a computer at each network node (or at each level of the system), with data processing carried out by one or more computers depending on the system's actual capabilities and its needs at the moment. The second is to place a large number of different processors within a single system; this approach is used in systems for processing banking and financial information, where a data-processing network is needed (branches, offices, etc.). The advantages of the distributed method are: the ability to process any amount of data within a given time frame; a high degree of reliability, since if one technical device fails it can be instantly replaced by another; reduced time and cost of data transmission; greater system flexibility; and simpler software development and operation. The distributed method relies on a set of specialized processors, i.e. each computer is designed to solve particular problems, or problems at its own level.

An integrated way of processing information provides for the creation of an information model of the managed object, that is, a distributed database. This method offers maximum user convenience. On the one hand, the databases are shared and centrally managed; on the other, the volume of information and the variety of tasks to be solved require distributing the database. Integrated processing technology improves the quality, reliability, and speed of processing, because processing is based on a single information array entered into the computer once. A feature of this method is the technological and temporal separation of the processing procedure from the procedures of data collection, preparation, and entry.

Lecture number 3

The main questions of the lecture:

1. Technical means of informatics.

2. The concept of the principles of computer operation.

3. The main components of a personal computer.

Technical means of informatics

The computer is the main technical means of information processing. Computers are classified according to a number of characteristics, in particular: purpose, principle of action, ways of organizing the computing process, size and computing power, functionality, the ability to execute programs in parallel, etc.

By purpose, computers can be divided into three groups:

· universal (general-purpose): designed to solve a wide variety of engineering, technical, economic, mathematical, informational, and other problems that differ in the complexity of their algorithms and the large volumes of data processed. Characteristic features of these computers are high performance, a variety of forms of processed data (binary, decimal, symbolic), a variety of operations performed (arithmetic, logical, special), a large random-access memory capacity, and a well-developed organization of information input-output;

· problem-oriented - designed to solve a narrower range of tasks, usually associated with technological objects, registration, accumulation and processing of small amounts of data (control computer systems);

· specialized: designed to solve a narrow range of tasks, so as to reduce the complexity and cost of these computers while maintaining high performance and reliability (special-purpose programmable microprocessors, controllers that perform the functions of controlling technical devices).

By principle of action (the criterion for dividing computers here is the form in which the information they work with is presented):

· Analog computers (AVMs) are continuous-action computers: they work with information presented in continuous form, i.e. as a continuous series of values of some physical quantity (most often electrical voltage); the voltage value then serves as an analogue of the value of the measured variable. For example, entering 19.42 at a scale of 0.1 is equivalent to applying a voltage of 1.942 V to the input;

· Digital computers (DCMs) are discrete-action computers: they work with information presented in discrete, or rather digital, form, as distinct voltage levels corresponding to the digits of the represented value of a variable;

· Hybrid computers (GVMs) are combined-action computers: they work with information presented in both digital and analog form.

AVMs are simple and easy to use; programming problems for them is not laborious, and the speed of solution can be varied at the operator's request (it is higher than that of a digital computer), but the accuracy of solutions is very low (a relative error of 2-5%). AVMs are used to solve mathematical problems involving differential equations that do not contain complex logic. Digital computers are the most widespread; they are what is usually meant when one speaks of computers. GVMs are advisable for controlling complex high-speed technical complexes.
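The scale-factor example above is simple arithmetic; as a sketch (the variable names are invented):

```python
# Analog scaling: at a scale of 0.1, the machine represents the value 19.42
# as a voltage of 19.42 * 0.1 = 1.942 V at its input.
SCALE = 0.1
value = 19.42
voltage = value * SCALE
print(f"{value} at scale {SCALE} -> {voltage:.3f} V")   # -> 1.942 V
```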

By generations the following groups can be distinguished:

1st generation. In 1946, the idea of using binary arithmetic (John von Neumann, A. Burks) and the stored-program principle were published; both are actively used in first-generation computers. These computers were distinguished by large dimensions, high energy consumption, low speed, low reliability, and programming in machine codes. The problems solved were mainly computational, involving the complex calculations required for weather forecasting, nuclear-power problems, aircraft control, and other strategic tasks.

2nd generation. In 1948, Bell Telephone Laboratories announced the creation of the first transistor. Compared with computers of the previous generation, all technical characteristics improved. Algorithmic languages came into use for programming, and the first attempts at automatic programming were made.

3rd generation. A distinguishing feature of third-generation computers is the use of integrated circuits in their design and of operating systems for controlling computer operation. Multiprogramming, memory management, and control of input-output devices became possible, and failure recovery was handled by the operating system. From the mid-1960s to the mid-1970s, databases containing different types of information on all kinds of branches of knowledge became an important type of information service. For the first time, decision-support information technology appeared: a completely new way of human-computer interaction.

4th generation. The main features of this generation of computers are the presence of mass-storage devices, booting the computer via a bootstrap program stored in ROM, a variety of architectures, powerful operating systems, and the integration of computers into networks. Since the mid-1970s, with the creation of national and global data-transmission networks, the leading type of information service has been interactive search for information in databases remote from the user.

5th generation. Computers with dozens of processors operating in parallel, making it possible to build efficient knowledge-processing systems; computers based on ultra-complex microprocessors with a parallel-vector structure that simultaneously execute dozens of sequential program instructions.

6th generation. Optoelectronic computers with massive parallelism and a neural structure: a network of a large number (tens of thousands) of simple microprocessors modeling the architecture of biological neural systems.

Classification of computers by size and functionality.

Large computers. Historically, large computers appeared first; their element base evolved from vacuum tubes to integrated circuits with an ultra-high degree of integration. However, their performance proved insufficient for modeling ecological systems, genetic-engineering problems, managing complex defense complexes, and the like.

Abroad, large computers are often referred to as mainframes, and rumors of their death are greatly exaggerated.

Typically, they have:

· Performance of at least 10 MIPS (millions of instructions per second);

· Main memory from 64 to 10,000 MB;

· External memory of at least 50 GB;

· Multiuser mode of operation.

Their main areas of use are the solution of scientific and technical problems, work with large databases, and the management of computer networks and their resources as servers.

Small computers. Small (mini) computers are reliable, inexpensive, and easy to use; their capabilities are somewhat lower than those of large computers.

Super-mini computers have:

· Main memory capacity: 4-512 MB;

· Disk storage capacity: 2-100 GB;

· Number of supported users: 16-512.

Minicomputers are oriented toward use as control computer systems, in simple modeling systems, in automated control systems, and for controlling technological processes.

Supercomputers. These are powerful multiprocessor computers with speeds of hundreds of millions to tens of billions of operations per second.

It is impossible to achieve such performance on a single microprocessor with modern technologies because of the finite propagation speed of electromagnetic waves (300,000 km/s): the time a signal takes to travel a distance of just a few millimeters becomes commensurate with the execution time of one operation. Supercomputers are therefore built as highly parallel multiprocessor computing systems.
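A back-of-the-envelope check of this argument, using the propagation speed quoted above (the clock frequencies are assumed for illustration):

```python
# How far does a signal travel in one clock cycle at a given frequency?
C_MM_PER_S = 3e11                  # 300,000 km/s expressed in mm/s

for freq_hz in (1e9, 1e10, 1e11):  # 1, 10, 100 GHz (assumed values)
    cycle_s = 1 / freq_hz
    print(f"{freq_hz / 1e9:>5.0f} GHz: {C_MM_PER_S * cycle_s:,.0f} mm per cycle")
# 1 GHz -> 300 mm, 10 GHz -> 30 mm, 100 GHz -> 3 mm: at very high clock
# rates, a few millimetres of wiring already costs on the order of a cycle.
```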

Currently, there are several thousand supercomputers in the world, ranging from the simple office Cray EL to the powerful Cray 3, the SX-X from NEC, the VP2000 from Fujitsu (Japan), and the VPP 500 from Siemens (Germany).

Microcomputers, or personal computers. A PC must have characteristics that meet the requirements of general availability and versatility:

· Low cost;

· Autonomy of operation;

· Flexibility of architecture, allowing it to be adapted for use in education, science, management, and everyday life;

· Friendliness of the operating system;

· High reliability (more than 5000 hours of MTBF).

Many of them can run autonomously on battery power, but they can also be connected to the mains.

Special computers. Special computers are oriented toward solving special computational or control problems. An electronic microcalculator can also be regarded as a special computer. The program executed by the processor resides in ROM or RAM; since the machine usually solves a single problem, only the data change. Keeping the program in ROM is convenient and increases the reliability and speed of the computer. This approach is often used in onboard computers, in controlling the operating modes of cameras and movie cameras, and in sports simulators.

The concept of the principles of computer operation

The architecture of modern personal computers is based on the backbone-modular principle. The modular principle allows the consumer to assemble the required computer configuration and to upgrade it when necessary. The modular organization of a computer rests on the backbone (bus) principle of information exchange between devices.

The backbone includes three multi-bit buses:

· Data bus;

· Address bus;

· Control bus.

Buses are multi-wire lines.

Data bus. On this bus, data is transferred between various devices. For example, data read from main memory can be transferred to a processor for processing, and then the received data can be sent back to main memory for storage. Thus, data on the data bus can be transferred from device to device in any direction.

The bit width of the data bus is determined by the bit width of the processor, i.e. the number of bits that the processor processes in one clock cycle. The bit capacity of processors has constantly increased with the development of computer technology.

Address bus. The processor selects the device or memory cell to which data is sent, or from which it is read, over the data bus. Each device and each RAM cell has its own address. The address is transmitted over the address bus, and signals travel along it in one direction only, from the processor to main memory and the devices (a unidirectional bus). The width of the address bus determines the address space of the processor, i.e. the number of memory cells that can have unique addresses. The width of the address bus has constantly increased; in modern personal computers it is 32 bits.
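The relationship between bus width and address space is a simple power of two:

```python
# An n-bit address bus can name 2**n distinct memory cells.
for width in (16, 20, 32):
    cells = 2 ** width
    print(f"{width}-bit address bus: {cells:,} addresses "
          f"({cells / 2**30:.6g} GiB of byte-addressable memory)")
# A 32-bit bus gives 4,294,967,296 addresses, i.e. the familiar 4 GiB limit.
```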

Control bus. The control bus carries signals that determine the nature of information exchange along the backbone. Control signals specify which operation is to be performed (for example, reading or writing memory), synchronize the exchange of information between devices, and so on.

The overwhelming majority of computers are based on the following general principles, formulated in 1945 by the American scientist John von Neumann.

1. The principle of program control. A program consists of a set of instructions that the processor executes automatically in a specific sequence. The program is fetched from memory using the program counter, a processor register that successively increments the address of the next instruction stored in it by the instruction length. Since the program's instructions are located in memory one after another, a chain of instructions is thereby fetched from sequentially located memory cells. If, after executing an instruction, the machine must proceed not to the next instruction but to some other one, conditional or unconditional jump instructions are used; they load into the program counter the number of the memory cell containing the next instruction. Fetching instructions from memory stops when the "stop" instruction is reached and executed. Thus, the processor executes the program automatically, without human intervention.

2. The principle of memory homogeneity. Programs and data are stored in the same memory, so the computer cannot tell what is stored in a given memory cell: a number, text, or an instruction. The same actions can be performed on instructions as on data, and this opens up a number of possibilities. For instance, a program can be processed in the course of its own execution, which makes it possible to specify within the program itself the rules for obtaining some of its parts (this is how loops and subroutines are organized). Moreover, the instructions of one program can be produced as the results of executing another program. Translation methods, which convert program text from a high-level programming language into the language of a specific machine, are based on this principle.

3. The principle of addressability. Structurally, main memory consists of numbered cells, and any cell is available to the processor at any time. It is therefore possible to give names to memory areas, so that the values stored in them can later be accessed or changed during program execution using the assigned names. Computers built on these principles are of the von Neumann type. There are also computers that differ fundamentally from them; for such machines the principle of program control, for example, may not hold: they can operate without a program counter indicating the currently executing instruction, and they need not name a variable in order to refer to it in memory. Such computers are called non-von Neumann computers.
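A toy machine can illustrate the first two principles, under an instruction set invented for the example: instructions and data share one memory, the program counter normally advances to the next instruction, and a conditional jump simply overwrites it.

```python
# Toy stored-program machine (invented instruction set for illustration).
memory = [
    ("LOAD",  10),    # 0: acc = mem[10]
    ("SUB1",  None),  # 1: acc = acc - 1
    ("STORE", 10),    # 2: mem[10] = acc
    ("JNZ",   0),     # 3: if acc != 0, jump back to instruction 0
    ("STOP",  None),  # 4: halt
    None, None, None, None, None,
    3,                # 10: a data cell, held in the same memory as the code
]

pc, acc = 0, 0                     # program counter and accumulator
while True:
    op, arg = memory[pc]
    pc += 1                        # default: the next sequential instruction
    if op == "LOAD":
        acc = memory[arg]
    elif op == "SUB1":
        acc -= 1
    elif op == "STORE":
        memory[arg] = acc
    elif op == "JNZ" and acc != 0:
        pc = arg                   # the jump rewrites the program counter
    elif op == "STOP":
        break

print(memory[10])                  # -> 0: the loop body ran three times
```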

The main components of a personal computer

The computer has a modular structure that includes:

1. System unit

A metal case with a power supply. Currently, system units are produced in the ATX standard, 21x42x40 cm in size, with a 230 W power supply, an operating voltage of 210-240 V, 3x5.25" and 2x3.5" bays, and automatic shutdown when work is finished. The case also houses a speaker.

1.1. System board (motherboard): the board on which the various devices included in the system unit are mounted. The motherboard is designed on the principle of a modular construction set, which allows each user to easily replace failed or outdated elements of the system unit. The following are attached to the system board:

a) CPU (Central Processing Unit): a large-scale integrated circuit on a chip. It performs logical and arithmetic operations and controls the operation of the computer. A processor is characterized by its manufacturer and its clock frequency. The best-known manufacturers are Intel and AMD; processors have names of their own, such as Athlon, Pentium 4, and Celeron. The clock frequency determines the speed of the processor and is measured in hertz (1/s); thus a 2.2 GHz Pentium 4 has a clock rate of 2,200,000,000 Hz (more than 2 billion cycles per second). Another characteristic of a processor is the presence of cache memory: memory even faster than RAM, which stores the data most frequently used by the CPU. The cache is a buffer between the processor and RAM; it is completely transparent and cannot be detected programmatically. The cache reduces the total number of CPU clock cycles spent accessing RAM (a sketch of the cache idea is given after this list).

b) Coprocessor (FPU, Floating Point Unit). Built into the CPU; performs floating-point arithmetic.

c) Controllers: microcircuits responsible for the operation of various computer devices (keyboard, HDD, FDD, mouse, etc.). This also includes the ROM (Read Only Memory) chip in which the ROM BIOS is stored.

d) Slots (buses): connectors (ISA, PCI, SCSI, AGP, etc.) for various devices (RAM, video card, etc.).

A bus is in fact a set of wires (lines) connecting various components of the computer to supply them with power and exchange data. An example is the ISA bus (frequency 8 MHz, width 16 bits, data transfer rate 16 MB/s).

e) Random access memory (RAM; module and chip types include SIMM, DIMM (Dual Inline Memory Module), DRAM (Dynamic RAM), SDRAM (Synchronous DRAM), and RDRAM): microcircuits used for short-term storage of intermediate instructions and of the values of calculations performed by the CPU, as well as other data. Executable programs are also kept there to improve performance. RAM is high-speed memory with a regeneration time of about 7·10^-9 s; capacities reach 1 GB, and the supply voltage is 3.3 V.

f) Video card (video accelerator): a device that expands graphics capabilities and speeds up work with graphics. A video card has its own video memory (16, 32, 64, or 128 MB) for storing graphic information and a graphics processor (GPU, Graphics Processing Unit), which takes over the calculations involved in 3D graphics and video. A GPU of the class described here runs at 350 MHz, contains 60 million transistors, supports a resolution of 2048x1536 at 60 Hz with 32-bit color, and delivers 286 million pixels/s. A card may have TV and video inputs. Supported effects include transparency and translucency, shading (for realistic lighting), glare, colored lighting (light sources of different colors), blurring, three-dimensionality, fog, reflection, reflection in a curved mirror, surface ripple, image distortion caused by water and warm air, distortion transforms based on noise algorithms, simulation of clouds in the sky, etc.

g) Sound card: a device that extends the sound capabilities of a computer. Sounds are generated from samples of different timbres stored in memory (32 MB); up to 1024 voices can be played simultaneously, and various effects are supported. A card may have line in/out, headphone out, microphone in, a joystick jack, an answering-machine input, and analog and digital CD-audio inputs.

h) LAN card: a device responsible for connecting the computer to a network for information exchange.
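The role of the cache mentioned in item a) can be sketched as a small buffer in front of a slow memory (the sizes and access pattern are invented for the illustration):

```python
# Cache sketch: repeated accesses are answered from the small fast buffer,
# so the slow main memory is touched far less often.
RAM = {addr: addr * 2 for addr in range(1024)}   # stand-in for main memory
CACHE_SIZE = 8
cache, hits, misses = {}, 0, 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1                     # fast path: no RAM access
        return cache[addr]
    misses += 1
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))  # evict the oldest entry when full
    cache[addr] = RAM[addr]           # slow path: fetch from RAM, keep a copy
    return cache[addr]

for addr in [1, 2, 3, 1, 2, 3, 1, 2]:
    read(addr)
print(f"hits={hits}, misses={misses}")   # -> hits=5, misses=3
```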

In addition to the motherboard, the system unit contains:

1.2. Hard disk drive (HDD): a hermetically sealed case containing rotating magnetic disks and magnetic heads. It serves for long-term storage of information in the form of files (programs, texts, graphics, photographs, music, video). Typical figures: capacity 75 GB, buffer size 1-2 MB, data transfer rate 66.6 MB/s, maximum spindle speeds of 10,000 or 15,000 rpm. One IBM HDD has a capacity of 120 GB at a spindle speed of 7,200 rpm.

1.3. Floppy disk drive (FDD): a device for writing information to and reading it from floppy disks, which can be carried from computer to computer. Floppy disk capacities: 1.2 MB (5.25" size; 1" = 2.54 cm) and 1.44 MB (3.5" size). 1.44 MB is equivalent to roughly 620 pages of text.

1.4. CD-ROM (Compact Disc Read Only Memory): a device that only reads information from CDs. The binary information on the CD surface is read by a laser beam. CD capacity: 640 MB = 74 min of music = 150,000 pages of text. The spindle speed is up to 8,560 rpm, the buffer size 128 KB, and the maximum data transfer rate 33.3 MB/s. Skips and freezes during video playback are caused by underfilling or overflowing the buffer used for intermediate storage of the transferred data. There is a volume control and a headphone output (for listening to music CDs).

1.5. CD-R (Compact Disc Recordable): a device used to read CDs and to record information onto them once. Recording is based on changing the reflective properties of the CD substrate material under the action of a laser beam.

1.6. DVD-ROM discs (digital video discs) have a much larger capacity (up to 17 GB), because information can be written on both sides, in two layers per side, and the tracks themselves are thinner.

The first generation of DVD-ROM drives provided a read speed of approximately 1.3 MB / s. Currently, 5-speed DVD-ROMs reach read speeds of up to 6.8 MB / s.

There are also DVD-R discs (R for recordable), which are golden in color. Special DVD-R drives have a laser powerful enough to change, while recording, the reflectivity of areas of the disc surface. Information can be written to such discs only once.

1.7. There are also CD-RW and DVD-RW discs (RW - Rewritable, rewritable), which have a "platinum" tint. Special CD-RW and DVD-RW drives in the process of recording information also change the reflectivity of certain areas of the disc surface, however, information on such discs can be recorded many times. Before rewriting, the recorded information is "erased" by heating portions of the disc surface with a laser.

Besides the system unit, the computer includes the following input-output devices.

2. Monitor (display): a device for outputting graphic information. There are cathode-ray tube (CRT) and liquid-crystal (LCD) monitors. Diagonal sizes: 14", 15", 17", 19", 21", 24". Pixel size: 0.2-0.3 mm. Frame rates: 77 Hz at 1920x1200 pixels, 85 Hz at 1280x1024, 160 Hz at 800x600. The number of colors is determined by the number of bits per pixel: 256 (2^8, where 8 is the number of bits), 65,536 (2^16, High Color mode), or 16,777,216 (2^24, True Color mode; 32 bits may also be used). Monitors use the RGB color model: a color is obtained by mixing three primary colors, red, green, and blue.

3. Keyboard: a device for entering commands and character information (up to 108 keys). It connects via the keyboard (PS/2) port or USB.

4. Mouse: a command-input device. A three-button mouse with a scroll wheel is standard.

5. Printer: a device for outputting information onto paper, film, or other surfaces. It connects to the parallel interface (LPT port) or to USB (Universal Serial Bus), the universal serial bus that is replacing the outdated COM and LPT ports. The main types are:

a) Dot-matrix. The image is formed by needles striking through an ink ribbon.

b) Inkjet. The image is formed by droplets of ink ejected from nozzles (up to 256 of them); droplet speed reaches 40 m/s.

c) Laser. The image is transferred to the paper from a special drum that is charged by a laser and attracts particles of ink (toner).

6. Scanner: a device for inputting images into a computer. There are hand-held, flatbed, and drum scanners.

7. Modem (MOdulator-DEModulator): a device that lets computers exchange information over analog or digital channels. Modems differ in their maximum data transfer rate (2400, 9600, 14400, 19200, 28800, 33600, or 56000 bits per second) and in the communication protocols they support. There are internal and external modems.
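The listed rates translate directly into transfer times; as a worked example (the 1 MB file size is assumed):

```python
# Transfer time for a 1 MB file at the modem speeds listed above,
# ignoring protocol overhead and compression.
FILE_BITS = 1_000_000 * 8            # 1 MB in bits

for bps in (2400, 9600, 14400, 28800, 56000):
    seconds = FILE_BITS / bps
    print(f"{bps:>6} bit/s: {seconds / 60:5.1f} minutes")
# 2400 bit/s -> about 55.6 minutes; 56000 bit/s -> about 2.4 minutes.
```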

In the modern world, it is very important to receive accurate information on time: people's daily activities depend on it. For this reason, more and more devices that collect and process data appear every day. What should be understood by these processes?

Procedure for obtaining data from the outside world

Information can be collected by a person, or by technical means and systems, in which case the process takes place in hardware. For example, a user can obtain data on train routes by studying the timetable at the station, or do the same thing using a phone or computer.

This shows that collecting information rests on a rather complex hardware and software complex. What should be understood by this process? It is the procedure of obtaining data that comes from the outside world and reducing it to the standard form of the applied system. Modern technical devices not only collect data, encode it, and display it for review; they also process the information.

Different ways of working with data and the technologies behind them

Processing should be understood as an ordered process of obtaining the required information from a set of specific data using special algorithms. This procedure can be carried out in several ways: centralized, decentralized, distributed, and integrated means of information processing are distinguished.

Using computing centers for data processing

Centralized processing implies the existence of a computing center (CC). With this method, the user delivers the initial data to the CC and is then provided with the results in the form of certain documentation.

A distinctive feature of this method is its labor intensity. It is quite difficult to establish fast, uninterrupted communication, and the computing center carries a large workload. In addition, the deadlines for the assigned tasks are regulated, and it is not always possible to meet them. Such information processing is further complicated by the security measures needed to prevent possible unauthorized access.

What is the point of the decentralized method?

The decentralized method arose with the advent of the PC; it makes it possible to automate a specific workplace. Today there are three types of technology for such data processing. The first is based on personal computers not connected to a local network: data is stored in separate files, and to obtain indicators the files must be copied onto the user's computer. The drawbacks are that tasks are not interconnected, large amounts of information cannot be processed, and protection against unauthorized access is weak.

The second technology is based on computers combined into a local network, which leads to the formation of shared data files; however, it cannot cope with large flows of information. The third technology is based on computers connected to a local network that also includes servers.

Working with large amounts of data

Distributed information processing is based on dividing functions among different computers connected to the same network. This method can be implemented in two ways:

  1. It is necessary to install a computer in each individual node of the network. In such a situation, processing will take place using one or more computers. It all depends on the real capabilities of the system, as well as on the needs.
  2. It is necessary to place a large number of different processors within one system. A similar path is used when processing banking information in organizations with branches or offices.

Distributed information processing allows data of any volume to be handled within a given time, and it provides a fairly high level of reliability. The time and cost of transferring information are reduced, systems are more flexible, and software development is easier. The distributed method relies on specialized processors: in other words, each computer is designed to solve its own problem.

Using databases for storing and processing information

The integrated method implies the formation of an information model of the managed object; in other words, a distributed database is created. This method makes information processing as convenient as possible for the user. The database can be used by several people at the same time, although a large volume of information requires distribution. This method significantly improves the quality, reliability, and speed of processing, because the technique is based on a single information array entered into the computer once.

Methods of information processing have been described above. But with what technical means does this process take place? It is necessary to dwell on this issue in more detail.

What are technical means?

Technical means should be understood as a set of autonomous types of equipment that collect, accumulate, transmit, process, and output data, together with office equipment, controls, maintenance devices, and so on. The following requirements are imposed on all of the above systems:

  1. Technical means, whatever methods of information processing they are based on, should solve the problem with the minimum possible losses, achieving maximum accuracy and reliability.
  2. Technical compatibility and the ability to aggregate devices are required.
  3. High reliability must be ensured.
  4. Purchase costs should be kept to a minimum.

Domestic and foreign industry produces a huge range of technical means for processing information. These differ from one another in element base, design, the data carriers used, operational parameters, and so on.

Technical means can be:

  1. Auxiliary.
  2. Basic (main).

What is meant by auxiliary devices?

In the first case, this is equipment that supports the operation of the basic means. Auxiliary devices also include devices that simplify managerial work and make it more comfortable: office equipment and maintenance and repair products. Organizational equipment covers a large range of items, from office supplies to devices for delivering, duplicating, deleting, retrieving, and storing data; in short, all the equipment that makes a manager's work easier, more convenient, and more comfortable.

What is included in the set of basic types of devices?

Information processing technology can be based on the basic means: devices aimed at automating work with data. To establish control over certain processes, certain management data is required; it characterizes the state and parameters of technological processes as well as quantitative and cost indicators.

The main information processing systems may include:

  1. Devices that record and collect data.
  2. Equipment that receives and transmits data.
  3. Data preparation tools.
  4. Devices for input, processing, and display of data.

Conclusion

This article has covered the collection and processing of information, focusing on work with data. This is a pressing and complex task that demands high accuracy and reliability. We hope this review has helped clarify what the information processing process is.
