Means for processing economic information. Automatic methods for collecting and recording data

Technology is a process defined by a combination of means and methods for processing, manufacturing, or changing the state, properties, or shape of a material or product. Technology changes the quality or original state of matter in order to obtain a tangible product. The goal of technology is to produce a product that meets the needs of a person or a system.

Information is one of the most important resources of society, along with traditional material resources such as oil, gas and minerals. This means that its processing can be viewed as a technology, by analogy with the processing of material resources. The information technology workflow is shown in Fig. 4.1.

Figure 4.1. Information technology workflow

Thus, information technology (IT) is a process that uses a set of means and methods for collecting, processing and transmitting primary information to obtain information of a new quality about the state of an object, process or phenomenon (information product).

Information technology is a process consisting of clearly regulated rules for performing stages, operations and actions on data.

The main goal of information technology is to obtain the information necessary for the user as a result of targeted actions for the processing of primary information.

Information technology, like any other, must meet the following requirements:

1) ensure a high degree of decomposition of the information processing process into stages (phases), operations and actions;

2) include the entire set of elements necessary to achieve the goal;

3) be of a regular nature.

The stages, operations and actions of the technological process can be standardized and unified, which will allow for more efficient management of information processes.

By applying different technologies to the same material resource, different products can be obtained. The same is true for information processing technology. Thus, information technology is a system of methods and techniques for collecting, transferring, accumulating, processing, storing, presenting and using information.

Each of the phases of transformation and use of information listed in the definition of IT is implemented using a specific technology. In this sense, we can talk about information technology as a set of technologies - technologies for collecting information, transferring information, etc.

An information system is designed to store, search and issue information at the request of users. An economic IS (EIS) is designed to process economic information, whose subject area covers accounting, statistics, banking, credit and finance, insurance and other types of economic activity.

To use an EIS in the workplace, it must be designed using information technology. It should be noted that the EIS design process used to be separate from the processing of economic data in the subject area; today it still exists as an independent activity and requires highly qualified design specialists. However, information technologies accessible to any user have now been created that make it possible to combine the design of individual EIS elements with data processing, for example e-mail, the electronic office, word and spreadsheet processors, etc. The trend toward creating information technologies accessible to any user continues.

The creation of new information technologies is not an end in itself; it is driven by more powerful global forces: culture, politics, health and demographic needs, e-business, e-commerce, and custom-made products and services.

Thus, at the workplace of a specialist, both the elements of the EIS, developed by the designers, and information technologies are used, which allow the information worker to automate his activities.

Information technology is a set of methods, production processes and software and hardware, united in a technological chain that provides collection, storage, processing, output and dissemination of information to reduce the labor intensity of the processes of using information resources, increase their reliability and efficiency.

The set of methods and production processes of economic information systems determines the principles, techniques, methods and measures governing the design and use of software and hardware for data processing in the subject area.

The purpose of using information technology is to reduce the labor intensity of using information resources. Information resources are understood as a set of data that are valuable for an organization (enterprise) and act as material resources. These include data files, documents, texts, graphics, knowledge, audio and video information that allow objects of the real world to be displayed on a PC screen.

The process of data processing in the EIS is impossible without the use of technical means, which include a computer, input-output devices, office equipment, communication lines, network equipment.

The software provides data processing in the EIS and consists of general and application software and program documents necessary for the operation of these programs.

The main characteristics of the new information technology consist of:

1) methodology, namely, fundamentally new means of information processing; integral information systems; purposeful creation, transmission, storage and display of information;

2) the result, namely the new communication technology; new information processing technology; new technology for making management decisions.

It is quite natural for information technologies to become obsolete and be replaced by new ones. When introducing a new information technology in an organization, it is necessary to assess the risk of lagging behind competitors as a result of IT aging over time, since information products, like other types of material goods, have an extremely high rate of replacement by new types or versions: replacement periods range from a few months to a year. If this factor is not given due attention during the introduction of a new information technology, it may become outdated even before the organization has finished moving to it, and measures will have to be taken to modernize it. Such failures in the implementation of information technology are usually blamed on imperfect technical means, but the main reason for failure is the absence of, or a poorly developed, methodology for using the information technology.

When introducing information technology in an organization, it is necessary to choose one of two basic concepts that reflect the point of view on the existing structure of the organization and the role of automated information processing in it.

The first concept focuses on the existing structure of the organization. Information technology adapts (adjusts) to the organizational structure, and only the working methods are modernized. Communications are poorly developed and remain unchanged; only workplaces are rationalized, and functions are redistributed between technical workers and specialists.

The degree of risk from the introduction of new information technology is minimal, since the costs are insignificant and the organizational structure does not change.

The main disadvantage of such a strategy is the need for continuous changes in the form of information presentation, adapted to specific technological methods and technical means. Any operational solution "gets bogged down" at various stages of information technology.

The advantages of the strategy include the minimum degree of risk and costs.

The second concept focuses on the future structure of the organization. The existing structure needs to be modernized. This strategy assumes the maximum development of communications and the development of new organizational relationships. The productivity of the organizational structure of the company increases, since data archives are rationally distributed, the volume of information circulating through the system channels decreases, and a balance is achieved between the tasks being solved.

Its main disadvantages include:

Significant costs at the first stage associated with the development of a general concept and examination of all divisions of the company;

The presence of psychological tension caused by the alleged changes in the structure of the company and, as a result, changes in the staffing table and job responsibilities.

The advantages of this strategy are:

Rationalization of the organizational structure of the company;

Maximum employment of all employees;

High professional level;

Integration of professional functions through the use of computer networks.

New information technology in an organization should be such that information and its processing subsystems are linked together by a single database. In this case, two requirements are imposed. First, the structure of the information processing system must correspond to the distribution of powers in the firm. Secondly, the information within the system must function in such a way as to adequately reflect the levels of control.

How are information technology and an information system related? Information technology is a way of transforming information. Many such technologies can be used within an information system, which serves as the environment for their implementation. However, information technology is broader than an information system and can exist outside of it.

    The main purpose of processing information sources is to establish factors and measure their influence on a particular economic indicator, as well as to reveal the causal relationships between various factors and economic processes.

    The process of processing information sources includes:

    · Validation;

    · Bringing indicators into a comparable form;

    · Simplification of digital data;

    · Carrying out analytical calculations;

    · Compilation of analytical tables, formation of conclusions;

    · Study of processed materials.

    Information validation is carried out in two stages.

    The first stage is formal-logical control, which checks the completeness of reporting and the correctness and timeliness of its compilation. Completeness means coverage of all divisions of the object, the presence of all reporting forms, and the filling in of all sections of the forms and of all analytical tables of the program. Correctness means checking the correspondence of rows and columns, names and codes against a single classifier, the presence of signatures and dates, and the absence of unauthorized corrections.

    The second stage is a counting check, which is carried out in several steps. First of all, the continuity of indicators is checked. The essence of such a check is to verify that the amounts shown, for example, in the report for the previous quarter in the end-of-period column agree with the data reflected in the report for the reporting period in the beginning-of-period column. Then the correctness of the arithmetic calculations is checked. If, for instance, a certain amount is shown in the "Working capital" column, it should equal the sum of inventories, cash and funds in settlements.

    After filling out the reporting forms, an experienced accountant checks his own work against the interconnection table, verifying that the amounts reflected in different reporting forms coincide. For example, if a certain amount is given in Form No. 1, the Balance Sheet, on the "Authorized Capital" line, the same amount must also appear in Form No. 5, the Appendix to the Balance Sheet. Such links, where the same figure is checked across different reporting forms, are called simple. There are also complex linkages, where data from several reporting forms are used for verification. For example, the balance of marketable products is compiled using the data of Form No. 2, the general ledger, inventory acts, etc.
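    The linkage and counting checks described above reduce to simple equality tests. Below is a minimal sketch of such checks in Python; the form names, field names and figures are hypothetical and serve only to illustrate the idea.

```python
# Minimal sketch of simple-linkage and counting checks (hypothetical forms and figures).

def check_simple_linkage(form_a: dict, form_b: dict, item: str) -> bool:
    """Simple linkage: the same figure must coincide in two reporting forms."""
    return form_a.get(item) == form_b.get(item)

def check_working_capital(report: dict) -> bool:
    """Counting check: working capital must equal inventories + cash + settlements."""
    expected = report["inventories"] + report["cash"] + report["settlements"]
    return report["working_capital"] == expected

# Illustrative data only
form_1 = {"authorized_capital": 500_000, "inventories": 120_000,
          "cash": 30_000, "settlements": 50_000, "working_capital": 200_000}
form_5 = {"authorized_capital": 500_000}

print(check_simple_linkage(form_1, form_5, "authorized_capital"))  # True
print(check_working_capital(form_1))  # True: 120 + 30 + 50 = 200 (thousand)
```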



    To compile the balance of marketable products, use the following formula:

    RP = ON + VP - OK + I - N,

    where RP is the sale of products,
    VP - production output,
    OK - balances of unsold products at the end of the period,
    ON - balances of unsold products at the beginning of the period,
    I and N - surpluses and shortages of products identified during the inventory.
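    As a simple illustration, the formula above can be computed directly; the sketch below uses the variable names from the legend and purely illustrative amounts.

```python
# Sketch of the balance of marketable products: RP = ON + VP - OK + I - N
# (variable names follow the legend above; the figures are illustrative).

def sales_of_products(on, vp, ok, surplus, shortage):
    """RP = opening balances + output - closing balances + surpluses - shortages."""
    return on + vp - ok + surplus - shortage

rp = sales_of_products(on=150, vp=1_200, ok=180, surplus=5, shortage=3)
print(rp)  # 1172
```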

    In practice, several methods are used to verify economic information. These include:

    1. "Counter" check. During this check, the identity of the amounts reflected in the reporting forms of the analyzed object and the amounts actually received on the accounts of its partners is verified. For example, the company's documents show that it transferred 8 million rubles to the employment fund in March; consequently, the same amount should be reflected in the documents of the employment fund for March or April, depending on the date of the transfer.

    2. The correctness of the calculation of individual indicators is checked. So, for example, the sale of inventory items to employees of an enterprise against wages must be preliminarily reflected in the accounts of the sale of products, works and services, or other sales. In other words, these amounts should increase the sales turnover, and, consequently, the taxable base for calculating taxes.



    3. When determining "cost" indicators, the correctness of attributing expenses to prime cost is checked for each expense item. All amounts debited to account 20 in the general ledger should be thoroughly checked; settlement accounts 60, 62, 71 and 76 are monitored especially closely.

    4. When determining the amounts reflected as material costs, the following is monitored:

    o the correctness of assigning deviations from the planned cost of procurement of materials to the cost;

    o the legality of writing off to production costs shortages in excess of the norms of natural attrition and losses.

    In the practical implementation of this kind of checks, they are guided by the current instructions and other regulations.

    The need to bring indicators into a comparable form arises because indicators are calculated in different valuations, often differ in structure and method of construction, are not based on the same bases, are strongly affected by inflation, etc. Indicators are brought into a comparable form in several ways. First, the base and actual volumes can be recalculated at the same prices. Second, the actual data can be adjusted for the officially registered inflation index published by the statistical authorities. Third, the base values can be conventionally recalculated for the actual volume and assortment. This technique is used for factor analysis of the cost per ruble of products and of profit from the sale of products, works and services.
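    A minimal sketch of the first two ways of achieving comparability described above (revaluation at base prices and deflating by the official inflation index); the product names, prices and the index value are assumptions made only for illustration.

```python
# Sketch: bringing indicators into a comparable form (assumed figures).

base_volume = {"A": 100, "B": 250}      # units sold in the base period
actual_volume = {"A": 120, "B": 230}    # units sold in the reporting period
base_price = {"A": 10.0, "B": 4.0}      # base-period prices

# 1. Recalculate both volumes at the same (base) prices.
base_at_base_prices = sum(base_volume[p] * base_price[p] for p in base_price)
actual_at_base_prices = sum(actual_volume[p] * base_price[p] for p in base_price)

# 2. Adjust the actual data for the official inflation index.
inflation_index = 1.08
actual_deflated = actual_at_base_prices / inflation_index

print(base_at_base_prices, actual_at_base_prices, round(actual_deflated, 1))
# 2000.0 2120.0 1963.0
```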

    Simplification of digital data is rounding, summing, etc. in order to reduce technical work and, in particular, computational efforts, and also to give indicators better visibility and clarity.

    However, such a procedure should not "damage" the quality of analytical calculations. For example, the wages fund of the enterprise as a whole should be analyzed in millions of rubles, while the average salary per employee should be analyzed in thousands of rubles, and so on.
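    For example, scaling figures to the units suggested above might look like the following sketch (the amounts are invented):

```python
# Sketch: simplifying figures without losing analytical meaning (assumed data).
wage_fund_rub = 184_736_520   # whole enterprise: analyze in millions of rubles
avg_salary_rub = 38_427.6     # per employee: analyze in thousands of rubles

print(round(wage_fund_rub / 1_000_000, 1))  # 184.7 mln rubles
print(round(avg_salary_rub / 1_000, 1))     # 38.4 thousand rubles
```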

    The stage of analytical calculations is carried out using all the elements of the general analysis methodology and computer technology. The elements of the general method of analysis include a system of analytical indicators, an interconnected study of economic processes, comparison, detailing, grouping, elimination and generalization. The general analysis methodology is focused on an overall assessment of the dynamics of the studied characteristic, taking into account the influence of positive and negative factors, determining the amounts of identified reserves and developing practical recommendations aimed at improving the results of work.

    To formalize the results of the analysis and illustrate the conclusions, analytical tables are drawn up. The number of columns in the tables should be optimal, using, if possible, several bases for comparison (the past period, data from another company, etc.). The table should be neatly formatted in accordance with regulatory control. In the upper right corner there should be the indication "Table No."; the number can be assigned either for the work as a whole or by subsection: for example, Table 3.1 is the first table in the third subsection.

    One line below, indented 15-17 mm, the name of the table is given starting from a new paragraph. The head of the table is ruled off and numbered: the columns containing figures are numbered with digits, while indicators, units of measurement and other explanations are labeled with letters.

    The final stage of processing economic information is the study of the processed materials. At this stage, it is necessary to reveal all the interconnections and interdependencies between individual indicators and factors; the influence of quantitative factors is determined first, followed by qualitative factors.

    The technology of electronic processing of economic information includes a man-machine process of executing interrelated operations proceeding in an established sequence in order to transform the initial (primary) information into the result one. An operation is a complex of technological actions performed, as a result of which information is transformed. Technological operations are varied in complexity, purpose, implementation technique, performed on various equipment, by many performers. In the conditions of electronic data processing, operations that are performed automatically on machines and devices that read data, perform operations according to a given program in an automatic mode without human intervention, or retaining the functions of control, analysis and regulation for the user prevail.

    The construction of the technological process is determined by the following factors: the characteristics of the processed economic information, its volume, the requirements for the urgency and accuracy of processing, the types, quantity and characteristics of the technical means used. They form the basis for the organization of technology, which includes the establishment of a list, sequence and methods of performing operations, the order of work of specialists and automation equipment, the organization of workplaces, the establishment of time regulations for interaction, etc. The organization of the technological process should ensure its efficiency, complexity, reliability of functioning, high quality of work. This is achieved by using a systematic approach to the design of technology for solving economic problems. At the same time, there is a complex interconnected consideration of all factors, ways, methods of building technology, the use of elements of typification and standardization, as well as the unification of technological processes.

    Information can be viewed as a resource similar to material, labor and monetary resources. Information resources - a set of accumulated information recorded on material carriers in any form that ensures its transmission in time and space for solving scientific, industrial, managerial and other tasks.

    Collection, storage, processing, transmission of information in numerical form is carried out using information technology. The peculiarity of information technologies is that in them both the subject and the product of labor is information, and the tools of labor are the means of computer technology and communication.

    The main goal of information technology is the production of information necessary for the user as a result of targeted actions for its processing.

    It is known that information technology is a set of methods, production processes and software and hardware tools, united in a technological chain that provides the collection, storage, processing, output and dissemination of information.

    The technology of automated processing of economic information is based on the following principles:

    Integration of data processing and the ability of users to work under the conditions of operation of automated systems for centralized storage and collective use of data (data banks);

    Distributed data processing based on advanced transmission systems;

    Rational combination of centralized and decentralized management and organization of computing systems;

    Modeling and formalized description of data, procedures for their transformation, functions and jobs of performers;

    Taking into account the specific features of the object in which the machine processing of economic information is implemented.

    There are two main types of organization of technological processes: subject and operational.

    Subject type organization of technology involves the creation of parallel operating technological lines specializing in information processing and solving specific sets of tasks (labor and wages accounting, supply and sales, financial transactions, etc.) and organizing operational data processing within the line.

    The operational (in-line) type of construction of a technological process provides for the sequential transformation of the processed information according to a technology presented as a continuous sequence of operations replacing one another and performed in automatic mode. This approach to constructing technology has proved suitable for organizing the work of subscriber points and automated workstations.

    The organization of technology at its individual stages has its own characteristics, which gives rise to the separation of out-of-machine and intra-machine technology. Out-of-machine technology (it is often called pre-base) combines the operations of collecting and recording data, recording data on computer media with control. Intra-machine technology is connected with the organization of the computing process in the computer, the organization of data arrays in the memory of the machine and their structuring, which gives reason to call it also intra-base. Considering that the subsequent chapters of the textbook are devoted to the means that make up the technical base of the out-of-machine and intra-machine information transformation, we will briefly consider only the features of the construction of the named technologies.

    The main stage of the technological process is associated with the solution of functional problems on a computer. Intra-machine technology for solving problems on a computer, as a rule, implements the following standard processes of transforming economic information: the formation of new arrays of information; ordering of information arrays; fetching some part of records from an array, merging and splitting arrays; making changes to the array; performing arithmetic operations on attributes within records, within arrays, over records of several arrays. The solution of each individual task or complex of tasks requires the following operations: input of the program for the machine solution of the problem and its placement in the computer memory, input of initial data, logical and arithmetic control of the entered information, correction of erroneous data, arrangement of input arrays and sorting of the input information, calculations according to the given algorithm, receiving output arrays of information, editing output forms, displaying information on the screen and on machine media, printing tables with output data.
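    The standard intra-machine transformations listed above (forming, ordering, selecting and merging arrays, and arithmetic over record attributes) can be sketched as follows; the record structure and the figures are assumptions made for illustration.

```python
# Sketch of typical intra-machine transformations of economic information
# (illustrative records; a real EIS would read these from files or a database).

payroll = [
    {"dept": 10, "employee": "A-01", "amount": 52_000},
    {"dept": 20, "employee": "B-07", "amount": 61_000},
    {"dept": 10, "employee": "A-03", "amount": 48_000},
]
bonuses = [{"dept": 10, "employee": "A-01", "amount": 5_000}]

# Ordering an array of records
ordered = sorted(payroll, key=lambda r: (r["dept"], r["employee"]))

# Selecting part of the records
dept_10 = [r for r in payroll if r["dept"] == 10]

# Merging two arrays and performing arithmetic over an attribute
merged = payroll + bonuses
total_by_dept = {}
for r in merged:
    total_by_dept[r["dept"]] = total_by_dept.get(r["dept"], 0) + r["amount"]

print(total_by_dept)  # {10: 105000, 20: 61000}
```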

    The choice of this or that technology option is primarily determined by the space-time characteristics of the tasks being solved, the frequency, urgency, requirements for the speed of message processing and depends both on the mode of interaction between the user and the computer dictated by practice, and the mode capabilities of technical means, primarily computers.

    There are the following modes of interaction between the user and the computer: batch and interactive (query, dialogue). The computers themselves can operate in various modes: single and multiprogram, time sharing, real time, teleprocessing. In this case, the goal is to meet the needs of users in the maximum possible automation of solving various problems.

    Batch mode was most widespread in the practice of centralized solution of economic problems, when a large proportion of the tasks involved analyzing the production and economic activities of economic objects at different levels of management.

    The organization of the computational process in batch mode was built without user access to a computer. Its functions were limited to the preparation of initial data on a complex of information-related tasks and their transfer to the processing center, where a package was formed, including a task for a computer for processing, programs, initial, normative-pricing and reference data. The package was entered into a computer and implemented in an automatic mode without the participation of the user and operator, which made it possible to minimize the execution time for a given set of tasks. In this case, the operation of the computer could take place in a single-program or multi-program mode, which is preferable, since the parallel operation of the main devices of the machine was ensured. Batch mode is currently being implemented for email.

    Interactive mode provides for direct interaction of the user with an information-computing system, may be in the nature of a request (usually regulated) or a dialogue with a computer.

    The query mode is necessary for users to interact with the system through a significant number of subscriber terminal devices, including those remote at a considerable distance from the processing center. This need is due to the solution of operational tasks, which are, for example, marketing tasks, tasks of personnel reassignment, tasks of a strategic nature, etc. In such cases, a computer implements a queuing system, operates in a time-sharing mode, in which several independent subscribers (users) with the help of input-output devices have direct and practically simultaneous access to a computer in the process of solving their problems. This mode makes it possible to provide each user with time to communicate with the computer in a differentiated manner in a strictly established manner, and turn it off after the end of the session.

    Among the most important characteristics of economic information, reflecting the requirements for it, can be named correctness, value, reliability, accuracy, relevance, completeness.

    They say that information is correct if it has such a form and content that ensure its unambiguous perception by all consumers.

    Value is understood as a property of information that reflects the extent to which it contributes to the achievement of the goals and objectives of its consumer (for example, a control system).

    The property of reliability connects the content side of information as a reflection of some objective reality with reality itself, and accuracy is determined by the measure of their proximity (distance) from each other.

    The concept of the relevance of information implicitly implies the possibility of changes over time in the state of the object to which it belongs. The relevance of information reflects its adequacy to the actual state of the reference object.

    Completeness of information reflects its sufficiency or inadequacy for making management decisions.

    1.3.4. Technology and methods of processing economic information

    The economic information system in its composition resembles an enterprise for the processing of data and the production of output information. As in any production process, in the EIS there is a technology for converting source data into result information. The concept of technology is defined as a system of interrelated methods of processing materials and methods of manufacturing products in the production process.

    Information technology (IT) is understood as a system of methods and techniques for collecting, accumulating, storing, searching and processing information based on the use of computer technology.

    An ordered sequence of interrelated actions that are performed from the moment information appears until the result is obtained is called a technological process.

    Thus, the concept of information technology is inseparable from the specific environment in which it is implemented, i.e. from the technical and software environment. It should be noted that information technology is a fairly general concept and as a tool can be used by various users, both non-professionals in the computer field and developers of new IT.

    The functional part of the EIS is always associated with the subject area and the concept of information technology. Generally speaking, technology as a certain process is present in any subject area. So, for example, the technology of issuing a loan by a bank may have its own characteristics depending on the type of loan, type of collateral, etc. In the course of these technological processes, a bank employee processes the relevant information.

    The solution of economic and managerial problems is always closely related to the performance of a number of operations to collect the information necessary for solving these problems, to process it according to some algorithms and to issue it to the decision-maker (DM) in a convenient form. It is obvious that decision-making technology has always had an information basis, although data processing was carried out manually. However, with the introduction of computer technology in the management process, a special term information technology appeared.

    In order to distinguish terminologically the traditional technology for solving economic and managerial problems, we will introduce the term subject technology, which is a sequence of technological stages for modifying primary information into resultant information. For example, accounting technology assumes the receipt of primary documentation, which is transformed into the form of an accounting entry. The latter, changing the state of analytical accounting, leads to a change in the accounts of synthetic accounting and then the balance sheet.

    IT differ in the type of information processed (Figure 2.1), but can be combined into integrated technologies.

    Fig. 2.1. IT classification depending on the type of information processed

    The selection proposed in this figure is somewhat arbitrary, since most of these ITs allow supporting other types of information as well. So, in word processors, the ability to perform primitive calculations is provided, table processors can process not only digital, but also text information, and also have a built-in graphics generation apparatus. However, each of these technologies is still more focused on processing information of a certain type.

    It is obvious that the modification of the elements that make up the concept of IT makes it possible to form a huge number of them in various computer environments.

    Today we can talk about supporting IT and functional IT (FIT).

    Supporting IT - information processing technologies that can be used as a toolkit in various subject areas for solving various problems. Information technologies of the supporting type can be classified in relation to the classes of tasks for which they are oriented. The supporting technologies are based on completely different platforms, which is due to the difference in the types of computers and software environments, therefore, when they are combined on the basis of subject technology, the problem of system integration arises. It consists in the need to bring various IT to a single standard interface.

    Functional IT is a modification of supporting IT in which a particular subject technology is implemented. For example, the work of an employee of the credit department of a bank using a computer necessarily involves a set of banking technologies for assessing the creditworthiness of a borrower, forming a loan agreement and urgent obligations, calculating a payment schedule, and other technologies implemented in some information technology: a DBMS, a word processor, etc. The transformation of a supporting information technology in its pure form into a functional one (the modification of a commonly used toolkit into a special one) can be done both by a specialist designer and by the user himself. This depends on how complex the transformation is, i.e. on the extent to which it is accessible to the user himself, the economist. These possibilities keep expanding, as supporting technologies become more user-friendly from year to year. Thus, the arsenal of an employee of the credit department may include both supporting technologies with which he constantly works, such as text and table processors, and special functional technologies: table processors, DBMS and expert systems that implement subject technologies.
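    One of the banking subject technologies mentioned above is the calculation of a payment schedule. Below is a hedged sketch of an equal-payment (annuity) schedule; the annuity formula and the loan parameters are assumptions for illustration, not the method of any particular bank.

```python
# Sketch: a loan payment schedule with equal (annuity) payments.
# The rate, term and amount are invented; real bank products differ.

def annuity_schedule(principal: float, annual_rate: float, months: int):
    i = annual_rate / 12                                 # monthly interest rate
    payment = principal * i / (1 - (1 + i) ** -months)   # fixed monthly payment
    balance, rows = principal, []
    for m in range(1, months + 1):
        interest = balance * i
        repayment = payment - interest
        balance -= repayment
        rows.append((m, round(payment, 2), round(interest, 2), round(balance, 2)))
    return rows

for row in annuity_schedule(100_000, 0.12, 6):
    print(row)  # (month, payment, interest part, remaining balance)
```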

    Subject technology and information technology influence each other. For example, the presence of plastic cards as a carrier of financial information fundamentally changes the subject technology, providing opportunities that were simply absent without this carrier. On the other hand, subject technologies, by filling IT with specific content, focus them on quite specific functions. Such technologies can be typical or unique, depending on the degree of unification of the technology for performing these functions.

    As an example, we can cite the banking technology of working with card index No. 3, which contains documents received for processing and not executed due to the closure of a personal account for reasons of financial control. In this case, the account is closed first. Then, if information technology is used, this record is marked with the number of the filing cabinet so that the rest of the documents that reduce the account balance would fall into this filing cabinet. In the structure of the operational and accounting department of the bank, the first and second functions can be performed either by one performer or by two different tellers. In addition, the processes for performing these functions can be separated in time. Thus, the mark in the personal account, made when it was temporarily closed by one clerk, is used by another clerk in the process of processing incoming documents for payment. At the same time, this note can be made by the operator who is the responsible executor for this account (opens, closes accounts, ensures account transactions, interest accrual, etc.).

    Classification of IT by the type of user interface (Figure 2.2) allows us to distinguish between the system and the application interface. While the latter is associated with the implementation of some functional IT, the system interface is the set of methods for interacting with a computer implemented by the operating system or its shell. Modern operating systems support command, WIMP and SILK interfaces. At present, the problem of creating a public interface has been posed.

    Fig. 2.2. IT classification by user interface type

    The command interface is the simplest. It provides a system prompt on the screen to enter a command. For example, on MS-DOS the prompt looks like C:\>, and on UNIX it is usually a dollar sign.

    The WIMP interface stands for Windows, Image, Menu, Pointer. The screen displays a window containing images of programs and a menu of actions; the pointer is used to select one of them.

    The SILK interface stands for Speech, Image, Language, Knowledge. When using the SILK interface, a speech command moves the display from one search image to another along semantic connections.

    The public interface will combine the best of the WIMP and SILK interfaces. It is assumed that when using the public interface, you will not need to understand a menu: the screen images will unambiguously indicate the further path, and moving from one search image to another will take place along semantic links.

    Operating systems (OS) are divided into single-program, multi-program and multi-user. Single-program operating systems include, for example, MS-DOS. Multi-program operating systems, such as UNIX (XENIX), Windows (starting from version 3.1), DOS 7.0, OS/2, etc., allow multiple applications to run simultaneously; they differ in the time-sharing algorithm. Whereas single-program systems operate either in batch mode or in interactive mode, multi-program systems can combine these modes. Thus, such systems provide both batch and conversational technologies.

    Multi-user systems are implemented by network operating systems. They provide remote networking technologies as well as batch and conversational technologies for workplace communication. All three types of information technologies are most widely used in economic information systems.

    Most of the supporting and functional IT can be used by a managerial worker without additional intermediaries (programmers). In this case, the user can influence the sequence of application of certain technologies. Thus, from the point of view of participation or non-participation of the user in the process of performing functional IT, they can all be divided into packaged and interactive.

    Economic problems solved in batch mode are characterized by the following properties:

      the algorithm for solving the problem is formalized, the process of solving it does not require human intervention;

      there is a large amount of input and output data, a significant part of which is stored on magnetic media;

      the calculation is performed for most records of the input files;

      long time for solving the problem is due to large amounts of data;

      regulation, i.e. tasks are solved with a specified frequency.

    Dialogue mode is not an alternative to batch mode but rather its development: whereas batch mode reduces user intervention in the process of solving a problem, dialogue mode assumes the absence of a rigidly fixed sequence of data-processing operations (unless that sequence is dictated by the subject technology).

    A special place is occupied by network technologies that ensure the interaction of many users.

    Information technologies differ in the degree of their interaction with each other (Fig. 2.3). They can be implemented by various technical means: diskette and network interaction, as well as using various concepts of data processing and storage: distributed information base and distributed data processing.

    Fig. 2.3. Classification of IT according to the degree of their interaction

    User Interface Standard for Conversational IT

    The user interface includes three concepts: communication between the application and the user, communication between the user and the application, and the language of communication. The language of communication is determined by the developer of the software application. The properties of the interface are concreteness and clarity. The previously most common command interface had a number of shortcomings (numerous commands, lack of a standard across applications, etc.), which limited its range of application. To overcome these shortcomings, attempts were made to simplify it (e.g., Norton Commander (NC)). However, the real solution to the problem was the creation of a graphical shell for the operating system. Nowadays, almost all common operating systems use a graphical interface. An example is the interface developed at Xerox's Palo Alto Research Center for Apple Macintosh computers. A little later, a graphical shell called Microsoft Windows was developed, which implements WIMP technology and meets the CUA standard. Its innovations were the use of the mouse, the choice of commands from a menu, separate windows for programs, and the use of icons to designate programs.

    The user-friendly interface and rich set of possibilities make Windows the optimal system for everyday work. Windows applications share the same interface, so this consistency minimizes the learning curve for any Windows application. The market launch of Windows 95 further simplified the user experience, as the interface became even simpler and better documented and included built-in communication capabilities.

    Some of the most common information technologies

    The most common computer technologies are text data editing, graphic and tabular data processing.

    Word processors (or editors) are used to work with text.

    By now, many word processors have been developed. In general, they have the same purpose, but the capabilities they provide and the means of implementing them differ. The same goes for graphics processors and spreadsheet processors.

    Among word processors for Windows, as the most common environment, we can single out Write and Word. The technology of their use is based on the WIMP interface, but the capabilities of the Word processor have been significantly expanded, and to some extent it can be considered a desktop publishing system.

    What features do word processors provide? These include typing text, storing it on computer media, and viewing and printing it. Most processors have functions for checking spelling, selecting fonts and sizes, centering headings, paginating text, printing in one or more columns, inserting tables and figures into the text, using page templates, working with blocks of text, and changing the structure of a document.

    For quick viewing of the text, it can be assigned the status of a draft, and also the image scale can be changed. Navigating through the text is simplified by using bookmarks.

    With the help of formatting tools, you can create the appearance of the document, change the style, underline, italicize, resize characters, select paragraphs, align them left, right, center, and frame them.

    Before printing the document, you can preview it, check the text, select the paper size, and set the number of copies to print.

    Repeating sections of text, for example, an appeal in a letter or final words, can be designated as autotext, given a name. In the future, instead of this text, it is enough to indicate its name, and the word processor will automatically replace it.

    The need to insert graphs, charts, diagrams, pictures and labels into text or a document created the need for graphics processors. Graphics processors are tools that allow you to create and modify graphic images using the appropriate information technologies:

      commercial graphics;

      illustrative graphics;

      scientific graphics.

    Information technologies of commercial graphics provide the display of information stored in spreadsheet processors, databases and individual local files in the form of two- or three-dimensional graphs such as a pie chart, bar graph, line graphs, etc.

    IT for illustrative graphics makes it possible to create illustrations for various text documents in the form of regular structures, i.e. various geometric shapes (so-called vector graphics), and irregular structures, i.e. freehand user drawings (raster graphics). Processors that implement illustrative raster graphics allow the user to select the thickness and color of lines, a fill palette, a font for writing and overlaying text, and previously created graphic images. In addition, the user can erase, cut and move parts of a drawing. These tools are implemented, for example, in PaintBrush. There are also ITs that allow images to be viewed in slide mode, with special effects and animation (CorelDRAW, Storyboard, 3D Studio).

    IT scientific graphics are designed to serve the tasks of cartography, the design of scientific calculations containing chemical, mathematical and other formulas.

    Most graphics processors conform to the WIMP user interface standard. The panel contains a menu of actions and bars of tools and colors. The toolbar consists of a set of graphic symbols required to construct almost any drawing. The color bar contains the color gamut available on the computer monitor.

    Tabular documents make up the majority of the workflow of an enterprise of any type. Therefore, tabular IT is especially important in the creation and operation of EIS. A set of software tools that implement the creation, registration, storage, editing, processing of spreadsheets and their issuance for printing is commonly called a spreadsheet processor. A spreadsheet is a two-dimensional array of rows and columns located in the computer's memory.

    Spreadsheet processors such as SuperCalc, VisiCalc, Lotus 1-2-3 and Quattro Pro have become widespread. The Excel processor was created for Windows; the technology of working with it is similar to working with any Windows application using the WIMP interface.

    The spreadsheet allows you to solve most financial and administrative tasks, such as payroll and other accounting tasks; forecasting sales, market growth and income; analysis of interest rates and taxes; preparation of financial declarations and balance sheets; keeping accounting books for recording payments; estimate calculations; accounting of money checks; budgetary and statistical calculations.

    The basic unit of a spreadsheet is the cell, located on the named worksheet where it resides. The intersection of a row and a column is called a cell or field. There are two options for addressing cells: absolute and relative. Absolute addressing is most commonly used: the cell address (identifier) is a letter indicating the column and a digit indicating the row number, both of which are visible on the worksheet. With relative addressing, the signed increment to the desired cell is shown in the upper status line. The bottom line of the worksheet gives an explanation of the selected menu action. The upper part contains the action menu, the toolbar and the totals line, where all performed actions are reflected.

    Column width and row height are given by default. However, it is possible to format a cell, column, row or sheet, and to change the style of the text, which improves the appearance of the document without using a text editor.

    Data in the form of numbers, text or formulas is entered into the cell marked with the text cursor. To indicate a block of cells, it is enough to specify the address of the upper-left cell of the block diagonal and the address of the lower-right cell (or vice versa), with a period or colon between them. A block can also be set by selection.
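    The absolute "letter plus digit" addressing described above can be illustrated with a small parser; the sketch below assumes single-letter column names, which is a simplification.

```python
# Sketch: converting an absolute cell address like "B3" into row/column indices.
# Assumes a single-letter column name for simplicity.

def parse_address(address):
    column = ord(address[0].upper()) - ord("A") + 1  # "A" -> 1, "B" -> 2, ...
    row = int(address[1:])                           # the digits give the row number
    return row, column

print(parse_address("B3"))  # (3, 2): third row, second column
```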

    Editing tables allows you to copy, delete, clear a cell, block, sheet, and many other functions listed in the Edit and Paste action menus. You can insert a picture, graph, diagram, or any other object prepared by another program into a table using OLE technology.

    Most spreadsheets have tools for creating graphs and charts, and tools for editing them and inserting them in the right place on the sheet. In addition, they have a large number of built-in functions: mathematical, statistical and others. This greatly simplifies calculations and expands the range of applications. The user is given the opportunity to redefine the toolbar and the view of the worksheet, change the scaling, and enable scrollbars, switches and menus. Service functions of the Excel spreadsheet processor allow you to check the spelling of text and protect data from reading or writing. It is possible to create dialog boxes or refer to dynamic libraries. Note that Excel has a macro authoring tool, Visual Basic. It is an object-oriented programming language. Its difference from, for example, C++ or Pascal is that in Visual Basic it is not possible to create new types of objects or generate descendants of existing ones. However, the user receives a large set of ready-made objects: workbooks, sheets, cells, diagrams, etc.

    All table processors allow you to create databases and provide a convenient means of working with them.

    There is one file type in Microsoft Excel 5.0, the workbook, which consists of worksheets, chart sheets and macros; all the sheets are attached to the workbook. This approach simplifies work with several documents thanks to quick access to each sheet through the tabs at the bottom, and allows you to work with sheets combined into a group, for example a group of index cards for a product. Moreover, if a group of actions is performed on one sheet, these actions are automatically repeated on all sheets of the group, which simplifies the design of several sheets of the same structure. Three-dimensional references allow you to create a summary document based on data from multiple sheets without entering cumbersome formulas with external links. The PivotTable Wizard microtechnology allows you to select the required data from a document and present it as a pivot table, changing its structure and appearance, adding summary rows, grouping and sorting. The workbook can include information about the topic, the author and keywords, which can also be used when searching for a file on disk or when determining its purpose.
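    The idea behind the multi-sheet summary references and pivot tables described above can be imitated in plain code. The sketch below is only an analogy: the sheet names, cell addresses and sales records are invented, and no actual Excel file is involved.

```python
# Sketch: summing the same cell across several "sheets" and building a
# pivot-like summary (illustrative data only).

sheets = {
    "Jan": {"B2": 100, "B3": 40},
    "Feb": {"B2": 120, "B3": 35},
    "Mar": {"B2": 90,  "B3": 50},
}

# Analogue of a reference spanning several sheets: sum cell B2 over Jan..Mar
total_b2 = sum(sheet["B2"] for sheet in sheets.values())
print(total_b2)  # 310

# Pivot-like summary: group sales records by product and total the quantities
sales = [("milk", 10), ("bread", 4), ("milk", 7)]
pivot = {}
for product, qty in sales:
    pivot[product] = pivot.get(product, 0) + qty
print(pivot)  # {'milk': 17, 'bread': 4}
```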

    When executing all functions in the Excel processor, you can use a multi-window system that allows you to perform parallel actions. All objects created by the user (generated tables, pivot tables, macros, database selections, charts and graphs) can be saved to disk as a file or printed.

    At one workplace, a user, as a rule, deals with different types of information. Using a separate software tool to process each type of data complicates the technological process and makes it difficult to transfer data between tools. Therefore, integrated packages first appeared that combined various IT: text, spreadsheet and graphics processors and a database management system, such as Framework, Symphony, etc. The Works-2 set of technologies was developed for the Windows shell. Their purpose is to facilitate the movement of information between different applications, the parts of a common package. Later, three-dimensional graphics, an information manager, electronic document recognition systems and e-mail were added to integrated packages. One such package is Novell PerfectOffice 3.0 [23] for Windows. It includes: a modern word processor (WordPerfect 6.1); a spreadsheet with the ability to use a database and build graphs and diagrams (Quattro Pro 4.1); a program for creating slide shows and presentation graphics, similar in capabilities to CorelDRAW (Presentations 3.0); a personal information manager (InfoCentral 1.1); an electronic document distribution system (EYY standard), which allows documents to be moved around the network and viewed even where PerfectOffice is not installed (Envoy 1.0a); and a scheduling tool (GroupWise 4.1 Client), which is used for group work with information and implements built-in communications and the use of e-mail.

    In the domestic development, the SKAT electronic office (complex trade automation system), built in the Lotus Notes for Windows system, integrates a database management system, e-mail, information security tools and application development tools: text and graphic editors and spreadsheets. The SKAT package implements the following subsystems: warehouse of components, warehouse of finished products, invoices, contracts and other documents, purchase orders, list of firms, price list, reference books, system setup, and documentation.

    Digital office LinkWorks provides centralized data storage based on relational DBMS and document management in the framework of client-server network technology. This integrated package, in addition to a relational database, contains text, graphics and table processors, which, interacting with each other, implement an object-oriented approach. The latter consists in the fact that the user works with the same objects as before, before purchasing this package (contracts, invoices, price lists).

    The package is mobile and works in various OS environments; it provides interaction with global systems (via TCP/IP or DECnet) and e-mail.

    Information network technologies

    In the 1960s, the first computer networks appeared. In fact, they marked the beginning of a kind of technical revolution, comparable to the appearance of the first computers, since an attempt was made to combine the technology of collecting, storing, transferring and processing information on a computer with communication technology.

    One of the first networks that influenced their further development was ARPA, created by fifty US universities and firms. Currently, it covers the entire territory of the United States, part of Europe and Asia. The ARPA network has proven the technical feasibility and economic feasibility of developing large networks for more efficient use of computers and software.

    In the 1960s, the international EIN and Euronet networks were developed and implemented in Europe, and national networks emerged later. In 1972, the IIASA network was put into operation in Vienna; by 1979 it had been joined by 17 European countries, the USSR, the USA, Canada and Japan. It is designed to carry out fundamental work on energy, food, agriculture, health care, etc. In addition, thanks to the new technology, the network has allowed all national institutions to develop links with each other.

    In the 1980s, a statistical information teleprocessing system (STOSI) was put into operation, serving the Main Computing Center of the Central Statistical Administration of the USSR in Moscow and republican computing centers in the union republics.

    Currently, there are more than 200 registered global networks in the world, 54 of which were created in the USA, 16 in Japan.

    With the advent of microcomputers and personal computers, local computer networks arose. They made it possible to raise the management of a production facility to a qualitatively new level, increase the efficiency of using computers, improve the quality of processed information, implement paperless technology, and create new technologies. The combination of LAN and global networks has opened access to world information resources.

    All computers connected to the network are divided into main and auxiliary ones. The main computers are subscriber computers (clients); they carry out all the necessary computing work and determine the resources of the network. Auxiliary computers (servers) are used to transform and transfer information from one computer to another via communication channels and switching machines (host computers). Higher requirements are imposed on the quality and power of servers, while any PC can act as a host machine.

    A client is an application that sends a request to a server. It is responsible for processing and displaying information and for transmitting requests to the server. Any computer can be used as a client computer.

    A server is a personal or virtual computer that performs the functions of servicing the client and allocating system resources: printers, databases, programs, external memory, etc. A network server supports the functions of a network operating system, while a terminal server supports the functions of a multiuser system. A database server handles database queries in multiuser systems. It is a means of solving network problems in which local networks are used for joint processing of data, and not just for organizing the collective use of remote external devices.

    A host computer is a computer installed at a node of the network that handles switching within the network. The switching network is formed by many servers and host computers connected by physical communication channels, which are called backbone (trunk) channels. Coaxial and fiber-optic cables and twisted-pair cables are used as trunk channels.

    According to the method of information transmission, computer networks are divided into channel switching networks, message switching networks, packet switching networks and integral networks.

    Circuit switching networks appeared first. For example, to transmit a message between clients B and E (Fig. 2.4), a direct connection is formed from the channels of one of the groups: 3, 5, 7; 1, 2, 4, 6; 1, 2, 5, 7; 3, 4, 6. This connection must remain unchanged throughout the session. The ease of implementation of this method of transferring information entails its disadvantages: low channel utilization, high cost of data transmission, and increased waiting time for other clients.

    Fig. 2.4. An example of a computer network: A, B, C, D, E, F - subscriber points; KM - switching (communication) machines; 1-7 - trunk channels
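
    To make the circuit-switching idea concrete, here is a minimal Python sketch. The route lists loosely follow the channel groups named above for Fig. 2.4; the function names and the pre-occupied channel are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of circuit switching: a session reserves a whole chain of
# trunk channels and holds it until the session ends. Channel numbers and
# routes are illustrative, loosely following Fig. 2.4.
ROUTES_B_TO_E = [[3, 5, 7], [1, 2, 4, 6], [1, 2, 5, 7], [3, 4, 6]]

def open_circuit(routes, busy_channels):
    """Return the first route whose channels are all free, marking them busy."""
    for route in routes:
        if not any(ch in busy_channels for ch in route):
            busy_channels.update(route)   # the whole chain is reserved
            return route
    return None                           # caller must wait (a key drawback)

def close_circuit(route, busy_channels):
    busy_channels.difference_update(route)

busy = {5}                    # channel 5 is already in use by someone else
circuit = open_circuit(ROUTES_B_TO_E, busy)
print(circuit)                # -> [1, 2, 4, 6]; held for the whole session
close_circuit(circuit, busy)  # released only when the session ends
```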

    In message switching, information is transmitted in portions called messages. A direct connection is usually not established; transmission proceeds hop by hop, starting as soon as the first channel is released, and so on until the message reaches the addressee. Each server receives the message, assembles and checks it, determines the route, and forwards it. The disadvantages of message switching are the low data transfer rate and the impossibility of a dialogue between clients, although the transmission cost is reduced.

    In packet switching, the exchange is performed with short packets of a fixed structure. A packet is a part of a message that conforms to a certain standard. The small packet length prevents blocking of communication lines and does not allow queues to grow in switching nodes. This ensures fast connections, low error rates, reliability, and efficient network utilization. However, packet transmission raises a routing problem, which is solved by software and hardware methods. The most common methods are fixed routing and shortest-queue routing. Fixed routing assumes a route table in which the route from one client to another is fixed; this is easy to implement but leads to uneven network load. The shortest-queue method uses several tables in which the channels are prioritized. The priority is a function that is the reciprocal of the distance to the addressee. Transmission starts on the first free channel with the highest priority. With this method, the packet transmission delay is minimal.
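
    A minimal sketch of the shortest-queue idea described above, assuming a priority equal to the reciprocal of the remaining distance; the channel list and distances are hypothetical.

```python
# Shortest-queue routing sketch: each outgoing channel has a priority equal
# to the reciprocal of the remaining distance to the addressee; the packet
# goes out on the free channel with the highest priority.
channels = [
    # (channel id, distance to addressee via this channel, currently busy?)
    (1, 400, False),
    (2, 250, True),    # shortest path, but the channel is occupied
    (3, 300, False),
]

def pick_channel(channels):
    free = [(cid, dist) for cid, dist, busy in channels if not busy]
    if not free:
        return None                       # packet waits in the node queue
    # priority = 1 / distance, so the smallest remaining distance wins
    return max(free, key=lambda c: 1.0 / c[1])[0]

print(pick_channel(channels))             # -> 3
```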

    Software and hardware routing tools have been developed. A repeater is the simplest device for connecting similar LANs; it relays all received packets from one LAN to another. A communication device that allows a LAN to be connected to networks with the same or different signaling systems is called a bridge. A router, a communication device similar to a bridge, transmits packets in accordance with certain protocols and connects LANs at the network level. A gateway is a device for connecting a LAN to a global network.

    Networks providing circuit, message and packet switching are called integral networks. They connect several switching networks. Some of the channels of an integral network are used exclusively for direct connections. Direct channels are created for the duration of a communication session between different switching networks; at the end of the session, the direct link splits back into independent trunk links. An integral network is effective if the volume of information transmitted through direct channels does not exceed 10-15%.

    When developing computer networks, the problem arises of coordinating the interaction of client computers, servers, communication lines and other devices. It is solved by establishing certain rules called protocols. The implementation of the protocols, together with the implementation of server management, is called the network OS. Some of the protocols are implemented in software, some in hardware. The International Organization for Standardization (ISO) was created to standardize the protocols. It introduced the concept of an open systems architecture, which means the possibility of interaction between systems according to certain rules, even though the systems themselves can be built on various technical means. The basis of the open systems architecture is the concept of levels of logical decomposition of a complex information network. The system is divided into a number of subsystems, or layers, each of which performs its own functions. ISO has established seven such layers.

    The first layer, the physical layer, defines some of the physical characteristics of the channel. These are the requirements for the characteristics of connector cables (RS, EIA, X.21) and the electrical characteristics of the signal (for example, V.22 bis provides a rate of 2400 baud). In 1994, the V.32 standard was approved in Europe for operation on any channel. It defines ten procedures according to which the modem, after testing the line (initially according to the V.21 standard), selects the carrier frequencies and bandwidth (11 combinations) corresponding to the line quality, etc. By the type of signal, networks are divided into analog (V.21, etc.), such as the ordinary telephone network, and digital, for which the ISDN standard has been developed and is widespread abroad.

    The second layer, the data link layer, controls the transfer of data between two network nodes. It ensures the correct transmission of information divided into blocks; each block is supplied with a checksum. Recent developments have moved this control into hardware. A modem operating under one of the error-correction protocols requests retransmission when it detects an error. To increase the exchange rate, the data is compressed, much as in archiving and using the same algorithms - for example, the algorithm used in the ARC archiver, or the Lempel-Ziv algorithm in the PKZIP archiver. On reception the data is decompressed. The length of the transmitted block can vary depending on the quality of the channel. Currently, the V.42 bis (CCITT), MNP5 and MNP7 protocols are used.
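
    A minimal sketch of the block-plus-checksum idea described above. The checksum here is a plain CRC32 from Python's standard library, used only for illustration; real modem protocols (V.42, MNP) differ in detail.

```python
# Data link layer sketch: the sender attaches a checksum to every block, the
# receiver recomputes it and asks for retransmission on mismatch.
import zlib

def make_frame(block: bytes) -> bytes:
    crc = zlib.crc32(block).to_bytes(4, "big")
    return block + crc                      # payload followed by checksum

def receive_frame(frame: bytes):
    block, crc = frame[:-4], frame[-4:]
    if zlib.crc32(block).to_bytes(4, "big") != crc:
        return None                         # error detected -> request re-send
    return block

frame = make_frame(b"economic data block")
corrupted = b"x" + frame[1:]                # simulate a line error
print(receive_frame(frame))                 # -> b'economic data block'
print(receive_frame(corrupted))             # -> None, retransmission needed
```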

    The third layer, the network layer, provides flow control and routing. It covers data blocking and addressing agreements. One channel can transmit information from several modems to increase its utilization. This layer includes the X.25 and X.75 protocols. The IP protocol is used to connect heterogeneous networks built on various technologies.

    The fourth layer, the transport layer, is responsible for standardizing data exchange between programs located on different computers in the network (TP0, TP1).

    The fifth layer, the session layer, defines the rules for dialogue between application programs, restart procedures, and the checking of access rights to network resources.

    The sixth layer, the presentation layer, defines data formats, alphabets, and codes for representing special and graphic characters (ASCII, EBCDIC, ASN.1, X.409).

    The seventh layer, the application layer, defines user services. For example, the X.400 protocol relates to the standardization of e-mail. Well-known technical means include telex, telefax, videotex, teletex, etc. Telex supports the transmission speed standard adopted in 1988, 50 baud; teletex already provides 1200 baud.

    Standardization also extends to the logical level of the transmitted information, first of all to the form of transmitted documents. The SWIFT standard is widespread in the banking system. It defines the location and purpose of the fields in a document. A fundamental point when using this and other computer standards for documentation is the official (de jure) recognition of a document transmitted through communication channels as legally valid.

    In April 1989, the 44th session of the United Nations Economic Commission for Europe proclaimed the coming decade a period of large-scale implementation in international trade of the United Nations rules for Electronic Data Interchange for Administration, Commerce and Transport (UN/EDIFACT). On January 1, 1995, the European Union (EU) switched to the obligatory use of EDIFACT in the exchange of documents and information between EU government agencies operating in English, French, German and Spanish. In 1993, during negotiations with the European Bank for Reconstruction and Development (EBRD), the Central Bank of the Russian Federation found that SWIFT alone was insufficient, since working with European banks requires a means of communication common to all participants. EDIFACT, as such a tool, is a structured language for describing various types of commercial activity. Using the elements and segments of standard information messages, one can compose a description of any business document, form its electronic representation and transmit it to the subscriber. The received message is expanded into its usual form and can be printed as a hard copy of the document. The use of this scheme reduces circulation costs in trade by 30%.

    In Russia, in August 1994, a government decree (No. 540) established the creation of an efficient trade center using international standards and means of communication, at a cost of $1 million. Further regional centers will be created on the basis of partial contributions from regional administrations, entrepreneurs of the region, and banks financing foreign trade operations. The leading organizations for the dissemination of EDIFACT in Russia are V/O "InformVES", Roskominform, the Central Bank of the Russian Federation, the State Customs Committee, the Association of Users of Electronic Information Transmission, the Ministry of Transport, the Russian Academy of Sciences, etc.

    Each layer solves its own problems and provides services to the layer above it. The rules for the interaction of different systems at the same layer are called a protocol; the rules for the interaction of neighboring layers within one system are called an interface. Each protocol must be transparent to adjacent layers. Transparency is the ability of information, however encoded, to be transferred and understood by the interacting layers.

    Networks are divided into public, private, and commercial. According to ISO recommendations for the physical layer, the following classes of public networks are defined by length: up to 1,000 km - medium; up to 10,000 km - long; up to 25,000 km - the longest terrestrial; up to 80,000 km - trunk lines via a satellite; up to 160,000 km - international trunk lines via two satellites.

    Local networks are divided into centralized and peer-to-peer. Centralized networks use a file server, workstations do not communicate with each other directly, and the number of users exceeds ten. In peer-to-peer networks, each node can act both as a workstation and as a file server; workstations can be combined and databases can be shared. Such networks are inexpensive, but the number of users is small. The most common local network operating systems are: UNIX, for creating medium and large networks with hundreds of users; NetWare 3.11, for medium networks of 20 to 100 users within one building; VINES, for creating large distributed LANs; and LAN Manager, for medium and large networks with 25 to 200 users.

    No less widespread is the technology of computer-based sending and processing of information messages, which provides prompt communication between the leadership of working groups, employees, scientists, business people and everyone else. This technology is called e-mail.

    E-mail is a special software package for storing and forwarding messages between computer users. E-mail provides a paperless postal service. It is a system for collecting, registering, processing and transmitting any information (text documents, images, digital data, sound recordings, etc.) over computer networks and performs such functions as editing documents before transmission and storing them in a special bank; forwarding correspondence; checking and correcting transmission errors; issuing confirmation that correspondence has been received by the addressee; receiving and storing information in one's own "mailbox"; and viewing received correspondence.

    "Mailbox" is a specially organized file for storing correspondence. The mailbox consists of two baskets: sending and receiving. Any user can refer to another user's retrieval basket and dump information there. But he cannot look at it. The mail server collects information from the mailing basket for sending to other users. Each mailbox has a network address. To forward correspondence, you can establish a connection with the addressee's mailbox in on-line mode. For example, in the SpnnlMail network, a user, having registered and received a certain status, through telephone channels can enter the nearest network node and communicate with the necessary subscribers in on-line mode. This method is inconvenient, since it is necessary to wait until the recipient's computer is turned on. Therefore, the more common method is to designate individual computers as post offices, called mail servers. At the same time, all recipient computers are connected to the nearest mail server, which receives, stores and forwards postal items on the network until they reach the addressee. Sending to the addressee is carried out as soon as he gets in touch with the nearest mail server in the off-line mode. An example is the Relcom network. The user will send the message along with the address via a telephone channel via a modem to the nearest mail server in on-line mode. The message is registered, put into a queue, and through the first free channel will be transmitted to the next mail server until the addressee picks it up in his mailbox. Mail servers implement the following functions: ensuring fast and high-quality delivery of information, managing the communication session, checking the reliability of information and correcting errors, storing information on demand and notifying the user about the correspondence received at his address, registering and accounting for correspondence, checking passwords when requesting correspondence, support reference books with user addresses.

    Messages can be sent to users in individual, group or general mode. In individual mode, the addressee is a single user's computer and the correspondence contains his address. In group mode, correspondence is sent simultaneously to a group of addressees. This group can be formed in different ways, and mail servers have group recognition facilities: the address could be, for example, "everyone interested in this topic", or a mailing list. In general mode, correspondence is sent to all users who own mailboxes. The last two modes can be used to organize teleconferences and electronic bulletin boards. To avoid overloading mailboxes, mail servers store address directories containing filters for group and general messages.

    E-mail supports word processors for viewing and editing correspondence, information retrieval systems for determining the addressee, means for maintaining a list of information sent, means for providing advanced types of services: fax, telex, etc.

    E-mail can also be organized within an enterprise's local network to ensure internal exchange of information. An example is cc:Mail from Lotus Development (a division of IBM), which serves to automate intra-office operations. It runs on DOS, Windows, OS/2, Macintosh and UNIX, and can exchange messages with other e-mail systems over global computer networks: cc:Mail can be connected through any channels, including satellite, via the X.25 and X.75 protocols to MHS, Sprint, Relcom, MCI Mail, Profs, AT&T Easylink, 3Com Mail, SoftSwitch and other networks.

    Whereas previously independent e-mail packages were used, the tendency now is to include e-mail in integrated packages; for example, Novell's electronic office for Windows, PerfectOffice 3.0 for Windows 95, entered the domestic market in August 1995. It and most of its applications contain built-in communication capabilities.

    Most global computer networks support e-mail. Modern integrated packages use object-oriented technology, and the user's work is reduced to working with a menu. The mailbox is complemented by a trash can, into which the user can drop unnecessary correspondence; if necessary, he can retrieve it from there or delete it permanently.

    E-mail is used in all business areas, reducing the time needed to arrange transactions. To expand the range of services, systems have been created for the interaction of e-mail with fax and telex networks. For example, the DECfaxMail system enables the exchange of fax messages over a telephone line with powerful e-mail systems such as Digital, cc:Mail, MS Mail, and MS Word for Windows. E-mail is also penetrating the household level, becoming a means of communication between neighbors from the same house or street and between different countries.

    Network technologies make it possible to create geosystems for access to any world repositories of information of any type.

    Distributed data processing and storage technologies

    With the use of the information technologies of computer networks, territorial distribution of production becomes possible. For the administration of the company, it makes no difference where exactly production is located: in this building, 100 m away or 10,000 km away. Problems of a quite different kind appear, such as intercontinental supply, time zones, etc. Since a planetary distribution of industrial production becomes possible, transnational companies can be created that implement world commodity exports within the firm. At the same time, the metropolis, having invested 5-7% of its turnover in the economy of another country, gets the opportunity to control 50-60% of that economy: owing to the investment of high technologies, the metropolitan country is able to influence and even control the economic and political development of the other country. For example, 80% of all international lending transactions are carried out by US banks, 75% of the foreign exchange reserves of the central banks of Western countries are held in US dollars, and 55% of international trade settlements are made in US dollars. That is, the United States pays with reproducible resources: agricultural products, information technology, scientific and technical knowledge, dollars. This becomes possible thanks to the latest network technologies and the development of communications.

    One of the most important network technologies is distributed data processing. Personal computers are located at workplaces, i.e. at the places where information originates and is used, and are connected by communication channels. This has made it possible to distribute resources across separate functional areas of activity and to shift data processing technology towards decentralization. Distributed data processing has made it possible to respond more effectively to the changing information needs of the information worker and thereby ensure the flexibility of his decisions. The advantages of distributed data processing are: a large number of interacting users performing the functions of collecting, registering, storing, transferring and issuing information; removal of peak loads from a centralized database by distributing the processing and storage of local databases across different computers; giving the information worker access to the computing resources of the computer network; and ensuring symmetric data exchange between remote users.

    The introduction of the classification of data representation models into hierarchical, network and relational ones affected the architecture of database management systems and their processing technologies. The architecture of a DBMS describes its functioning as the interaction of two types of processes: a client and a server.

    Distributed processing and distributed databases are not synonymous. In distributed processing of a database, the presentation of data, their meaningful processing and work with the database at the logical level are performed on the client's personal computer, while the database itself is maintained on the server. In the case of a distributed database, the database is located on several servers; it can be worked with from the same or other personal computers, and a network DBMS is needed to access the remote data.

    In a distributed processing system, a client can send a request to its own local database or a remote one. Remote request is a single request to one server. Several remote requests to one server are combined into a remote transaction. If separate requests of a transaction are processed by different servers, then the transaction is called distributed. In this case, one transaction request is processed by one server. A distributed DBMS allows multiple servers to process one request. Such a request is called distributed. Only distributed query processing supports the concept of a distributed database.
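
    The terminology above can be summarized in a small Python sketch that classifies a unit of work by how many servers it touches; the classification function and server names are illustrative assumptions.

```python
# Classification sketch: remote request, remote transaction, distributed
# transaction, distributed request - determined by how many servers are
# touched and whether a single request spans several servers.
def classify(requests):
    """requests: list of sets, each set = servers touched by one request."""
    servers = set().union(*requests)
    if any(len(r) > 1 for r in requests):
        return "distributed request (needs a distributed DBMS)"
    if len(servers) > 1:
        return "distributed transaction (each request served by one server)"
    if len(requests) > 1:
        return "remote transaction (several requests to one server)"
    return "remote request"

print(classify([{"S1"}]))                    # remote request
print(classify([{"S1"}, {"S1"}]))            # remote transaction
print(classify([{"S1"}, {"S2"}]))            # distributed transaction
print(classify([{"S1", "S2"}]))              # distributed request
```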

    Databases are automated repositories of promptly updated information. Whereas in the 1970s the trade was in "raw" information (data), nowadays automated analytical systems have been created that sell the results of analyzing "raw" information. Such bases are called "gray oil" (brains). For example, firms in the United States have united in the Information Industry Association, which provides 80% of the world's information services.

    Databases have been created for all areas of human activity: financial, economic, scientific and technical, electronic documentation, credit, statistical, marketing, newspaper reports, government orders, patent information, bibliographic, etc. In this case, the bases are divided into commercial and public.

    The organization of data processing depends on the way the data is distributed. There are centralized, decentralized and mixed ways of distributing data.

    Centralized data organization is the easiest to implement (Fig. 2.5). A single copy of the database is hosted on one server, and all database operations are handled by this server. Data is accessed using remote requests or remote transactions. The advantage of this method is that the database is easy to keep up to date; the disadvantages are that the size of the database is limited by the size of external memory, that all requests are directed to a single server with the corresponding communication costs and delays, and hence that parallel processing is limited. The base can become inaccessible to remote users when communication errors occur and fails completely when the central server fails.

    Fig. 2.5. Centralized data organization

    Decentralized organization of data assumes splitting the information base into several physically distributed parts. Each client uses its own database, which can be either a part of the overall information base (Fig. 2.6) or a copy of the information base as a whole (Fig. 2.7), which leads to its duplication for each client.

    Fig. 2.6. Decentralized data organization by partitioning

    Fig. 2.7. Decentralized data organization by duplication

    With data distribution based on partitioning, the database is hosted on several servers, and no copies of individual parts may exist. The advantages of this method are that most requests are satisfied by local databases, which reduces response time; data availability and storage reliability increase; the cost of fetch and update requests is lower than with centralized distribution; and the system remains partially operational if one server fails. The disadvantages are that some remote requests or transactions may require access to all servers, which increases waiting time and the cost of service, and that information about the placement of data in the various databases must be maintained. Partitioned databases are most suitable for the joint use of local and global computer networks.

    The method of duplication consists in the fact that a complete database is located in each server of the computer network. This ensures the highest reliability of data storage. Disadvantages of this method: increased requirements for the amount of external memory, complication of updating the bases, since synchronization is required in order to reconcile copies. Advantages - all requests are performed locally, which provides quick access. This method is used when the reliability factor is critical, the base is small, and the update rate is low.

    A mixed organization of data storage is also possible, combining the two methods of distribution: partitioning and duplication (Fig. 2.8). It acquires both the advantages and the disadvantages of both methods, and it becomes necessary to store information about where data is located in the network. A compromise is reached between the amount of memory for the base as a whole and for the base on each server in order to ensure the reliability and efficiency of its work; parallel processing, i.e. servicing a distributed query or transaction, is easy to implement. Despite the flexibility of the mixed method of organizing data, there remain the problems of the interdependence of the factors affecting system performance, of its reliability, and of meeting the memory requirement. The mixed method of organizing data can be used only if a network DBMS is available.

    Fig. 2.8. Mixed data organization
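
    A small sketch of the mixed organization in Fig. 2.8, assuming some fragments are partitioned across servers and others duplicated on every server; the server and fragment names are hypothetical.

```python
# Mixed data organization sketch: part of the data is partitioned, part is
# duplicated, so the system must know where each fragment lives and which
# copies an update must reach.
PARTITIONED = {"accounts_A_K": "server1", "accounts_L_Z": "server2"}
DUPLICATED  = {"currency_rates", "chart_of_accounts"}   # full copy everywhere
ALL_SERVERS = ["server1", "server2", "server3"]

def servers_for(fragment: str, prefer: str):
    """Where can a query for `fragment` be served? Prefer the local server."""
    if fragment in DUPLICATED:
        return [prefer]                     # any copy will do, use the local one
    if fragment in PARTITIONED:
        return [PARTITIONED[fragment]]      # exactly one server holds it
    raise KeyError(f"unknown fragment {fragment}")

def update_targets(fragment: str):
    """Updates to a duplicated fragment must reach every copy (synchronization)."""
    return ALL_SERVERS if fragment in DUPLICATED else servers_for(fragment, "")

print(servers_for("currency_rates", prefer="server3"))  # -> ['server3']
print(update_targets("currency_rates"))                 # -> all three servers
```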

    In databases for collective use, the database servers become the central technological link. Database server software provides the implementation of multi-user applications, centralized storage, data integrity and security. The performance of database servers is an order of magnitude higher compared to file servers used in local networks. Local area networks were created to share expensive peripheral equipment. Using a database server made it possible for many users to access the same files. This was the prerequisite for the creation of network DBMS.

    The power of file-server-based network DBMSs is no longer sufficient. In a loaded network, performance inevitably drops, and security and data integrity are compromised. The performance issue arises not because 386-class processors are too weak, but because file servers work on an "all or nothing" basis: full copies of database files are moved back and forth over the network. Problems with security and integrity arise because file servers were never designed with data integrity and disaster recovery in mind.

    Client-server technology, being more powerful, has replaced file-server technology. It combines the advantages of single-user systems (a high level of interactive support, a user-friendly interface, low price) with the advantages of larger computer systems (integrity support, data protection, multitasking).

    In the classical sense, a DBMS is a set of programs that allow you to create and maintain a database in an up-to-date state. Functionally, a DBMS consists of three parts: a kernel (database), a language, and programming tools.

    The programming tools constitute the client, or external, interface; they can include a data processor in a query language. A language is the collection of procedural and non-procedural commands supported by the DBMS. The most commonly used languages are SQL and QBE. The kernel performs all other functions included in the concept of "database processing".

    The basic idea of client-server technology is to place the servers on powerful machines and the client applications that use the language on less powerful machines. In this way the resources of the more powerful server and of the less powerful client machines are both used. Input/output to the database is based not on physical but on logical splitting of the data, i.e. the server sends clients not a complete copy of the database but only the logically necessary portions, thereby reducing network traffic. Network traffic is the flow of messages in the network. In client-server technology, client programs and their requests are stored separately from the DBMS. The server processes client requests, selects the necessary data from the database, sends it to clients over the network, updates information, and ensures the integrity and safety of the data.
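
    A Python sketch contrasting the file-server and client-server approaches described above: with a file server the whole table travels over the network and the client filters it, while a database server returns only the logically necessary rows. The table, field names and data are made up for illustration.

```python
# File server vs database server: compare how much data crosses the network.
CUSTOMERS = [  # imagine this table lives on the server
    {"id": 1, "region": "North", "balance": 120.0},
    {"id": 2, "region": "South", "balance": -40.0},
    {"id": 3, "region": "North", "balance": 310.5},
]

def file_server_query(region):
    transferred = list(CUSTOMERS)                      # full copy over the net
    result = [r for r in transferred if r["region"] == region]  # client filters
    return result, len(transferred)

def db_server_query(region):
    result = [r for r in CUSTOMERS if r["region"] == region]    # server filters
    return result, len(result)                         # only needed rows travel

_, moved_fs = file_server_query("North")
_, moved_cs = db_server_query("North")
print(moved_fs, moved_cs)   # 3 rows moved vs 2 -> less network traffic
```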

    Let's consider the main types of distributed data processing technology.

    1. Client-server technology oriented towards a stand-alone computer, i.e. both the client and the server are located on the same computer. In terms of functionality, such a system is similar to a centralized DBMS. Neither distributed processing nor a distributed DBMS is supported.

    2. Client-server technology oriented towards centralized distribution. The client gains access to the data of a single remote server; this data may only be read. Dynamic access to the data is realized through remote transactions and queries, whose number should be kept small so as not to degrade system performance.

    3. Client-server technology focused on a local area network. This technology is characterized by the following features: a single server provides access to the database; the client forms a process responsible for the meaningful processing of data, their presentation and logical access to the database; database access is slowed down as client and server are connected via LAN.

    4. Client-server technology oriented towards changing data in one place. With this technology, distributed transaction processing is implemented; the remote servers are not interconnected by a computer network, i.e. there is no coordinator server; the client can change data only in its local database; and there is a danger of a "deadly embrace" (deadlock), i.e. a situation in which task A is waiting for records locked by task B, while task B is waiting for records locked by task A. A distributed DBMS must therefore have a means of resolving conflicting requests. Data distribution follows the partitioning method.

    5. Client-server technology, focused on changing data in several places. Unlike the previous technology, there is a coordinator server that supports the protocol for transferring data between different servers. It is possible to process distributed transactions in different remote servers. This will create the prerequisites for the development of a distributed DBMS. A mixed distribution strategy is implemented by transferring copies using a DBMS.

    6. Client-server technology focused on a distributed DBMS. It provides a strategy for splitting and duplicating, allowing faster access to data. The distributed DBMS provides client independence from the server location, global optimization, distributed database integrity, distributed administration.

    In all of these technologies there are two ways of linking client applications and database servers: direct and indirect. With a direct connection, the client application communicates directly with the database server; with an indirect connection, access to the remote server is provided by means of the local database. The two methods can also be combined.

    The use of client-server technology makes it possible to transfer part of the work from the server to the client computer, which is equipped with tools for performing the user's professional duties. This technology thus allows the capabilities of the database server and the client's tools to be developed independently. The disadvantages of client-server technology are the increased requirements for the performance of the server computer, the greater complexity of managing the computer network and, in the absence of a network DBMS, the complexity of organizing distributed processing.

    The operating environment of a database server is understood as the capabilities of the computer's OS and the network OS. Each database server can run on a specific type of computer and network OS. Server operating systems include DOS 5.0, XENIX, UNIX, Windows NT, OS/2, etc. Currently, about ten servers are most commonly used, in particular SQL Server, SQLBASE Server, ORACLE Server, etc.

    Database servers are designed to support a wide variety of application types. Object-oriented tools, spreadsheets, word processors, graphics packages, desktop publishing, and other information technologies can be used to interface with the database server.

    Hypertext technology

    In 1945, V. Bush, scientific adviser to President H. Truman, having analyzed the ways of presenting information in the form of reports, projects, schedules and plans and realizing the ineffectiveness of such presentation, proposed a way of placing information based on the principle of associative thinking. On the basis of this principle, a model of a hypothetical machine, MEMEX, was developed. Twenty years later, T. Nelson implemented this principle on a computer and called the result hypertext.

    Typically, text is presented as one long string of characters that is read in one direction. Hypertext technology presents the text as multidimensional, i.e. with a hierarchical, network-type structure. The material of the text is divided into fragments. Each fragment visible on the computer screen is supplemented by numerous links to other fragments, which allow the user to refine information about the object under study and to move in one or more directions along a selected link.

    Hypertext possesses a nonlinear network form of organizing material, divided into fragments, for each of which a transition to other fragments is indicated according to certain types of links. When establishing links, one can rely on different bases (keys), but in any case, we are talking about the semantic, semantic proximity of the linked fragments. By following these links, you can read or master the material in any order, not just one. The text loses its isolation, becomes fundamentally open, new fragments can be inserted into it, indicating for them the connections with the existing ones. The structure of the text is not destroyed, and in general, hypertext does not have an a priori given structure. Thus, hypertext is a new technology for presenting unstructured freely growing knowledge. This is how it differs from other models of information presentation.

    Hypertext is understood as a system of information objects (articles), interconnected by directed links, forming a segment. Each object is associated with the information panel of the screen, on which the user can associatively select one of the links. Objects do not have to be textual, they can be graphic, musical, using animation, audio and video equipment. Hypertext processing has opened up new opportunities for mastering information that are qualitatively different from the traditional ones. Instead of searching for information by the corresponding search key, the hypertext technology allows moving from one information object to another, taking into account their semantic and semantic coherence. Information processing according to the rules of formal inference in hypertext technology corresponds to memorizing the path of movement along the hypertext network.

    Hypertext technology is focused on processing information not instead of a person, but together with a person, i.e. it becomes an authoring technology. Its ease of use lies in the fact that the user himself determines the approach to studying or creating material, taking into account his individual abilities, knowledge, qualifications and training. Hypertext contains not only information, but also the apparatus for searching it effectively. In terms of the depth of information formalization, hypertext technology occupies an intermediate position between documentary and factographic information systems.

    Structurally, hypertext consists of information material, a hypertext thesaurus, a list of main topics and an alphabetical dictionary.

    The informational material is subdivided into informational articles, each consisting of a title and a text. The title contains the subject or the name of the described object. An informational article contains traditional definitions and concepts; it should occupy one panel and be easy to scan so that the user can decide whether to read it carefully or move on to other, semantically close articles. The text included in an informational article may be accompanied by explanations, examples, documents, and objects of the real world. Quick scanning of the article's text is easier if this auxiliary information is visually distinguished from the main text, for example by highlighting or a different font.

    A hypertext thesaurus is an automated dictionary that reflects the semantic relationships between the lexical units of a descriptive information retrieval language and is designed to search for words by their semantic content. The term thesaurus was introduced in the 13th century by the Florentine B. Latini as the title of an encyclopedia; from Latin the word translates as treasure, store, wealth. A hypertext thesaurus consists of thesaurus entries. A thesaurus entry has a title and a list of the titles of related thesaurus entries, with the type of relationship indicated for each. The title of a thesaurus entry coincides with the title of an informational article and is the name of the object whose description the informational article contains. Unlike traditional descriptor thesauri, the hypertext thesaurus contains not only simple but also compound object names. Forming a hypertext thesaurus entry means indexing the text. The completeness of the links reflected in the thesaurus entry and the accuracy with which they are established ultimately determine the completeness and accuracy of the search when this hypertext article is consulted. The following types of relationship exist: species - genus, genus - species, object - process, process - object, whole - part, part - whole, cause - effect, effect - cause, etc. Through generic links the user obtains more general information, and through specific links, specific information without repetition of the general information from generic topics; thus the depth of text indexing depends on the genus-species relations. The list of related thesaurus entry titles is a local reference tool that links only to immediate relatives. A hypertext thesaurus can be represented as a network: the nodes contain textual descriptions of objects (informational articles), and the edges of the network indicate the existence of a link between objects and the type of relationship. In hypertext, the search apparatus is not divided into a thesaurus and an array of document search images, as in conventional information retrieval systems; the entire search apparatus is implemented as the hypertext thesaurus.
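
    A minimal Python sketch of the structure just described: informational articles as nodes and a thesaurus of typed links along which the reader moves freely. The article titles and link types are illustrative assumptions only.

```python
# Hypertext sketch: articles are nodes, thesaurus entries hold typed links,
# and reading is a free walk along chosen links.
articles = {
    "Economic information": "Text of the article...",
    "Accounting information": "Text of the article...",
    "Primary document": "Text of the article...",
}
thesaurus = {  # title -> list of (link type, related article title)
    "Economic information": [("genus-species", "Accounting information")],
    "Accounting information": [("species-genus", "Economic information"),
                               ("whole-part", "Primary document")],
    "Primary document": [("part-whole", "Accounting information")],
}

def follow(title: str, link_type: str):
    """Move from one article to a related one along a chosen link type."""
    for kind, target in thesaurus.get(title, []):
        if kind == link_type:
            return target
    return None

current = "Economic information"
current = follow(current, "genus-species")      # -> "Accounting information"
current = follow(current, "whole-part")         # -> "Primary document"
print(current, "-", articles[current])
```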

    The Topics List contains the titles of all help articles for which there are no Genus-Species, Part-Whole references. It is desirable that the list occupy no more than one screen panel.

    The alphabetical dictionary includes a list of the titles of all informational articles in alphabetical order.

    Manually created hypertext has been used for a long time: reference books, encyclopedias, and dictionaries equipped with a developed system of cross-references. The area of application of hypertext technologies is very wide: publishing, library work, training systems, the development of documentation, laws, reference manuals, databases, knowledge bases, etc. The most common systems are HyperCard, HyperStudio, SuperCard and Apple's QuickTime for Macintosh personal computers, LinkWay for IBM, and, among domestic systems, FLEXIS II 2.05 and the automated system for the formation and processing of hypertext (ASFOG). In most modern software products, the help system is built on menu-based hypertext technology.

    Microsoft has released the Microsoft Assistant for Word utility for creating and editing hypertext documents in HyperText Markup Language (HTML) and converting WinWord files to HTML.

    Multimedia technology

    Multimedia is an interactive technology that provides work with still images, video, animation, text and sound. One of the first tools for creating multimedia technology was hypertext technology, which provides work with textual information, images, sound, and speech. In this case, hypertext technology acted as an authoring software tool.

    The emergence of multimedia systems was facilitated by technical progress: the main and external memory of computers increased, computers acquired broad graphics capabilities, the quality of video equipment improved, laser compact discs appeared, etc.

    Television, video and audio equipment, unlike computers, deal with an analog signal. The problem therefore arose of interfacing such dissimilar equipment with a computer and controlling it. Displaying a still picture on a screen with a resolution of 512 x 482 dots (pixels) requires about 250 KB of storage, and even then the image quality is low. Software and hardware methods of data compression and decompression had to be developed; devices and techniques with compression ratios of 100:1 and 160:1 have been created, which made it possible to place about an hour of full dubbed video on one CD. JPEG and MPEG are considered the most advanced compression and scanning methods. Sound cards (Sound Blaster) and multimedia cards were developed, which implement in hardware the algorithm for converting an analog signal into a discrete one, and read-only storage devices for compact discs (CD-ROM) were connected to computers.

    In 1988, S. Jobs introduced a fundamentally new type of personal computer, NeXT, in which the basic means of multimedia systems are built into the architecture, hardware and software. It used the new powerful 68030 and 68040 central processors and a DSP signal processor for sound, images, speech synthesis and recognition, image compression and work with color. The amount of RAM was 32 MB; erasable optical disks were used, as well as standard built-in network controllers allowing connection to a network, compression and scanning methods, etc.; hard drive capacity was 105 MB or 1.4 GB.

    The technology of working with NeXT is a new step in human-machine communication. Until now, we have worked with the WIMP interface (window, image, menu, pointer). NeXT makes it possible to work with the SILK interface (speech, image, language, knowledge). The NeXT includes a multimedia electronic mail system that allows the exchange of messages such as speech, text, graphic information, etc.

    Many operating systems support multimedia technology: Windows 3.1, DOS 7.0, OS/2. The Windows 95 operating system includes hardware multimedia support, which allows users to play back digitized video, audio and animated graphics and to connect a variety of musical synthesizers and instruments. Windows 95 has a special version of the file system to support high-quality audio, video and animation playback. Media files are stored on a CD-ROM, hard disk, or network server. Digitized video is usually stored in files with the .AVI extension, audio information in files with the .WAV extension, and audio in the form of the MIDI interface with the .MID extension. To support them, a file subsystem has been developed that transfers information from CD-ROM at an optimal rate, which is essential when playing audio and video information.

    Even this brief listing of the capabilities of multimedia technology shows that the markets for computers, software, consumer goods and production equipment are converging. One trend is the development of multimedia accelerators. A multimedia accelerator is a combination of software and hardware that adds to the basic capabilities of graphics accelerators one or more multimedia functions that would otherwise require additional devices to be installed in the computer. Multimedia functions include digital filtering and scaling of video, hardware digital compression and scanning of video, acceleration of graphics operations related to three-dimensional (3D) graphics, support for "live" video, a composite video output, and output of a TV (television) signal to a monitor. A graphics accelerator is likewise a software and hardware means of accelerating graphics operations: transferring a block of data, filling an object, supporting a hardware cursor. Microcircuitry is developing towards increasing the performance of electronic devices while minimizing their geometric dimensions: the chips that make up a sound card are now combined on a single chip the size of a matchbox, and there is no limit to this process.

    By 1991, more than 60 software packages using multimedia technology had been developed, but no common standard existed. In that year, Microsoft and IBM simultaneously introduced two standards: IBM's Ultimedia and Microsoft's MPC. Today there are standards for CD-ROM drives, Sound Blaster sound cards, the MIDI interface (a standard for connecting various musical synthesizers), the DCI interface (an interface to display drivers that allows playback of full-screen video information), the MCI interface (an interface for controlling various multimedia devices), and standards for graphics adapters. Apple, in partnership with FujiFilm, developed the first industry standard, IEEE P1394, for the FireWire bus, providing digital interfaces to many consumer products, such as video cameras, for multimedia applications.

    The emergence of multimedia systems has revolutionized education, computer-based training and many other spheres of professional activity. It has made it possible to track dynamically the individual needs of the global market, which is reflected in the trend towards small-scale production. The multimedia phenomenon democratizes scientific, artistic and industrial creativity.

    Multimedia technology is most widely used in education. Video encyclopedias have been created on many school subjects, museums, cities and travel routes, and their number continues to grow. Game-based situational simulators have been created, which has reduced learning time: the game process merges with learning, resulting in a "theater of communication" in which the learner achieves creative self-expression; a speech recognition board can be connected to the computer. Video games provide a tool for manipulating public consciousness, the negative side of which is the cult of violence. Multimedia technology creates the preconditions for the development of a "home industry", leading to a reduction in production space and an increase in labor productivity. Multimedia opens up special prospects for distance learning. Many universities are currently working on multimedia technologies (MIU, MESI, MEI, Yaroslavl State University, etc.); of particular interest is the experience of the Moscow State University of Economics, Statistics and Informatics.

    Automated workplace of a managerial worker as part of the EIS

    As a rule, the user-economist is well acquainted with the subject technology, the sequence of operations on data and the structure of their interconnections. The latter can be expressed in both computational and relational forms.

    Functional technology is a synthesis of supporting and subject technologies, carried out according to certain rules. It is a kind of data transformation environment and at the same time part of the EIS; it rests on a platform consisting of technical, software (DBMS, OS, etc.), organizational (personnel) and information parts.

    Ultimately, a user-economist or user-manager can use both individual information technologies and a set of them combined into a complex. The complex of supporting and functional information technologies that supports the goals of a managerial employee, the decision-maker, is implemented on the basis of automated workplaces (AWP). The purpose of the AWP is to provide information support for the formation and adoption of decisions aimed at achieving the goals set for the decision-maker.

    With the advent of personal computers, it became possible to install them directly at the worker's workplace and equip them with tools oriented towards the non-programmer user. A personal computer equipped with a set of professionally oriented functional and supporting information technologies and located directly at the workplace came to be called an automated workplace; its purpose is information support of the decisions being made. In other words, an AWP is a certain part of the EIS, separated out in accordance with the object's management structure and the existing system of goal distribution, and designed as an independent software and hardware complex.

    An automated workplace contains a fully functional information technology (FIT) or part of one. Which part of the FIT is assigned to a particular AWP is determined primarily by the decomposition of goals in the object's management structure. Such distribution of the FIT across AWPs must not violate the requirements of the subject technology itself. Mapping the FIT onto the management structure makes it possible to create a distributed system for solving subject problems. The distribution among the computers of the FIT participants may concern either the stored data or the processes of processing these data.

    The decision support system assumes active dialogue between the user and the EIS, taking into account the education, specifics, style and experience of the user's work.

    Usually three phases of decision making are distinguished (Fig. 2.9):

      intellectual - the study of the environment in which the decision will be made;

      design - development and evaluation of possible alternative actions;

      choice - decision making, i.e. the selection of one alternative.

    Fig. 2.9. The cycle of developing alternatives

    Decision support is always targeted and can be reflected in the form:

      a set of information that allows the user to assess the current situation and develop solutions;

      preparation of possible decisions, one of which will be made by a managerial worker;

      assessing the change in the state of the control object when making a decision, i.e. the answer to the question: "What will happen if?"

    It should be noted that in most cases only the first possibility is realized in an AWP - the preparation of information for analyzing the situation, on the basis of which the employee can carry out the analysis and then develop a management decision.

    Preparation of decisions without the direct participation of an employee is possible only in an expert system (ES), which is designed to answer the question: "How to do it?" ES is a system designed to recreate the experience and knowledge of high-level professionals and use this knowledge in the management process. Such systems are developed for operation in narrow areas of application, since their use requires large computer resources for processing and storing knowledge. The construction of expert systems is based on a knowledge base, which is based on knowledge representation models. Due to the large financial and time costs in Russian EIS, expert systems are not widely used.

    The EIS that supports the decision-making process of management personnel should be built in such a way as to support the implementation of the goals they face. One of the most common forms of EIS organization is a system of interconnected and interacting automated workstations.

    When using any information technology, you should pay attention to the availability of data protection means, programs, computer systems. Therefore, the degree of protection of automated workplaces can serve as one of the signs of their classification.

    When classifying information technologies according to the type of information carrier, paper and paperless technologies are distinguished. Paper technology uses paper media as input and output documents. Paperless technology involves the use of network technologies based on local and global computer networks, advanced office equipment, and electronic documents.

    When choosing information technology, a number of factors should be taken into account: total sales (only one out of ten packages is in demand on the market); increasing the productivity of the user (the user does only what the computer cannot do); reliability; the degree of information and computer security; required resources of memory and other devices; functional capacity (provided capabilities); ease of use; time for training; the quality of the intelligent interface; the ability to connect to a computer network; the price. Consideration should also be given to the software in use and the interface to it.

    If we take the organizational structure of management as a criterion, then we can conditionally single out the AWP of the manager, the AWP of the managerial employee of the middle and operational levels. In accordance with the principles of selective distribution of information, these persons need completely different informational support.

    The leader needs generalized, reliable and complete information in order to make correct decisions, as well as tools for analyzing and planning various areas of the enterprise's activity: economic-mathematical and statistical methods, methods of modeling and forecasting, and analysis of various areas of the enterprise. Of the supporting technologies, spreadsheet and graphics processors, word processors, e-mail and DBMS are required.

    AWPs for middle- and operational-level managers are used for making decisions and carrying out professional activities in a specific subject area: AWPs for storekeepers, tellers, bank employees, employees of insurance companies, etc. For each such area, a set of component AWPs can be defined. For example, an accountant's AWP covers all areas of accounting, but separate workplaces can be allocated for payroll settlements with personnel, fixed-asset accounting, and so on.

    The range of AWPs and the set of information technologies included in them are determined by the management structure that has developed in the institution, by the subject-area technologies, and by the distribution of duties and goals among employees. In other words, the nomenclature of AWPs is a function of the institution's management structure, while the content of an AWP is a function of the goals realized by the decision-maker. Does the subject-area technology have a decisive impact on the structure of an AWP? To answer this question, AWPs must be classified according to whether subject technologies are included in them, explicitly or implicitly, or not. Most software tools that support decision-making in a particular area include such technologies. This inevitably makes the software product less flexible and requires it to be parameterized more deeply so that it can be adapted without reprogramming and thus sold to as many customers as possible.

    A somewhat doubtful, in our opinion, advantage of the rigid inclusion of functional and supporting technologies in a software product is that a specialist in the subject area with low qualifications can use it, since the user's actions here are declarative rather than procedural. Thus, in-depth knowledge of subject technologies is not required of him: they were built into the workplace by the developer.

    However, in other products, subject technologies are classified on the basis of typification, unification for a given class of tasks and are included in the body of the EIS in the form of a certain library, the elements of which may or may not be available to various users. In this case, the elements begin to be procedural in nature, since the user himself must know at what point which IT should be used.

    Let's give one more definition of technology - presented in the design form, i.e. in the form of formalized representations (technical descriptions, drawings, diagrams, instructions, manuals, etc.), a concentrated expression of scientific knowledge and practical experience, allowing you to rationally organize any process to save labor costs, energy, material resources or social time required to implement this process.

    It seems appropriate to distinguish three main classes of technologies:

    • production technologies - ensure the optimization of processes in the field of material production of goods and services and their social distribution;
    • information technologies - are designed to improve the efficiency of processes taking place in the information sphere of society, including science, culture, education, the mass media and information communications;
    • social technologies - are focused on the rational organization of social processes.

    P.G. Kuznetsov suggested using the concept of social time, introduced by Academician V.G. Afanasyev, as a universal measure of the costs of social labor. On the basis of these ideas, the concept of social time can be proposed as a general indicator for quantitatively characterizing any type of technology. Indeed, the goal of technology is the rational organization of some production, social or information process. In this case, savings can be achieved not only in the astronomical time needed to carry out the process, but also in the material resources, energy or equipment that support it. The costs of social labor for producing and delivering these supporting means to the place where the technological process is carried out can, in turn, also be expressed as a certain amount of social time. From this follows a well-grounded conclusion: social time is a universal general indicator of any technological process.

    According to the above definition, information technology is a concentrated expression of scientific knowledge and practical experience, presented in design form, that makes it possible to rationally organize an information process so as to save labor, energy or material resources.

    Information processes are widely used in various fields of activity of modern society. They are often components of other, more complex processes - social, managerial or production processes.

    The main distinguishing feature of information technologies lies in their targeted focus on optimizing information processes, the output of which is information. As a general criterion for the effectiveness of information technology, we will use the saving of social time required for the implementation of the information process, organized in accordance with the requirements and recommendations of this technology.

    The criterion of saving social time requires, first of all, the improvement of the most massive information processes, the optimization of which should give the greatest benefit due to their wide and repeated use.

    Basic methods of processing economic information

    One of the main purposes of information technology is the collection, processing and provision of information for making management decisions. In this regard, it is convenient to consider the methods of processing economic information in terms of the phases of the life cycle of the management decision-making process: 1) diagnostics of problems; 2) development (generation) of alternatives; 3) choice of solution; 4) implementation of the solution.

    Methods of diagnosing problems provide a reliable and as complete as possible description of the problem. They include (Fig. 2.2) methods of comparison, factor analysis, modeling (economic-mathematical methods, methods of queuing theory, inventory theory, economic analysis) and forecasting (qualitative and quantitative methods). All these methods collect, store, process and analyze information and record the most important events. The set of methods used depends on the nature and content of the problem and on the time and funds allocated at the problem-setting stage.

    Fig. 2.2. Methods Used in the Problem Diagnosis Phase of the Decision Cycle
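
    As an illustration of the quantitative forecasting methods mentioned above, here is a minimal sketch (not from the source; the sales figures are hypothetical) of a simple moving-average forecast, one of the most basic quantitative techniques:

        def moving_average_forecast(series, window=3):
            """Forecast the next value as the mean of the last `window` observations."""
            if len(series) < window:
                raise ValueError("series must contain at least `window` observations")
            return sum(series[-window:]) / window

        # Hypothetical monthly sales figures
        sales = [100, 104, 101, 107, 110, 112]
        print(moving_average_forecast(sales, window=3))  # -> approx. 109.67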

    Methods of identifying (generating) alternatives are shown in Fig. 2.3. At this stage, methods of collecting information are also used, but in contrast to the first stage, which seeks answers to questions such as "What happened?" and "For what reasons?", here the question is how the problem can be solved and with the help of which management actions.

    When developing alternatives (courses of management action for achieving the set goal), methods of both individual and collective problem solving are used. Individual methods require the least time, but the resulting solutions are not always optimal. When generating alternatives, an intuitive approach or methods of logical (rational) problem solving are used. Problem-solving experts are recruited to assist the decision maker (DM) and are involved in developing alternatives (Fig. 2.4). Collective problem solving is carried out using brainstorming (Fig. 2.5), the Delphi method and the nominal group technique.

    Fig. 2.3. Methods used in the "Identify (generate) alternatives" phase of the decision cycle

    Fig. 2.4.

    Fig. 2.5.

    In a brainstorming session we are dealing with an unrestricted discussion, conducted mainly in groups of 4-10 participants. Brainstorming alone is also possible. The greater the differences between the participants (in experience, temperament and field of work), the more fruitful the result.

    Participants do not need deep or lengthy preparation or prior experience with this method. However, the quality of the ideas put forward and the time taken will show how familiar the individual participants or target groups are with its principles and basic rules. It helps if the participants have knowledge and experience in the area under consideration. The duration of a brainstorming session can range from several minutes to several hours; the generally accepted duration is 20-30 minutes.

    When using the brainstorming method in small groups, one should strictly adhere to two principles: refrain from evaluating ideas (here quantity turns into quality) and observe four basic rules: criticism is excluded, free association is encouraged, a large number of options is desirable, and combinations and improvements are sought.

    The choice of alternatives most often takes place under conditions of certainty, risk or uncertainty (Fig. 2.6). The difference between these states of the environment is determined by the amount of information and by the degree of the decision maker's knowledge of the essence of the phenomena and of the conditions for making the decision.

    Fig. 2.6. Methods Used in the Select Alternatives Phase of the Decision Cycle

    Certainty conditions are conditions for making decisions (a state of knowledge about the essence of phenomena) in which the decision maker can determine in advance the result (outcome) of each alternative offered for choice. This situation is typical of tactical, short-term decisions. In this case, the decision maker has detailed information, i.e. comprehensive knowledge of the situation in which the decision is made.

    Risk conditions are characterized by a state of knowledge about the essence of the phenomenon in which the decision maker knows the probabilities of the possible consequences of implementing each alternative. Risk and uncertainty conditions are both characterized by so-called multivalued expectations of the future state of the external environment. In this case, the decision maker must choose an alternative without having an accurate understanding of the environmental factors and their influence on the result. Under these conditions, the outcome of each alternative is a function of the environmental conditions (a utility function), which the decision maker is not always able to predict. To present and analyze the results of the selected alternative strategies, a decision matrix, also called a payoff matrix, is used.
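
    As a minimal illustration (not from the source; the figures are hypothetical), under risk conditions the decision matrix can be combined with the known state probabilities to compute the expected payoff of each alternative:

        import numpy as np

        # Payoff matrix: rows = alternatives, columns = states of the environment.
        payoff = np.array([
            [120, 60, -20],   # alternative 1
            [ 80, 70,  30],   # alternative 2
        ])

        # Known probabilities of the environmental states (they must sum to 1).
        p = np.array([0.5, 0.3, 0.2])

        expected = payoff @ p          # expected payoff of each alternative
        best = int(expected.argmax())  # alternative with the highest expected payoff
        print(expected, best)

    Here the expected payoffs are 74 and 67, so under the expected-value rule the first alternative would be chosen.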

    Uncertainty conditions represent a state of the environment (of knowledge about the essence of the phenomena) in which each alternative can have several outcomes and the probabilities of these outcomes are unknown. The uncertainty of the decision-making environment depends on the relationship between the amount of information and its reliability: the more uncertain the external environment, the more difficult it is to make effective decisions. The decision-making environment also depends on the degree of dynamism, or mobility, of the environment, i.e. the speed with which the conditions for making a decision change. Conditions can change as a result of the development of the organization, i.e. its acquiring the ability to solve new problems and to renew itself, or under the influence of external factors that the organization cannot regulate. The choice of the best solution under uncertainty essentially depends on the degree of this uncertainty, i.e. on what information the decision maker has. When the probabilities of the possible states are unknown but there are principles for evaluating the results of actions, the choice of the best solution makes use of the following four criteria: the Wald maximin criterion; the Savage minimax regret criterion; the Hurwicz pessimism-optimism criterion; and the Laplace (Bayes) criterion.
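
    A minimal sketch (my own illustration, not from the source) of how these four criteria can be applied to a payoff matrix; the optimism coefficient alpha used for the Hurwicz criterion is an assumed parameter:

        import numpy as np

        # Payoff matrix: rows = alternatives, columns = possible states of the environment.
        payoff = np.array([
            [30, 10, -5],
            [20, 15,  5],
            [10, 12,  8],
        ])

        # Wald (maximin): choose the alternative with the best worst-case payoff.
        wald = int(payoff.min(axis=1).argmax())

        # Savage (minimax regret): regret = best payoff in a state minus the payoff obtained;
        # choose the alternative whose maximum regret is smallest.
        regret = payoff.max(axis=0) - payoff
        savage = int(regret.max(axis=1).argmin())

        # Hurwicz: weighted combination of the best and worst outcome of each alternative;
        # alpha is the assumed optimism coefficient, 0 <= alpha <= 1.
        alpha = 0.6
        hurwicz = int((alpha * payoff.max(axis=1) + (1 - alpha) * payoff.min(axis=1)).argmax())

        # Laplace (Bayes with equal state probabilities): choose the largest mean payoff.
        laplace = int(payoff.mean(axis=1).argmax())

        print(wald, savage, hurwicz, laplace)

    As here, different criteria can point to different alternatives, reflecting the decision maker's attitude toward risk.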

    At the solution implementation stage, methods of planning, organizing and monitoring the execution of decisions are applied (Fig. 2.7). Drawing up a plan for implementing a solution involves answering the questions "what, to whom and with whom, how, where and when should it be done?" The answers to these questions should be documented. The main methods used in planning the implementation of management decisions are network modeling and the separation of duties (Fig. 2.8). The main tools of network modeling are network matrices (Fig. 2.9), in which the network schedule is combined with a calendar-scale time grid.

    Fig. 2.7. Methods Used in the Implementation Phase of the Decision Cycle

    Fig. 2.8.

    Fig. 2.9.

    1-4 - operation number
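
    To make the idea of network planning concrete, here is a minimal sketch (not from the source; the activities and durations are hypothetical) that computes the length of the critical path of a small implementation plan:

        # Activities of a hypothetical implementation plan: name -> (duration in days, predecessors).
        activities = {
            "A": (2, []),          # 1: prepare the order
            "B": (4, ["A"]),       # 2: assign duties
            "C": (3, ["A"]),       # 3: prepare resources
            "D": (5, ["B", "C"]),  # 4: execute and monitor
        }

        earliest_finish = {}

        def finish(name):
            """Earliest finish time = duration plus the latest finish among predecessors."""
            if name not in earliest_finish:
                duration, preds = activities[name]
                earliest_finish[name] = duration + max((finish(p) for p in preds), default=0)
            return earliest_finish[name]

        # Length of the critical path = the largest earliest finish over all activities.
        print(max(finish(a) for a in activities))  # -> 11 days (path A -> B -> D)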

    The methods of organizing the implementation of a decision include methods of compiling an information table for the implementation of decisions (ITRR) and methods of influence and motivation.

    Methods of monitoring the implementation of decisions are subdivided into control over intermediate and final results and control over deadlines (the operations in the ITRR). The main purpose of control is to create a system of guarantees for the implementation of decisions, a system that ensures the highest possible quality of decisions.
