
Development of natural language interfaces and machine translation. "Informatics and Computer Engineering"

Kolomna Institute (branch)

State educational institution of higher

vocational education

"MOSCOW STATE OPEN UNIVERSITY"

Department of Informatics and Information Technologies

"APPROVED"

Educational and Methodological Council of KI (f) MGOU

Chairman of the Council, Professor A.M. Lipatov

"___" ____________ 2010

P.S. Romanov

FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

Study guide for the discipline, direction "Informatics and Computer Engineering"

For students of higher educational institutions

Kolomna - 2010


Published in accordance with the decision of the Educational and Methodological Council of the Kolomna Institute (branch) of GOU VPO "MGOU" dated __________ 2010, No. ________

UDC 519.6

P69 Romanov P.S.

Fundamentals of Artificial Intelligence: Study Guide. Kolomna: KI (f) MGOU, 2010. 164 p.

The study guide covers the basics of artificial intelligence. It introduces the basic concepts of artificial intelligence, presents the provisions of the theory of fuzzy sets, and describes the main intelligent systems: their purpose, classification, characteristics, problems of creation, and examples.

The textbook is intended for students of higher educational institutions studying in the direction of "Informatics and Computer Engineering". It can be used in the study of intelligent information systems by students of other specialties.

Reviewer: Doctor of Technical Sciences, Professor V.G. Novikov

© Romanov P.S.

© KI (f) MGOU, 2010

Contents

Introduction
Chapter 1. Basic concepts of artificial intelligence
§ 1.1. Basic terms and definitions
§ 1.2. The history of the development of AI systems
§ 1.4. Main directions of development and application of intelligent systems
Chapter 2. Provisions of the theory of fuzzy sets
§ 2.1. Fuzzy sets. Operations on fuzzy sets
§ 2.1.1. Basic operations on fuzzy sets
§ 2.2. Construction of the membership function
§ 2.2.1. Some methods for constructing a membership function
§ 2.3. Fuzzy numbers
§ 2.4. Operations with fuzzy numbers of (L-R) type
§ 2.5. Fuzzy and linguistic variables
§ 2.6. Fuzzy relations
§ 2.7. Fuzzy logic
§ 2.8. Fuzzy inference
§ 2.9. Automation of information processing using fuzzy systems
Chapter 3. Basic intelligent systems
§ 3.1. Data and knowledge
§ 3.2. Knowledge representation models
§ 3.3.1. Production rules
§ 3.3.2. Frames
§ 3.3.3. Semantic networks
§ 3.4. Expert systems. Subject areas
§ 3.5. Purpose and scope of expert systems
§ 3.6. Methodology for the development of expert systems
§ 3.7. Main expert systems
§ 3.8. Difficulties in the development of expert systems and ways of overcoming them
§ 3.9. Purpose and classification of robots
§ 3.10. Examples of robots and robotic systems
§ 3.10.1. Home (household) robots
§ 3.10.2. Rescue robots and research robots
§ 3.10.3. Robots for industry and medicine
§ 3.10.4. Military robots and robotic systems
§ 3.10.5. The brain as an analog-digital device
§ 3.10.6. Robot toys
§ 3.11. Problems of the technical implementation of robots
§ 3.12. Adaptive industrial robots
§ 3.12.1. Adaptation and learning
§ 3.12.2. Classification of adaptive control systems for industrial robots
§ 3.12.3. Examples of adaptive robot control systems
§ 3.12.4. Problems in the creation of industrial robots
§ 3.13. Neural network and neurocomputer technologies
§ 3.13.1. General characteristics of the direction
§ 3.13.2. Neuropackages
§ 3.14. Neural networks
§ 3.14.1. The perceptron and its development
3.14.1.1. The McCulloch-Pitts mathematical neuron
3.14.1.2. Rosenblatt's perceptron and Hebb's rule
3.14.1.3. The delta rule and letter recognition
3.14.1.4. Adaline, Madaline, and the generalized delta rule
§ 3.14.2. The multilayer perceptron and the error backpropagation algorithm
§ 3.14.3. Types of activation functions

Introduction

The science called "artificial intelligence" is part of the complex of computer sciences, and the technologies created on its basis belong to information technologies. The task of this science is to enable reasonable reasoning and action by means of computing systems and other artificial devices. Artificial intelligence (AI) has existed as an independent scientific field for just over a quarter of a century. During this time, society's attitude toward specialists engaged in such research has evolved from skepticism to respect. In advanced countries, work in the field of intelligent systems is supported at all levels of society, and there is a widely held opinion that precisely these studies will determine the nature of the information society that is already replacing the industrial civilization which reached its peak in the 20th century. Over the years of AI's formation as a distinct scientific discipline, its conceptual models have been formed, specific methods and techniques belonging only to it have accumulated, and some fundamental paradigms have become established. Artificial intelligence has become a completely respectable science, no less honorable and necessary than physics or biology.

Artificial intelligence is an experimental science. Its experimental nature lies in the fact that, when creating certain computer representations and models, the researcher compares their behavior with each other and with examples of specialists solving the same problems, and modifies them on the basis of this comparison, trying to achieve a better correspondence of the results. To modify programs purposefully rather than blindly so as to improve results, one needs reasonable initial representations and models; these are supplied by psychological studies of consciousness, in particular by cognitive psychology.

An important characteristic of AI methods is that AI deals only with those mechanisms of competence that are verbal in nature (that is, admit symbolic representation). By no means all of the mechanisms a person uses to solve problems are of this kind.

The book presents the basics of AI, which make it possible to navigate the large number of publications devoted to the problems of artificial intelligence and to obtain the necessary knowledge in this area of science.

Development of artificial intelligence

The history of artificial intelligence began not so long ago. In the second half of the 20th century the concept of artificial intelligence was formulated and several definitions of it were proposed. One of the first definitions, which despite its considerable breadth of interpretation has not lost its relevance, presents artificial intelligence as "a way to make a computing machine think like a person."

The relevance of the intellectualization of computing systems stems from a person's need to find solutions amid such realities of the modern world as inaccurate, ambiguous, uncertain, fuzzy, and unreasoned information. The need to increase the speed and adequacy of this process stimulates the creation of computing systems that, by interacting with the real world through robotics, production equipment, instruments, and other hardware, can contribute to it.

Computing systems based on purely classical logic, that is, on algorithms for solving known problems, run into difficulty when they encounter uncertain situations. Living beings, by contrast, lose in speed but are able to make successful decisions in such situations.
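The difference can be illustrated with a minimal sketch (the "warm" concept, the thresholds, and the function names are invented for illustration; the theory of fuzzy sets itself is treated in Chapter 2). A crisp rule flips abruptly at a threshold, while a fuzzy membership function assigns a graded degree of truth to borderline inputs:

```python
def warm(t):
    """Triangular fuzzy membership for 'warm' (invented thresholds)."""
    if t <= 10 or t >= 35:
        return 0.0
    if t <= 22.5:
        return (t - 10) / 12.5
    return (35 - t) / 12.5

def crisp_warm(t):
    """Classical logic: a hard threshold that flips abruptly."""
    return t > 20

print(crisp_warm(19.9), crisp_warm(20.1))          # False True  (abrupt flip)
print(round(warm(19.9), 2), round(warm(20.1), 2))  # 0.79 0.81   (gradual change)
```

A program reasoning with such graded values can rank or combine uncertain evidence instead of failing at the boundary of a rigid condition.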

An example of artificial intelligence

An example is the 1987 stock market crash, when computer programs sold hundreds of millions of dollars' worth of shares in order to make a profit of several hundred dollars, which in fact created the preconditions for the crash. The situation was corrected after full control over exchange trading was handed back to protoplasmic intelligent systems, that is, to people.

When defining intelligence as a scientific category, it should be understood as a system's capacity for learning. Accordingly, one of the most concrete definitions, in our view, interprets artificial intelligence as the ability of automated systems to acquire, adapt, modify, and replenish knowledge in order to find solutions to problems whose formalization is difficult.

In this definition the term "knowledge" differs qualitatively from the concept of information. The difference is well reflected by representing these concepts as the information pyramid in Fig. 1.

Figure 1 - Information pyramid

The pyramid rests on data, the next level is occupied by information, and the level of knowledge completes it. As one moves up the pyramid, volumes of data are transformed into the value of information and then into the value of knowledge. Information arises at the moment of interaction between subjective data and objective methods of their processing. Knowledge is formed by establishing distributed relationships between heterogeneous pieces of information, creating a formal system: a way of reflecting them in precise concepts or statements.
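The three levels of the pyramid can be sketched as a toy pipeline (the sensor readings, the threshold, and the function names are invented for illustration): data become information once an objective processing method is applied, and knowledge once a reusable rule in precise terms is formed.

```python
# Data: raw, uninterpreted readings.
data = [21.5, 22.0, 35.4, 21.8, 22.1]

# Information: data combined with an objective processing method.
average = sum(data) / len(data)
outliers = [x for x in data if abs(x - average) > 5]

# Knowledge: a relationship captured as a precise, reusable statement
# (a rule) that can drive action in new situations.
def assess(readings, limit=30.0):
    """Rule: a sensor whose reading exceeds `limit` needs inspection."""
    return [i for i, x in enumerate(readings) if x > limit]

print(assess(data))  # [2]
```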

The task of artificial intelligence is precisely to maintain such a system of knowledge in an up-to-date state, making it possible to build programs of action for finding solutions to assigned tasks, taking into account the specific situations that arise in the environment at a given moment. Thus, artificial intelligence can also be imagined as a universal over-algorithm capable of creating algorithms for solving new problems.

Maslennikova O.E., Popova I.V. Fundamentals of Artificial Intelligence: Tutorial. Magnitogorsk: MAGU, 2008. 282 pp. The tutorial presents models of knowledge representation, the theory of expert systems, and the basics of logical and functional programming. Much attention is paid to the history of the development of artificial intelligence. The presentation of the material is accompanied by a large number of illustrations; exercises and questions for self-control are offered.
The work is aimed at full-time and part-time students enrolled in the areas of "Informatics" and "Physics and Mathematics Education (Profile: Informatics)". Contents:
Introduction to artificial intelligence.
The history of the development of artificial intelligence as a scientific direction.
The main directions of research in the field of artificial intelligence.
Philosophical aspects of the problem of artificial intelligence.
Questions for self-control.
Literature.
Knowledge representation models.
Knowledge.
Logical model of knowledge representation.
Semantic networks.
Frames.
Production model.
Other models of knowledge representation.
Exercises.
Questions for self-control.
Literature.
Expert systems.
The concept of an expert system.
Types of expert systems and types of tasks to be solved.
The structure and modes of operation of the expert system.
Expert systems development technology.
Expert system tools.
Intelligent information systems.
Exercises.
Questions for self-control.
Literature.
Prolog as a logical programming language.
Concept of logic programming.
Representation of knowledge about the subject area in the form of facts and rules of the Prolog knowledge base.
Descriptive, procedural and machine sense of a Prolog program.
Basic programming techniques in Prolog.
Visual Prolog environment.
Exercises.
Literature.
Concept of functional programming.
History of functional programming.
Properties of functional programming languages.
Functional programming tasks.
Exercises.
Answers for self-test.
Literature.
Glossary.
Appendix 1.
Appendix 2.
Appendix 3.


Ministry of Education and Science of the Russian Federation
GOU VPO "Magnitogorsk State University"
O.E. Maslennikova, I.V. Popova
Fundamentals of Artificial Intelligence
Textbook
Magnitogorsk, 2008

UDC 681.142.1.01. LBC Z97. Reviewers: Doctor of Physical and Mathematical Sciences, Professor S.I. Kadchenko; Doctor of Technical Sciences, Professor A.S. Sarvarov.

Maslennikova O.E., Popova I.V. Fundamentals of artificial intelligence: textbook / O.E. Maslennikova, I.V. Popova. Magnitogorsk: MAGU, 2008. 282 p. ISBN 978-5-86781-609-4.

The tutorial describes the models of knowledge representation, the theory of expert systems, and the basics of logical and functional programming. Much attention is paid to the history of the development of artificial intelligence. The presentation of the material is accompanied by a large number of illustrations; exercises and questions for self-control are offered. The work is aimed at full-time and part-time students studying in the areas of "Informatics" and "Physics and Mathematics Education (Profile: Informatics)".

© Maslennikova O.E., Popova I.V., 2008
© Magnitogorsk State University, 2008

FOREWORD

Recently there has been an increase in interest in artificial intelligence, caused by increased requirements for information systems. Humanity is steadily moving toward a new information revolution, comparable in scale with the development of the Internet. Artificial intelligence is a direction of computer science whose purpose is to develop hardware and software tools that allow a non-programmer to set and solve their own, traditionally considered intellectual, tasks while communicating with a computer in a limited subset of natural language.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time many prerequisites for its origin had already formed: philosophers had long argued about the nature of man and the process of knowing the world; neurophysiologists and psychologists had developed a number of theories about the work of the human brain and thinking; economists and mathematicians had posed questions of optimal calculation and of representing knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, had been laid and the first computers had been created.

The purpose of this manual is to outline the main directions and methods used in artificial intelligence, as well as to determine the possibilities for their use in professional pedagogical activity. The tutorial is divided into five chapters.
The first chapter provides a short introduction to artificial intelligence: it examines the history of its development as a scientific direction, highlights the main areas of artificial intelligence, and considers such philosophical aspects of the problem as the possibility of existence, safety, and usefulness of artificial intelligence. The second chapter is devoted to the classical models of knowledge representation: logical, semantic, frame, production, and neural network. The third chapter deals with theoretical and practical issues of developing expert systems and describes the XpertRule shell. The fourth chapter sets out the basic principles of programming in the Prolog language and describes the Visual Prolog environment. The fifth chapter covers the basics of functional programming, with examples in the Lisp language. The manual contains a large number of illustrations, exercises, and questions for self-control; for the convenience of studying the material, a glossary is provided.

CHAPTER 1. INTRODUCTION TO ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is a new direction of informatics, whose subject of study is any human intellectual activity that obeys known laws. Figuratively, this direction is called "the eldest son of computer science," since many unsolved problems gradually find their solution within the framework of artificial intelligence. The subject of informatics is information processing; the field of AI covers those processing tasks that cannot be performed by simple and exact algorithmic methods, and there are a great many of them. AI relies on knowledge about the human thinking process. Exactly how the human brain works is not known, but the knowledge about the features of human intelligence that science possesses today is already sufficient to develop effectively working programs with AI elements.
At the same time, AI does not try to copy the work of the human brain exactly but tries to model its functions using computer technology. Since its inception, AI has developed as an interdisciplinary direction interacting with computer science and cybernetics, the cognitive sciences, logic and mathematics, linguistics and psychology, and biology and medicine (Fig. 1).

Informatics and cybernetics. Many specialists came to AI from computer science and cybernetics, and many combinatorial problems that cannot be solved by traditional methods have migrated from computer science to the field of AI. In addition, results obtained in AI are borrowed in the creation of software and become part of computer science (informatics).

Cognitive sciences. The cognitive sciences are the sciences of knowledge, and AI is also about knowledge. But the cognitive sciences use not only informational and neurobiological approaches; they also consider the social and psycholinguistic aspects of the use of knowledge.

Logic and mathematics. Logic underlies all known formalisms for representing knowledge, as well as programming languages such as Lisp and Prolog. Methods of discrete mathematics, game theory, and operations research are used to solve AI problems. In turn, AI can be used to prove theorems and solve problems in different areas of mathematics: geometry, integral calculus.

Psychology and linguistics. Recently AI specialists have become interested in the psychological aspects of human behavior in order to model it. Psychology helps build models of value judgments and subjective decision-making; the psychology of human-computer communication and psycholinguistics are also of interest. Computational linguistics is a part of AI that rests on mathematical methods of processing natural and artificial languages on the one hand and on the phenomenology of language on the other.
Biology and medicine. Biology and medicine allow us to study and better understand the work of the brain and of the systems of vision, hearing, and other natural sensors, and they give a new impetus to modeling their work.

Fig. 1. Interaction of AI with other disciplines

There is no single definition of AI, just as there is no single definition of natural intelligence. Among the many points of view on this scientific field, three now dominate.

1. Research in the field of AI is fundamental research, within which models and methods are developed for solving problems that were traditionally considered intelligent and previously were not amenable to formalization and automation.

2. AI is a new direction of informatics, associated with new ideas for solving problems on a computer, with the development of a fundamentally different programming technology, and with a transition to computer architectures that reject the classical architecture dating back to the first computers.

3. As a result of work in the field of AI, many applied systems are born that solve problems for which previously created systems were not suitable.

The calculator can serve to illustrate the first approach. At the beginning of the last century, arithmetic calculations with multi-digit numbers were the province of a few gifted individuals, and the ability to perform such calculations in one's head was rightly considered a unique gift of nature and was an object of scientific research. Nowadays the calculator has made this ability available even to a third grader. The same holds in AI: it enhances the intellectual capabilities of a person by taking on the solution of tasks that previously could not be formalized.

To illustrate the second approach, we can consider the history of the attempt to create a fifth-generation computer. In the mid-1980s, Japan announced the start of an ambitious project to create a fifth-generation computer.
The project was based on the idea of a hardware implementation of the Prolog language. However, the project ended in failure, although it strongly influenced the development and spread of Prolog as a programming language. The reason for the failure was the hasty conclusion that one language, however universal, could provide a single solution for all problems. Practice has shown that a universal programming paradigm for solving all problems has not yet been invented and is unlikely to appear, because each task belongs to a subject area requiring careful study and a specific approach. Attempts to create new computer architectures continue and are associated with parallel and distributed computing, neurocomputers, and probabilistic and fuzzy processors.

Work in the field of expert systems (ES) can be attributed to the third, most pragmatic direction in AI. Expert systems are software systems that replace a human specialist in narrow areas of intellectual activity requiring the use of special knowledge. The creation of an ES in the field of medicine (such as MYCIN) allows knowledge to be disseminated to the most remote areas: combined with telecommunication access, any rural doctor can receive advice from such a system, which replaces consultation with a specialist on a narrow issue.
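The rule-based mechanism at the heart of such systems can be sketched in a few lines (the facts and rules below are invented for illustration and are not taken from MYCIN, which additionally attached certainty factors to its several hundred rules):

```python
# Each rule: if all conditions are among the known facts,
# the conclusion is added as a new fact (forward chaining).
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "see_specialist"),
]

def infer(initial_facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}, rules))
```

Note that the second rule fires only because the first one has already derived an intermediate fact; this chaining of rules is what lets a small knowledge base answer questions no single rule states directly.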
In Russia, AI found supporters almost from its inception, although the discipline did not receive official recognition at once. AI was criticized as a sub-branch of cybernetics, which was itself considered a "pseudoscience," and up to a certain point the provocative name "artificial intelligence" also played a negative role: in the Presidium of the Academy of Sciences there circulated a joke that "artificial intelligence is pursued by those who lack the natural kind."

Today, however, AI is an officially recognized scientific direction in Russia; the journals "Control Systems and Machines" and "AI News" are published, and scientific conferences and seminars are held. There is a Russian Association of AI, numbering about 200 members, whose president is D.A. Pospelov, Doctor of Technical Sciences, and whose honorary president is Academician G.S. Pospelov of the RAS. There is a Russian Institute for Artificial Intelligence under the Council of the President of the Russian Federation for Informatics and Computer Science, and within the Russian Academy of Sciences there is a Scientific Council on the problem of "Artificial Intelligence," with whose participation many books and translations on AI topics have been published. Well known are the works of D.A. Pospelov, Litvintseva, and Kandrashina in the field of knowledge representation and processing; of E.V. Popov and Khoroshevsky in natural language processing and expert systems; of Averkin and Melikhov in fuzzy logic and fuzzy sets; of Stefanyuk in learning systems; and of Kuznetsov, Finn, and Vagin in logic and knowledge representation. Russia has a traditionally strong school of computational linguistics, originating in the work on the "Meaning-Text" model by Melchuk. Well-known computational linguists include Apresyan, Gorodetsky, Paducheva, Narinyani, Leontyeva, Chaliapin, Zaliznyak Sr., Kibrik Sr., Baranov, and many others.
However, the forefather of artificial intelligence is considered to be the medieval Spanish philosopher, mathematician and poet Ramon Llull (c. 1235 - c. 1315), who tried to create a machine for solving various problems based on a general classification of concepts. Later, G. Leibniz (1646-1716) and R. Descartes (1596-1650) independently developed this idea, proposing universal languages for the classification of all the sciences. These ideas formed the basis of theoretical work in the field of artificial intelligence (Fig. 2). The development of artificial intelligence as a scientific discipline became possible only after the creation of computers, in the 1940s. At the same time N. Wiener (1894-1964) published his fundamental works on a new science, cybernetics. The term "artificial intelligence" was proposed in 1956 at a seminar of the same name at Dartmouth College (USA); the seminar was devoted to logical rather than computational problems. Soon after artificial intelligence was recognized as an independent branch of science, it split into two main directions: neurocybernetics and "black box" cybernetics; only recently have tendencies toward reuniting these parts into a single whole again become noticeable. In the USSR, in 1954 the seminar "Automata and Thinking" began its work at Moscow State University under the guidance of Professor A.A. Lyapunov (1911-1973). Leading physiologists, linguists, psychologists and mathematicians took part in it, and it is generally accepted that artificial intelligence in Russia was born at that time. As abroad, the directions of neurocybernetics and "black box" cybernetics emerged. In 1956-1963 there was an intensive search for models and algorithms of human thinking and the development of the first programs.
It turned out that none of the existing sciences - philosophy, psychology, linguistics - could offer such an algorithm, so cyberneticists proposed to create their own models. Various approaches were developed and tested. The first AI research was related to creating a chess-playing program, since the ability to play chess was believed to be an indicator of high intelligence. In 1954 the American scientist Newell conceived the idea of creating such a program; Shannon proposed, and Turing refined, a method for constructing it. The Americans Shaw and Simon, in collaboration with a group of Dutch psychologists from Amsterdam led by de Groot, created such a program. Along the way a special language, IPL1 (1956), was created for manipulating information in symbolic form; it was the predecessor of Lisp (McCarthy, 1960). However, the first artificial intelligence program was the Logic Theorist, designed to prove theorems in the propositional calculus (August 9, 1956). The chess program itself was created in 1957 (NSS - Newell, Shaw, Simon). Its structure and that of the Logic Theorist formed the basis for the GPS (General Problem Solver) program, which, by analyzing the differences between situations and constructing goals, solves puzzles such as the Tower of Hanoi and computes indefinite integrals. EPAM (Elementary Perceiving and Memorizing Program), an elementary program for perception and memorization, was conceived by Feigenbaum. In 1957 an article on transformational grammars appeared by Chomsky, one of the founders of computational linguistics. In the late 1950s the labyrinth search model was born. This approach represents a problem as a graph reflecting the state space, and in this graph a search is carried out for an optimal path from the input data to the result.
Much work was done to develop this model, but in solving practical problems the idea did not spread widely. (The state space is a graph whose vertices correspond to the situations encountered in a problem, the "problem situations"; solving the problem reduces to finding a path in this graph.) The early 1960s were the era of heuristic programming. A heuristic is a rule that is not theoretically justified but allows one to reduce the number of alternatives examined in the search space. Heuristic programming is the development of a strategy of action on the basis of known, predefined heuristics. In the 1960s the first programs working with natural language queries were created. The BASEBALL program (Green et al., 1961) answered queries about the results of past baseball games, and the STUDENT program (Bobrow, 1964) could solve algebra word problems formulated in English.

Fig. 2. Milestones in the development of AI as a scientific direction

Great hopes were pinned on work in machine translation, the beginning of which is associated with the name of the Russian linguist Belskaya. However, it took researchers many years to realize that automatic translation is not an isolated problem and, to be successful, requires such a necessary step as understanding. Among the most significant results obtained by Soviet scientists in the 1960s, M. Bongard's "Kora" algorithm should be noted, which simulates the activity of the human brain in pattern recognition. In 1963-1970 methods of mathematical logic began to be applied to problem solving. A new approach to formal logic, based on reducing reasoning to contradiction, appeared in 1965 (J. Robinson). On the basis of this resolution method, which made it possible to prove theorems automatically given a set of initial axioms, the Prolog language was created in 1973. In the USSR, in 1954-1964,
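The labyrinth model described above - a state space searched for a path from the initial data to the result - can be sketched in a few lines. The toy graph below is invented for illustration; breadth-first search performs the blind enumeration that heuristic programming later sought to reduce.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Blind (exhaustive) search: explore the state graph level by level
    until the goal state is reached, returning the path found."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# A toy state space: vertices are "problem situations", edges are legal moves.
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['F'],
    'E': ['F'],
}
print(bfs_path(graph, 'A', 'F'))  # a shortest path: ['A', 'B', 'D', 'F']
```

Heuristic programming, in this sketch, would amount to replacing the FIFO frontier with a priority queue ordered by an estimate of the remaining distance, so that fewer states need to be expanded.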
individual programs were created and the search for solutions to logical problems was investigated. In Leningrad (at LOMI, the Leningrad Branch of the Steklov Mathematical Institute) a program was created that automatically proves theorems (ALPEV LOMI), based on the original inverse method of S.Yu. Maslov, similar to Robinson's resolution method. In 1965-1980 a new science developed: situational control (corresponding to knowledge representation in Western terminology), whose founder is Professor D.A. Pospelov. Special models were developed for representing situations, i.e. for knowledge representation. Abroad, research in AI was accompanied by the development of new-generation programming languages and the creation of ever more sophisticated programming systems (Lisp, Prolog, Planner, QA4, Macsyma, Reduce, Refal, ATNL, TMS). The results obtained began to be used in robotics, for controlling stationary or mobile robots operating in real three-dimensional space, which raised the problem of creating artificial organs of perception. Until 1968 researchers worked mainly with separate "microworlds", creating systems suitable for such specific and limited application areas as games, Euclidean geometry, integral calculus, the "blocks world", or the processing of simple, short phrases with a small vocabulary. Almost all of these systems used the same approach: simplifying the combinatorics by reducing the necessary enumeration of alternatives on the basis of common sense, numerical evaluation functions and various heuristics. The early 1970s saw a qualitative leap in artificial intelligence research, for two reasons. First, all researchers gradually realized that every program created so far lacked the most important thing: deep knowledge of the relevant domain.
The difference between an expert and an ordinary person is that the expert has experience in the field, i.e. knowledge accumulated over the years. Second, a specific problem arises: how can this knowledge be transferred to the program if its immediate creator does not possess it? The answer is clear: the program itself must extract knowledge from data received from the expert. Research on problem solving and on natural language understanding shares one common problem: knowledge representation. By 1970 many programs based on these ideas had been created. The first of them was the DENDRAL program, designed to generate structural formulas of chemical compounds from mass-spectrometer data. The program was developed at Stanford with the participation of Nobel laureate J. Lederberg. It accumulated experience in the course of its own operation; the expert loaded into it many thousands of elementary facts represented as separate rules. DENDRAL was one of the first expert systems, and the results of its work are impressive; the system is now supplied to customers together with the spectrometer. In 1971 Terry Winograd developed the SHRDLU system, which simulates a robot manipulating blocks. One can converse with the robot in English; the system attends not only to the syntax of phrases but also correctly understands their meaning, thanks to semantic and pragmatic knowledge of its "blocks world". Since the mid-1980s artificial intelligence has been commercialized abroad: annual capital investments have grown, industrial expert systems have been created, and interest in self-learning systems has risen. In our country, in 1980-1990, active research was carried out in the field of knowledge representation and knowledge representation languages, and expert systems were developed (more than 300).
The REFAL language was created at Moscow State University. In 1988 the AII, the Association for Artificial Intelligence, was created; more than 300 researchers are its members, and D.A. Pospelov is its president. The largest centers are in Moscow, St. Petersburg, Pereslavl-Zalessky and Novosibirsk.

1.2. The main directions of research in the field of artificial intelligence

At present AI is a rapidly developing and highly ramified scientific field. More than 40 conferences are held annually worldwide in computational linguistics alone. Almost every European country, as well as the USA, Canada, Japan, Russia and the countries of Southeast Asia, regularly hosts national conferences on AI; in Russia this event is held every two years under the auspices of the Russian Association for AI (RAAI). In addition, the International Joint Conference on AI (IJCAI) is held every two years, and more than 3 thousand periodicals publish scientific results in this area. There is no complete and strict classification of all areas of AI; an attempt to classify the tasks that AI solves is shown in Fig. 3. According to the classification of D.A. Pospelov, there are two dominant approaches to research in the field of AI: the neurobionic and the informational (Fig. 4 and 5).

Fig. 3. Tasks of AI: general problems (perception, natural language processing, common-sense reasoning, robot control); formal problems (games such as chess, Go and puzzles; mathematics: geometry, program verification); expert problems (engineering, scientific analysis, financial analysis, medical diagnostics)

Proponents of the first approach set themselves the goal of artificially reproducing the processes that take place in the human brain. This area lies at the intersection of medicine, biology and cybernetics: researchers study the human brain, identify how it works, and create technical means for reproducing biological structures and the processes occurring in them.
The field of AI can be roughly divided into five large sections:
- neural-like structures;
- programs for solving intellectual problems;
- knowledge-based systems;
- intelligent programming;
- intelligent systems.
Each of the sections can be represented as shown in Fig. 4-8.

Fig. 4. Neural-like structures
Fig. 5. Programs for solving intellectual problems
Fig. 6. Knowledge-based systems
Fig. 7. Intelligent programming
Fig. 8. Intelligent systems

1.3. Philosophical aspects of the problem of artificial intelligence

The main philosophical problem in the field of artificial intelligence is the question of whether it is possible to model human thinking. If a negative answer to this question is ever obtained, all other questions in the field of AI will lose their meaning; therefore artificial intelligence research assumes a positive answer in advance. The following arguments are given for the possibility of modeling human thinking. 1. Scholastic: the consistency of artificial intelligence with the Bible. Even those far from religion know the words of Holy Scripture: "And the Lord created man in his own image and likeness...". From these words one can conclude that, since the Lord, firstly, created people, and secondly, people are inherently like him, people are quite capable of creating someone in the image and likeness of man. 2. Biological. The creation of a new mind by biological means is quite common for human beings. Observing children, we see that they acquire most of their knowledge through learning rather than having it laid down in them in advance. This statement has not been proven at the modern level of science, but by external signs everything looks exactly so. 3. Empirical.
What previously seemed the pinnacle of human creativity - playing chess and checkers, recognizing visual and sound images, synthesizing new technical solutions - turned out in practice to be not such a difficult task. Work now proceeds not on the question of whether these things are possible or impossible, but on finding the most efficient algorithms; often these problems are no longer even counted as artificial intelligence problems. There is hope that a complete simulation of human thinking is likewise possible. 4. The possibility of self-reproduction. The ability to reproduce itself was long considered the prerogative of living organisms. However, some phenomena occurring in inanimate nature (for example, crystal growth or the synthesis of complex molecules by copying) are very similar to self-reproduction. In the early 1950s J. von Neumann undertook a thorough study of self-reproduction, laid the foundations of the mathematical theory of "self-reproducing automata", and theoretically proved the possibility of their creation. There are also various informal proofs of the possibility of self-replication; for programmers, the most striking proof is perhaps the existence of computer viruses. 5. Algorithmic. The fundamental possibility of automating the solution of intellectual problems with a computer is provided by the property of algorithmic universality. This property means that any algorithm for transforming information can be implemented programmatically, i.e. represented as a computer program. Moreover, the processes generated by these algorithms are potentially feasible, that is, realizable in a finite number of elementary operations. The practical feasibility of algorithms depends on the available tools, which may change as technology advances.
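The self-reproduction argument (point 4 above) can be made concrete in software with a quine, a program whose output is its own source text. This is an illustrative sketch, not part of the textbook:

```python
# A quine: a self-reproducing program in the spirit of von Neumann's
# self-reproducing automata. The template holds a copy of itself via the
# %r placeholder, so (src % src) reproduces the full two-line source.
src = 'src = %r\nprint(src %% src)'
quine = src % src  # the complete source text of the program
print(quine)
```

Executing the printed text again defines the same `src` and prints the same two lines: the program is a fixed point of execution, a software analogue of a self-copying automaton.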
Thus, with the advent of high-speed computers, algorithms that previously were only potentially feasible became practically feasible. Moreover, this property has a predictive character: whenever in the future some prescription is recognized as an algorithm, then regardless of the form and means in which it was initially expressed, it can also be specified as a machine program. However, one should not think that computers and robots can in principle solve any problem. The analysis of a variety of problems led mathematicians to a remarkable discovery: the existence of classes of problems was rigorously proved for which no single effective algorithm solving all problems of the class is possible; in this sense, problems of these classes cannot be solved with the help of computers. This fact contributes to a better understanding of what machines can and cannot do. Indeed, a statement about the algorithmic undecidability of a certain class of problems is not merely an admission that such an algorithm is unknown and has not yet been found; it is at the same time a forecast for all future times that such an algorithm will never be exhibited by anyone - in other words, that it does not exist. AI can be considered among the tools (intellectual and non-intellectual) that mankind has created and mastered in the course of its historical development. These include:
- hand tools;
- machine tools and machines;
- language and speech;
- calculating devices;
- computer technology and telecommunications.
Philosophers argue that tool-making (in the broadest sense of the word) is the most important kind of activity that distinguished our ancestors from the other primates. Human beings stand out among animals in their ability to produce knowledge and tools.
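The diagonal argument behind such undecidability results can itself be sketched as code. The names here are hypothetical: `halts` stands for the impossible universal decider, and `paradox` is the program that would refute it.

```python
def halts(program):
    """Hypothetical oracle: would return True iff program() terminates.
    No such total decider can exist, so this placeholder only raises."""
    raise NotImplementedError("a universal halting decider is provably impossible")

def paradox():
    """The diagonal construction: if the oracle says 'paradox halts',
    loop forever; if it says 'paradox loops', halt at once. Either
    answer is contradicted, so the assumed decider cannot exist."""
    if halts(paradox):
        while True:
            pass
```

This is Turing's 1936 argument in miniature; the same schema underlies proofs of the algorithmic undecidability mentioned in the text.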
No other technological or socio-political invention has produced such a gigantic gap between the development of the species Homo sapiens and the other species of living nature. The development of computer technology can be broadly divided into two areas: digital processing and symbolic processing. The first direction made information far more convenient to store, process and transmit than all previous improvements in paper technology. The computer surpassed all computing tools of the past (the abacus, counting boards, the adding machine) in speed, variety of functions and ease of use. By steadily expanding the scope of automation in monotonous mental work, digital information processing extended the reach of the printing press and the industrial revolution to new frontiers. The second branch of computer technology, symbol processing (Newell and Simon's term), or artificial intelligence, allowed the computer to imitate the processes of sensory perception and orientation, reasoning and problem solving, natural language processing, and other human abilities. In other words, AI is a new kind of toolkit, an alternative to the existing ones. This reality has forced philosophers of AI to move from the question "Is it possible to create an intelligent machine?" to the problem of the influence of intellectual tools on society, including the possible social effects of the development of AI, namely: raising the level of intelligence of the whole society, which will yield new discoveries, inventions and a new understanding of humanity itself; and changing the situation in which the majority of people serve as means and instruments of production. The next philosophical question of AI is the purpose of its creation. In principle, everything we do in practical life is ultimately aimed at being able to do nothing.
However, at a sufficiently high standard of living (a large amount of potential energy), the leading role for a person is played no longer by laziness (in the sense of a desire to save energy) but by the search instinct. Suppose a person has managed to create an intellect exceeding his own (if not in quality, then in quantity). What will happen to humanity now? What role will the human play? What is he needed for now? And in general, is it necessary in principle to create AI? Perhaps the most acceptable answer to these questions is the concept of the "intelligence amplifier" (IA). According to S.L. Sotnik, an analogy with the president of a state is appropriate here: he is not obliged to know the valence of vanadium or the Java programming language in order to decide on developing the vanadium industry. Everyone does his own job: the chemist describes the technological process, the programmer writes the program, and the economist tells the president that by investing in industrial espionage the country will earn 20%, and in the vanadium industry 30% per annum. With the question put this way, anyone can make the right choice. In this example the president uses a biological intelligence amplifier: a group of specialists with their protein brains. But non-living IAs are already in use as well, for example computers and on-board calculating devices. In addition, man has long used power amplifiers (PAs), a concept largely analogous to the IA: cars, cranes, electric motors, presses, cannons, airplanes and much else serve as power amplifiers. The main difference between an IA and a PA is the presence of will: the former can have its own "desires" and act differently from what is expected of it. Thus the problem of the security of AI systems arises: how can we avoid the negative consequences that accompany every new achievement of scientific and technological progress?
This problem has occupied the minds of mankind since the time of Karel Čapek, who first used the term "robot". Other science fiction writers have also contributed much to its discussion; the best known is the series of stories by the science fiction writer and scientist Isaac Asimov, in which one finds the most elaborated solution to the security problem, accepted by most people: the Three Laws of Robotics. 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Later Asimov added a "Zeroth Law" to this list: "A robot may not harm humanity or, by inaction, allow humanity to come to harm." At first glance such laws, if observed in full, should ensure the safety of mankind. On closer examination, however, some questions arise. First, the laws are formulated in human language, which does not permit their simple translation into algorithmic form. Suppose this problem is solved. What, then, will an AI system understand by "harm"? Will it not decide that the very existence of man is sheer harm? After all, he smokes, drinks, ages and loses his health over the years, suffers. Would it not be the lesser evil to end this chain of suffering quickly? Of course, one can introduce additions concerning the value of life and freedom of expression, but these will no longer be the simple three laws of the original. Next: what will an AI system decide in a situation where saving one life is possible only at the expense of another? Especially interesting are cases where the system has no complete information about who is who. Despite the listed problems, however, these laws remain a rather good informal basis for checking the reliability of security systems for AI systems.
So is there really no reliable security system? Proceeding from the intelligence-amplifier concept, the following option can be proposed. According to numerous experiments, despite the lack of reliable data on what each individual neuron in the human brain is responsible for, many emotions correspond to the excitation of a group of neurons (a neural ensemble) in a quite predictable area. Reverse experiments have also been carried out, in which stimulation of a specific area produced the desired result: emotions of joy, depression, fear, aggressiveness. It therefore seems possible to take the degree of satisfaction of the host human brain as the target function. If measures are taken to exclude self-destructive activity in a state of depression, and other special conditions of the psyche are provided for, the following is obtained. Since it is assumed that a normal person will not harm himself or, without particular reason, others, and the intelligence amplifier is now part of the given individual (not necessarily as a physical unit), all three laws of robotics are fulfilled automatically. Security issues are thereby shifted to the field of psychology and law enforcement, since the (trained) system will not do anything its owner would not want.

Self-check questions
1. What is artificial intelligence?
2. What scientific fields does artificial intelligence interact with?
3. Describe the approaches to understanding the subject of artificial intelligence as a scientific discipline.
4. Describe the current state of AI in Russia.
5. Describe the "pre-computer" stage of the development of artificial intelligence.
6. Describe the development of artificial intelligence in the 1940s.
7. Describe the development of artificial intelligence in the 1950s.
8. Describe the development of artificial intelligence in the 1960s.
9. Describe the development of artificial intelligence in the 1970s.
10. Describe the development of artificial intelligence in the 1980s.
11. Describe the main tasks of artificial intelligence.
12. What sections are distinguished in the field of artificial intelligence?
13. Give the arguments for the possibility of modeling human thinking.
14. What is the basis for the transition to the problem of the influence of intellectual tools on society?
15. What causes the problem of the security of artificial intelligence systems, and how can it be solved?

Literature
1. Luger, G.F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving / George F. Luger; trans. from English. - M.: Williams, 2003. - 864 p.
2. Kostrov, B.V. Fundamentals of Artificial Intelligence / B.V. Kostrov, V.N. Ruchkin, V.A. Fulin. - M.: DESS, Tekhbook, 2007. - 192 p.
3. Site of the Russian Association for Artificial Intelligence. - Access mode: http://www.raai.org/
4. Sotnik, S.L. Fundamentals of Designing Artificial Intelligence Systems: lectures. - Access mode: http://newasp.omskreg.ru/intellect/f25.htm
5. Russell, S. Artificial Intelligence: A Modern Approach / Stuart Russell, Peter Norvig. - M.: Williams, 2006. - 1408 p.

CHAPTER 2. KNOWLEDGE REPRESENTATION MODELS

2.1. Knowledge

What kinds of knowledge are needed to support "intelligent" behavior? The "secret" of the phenomenology of the knowledge model lies in the world around us. In the general case, a knowledge representation model should provide for the description of the various objects and phenomena that make up the subject domain in which an intelligent agent has to work. A subject domain is the part of reality associated with solving a problem. An intelligent agent is a system (a person or a program) with intellectual abilities. Knowledge consists of the identified regularities of the subject domain (principles, connections, laws). Knowledge has a more complex structure than data (it is metadata).
In this case, knowledge is specified both extensionally (i.e., through a set of specific facts corresponding to a given concept and relating to the subject domain) and intensionally (i.e., through the properties corresponding to the concept and a schema of connections between attributes).

Types of knowledge

Objects. A person usually represents knowledge as facts about the objects around him. For this reason there must be ways of representing objects and classes (categories, types) of objects, and of describing the properties and interactions of objects. One way to classify objects is through a class hierarchy. In addition, abstract objects, used to designate groups (sets, classes) of individuals, must be distinguished from individual objects. Examples: "Birds have wings", "Doves are birds", "Snow is white"; "This book is new" refers to an individual object. Situations are all kinds of interactions between objects. Examples: "It rained yesterday", "The train was 10 minutes late". An example of the classification of situations proposed by Paducheva is shown in Fig. 9. In addition, to be able to describe situations in their own right, a representation model should allow describing the location of events on the time axis as well as their causal relationships.

Fig. 9. An example of the classification of situations proposed by Paducheva (situations divide into static - states, constant properties and relationships - and dynamic - processes, stable and temporary; incidents, results, events)

In representing a hierarchy of objects and relations, the main difficulty is the choice of the basis of division, i.e. the property (attribute) by which the division occurs. Usually, even if a person can easily distinguish different types of objects and situations in life, an attempt at verbal classification is a big problem. Procedures. Behavior (for example, riding a bicycle) requires knowledge that goes beyond declarative knowledge of objects and the relations between them.
This is knowledge about how to perform an action; it is called procedural knowledge, or experience (skill). As with cycling, most conscious behavior (e.g., communicating, understanding, or proving theorems) involves procedural knowledge, and it is often difficult to draw a clear line between knowledge of a procedure and knowledge of an object. Example: the term "pedagogy" describes a situation of lacking procedural knowledge in a person who claims to be a specialist. Meta-knowledge is knowledge about knowledge: about the volume and origin of knowledge of a particular object, the reliability of specific information, or the relative importance of particular facts. Meta-knowledge also includes what people know about their own abilities as processors of knowledge: strengths, weaknesses, levels of experience in various fields, and a sense of progress in solving problems.

Classification of knowledge

By depth:
- surface knowledge (a set of empirical associations and cause-and-effect relationships between the concepts of the subject domain);
- deep knowledge (abstractions, images, analogies that reflect an understanding of the structure of the subject domain and the relations of individual concepts).

By way of existence:
- facts (well-known circumstances);
- heuristics (knowledge from expert experience).

By rigidity:
- rigid knowledge (yields unambiguous, clear recommendations for given initial conditions);
- soft knowledge (admits multiple, vague decisions and different variants of recommendations).

By form of representation:
- declarative knowledge (facts in the form of sets of structured data);
- procedural knowledge (algorithms in the form of fact-processing procedures).

By method of acquisition:
- scientific knowledge (obtained in the course of systematic training and/or study);
- everyday knowledge (acquired in the course of life).
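The declarative knowledge of objects discussed above ("Doves are birds", "Birds have wings") can be sketched as a set of triples with property inheritance along the class hierarchy. The relation names are invented for the example:

```python
# Declarative knowledge as (object, relation, value) triples; properties
# are inherited along the "is_a" class hierarchy.
facts = {
    ("dove", "is_a", "bird"),
    ("bird", "has", "wings"),
    ("snow", "is", "white"),
}

def has_property(entity, prop, facts):
    """True if the entity has the property directly or by inheritance."""
    if (entity, "has", prop) in facts:
        return True
    # climb the is-a hierarchy and check each parent class
    return any(
        has_property(parent, prop, facts)
        for (child, rel, parent) in facts
        if child == entity and rel == "is_a"
    )

print(has_property("dove", "wings", facts))  # doves inherit wings from birds
print(has_property("snow", "wings", facts))
```

Here the facts alone are declarative knowledge; the `has_property` procedure is a (trivial) piece of procedural knowledge about how to use them.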
To place knowledge in a knowledge base and use it for solving applied problems, the knowledge must be formally described by mathematical models. As already mentioned, knowledge can be represented using declarative and procedural models. Typical declarative models include network and frame models; typical procedural models are the logical and production models. From the point of view of the approach to representing knowledge in a computer, knowledge representation models can be classified as follows: those based on the heuristic approach ("triple", production, frame and network models) and those based on the theoretical approach (models based on formal logic, and models based on "human logic": modal and many-valued logics).

2.2. Logical Model of Knowledge Representation

Basic Concepts of Logic

Most people take the word "logical" to mean "reasonable": if a person thinks logically, his reasoning is justified and he does not draw hasty conclusions. Logic is the science of the forms and methods of correct thinking. This means that, given the required set of true facts, the conclusion must always be true. Conversely, if an inference is invalid, a false conclusion may be obtained from true facts. The concepts of formal and informal logic must be distinguished. A distinctive feature of informal logic is that it is used in everyday life. A complex logical proof is a chain of inferences in which one conclusion leads to another, and so on. In formal logic, also called symbolic logic, what matters is how the inference is carried out and how other factors are taken into account that guarantee, in an admissible way, the proof of the truth or falsity of the final conclusion. Logic also needs semantics to give meaning to its symbols. Formal logic uses a semantics based not on emotionally loaded words but on the choice of meaningful names for variables, as in programming. Like mathematics, logic directly studies not empirical but abstract objects.
This raises the question: what is the nature, or ontological status, of abstract objects? What kinds of abstract objects are there? Classical logic distinguishes two fundamental types of abstract objects:
- concepts (properties);
- relationships.

Concepts can be simple or complex. Complex concepts are collections of relatively simpler concepts (simple properties) linked by one relationship or another. More complex abstract objects are judgments, whose structural elements are again concepts and certain relationships. Judgments, in turn, are the structural elements of inferences (systems of judgments), and inferences are the structural elements of concepts and theories (systems of inferences). Fig. 10 shows the hierarchy of types of abstract objects in classical logic. The specificity of logic lies in the fact that it studies the most general, universal relations, or interconnections, between abstract objects. Accordingly, the following definition of logic can be given: "Logic is the science of universal (generally valid) relationships between concepts, judgments, inferences, and other abstract objects."

Fig. 10. Hierarchy of types of abstract objects in classical logic (from the bottom up: concepts (properties) and relationships; judgments; inferences (systems of judgments); concepts and theories (systems of inferences))

Example
- "Student" is a concept. "Diligence" is a property.
- "Diligent student", "4th-year student" are relationships.
- "A person studies at a university" is a judgment.
- "If a person studies at an institute, then he is either a student or a graduate student" is an inference.
- "The theory of first-order predicate calculus" is a concept.

Concept

Concepts are abstract objects accessible to human understanding as simple and complex properties (attributes) of empirical objects. The concept is contrasted with such entities as "word", "perception", and "empirical object".
The concept is the universal unit of thinking and the basis of intellectual activity. The most important characteristics of a concept are its content and its scope. All logical characteristics of concepts and logical operations on them derive from the law of the inverse relationship between the content and the scope of a concept. Any concept has a scope (conceptual volume) and an addition (complement) to that scope (Fig. 11, 12). The scope of a concept is the set of all those empirical (individual) objects that possess the given concept (as a property or feature). The addition to the scope is the set of all those empirical objects to which the concept does not apply.

Fig. 11. Concept X, the scope of concept X, and elements of the scope (a1, a2, a3)
Fig. 12. The scope of a concept and its complement (X and not-X)

Example
Concept: factographic data model.
Scope of the concept: relational, network, and hierarchical data models.
Addition to the scope: documentary data models (descriptor and thesaurus models, data models oriented to the document format).

Concepts can be of the following types:
1) by scope:
   a. singular (the scope contains one element: KamAZ);
   b. general (the scope contains more than one element: Moscow Automobile Plant);
2) by the existence of elements:
   a. non-empty (student);
   b. empty (kolobok, a fairy-tale character);
3) by the structure of the elements:
   a. non-collective (North Pole);
   b. collective (debtor);
4) by content:
   a. irrelative (audience);
   b. correlative (parents);
5) by the presence of qualities, properties, relationships:
   a. positive (virtue);
   b. negative (offense);
6) by the registration of elements:
   a. registering (the journal "Open Systems", no. 1/2008);
   b. non-registering (intelligentsia), abstract;
7) by the nature of the object:
   a. concrete (pen);
   b. abstract (model).

On the basis of the listed types, a logical description of any concept can be given, that is, its use in all seven senses can be shown.
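The scope of a concept and its complement can be modelled directly with sets. The sketch below uses the factographic data model example from the text; the universe and the string names are our own illustrative assumptions, not part of the textbook's formal apparatus.

```python
# Sketch: the scope of a concept and the addition (complement) to its scope,
# modelled as Python sets. Names of the models are illustrative assumptions.

universe = {"relational", "network", "hierarchical",
            "descriptor", "thesaurus", "document-format"}

# Scope of the concept "factographic data model"
scope = {"relational", "network", "hierarchical"}

# Addition to the scope: all empirical objects of the universe
# that do not fall under the concept (the documentary data models)
complement = universe - scope

print(sorted(complement))
```

The set difference `universe - scope` is exactly the "addition to the scope" defined above.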
For example, the concept "debtor" is general, non-empty, collective, correlative, positive, non-registering, and concrete.

Basic methods of comprehending concepts

The main methods of comprehending a concept include:
- abstraction;
- comparison;
- generalization;
- analysis;
- synthesis.

Abstraction is the mental separation (understanding) of a certain property or relationship while abstracting from the other properties or relationships of an empirical object. Comparison is the establishment of similarities or differences between objects. Generalization is the mental isolation of a more general concept by comparing other concepts. Abstraction, comparison, and generalization are closely interrelated techniques; they can be called "cognitive procedures". Comparison is impossible without abstraction, while generalization presupposes comparison and is at the same time nothing but a kind of complex abstraction, and so on. Analysis is the mental division of an empirical or abstract object into its constituent structural components (parts, properties, relationships). Synthesis is the mental union of various objects into a single integral object.

Examples
1. Comparing people by height presupposes abstraction to single out the property "height" of the concept "person".
2. Generalization: "chair" and "table" give "furniture".

Relationships between concepts

To explain the relationships between concepts, diagrams in the form of Euler circles can be used (Fig. 13).

Examples
- Equivalence (equal scope): "Kazan" and "the capital".
- Independence (intersection): passenger and student.
- Subordination: tree and birch.
- Opposition (contrariety): white and black.
- Contradiction: white and not-white.
- Co-subordination: officers (major, captain).

The logical division of a concept is the division of the scope of a concept into non-intersecting parts on the basis of some attribute.
Fig. 13. Relationships between concepts X and Y. Incompatible concepts (M(X) ∩ M(Y) = ∅): contradictory (Y = not-X, M(X) ∪ M(Y) = U) and contrary. Compatible concepts (M(X) ∩ M(Y) ≠ ∅): independent (intersecting), identical (equal scope, M(X) = M(Y)), and X subordinate to Y (M(X) ∩ M(Y) = M(X))

In a logical division there are:
- the generic concept X;
- the members of the division (the species concepts A and B);
- the basis of the division (i.e., the attribute).

Three rules of logical division:
1. The rule of incompatibility. The scopes of the species concepts must not overlap (i.e., the members of the division must be incompatible with one another).
2. The rule of a single basis. One cannot divide on several grounds at once.
3. The rule of proportionality. The sum of the scopes of the species concepts must equal the scope of the generic concept.

Dichotomous division (the strictest kind) is the division of a concept according to the principle of contradiction (A, not-A). Classifications are certain systems (ordered aggregates) of species concepts. Classifications are used to find new relationships between concepts, as well as to systematize existing knowledge.

Examples
1. The periodic table is an example of a scientific classification of chemical elements.
2. An example of a classification of information systems (IS) is shown in Fig. 14. The basis of the division is functional purpose; A, B, and C are examples of information systems under this classification.

Fig. 14. An example of classification: IS are divided into factographic systems (A: the IS "University"), artificial intelligence systems (B: Lingvo), and document systems (C: "Consultant Plus")

The techniques for comprehending concepts (abstraction, comparison, generalization, analysis, synthesis, division) are universal and fundamental cognitive procedures that have not yet been successfully modeled within artificial intelligence. This is one of the fundamental areas of classical logic that should be integrated into the theory of knowledge bases.
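Two of the three rules of logical division concern only the scopes (extensions) of the concepts involved and can therefore be checked mechanically with sets. The sketch below is our own illustration using the Fig. 14 example; the rule of a single basis cannot be checked this way, since it concerns the attribute of division, not the scopes.

```python
# Sketch: checking the rules of incompatibility and proportionality
# for a logical division, with scopes represented as Python sets.

def check_division(genus, species):
    """Return (incompatibility holds, proportionality holds)."""
    species = list(species)
    # Rule 1 (incompatibility): species scopes must be pairwise disjoint
    pairwise_disjoint = all(
        a.isdisjoint(b)
        for i, a in enumerate(species)
        for b in species[i + 1:]
    )
    # Rule 3 (proportionality): the union of the species scopes
    # must equal the scope of the generic concept
    proportional = set().union(*species) == genus
    return pairwise_disjoint, proportional

# The Fig. 14 example: IS divided by functional purpose
genus = {"IS University", "Lingvo", "Consultant Plus"}
species = [{"IS University"}, {"Lingvo"}, {"Consultant Plus"}]
print(check_division(genus, species))  # (True, True)
```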
After that, the modeling of such mental acts as forming hypotheses and learning declarative knowledge will become tractable, and inference procedures will become more capacious.

Judgment

A judgment is a structurally complex object that reflects an objective relationship between an object and its property. Judgment is contrasted with such entities as "sentence", "perception", and "scene from the real world".

Example. The following sentences express the same judgment:
- "The shark is a predatory fish";
- "All sharks are predatory fish";
- "Sharks are predatory fish".

Classical logic treats the structure of a simple judgment in a somewhat different interpretation than is accepted in modern logico-linguistic studies. According to the notions of classical logic about the structure of a judgment, a simple judgment is an abstract object whose main structural elements are:
- an individual concept (IC);
- a predicate concept (PC);
- a predication relation (RP).

Example. Consider the sentence "Plato is a philosopher". In this sentence, which expresses the judgment S: "Plato" is the logical subject, i.e., the symbol denoting the individual concept of the judgment S; "philosopher" is the logical predicate, i.e., the symbol denoting the predicate concept of the judgment S; "is" is the subject-predicate copula, i.e., the symbol denoting the predication relation. Thus the following intermediate conclusion can be drawn:
- an individual concept is a system of concepts, considered as a conceptual entity, of some empirical object;
- a predicate concept is a concept considered as a property of a particular empirical object;
- the predication relation is the relation that binds the individual and predicate concepts of some empirical object into an integral abstract object.

In addition, several types of simple judgments can be distinguished (see Fig. 15). There are several ways to formalize elementary judgments.

1st method.
Natural language, which is traditionally considered cumbersome and imprecise; yet no formal method has been invented so far that can compare with natural language in versatility.

Fig. 15. Types of simple judgments: attributive ("Monks, as a rule, are modest"), about relations ("Magnitogorsk is south of Chelyabinsk"), and existential ("There are blue spruces")

2nd method. Traditional Aristotelian logic.
3rd method. Modern symbolic logic.

The main types of complex judgments

In addition to the judgments expressed in Aristotelian logic by statements of the forms A, E, I, O (see the section on Aristotle's logic), there are also various kinds of complex judgments. The more complex a judgment, the more difficult it is to formalize it accurately by means of traditional Aristotelian logic, and in some cases such formalization is simply impossible. It is therefore advisable to analyze the logical structure of complex judgments by means of modern symbolic logic, including propositional logic and predicate logic (see the corresponding sections below). The main types of complex judgments are:
- conjunctive;
- disjunctive;
- implicative;
- modal:
  o alethic (necessary, possible, accidental);
  o epistemic (I know, I believe, I am convinced);
  o deontic (permitted, prohibited);
  o axiological (good, bad);
  o temporal (in the past, earlier, yesterday, tomorrow, in the future);
- questions:
  o whether-questions;
  o what-questions.

There is also continuity between the classes of logic and the methods of artificial intelligence.

Inference

By inference (in traditional logic) is meant a form of thinking by means of which a mental transition (called "inference") is made from one or more judgments (called "premises") to some other judgment (called the "conclusion"). Thus, an inference is a complex abstract object in which one or more judgments are combined into a single whole by means of certain relations. To denote inference, logic uses the term syllogism.
Syllogisms can be formal and informal. The first formal syllogisms were introduced by Aristotle. The syllogistic he developed (the theory of formal syllogisms, i.e., inferences) had a significant impact on the development of ancient and scholastic logic and served as the basis for the creation of the modern logical theory of inference. To consolidate the concepts of logic, complete the exercises on page 78.

Laws of Logic

The most important logical laws include the laws of:
- identity (any object is identical only to itself);
- non-contradiction (statements that contradict each other cannot be true at the same time);
- the excluded middle (of two statements that contradict each other, one is true, the other is false, and there is no third);
- sufficient reason (any true statement has a sufficient reason by virtue of which it is true and not false).

Let us consider each of these laws in more detail.

I. The Law of Identity

The law of identity states that every thought is identical to itself: "A is A" (A → A), where A is any thought. For example: "Table salt NaCl consists of Na and Cl". Violations of this law can lead to the errors listed below. Amphiboly (from the Greek amphibolos, ambiguity, duality) is a logical error based on the ambiguity of linguistic expressions. Another name for this error is "substitution of the thesis".

Example. "They say correctly that the tongue will bring you to Kiev. And I bought a smoked tongue yesterday. Now I can safely go to Kiev." (In Russian, the word for "tongue" also means "language", which creates the ambiguity.)

Equivocation is a logical error based on the use of the same word in different meanings. Equivocation is often used as an artistic rhetorical device. In logic this technique is also called "substitution of the concept".

Example. "The old sea wolf is really a wolf. All wolves live in the forest." The error arises because in the first judgment the word "wolf" is used metaphorically, and in the second premise in its direct meaning.
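The first three laws listed above are propositional tautologies and can be verified mechanically by enumerating the truth values of A. The sketch below is our own illustration; the lambda encodings of the laws use the standard reading of implication as "not A or A".

```python
# Sketch: the laws of identity, non-contradiction and the excluded middle,
# checked as propositional tautologies by enumerating the truth values of A.

def tautology(law):
    """A one-variable formula is a tautology if it is true for A in {true, false}."""
    return all(law(a) for a in (True, False))

identity = lambda a: (not a) or a               # A -> A
non_contradiction = lambda a: not (a and not a) # not (A and not-A)
excluded_middle = lambda a: a or not a          # A or not-A

print(all(tautology(law)
          for law in (identity, non_contradiction, excluded_middle)))  # True
```

The law of sufficient reason is not a formula of propositional logic and has no such mechanical check.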
Logomachy is a dispute about words, in which the participants cannot reach a common point of view because they have not clarified the initial concepts. Thus, the law of identity expresses one of the most important requirements of logical thinking: definiteness.

II. The Law of Non-Contradiction

This law expresses the requirement of consistency of thinking. The law of non-contradiction says: two judgments, one of which asserts something about the subject of thought ("A is B") and the other denies the same thing about the same subject ("A is not B"), cannot be simultaneously true if the attribute B is affirmed or denied of the subject of thought A considered at the same time and in the same respect. For example, the judgments "The Kama is a tributary of the Volga" and "The Kama is not a tributary of the Volga" cannot be simultaneously true if they refer to the same river. There is no contradiction if we affirm something and deny the same thing about the same person considered at different times. Thus the judgments "This person is a student of Moscow State University" and "This person is not a student of Moscow State University" can be simultaneously true if the first refers to one time (when this person is studying at the university) and the second to another (after he has graduated). The law of non-contradiction indicates that of two opposing propositions one is necessarily false. But since it applies to both contrary and contradictory judgments, the question of the second judgment remains open: the law alone does not determine whether it is true or false. Paper cannot be both white and non-white.

III. The Law of the Excluded Middle

The law of the excluded middle states that two contradictory judgments cannot be simultaneously false: one of them is necessarily true, the other necessarily false, and any third judgment is excluded; i.e., either A is true or not-A is true.
The law of the excluded middle formulates an important requirement of thinking: one must not evade recognizing one of two contradictory statements as true and look for something third between them. If one of them is recognized as true, the other must be recognized as false, and no third is to be sought. Example: animals can be either vertebrates or invertebrates; there is no third possibility.

IV. The Law of Sufficient Reason

The content of this law can be expressed as follows: in order to be considered completely reliable, any proposition must be proven, i.e., sufficient grounds must be known by virtue of which it is considered true. A sufficient ground may be another thought, already proven in practice and recognized as true, from which the truth of the proposition being proved necessarily follows. Example: the ground for the judgment "The room has become warmer" is the fact that the mercury in the thermometer has expanded. In science, the following are considered sufficient grounds: a) propositions about certified facts of reality; b) scientific definitions; c) previously proven scientific propositions; d) axioms; and e) personal experience.

Logical inference

Logical inference is the derivation of a certain formula from a set of other logical formulas by applying rules of inference. An interpreter of logical expressions uses logical inference to build the necessary chain of computations by itself, on the basis of the initial description. The value of the logical approach lies in the possibility of constructing an interpreter whose operation does not depend on the particular logical formulas. Rules in the logical representation have the form P0 ← P1, …, Pn. P0 is called the goal, and P1, P2, …, Pn the body of the rule. The predicates P1, P2, …, Pn are the conditions that must be satisfied for the goal P0 to be achieved successfully. Let us analyze the basics of logical inference using the example of the procedure for determining the correctness of reasoning.
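Rules of the form P0 ← P1, …, Pn can be interpreted by forward chaining: repeatedly fire every rule whose body is already known until nothing new can be derived. The sketch below is our own minimal illustration; the rule set ("greek", "human", "mortal") is an invented example, not from the text.

```python
# A minimal forward-chaining interpreter for rules of the form
# P0 <- P1, ..., Pn. Rules are (goal, [body...]) pairs.

def forward_chain(facts, rules):
    """Return the closure of the initial facts under the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for goal, body in rules:
            # Fire the rule if every condition of its body is established
            if goal not in facts and all(p in facts for p in body):
                facts.add(goal)
                changed = True
    return facts

rules = [("mortal", ["human"]),   # mortal <- human
         ("human", ["greek"])]    # human  <- greek
print(sorted(forward_chain({"greek"}, rules)))  # ['greek', 'human', 'mortal']
```

The interpreter itself does not depend on the particular rules, which is exactly the value of the logical approach noted above.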
Definition of logically correct reasoning

When we say that a sentence D logically follows from a sentence P, we mean the following: whenever the sentence P is true, the sentence D is also true. In propositional logic we deal with formulas P and D depending on some variables X1, X2, …, Xn.

Definition. We say that the formula D(X1, X2, …, Xn) logically follows from the formula P(X1, X2, …, Xn), and write P ├ D, if for any set of values X1, X2, …, Xn such that P(X1, X2, …, Xn) = И (true), we also have D(X1, X2, …, Xn) = И. The formula P is called the premise and D the conclusion of the logical reasoning. Usually logical reasoning uses not one premise P but several; in this case the reasoning is logically correct if the conclusion logically follows from the conjunction of the premises.

Checking the correctness of logical reasoning

The first method follows the definition directly:
a) write down all premises and the conclusion as formulas of propositional logic;
b) form the conjunction of the formalized premises P1 & P2 & … & Pn;
c) check by the truth table whether the conclusion D follows from the formula P1 & P2 & … & Pn.

The second method is based on the following criterion of logical consequence: "The formula D logically follows from the formula P if and only if the formula P → D is a tautology." Checking the correctness of logical reasoning then reduces to the question of whether a formula is a tautology. This question can be answered by constructing a truth table for the formula or by reducing the formula, using equivalent transformations, to a well-known tautology.

The third method of checking the correctness of logical reasoning will be called abbreviated, since it does not require a full enumeration of the values of the variables to build a truth table. To substantiate this method, let us formulate the condition under which logical reasoning is incorrect.
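The first method can be sketched in a few lines: enumerate all truth assignments and check that every assignment making all premises true also makes the conclusion true. The helper function and the modus ponens example below are our own illustration, not part of the text.

```python
# Sketch of the first verification method: D follows from premises
# P1..Pn iff on every assignment where all premises are true, D is true.

from itertools import product

def follows(premises, conclusion, n_vars):
    """Brute-force check of logical consequence over n_vars variables."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # counterexample found
    return True

# Modus ponens over variables (A, B): from A and A -> B, infer B
premises = [lambda a, b: a,                # A
            lambda a, b: (not a) or b]     # A -> B
conclusion = lambda a, b: b                # B
print(follows(premises, conclusion, 2))    # True
```

The same function returns False for an invalid scheme such as affirming the consequent (from A → B and B, inferring A).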
The reasoning is incorrect if there is a set of values of the variables X01, X02, …, X0n such that the premise P(X01, X02, …, X0n) = И (true) while the conclusion D(X01, X02, …, X0n) = Л (false).

Example. The following reasoning is given: "If it is raining, then the cat is in the room or in the basement. The mouse is in the room or in the burrow. If the cat is in the basement, then the mouse is in the room. If the cat is in the room, then the mouse is in the burrow and the cheese is in the refrigerator. It is now raining and the cheese is on the table. Where is the cat and where is the mouse?"

Let us introduce the following designations: D - "it is raining"; K - "the cat is in the room"; P - "the cat is in the basement"; M - "the mouse is in the room"; N - "the mouse is in the burrow"; X - "the cheese is in the refrigerator"; ¬X - "the cheese is on the table". We obtain the following scheme of reasoning:

D → K ∨ P
M ∨ N
P → M
K → N & X
D & ¬X
--- ?

Let us use the rules of inference:
1) D & ¬X ├ D;
2) D & ¬X ├ ¬X;
3) D → K ∨ P, D ├ K ∨ P.

Next, consider two options.

Option A. Suppose K. Then:
4a) K, K → N & X ├ N & X;
5a) N & X ├ X;
6a) ¬X, X ├ X & ¬X. We obtain a contradiction, which means the assumption was wrong and this option is impossible.

Option B. Then P, and:
4b) P, P → M ├ M;
5b) P, M ├ P & M.

The conclusion is P & M, i.e., "the cat is in the basement, and the mouse is in the room".

Example. Check the correctness of the following reasoning by the abbreviated method: "If there is frost today, I will go to the skating rink. If there is a thaw today, I will go to the disco. Today there will be frost or a thaw. Therefore, I will go to the disco." Designations: M - "there will be frost today"; K - "I will go to the skating rink"; O - "there will be a thaw today"; D - "I will go to the disco". The scheme of reasoning has the form:

M → K
O → D
M ∨ O
--- D

The reasoning is logically correct if, for every set of values of the variables (M, K, O, D) on which all the premises are true, the conclusion is also true.
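The cat-and-mouse derivation can be cross-checked by brute force: enumerate all truth assignments to D, K, P, M, N, X and keep those satisfying every premise. The sketch below is our own illustration; the variable names follow the designations introduced above.

```python
# Brute-force check of the cat-and-mouse puzzle: every model of the
# premises must have the cat in the basement and the mouse in the room.

from itertools import product

def implies(a, b):
    return (not a) or b

solutions = set()
for d, k, p, m, n, x in product([True, False], repeat=6):
    premises = [
        implies(d, k or p),   # D -> K v P: rain -> cat in room or basement
        m or n,               # M v N: mouse in room or burrow
        implies(p, m),        # P -> M: cat in basement -> mouse in room
        implies(k, n and x),  # K -> N & X: cat in room -> mouse in burrow,
                              #             cheese in fridge
        d and not x,          # D & not-X: it rains, cheese on the table
    ]
    if all(premises):
        solutions.add((k, p, m))

# The only possibility for (K, P, M): cat not in room, cat in basement,
# mouse in room
print(solutions)  # {(False, True, True)}
```

This agrees with the conclusion P & M derived above; only the position of the mouse's burrow (N) is left undetermined by the premises.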
Suppose the opposite: there is a set of values (M0, K0, O0, D0) such that the premises are true but the conclusion is false. Using the definitions of the logical operations, let us try to find this set. We find that the assumption holds for the values M0 = И, K0 = И, O0 = Л, D0 = Л (Table 1). Therefore, the reasoning is not logically correct.

Table 1. Scheme for solving the logical problem

No. | True    | False | Notes
1   | M0 → K0 |       | we assume the premises are true
2   | O0 → D0 |       | and the conclusion false
3   | M0 ∨ O0 |       |
4   |         | D0    |
5   |         | O0    | from 2, 4 and the definition of implication
6   | M0      |       | from 3, 5 and the definition of disjunction
7   | K0      |       | from 1, 6 and the definition of implication

Another way to solve the problem is to build a truth table for the formula (M → K) & (O → D) & (M ∨ O) → D and verify that it is not a tautology; then, by the criterion of logical consequence, the reasoning is not logically correct. Since four propositional variables (M, K, O, D) are involved, the truth table contains 16 rows, and this method is laborious. With the help of the rules of inference one can construct a logically correct reasoning, but one cannot always prove the incorrectness of a reasoning. For this problem, therefore, the abbreviated method of checking correctness is the most convenient. To consolidate the rules of inference, complete the exercises on page 78.

The main sections of modern symbolic logic

Three main stages are distinguished in the development of classical logic: ancient logic (roughly 500 BC to the beginning of the common era), scholastic logic (the beginning of the common era to the first half of the 19th century), and modern symbolic logic (mid-19th to 20th centuries). Modern symbolic logic is divided into the main sections described below.

Propositional logic (propositional calculus).
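The abbreviated method amounts to searching for a counterexample: an assignment making all premises true and the conclusion false. A short sketch (our own illustration) confirms that for the frost/thaw argument the only counterexample is exactly the set found in Table 1.

```python
# Counterexample search for the frost/thaw argument:
# premises M -> K, O -> D, M v O; conclusion D.

from itertools import product

def implies(a, b):
    return (not a) or b

counterexamples = [
    (m, k, o, d)
    for m, k, o, d in product([True, False], repeat=4)
    if implies(m, k) and implies(o, d) and (m or o) and not d
]
# Corresponds to M0 = true, K0 = true, O0 = false, D0 = false
print(counterexamples)  # [(True, True, False, False)]
```

Since a counterexample exists, the reasoning is not logically correct, in agreement with the truth-table argument above.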
It studies simple judgments, considered without regard to their internal structure, as well as the elementary inferences that are most accessible to human understanding. In natural language, such simple judgments are represented by sentences considered only from the point of view of their truth or falsity, and inferences by the corresponding systems of statements.

Predicate logic (predicate calculus). Its objects of study are more complex: judgments considered with regard to their internal structure. The section of logic that studies not only the connections between judgments but also the internal conceptual structure of judgments is called "predicate logic".

Metalogic. Metalogic is an extension of predicate logic. The subject of its study is the entire sphere of relations as a whole: all those universal relations that can hold between concepts, judgments, and inferences, as well as the symbols denoting them.

The following sections present the key positions of propositional logic and first-order predicate logic. To better understand modern logics, it is necessary to consider the basic provisions determined by the syllogisms of Aristotle.

Aristotle's logic

In Aristotle's logic, the structure of elementary judgments is expressed by the constructions:
- S is P (1);
- S is not P (2),

where S is a logical subject (from the Latin subjectum) and P is a logical predicate (from the Latin praedicatum). The words "is" and "is not" play the role of the subject-predicate copula. From statements (1) and (2), with the help of the words "all" and "some", statements of the following forms are constructed:
- all S are P: type A (Affirmo);
- some S are P: type I (affIrmo);
- no S is P: type E (nEgo);
- some S are not P: type O (negO).

The types of judgments of Aristotle's logic are listed below.

1. Universal affirmative judgments (A): "All S are P". Example: All poets are impressionable people.

2.
Universal negative judgments (E): "No S is P". Example: No person is omniscient.

3. Particular affirmative judgments (I): "Some S are P". Example: Some people have curly hair.

4. Particular negative judgments (O): "Some S are not P". Example: Some people do not know how to listen.

Statements of the types A, E, I, O are simple categorical statements that form the foundation of all Aristotelian logic. Between the truth and falsity of statements of the types A, E, I, O there is a functional relationship, which is usually depicted in the form of the logical square (Fig. 16, Table 2).

Table 2. Truth table for the judgments of Aristotle's logic
Fig. 16. The logical square

When using the logical square, it is important to take the following subtlety into account: the word "some" is understood in the broad sense, as "some, and maybe all".

Explanations of Aristotle's logical square. The upper left corner of the logical square contains statements of type A (universal affirmative); the upper right corner, statements of type E (universal negative); the lower left corner (under A), statements of type I (particular affirmative); the lower right corner (under E), statements of type O (particular negative). Statements of types A and O, as well as statements of types E and I, stand in the relation of contradiction (the diagonal relations). Statements of types A and E stand in the relation of contrariety, or opposition. Statements of type I are subordinate to (and thus implied by) statements of type A; statements of type O are subordinate to statements of type E. Whereas contradictory statements have opposite truth values (one is true, the other false), contrary statements cannot be simultaneously true but can be simultaneously false. With the help of the logical square one can derive the contrary, contradictory, and subordinate judgments of a given judgment and establish their truth or falsity.

Example
1. "Every judgment is expressed in a sentence": A → true.
2. "No judgment is expressed in a sentence": E → false.
3.
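The four forms A, E, I, O can be evaluated over any finite domain, which makes the relations of the logical square checkable by computation. The sketch below is our own illustration; the domain and predicates (sharks and predators, echoing the judgment example earlier) are invented, and the diagonal relations presuppose a non-empty subject term.

```python
# Sketch: evaluating the Aristotelian forms A, E, I, O over a small
# finite domain and checking the diagonal (contradiction) relations.

def square(domain, S, P):
    """Return the truth values of (A, E, I, O) for subject S and predicate P."""
    subjects = [x for x in domain if S(x)]
    a = all(P(x) for x in subjects)        # All S are P
    e = all(not P(x) for x in subjects)    # No S is P
    i = any(P(x) for x in subjects)        # Some S are P
    o = any(not P(x) for x in subjects)    # Some S are not P
    return a, e, i, o

domain = ["shark1", "shark2", "goldfish"]
is_shark = lambda x: x.startswith("shark")
is_predator = lambda x: x != "goldfish"

a, e, i, o = square(domain, is_shark, is_predator)
print((a, e, i, o))        # (True, False, True, False)
# Contradiction: A and O have opposite truth values, as do E and I
print(a != o and e != i)   # True
```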
"Some judgments are not expressed in a sentence": O → false. 4. "Some judgments are expressed in a sentence": I → true.

In addition, Aristotle's logical square can be used to establish the types of relations between judgments when: 1) obtaining inferential knowledge; 2) comparing different points of view on debatable issues; 3) editing texts; and in other cases.

Formalisms of the propositional calculus

Many knowledge representation models are based on the formalisms of the propositional and predicate calculi. A rigorous presentation of these theories from the standpoint of classical mathematical logic is contained in the works of Shoenfield and Thayse; a popular presentation, which can be recommended as a first introduction, can be found in Pospelov. According to Thayse, logical propositions are a class of natural-language sentences that can be true or false, and the propositional calculus is the branch of logic that studies such sentences. A natural question arises: what about sentences of the language about whose truth nothing definite can be said? Example: "If it rains tomorrow, I will stay at home." For now, let us simply assume that all sentences we have to deal with belong to the class of logical statements. Statements will be denoted by capital letters of the Latin alphabet, with an index if the presentation requires it. Sample notations of statements: S, S1, S2, H, H1, H2. As noted, a logical statement is either true or false. A true statement is assigned the logical value TRUE (И), a false one the logical value FALSE (Л). Thus the truth values form the set {И, Л}. In the propositional calculus, five logical connectives are introduced (Table 3), with the help of which, in accordance with the rules of construction, logical formulas are composed.
Table 3. Logical connectives

Name        | Common notation | Type   | Other notations
Negation    | ¬               | Unary  | -, ~, NOT, НЕ
Conjunction | ∧               | Binary | &, ·, AND, И*
Disjunction | ∨               | Binary | OR, ИЛИ
Implication | →               | Binary | =>, ->
Equivalence | ↔               | Binary | <=>, <->, ~

* Note: not to be confused with the truth value И.

The set of rules for constructing logical formulas from statements includes three components:
- basis: every statement is a formula;
- induction step: if X and Y are formulas, then ¬X, (X ∧ Y), (X ∨ Y), (X → Y), and (X ↔ Y) are formulas;
- restriction: a formula can be obtained only by the rules described in the basis and the induction step.

Formulas are denoted by capital letters of the Latin alphabet with indices. Sample logical formulas are shown in the example.

Examples
a) T = S1 ∧ S2;
b) N = ¬H1 ∨ H2.

Expression a) can be read as follows: "The logical formula T is the conjunction (logical connective AND) of the logical statements S1 and S2." The interpretation of expression b) is: "The logical formula N is the disjunction (logical connective OR) of the negation (NOT) of the logical statement H1 and the logical statement H2."

The truth value of a logical formula is a function of the truth values of its constituent statements and can be determined unambiguously using truth tables. Below are the truth tables for negation and for the binary connectives (Tables 4, 5).

Table 4. Truth table for negation

X | ¬X
И | Л
Л | И

Table 5. Truth table for the binary connectives

X | Y | X ∧ Y | X ∨ Y | X → Y | X ↔ Y
И | И |   И   |   И   |   И   |   И
И | Л |   Л   |   И   |   Л   |   Л
Л | И |   Л   |   И   |   И   |   Л
Л | Л |   Л   |   Л   |   И   |   И

Thus, if the truth values of the statements from example a) are known, say S1 = И and S2 = Л, then the truth value of the formula T is found at the intersection of the second row and the X ∧ Y column of Table 5: T = Л.
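Tables 4 and 5 can be regenerated mechanically by enumerating the two truth values. The sketch below is our own illustration; ASCII names stand in for the connective symbols.

```python
# Sketch: generating the truth table for the four binary connectives
# (Table 5) by enumeration. T/F stand for the values И/Л of the text.

from itertools import product

ops = {
    "X & Y":   lambda x, y: x and y,        # conjunction
    "X v Y":   lambda x, y: x or y,         # disjunction
    "X -> Y":  lambda x, y: (not x) or y,   # implication
    "X <-> Y": lambda x, y: x == y,         # equivalence
}

print("X Y  " + "  ".join(ops))
for x, y in product([True, False], repeat=2):
    cells = "      ".join(str(ops[name](x, y))[0] for name in ops)
    print(f"{str(x)[0]} {str(y)[0]}  {cells}")

# Example a) of the text: T = S1 & S2 with S1 true, S2 false gives T false
print(ops["X & Y"](True, False))  # False
```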
Any logic is a formal system for which the following must be defined:
- the alphabet of the system: a countable set of symbols;
- the formulas of the system: a subset of all the words that can be formed from the symbols of the alphabet (usually a procedure is given for composing formulas from the symbols of the alphabet);
- the axioms of the system: a distinguished set of formulas of the system;
- the rules of inference of the system: a finite set of relations between formulas of the system.

The vocabulary of the predicate calculus in the standard presentation includes the following concepts:
- variables (denoted by the last letters of the English alphabet: u, v, x, y, z);
- constants (denoted by the first letters of the English alphabet: a, b, c, d):
  o individual constants;
  o functional constants;
  o predicate constants;
- statements;
- logical connectives (¬ (negation), conjunction, disjunction, implication);
- quantifiers: ∃ (existence), ∀ (universality);
- terms;
- functional forms;
- predicate forms;
- atoms;
- formulas.

Individual constants and individual variables

These are similar to the constants and variables of mathematical analysis, with the sole difference that their range consists of individuals rather than real numbers. In the theory of artificial intelligence, named constants and variables in an agent's memory corresponding to objects and concepts in the real world are usually called concepts. First-order languages contain only individual variables, so these are simply called variables. As will be shown below, the use of first-order languages and the rejection of higher-order languages impose additional restrictions on the class of natural-language sentences under consideration. Individual constants will be denoted by the lowercase Latin letters a, b, c, u, v, w with indices, or by mnemonic names taken from the text. Variables will be denoted by the lowercase Latin letters x, y, z with indices. Example.
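The inductive definition of formulas (atoms are formulas; a connective applied to formulas yields a formula) lends itself to a recursive representation. The sketch below is our own illustration: formulas are nested tuples, and a tiny evaluator interprets them under a truth assignment. The tuple encoding is an assumption of ours, not a notation from the text.

```python
# Sketch: formulas as nested tuples built by the inductive rules, with a
# recursive evaluator. Supported tags: atom, not, and, or, implies.

def evaluate(formula, env):
    """Evaluate a formula under the assignment env: name -> bool."""
    tag = formula[0]
    if tag == "atom":
        return env[formula[1]]           # basis: a statement is a formula
    if tag == "not":
        return not evaluate(formula[1], env)
    # induction step: binary connectives applied to two subformulas
    f, g = (evaluate(sub, env) for sub in formula[1:3])
    if tag == "and":
        return f and g
    if tag == "or":
        return f or g
    if tag == "implies":
        return (not f) or g
    raise ValueError(f"unknown connective: {tag}")

# (A and (not B)) under A = true, B = false
phi = ("and", ("atom", "A"), ("not", ("atom", "B")))
print(evaluate(phi, {"A": True, "B": False}))  # True
```

The restriction rule of the definition corresponds to the fact that the evaluator rejects any tag outside the basis and the induction step.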
Individual constants: a1, b1, c, u, v1, seller_w, k22, buy_l, m10, book_a1. Variables: x, y2, z33.

Predicate constants. Predicate constants are used to denote the relation that a predicate describes. A predicate constant does not change its truth value. It is combined with a suitable number of arguments, or parameters, called terms, forming a predicate form. Predicate constants are denoted by mnemonic names or by the Latin letter P with indices. The language of predicates contains the language of statements, since a statement is nothing more than a predicate constant without arguments, that is, a null-place predicate form. The semantic domain of a predicate form coincides with the range of a statement, i.e. {true, false}.

Functional constants. A functional constant (f, g, h), like a predicate constant, forms a functional form when combined with a suitable number of terms. The difference between a functional form and a predicate form is that the semantic domain of the former is the set of individual constants. A null-place functional constant is simply an individual constant.

Logical connectives in predicate calculus serve to form formulas.

Quantifiers. Predicate calculus uses two quantifiers: the universal quantifier (∀) and the existential quantifier (∃). The expression ∀xP reads "for every x, P is true". The expression ∃xP reads "there exists an x for which P is true".

A term is an expression formed from variables and constants, possibly using functions. Terms, forms, atoms and formulas in predicate calculus are constructed using the following rules:
- any variable or constant is a term;
- if t1, ..., tn are terms and f is an n-ary functional symbol, then f(t1, ..., tn) is a term;
- there are no other terms.

In fact, all objects in first-order predicate logic are represented precisely as terms. If a term contains no variables, it is called a ground, or constant, term.

A functional form is a functional constant paired with a suitable number of terms. If f is an n-ary functional constant and t1, ..., tn are terms, then the corresponding form is usually written f(t1, ..., tn); if n = 0, simply f is written. A predicate form is a predicate constant combined with a suitable number of terms. If p is an m-ary predicate constant and t1, ..., tm are terms, then the corresponding form is written p(t1, ..., tm).

An atom is a predicate form or an equality, i.e. an expression of the form (s = t), where s and t are terms. An atomic, or elementary, formula is obtained by applying a predicate to terms; more precisely, it is an expression p(t1, ..., tn), where p is an n-ary predicate symbol and t1, ..., tn are terms.

The concept of a formula is defined recursively (inductively) by the following rules:
- an atom is a formula;
- if A is a formula, then ¬A is a formula;
- if A and B are formulas, then (A ∧ B), (A ∨ B), (A → B) and (A ↔ B) are formulas;
- if A is a formula and x is a variable, then ∀xA and ∃xA are formulas.

Let us present the alphabet of predicate logic in terms of concepts.

Constants. They serve as names of individuals (as opposed to names of collections): objects, people, or events. Constants are represented by symbols such as Jacques_2 (the suffix 2 attached to the word Jacques singles out one specific person among people with that name), Book_22, Sending_8.

Variables. They denote names of collections, such as person, book, package, event. The symbol Book_22 represents a well-defined instance, while the symbol book denotes either the set of "all books" or "the concept of a book". The symbols x, y, z represent names of collections (specific sets or concepts).

Predicate names (predicate constants). They define the rules for joining constants and variables, for example grammar rules, procedures, mathematical operations.
Symbols such as the following are used for predicate names: Send, Write, Plus, Divide. Functional names (functional constants) represent the same kinds of rules as predicates. To avoid confusion with predicate names, functional names are written in lowercase letters: send, write, plus, divide.

The symbols used to represent constants, variables, predicates and functions are not "Russian words". They are symbols of some representation, words of an "object language" (in our case, the language of predicates). The representation must exclude any ambiguity of language. Therefore, the names of individuals carry numbers appended to the names of collections: Jacques_1 and Jacques_2 represent two people with the same name. Such representations are concretizations of the name of the collection "Jacques".

A predicate is a predicate name together with a suitable number of terms. A predicate is also called a predicate form. Example. In Russian: "Jacques sends the book to Marie"; in the language of logic: Send(Jacques_2, Marie_4, Book_22).

Fuzzy logic. The appearance of fuzzy logic, the theory of fuzzy sets and other "fuzzy" theories is associated with the work of the American scientist Lotfi Zadeh. Zadeh's main idea was that the human mode of reasoning, based on natural language, cannot be described within traditional mathematical formalisms. Those formalisms are characterized by strict, unambiguous interpretation, while everything associated with the use of natural language admits multiple interpretations. Zadeh's goal was to build a new mathematical discipline based not on classical set theory but on the theory of fuzzy sets. By consistently pursuing the idea of fuzziness, according to Zadeh, one can build fuzzy analogues of all basic mathematical concepts and create the formal apparatus needed to model human reasoning and the human way of solving problems (Fig. 17).
Fig. 17. The logic behind the emergence of the theory of fuzzy sets: the problem (in everyday life a person thinks and makes decisions on the basis of fuzzy concepts) leads to the task (formalizing the human way of reasoning), whose solution, the mathematical theory of fuzzy sets, forms the basis of the mechanism.

Currently, the theory of fuzzy sets and fuzzy logic holds a strong place among the leading areas of artificial intelligence. The concept of "fuzziness", applied first to sets and then to logic, has been successfully extended to other areas of mathematics and computer science, and there now exist:
- the theory of fuzzy relations;
- the theory of fuzzy sets;
- the theory of fuzzy measures and integrals;
- the theory of fuzzy numbers and equations;
- the theory of fuzzy logic and approximate reasoning;
- the theory of fuzzy languages;
- the theory of fuzzy algorithms;
- the theory of fuzzy models of optimization and decision making.

The following packages are most popular with Russian customers:
1) CubiCalc 2.0 RTC: one of the most powerful commercial expert systems based on fuzzy logic, which allows you to create your own applied expert systems;
2) CubiQuick: the academic version of the CubiCalc package;
3) RuleMaker: a program for automatic extraction of fuzzy rules from input data;
4) FuziCalc: a spreadsheet with fuzzy fields, which allows quick estimates with imprecisely known data without accumulating errors;
5) OWL: a package containing the source code of all known types of neural networks, fuzzy associative memory, etc.

The main "consumers" of fuzzy logic on the Russian market are bankers, financiers and specialists in the field of political and economic analysis.

Most human tasks do not require high precision. It is often necessary to find a reasonable compromise between "accuracy" and "importance" when interacting with the real world.
For example, to decide whether to cross the street, a person does not estimate the speed of an approaching car to within tenths of a meter per second. He characterizes the car's speed as "very fast", "fast", "slow", etc., i.e. he uses linguistic variables to describe speed.

The theory of fuzzy sets offers the following ways of formalizing fuzzy concepts.

The first way (based on the works of Zadeh) rejects the main premise of classical set theory that an element must either belong or not belong to a set. A special characteristic function, the so-called membership function, is introduced, which takes values from the interval [0, 1]. This approach leads to a continuum-valued logic.

In the second, more general way of formalizing fuzziness, the characteristic functions of a set take values not in the interval [0, 1] but in a finite or infinite distributive lattice. This generalization is known as fuzzy sets in the sense of Goguen.

The third way is P-fuzzy sets. In this generalization, each element of the universal set is associated not with a point of the interval [0, 1] but with a subset or part of this interval. The algebra of P-fuzzy sets can be reduced to an algebra of classes.

The fourth way is heterogeneous fuzzy sets. Here, in the general case, elements of the universal set are assigned values in different distributive lattices, so that each element can be associated with the most appropriate scale of grades. Moreover, the grade values themselves can be fuzzy and given in the form of functions.

This gives a general idea of fuzzy logic. Now let us consider everything in more detail, starting with the conceptual apparatus based on the notion of a "linguistic variable".

Definition of a linguistic variable (intuitive)^4. If a variable can take as its values words of a natural language (for example, "small", "fast", etc.), then this variable is called a linguistic variable.
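The street-crossing example can be made concrete. The following sketch gives membership functions for two linguistic values of the variable "speed"; the term names, the 0-120 km/h universe and the linear ramps are illustrative assumptions, not values from the text:

```python
# Illustrative membership functions for two linguistic values of the
# variable "speed" (the ramp breakpoints 60 and 120 km/h are invented).

def mu_slow(v):
    # full membership at 0 km/h, falling linearly to 0 at 60 km/h
    return max(0.0, min(1.0, (60 - v) / 60))

def mu_fast(v):
    # no membership below 60 km/h, rising linearly to 1 at 120 km/h
    return max(0.0, min(1.0, (v - 60) / 60))

print(mu_slow(30))   # 0.5
print(mu_fast(90))   # 0.5
print(mu_fast(150))  # 1.0
```

A speed of 90 km/h is thus "fast" only to degree 0.5: exactly the kind of graded judgment the crisp predicates of classical logic cannot express.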
The words that serve as values of a linguistic variable usually denote fuzzy sets.

^4 Intelligent information systems: Methodological instructions for laboratory practice in the course "Intelligent information systems" for students of specialty 071900 - Information systems in economics / Ufa State Aviation Technical University; compiled by G.G. Kulikov, T.V. Breikin, L.Z. Kamalova. - Ufa, 1999. - 40 p.

A linguistic variable can take either words or numbers as its values.

Definition of a linguistic variable (formal). A linguistic variable is a quintuple (x, T(x), X, G, M), where x is the name of the variable; T(x) is the set of names of linguistic values of the variable x, each of which is a fuzzy set on the set X; G is a syntactic rule for forming names of values of x; M is a semantic rule that associates each value name with its concept.

The purpose of the concept of a linguistic variable is to state formally that a variable can take words of a natural language as its values. In other words, each linguistic variable consists of:
- a name;
- the set of its values, also called the base term set T; the elements of the base term set are names of fuzzy variables;
- a universal set X;
- a syntactic rule G, according to which new terms are generated using words of a natural or formal language;
- a semantic rule M, which assigns to each value of the linguistic variable a fuzzy subset of the set X.

For example, if we say "fast speed", then the variable "speed" should be understood as a linguistic variable, but this does not mean that the variable "speed" cannot take real values.

A fuzzy variable is described by a triple (N, X, A), where N is the name of the variable, X is the universal set (the domain of reasoning), and A is a fuzzy set on X. The values of a linguistic variable can be fuzzy variables, i.e. a linguistic variable is one level higher than a fuzzy variable.

The main approach to the formalization of fuzziness is as follows.
A fuzzy set is formed by introducing a generalized concept of membership, i.e. by extending the two-element set of values of the characteristic function {0, 1} to the continuum [0, 1]. This means that the transition from full membership of an object in a class to full non-membership occurs not abruptly but gradually, and the membership of an element in a set is expressed by a number from the interval [0, 1].

A fuzzy set A is defined mathematically as a set of ordered pairs composed of elements x of the universal set X and the corresponding membership degrees μA(x), or (since the membership function is an exhaustive characteristic of a fuzzy set) directly as the function μA: X → [0, 1]. The universal set X is called the domain of definition of the membership function μA.

Fig. 18 shows the main varieties of membership functions. By the shape of the membership function one distinguishes:
- submodal (Fig. 18, c);
- amodal (Fig. 18, a);
- multimodal (Fig. 18, m);
- unimodal (Fig. 18, u).

Example. The same fuzzy set can be written in several equivalent notations:
1) A = {(x1, 0.2), (x2, 0.6), (x3, 1), (x4, 0.8)};
2) A = 0.2/x1 + 0.6/x2 + 1/x3 + 0.8/x4;
3) as a table:

Table 6. A membership function described by a table

    x       x1    x2    x3    x4
    μA(x)   0.2   0.6   1     0.8

Example: "A set of tall people". In real life, a concept such as "the height of a tall person" is subjective. Some believe that a tall person must be taller than 170 cm, others taller than 180 cm, still others taller than 190 cm. Fuzzy sets make it possible to take such blurred assessments into account.

Let x be a linguistic variable denoting "a person's height", and let its membership function in the set of tall people, μA: X → [0, 1], where X is the set of all possible values of a person's height, be given crisply. Then the set of "tall people" is given by the expression A = {x | μA(x) = 1}, x ∈ X. This is shown graphically in Fig. 19 (solid line); the boundary of the set depends on the individual making the assessment. Now let the membership function μA: X → [0, 1] have the form shown in the figure by the dotted line.

Fig. 19. A fuzzy set of tall people

Thus, a person 145 cm tall belongs to the set A with membership degree μA(145) = 0, a person 165 cm tall with μA(165) = 0.3, a person 185 cm tall with μA(185) = 0.9, and a person 205 cm tall with μA(205) = 1.

Example: "Are you cold now?" A temperature of +60 °F (about +16 °C) is perceived by humans as cold, and a temperature of +80 °F (about +27 °C) as heat. A temperature of +65 °F (about +18 °C) seems low to some and quite comfortable to others. We call this group of definitions the membership functions of the sets describing the subjective perception of temperature by a person. Machines are not capable of such fine gradation. If the standard for determining cold is "temperature below +15 °C", then +14.99 °C will be regarded as cold, but +15 °C will not. Fig. 20 presents a graph that helps to understand how a person perceives temperature. It is just as easy to create additional sets describing human perception of temperature; for example, one can add sets such as "very cold" and "very hot". Similar functions can be described for other concepts, such as open and closed states, or the temperature of a chiller or a chiller tower.

Fig. 20.
The fuzzy set "Temperature"

Thus, the following conclusions can be drawn about the essence of the concept of a "fuzzy set":
1) fuzzy sets describe indefinite concepts (fast runner, hot water, hot weather);
2) fuzzy sets allow partial membership (Friday is partly a day off (shortened), the weather is rather hot);
3) the degree of membership of an object in a fuzzy set is given by the value of the membership function on the interval [0, 1] (Friday belongs to the days off with membership degree 0.3);
4) the membership function associates an object (or logical variable) with the value of its degree of membership in a fuzzy set.

Curve shapes for membership functions. There are over a dozen typical curves for specifying membership functions. The most widespread are the triangular, trapezoidal and Gaussian membership functions.

The triangular membership function is determined by a triple of numbers (a, b, c), and its value at a point x is calculated according to expression (1):

    MF(x) = 1 - (b - x)/(b - a),  if a ≤ x ≤ b;
    MF(x) = (c - x)/(c - b),      if b ≤ x ≤ c;                (1)
    MF(x) = 0                     in all other cases.

For (b - a) = (c - b) we have the case of a symmetric triangular membership function (Fig. 21), which can be uniquely specified by two of the three parameters (a, b, c).

Fig. 21. Triangular membership function

Similarly, specifying the trapezoidal membership function requires four numbers (a, b, c, d):

    MF(x) = 1 - (b - x)/(b - a),  if a ≤ x ≤ b;
    MF(x) = 1,                    if b ≤ x ≤ c;                (2)
    MF(x) = (d - x)/(d - c),      if c ≤ x ≤ d;
    MF(x) = 0                     in all other cases.

When (b - a) = (d - c), the trapezoidal membership function takes a symmetric form (Fig. 22).

Fig. 22. Trapezoidal membership function

The membership functions for all terms of the base term set T are usually depicted together on one graph. Fig. 23 presents the formalization of the imprecise concept "a person's age".
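Expressions (1) and (2) can be transcribed directly into code. A sketch (the sample parameter values in the calls are illustrative):

```python
# Triangular (a, b, c) and trapezoidal (a, b, c, d) membership functions,
# transcribed from expressions (1) and (2).

def triangular(x, a, b, c):
    if a <= x <= b:
        return 1 - (b - x) / (b - a)   # rising edge: 0 at a, 1 at b
    if b <= x <= c:
        return (c - x) / (c - b)       # falling edge: 1 at b, 0 at c
    return 0.0                         # outside the support

def trapezoidal(x, a, b, c, d):
    if a <= x <= b:
        return 1 - (b - x) / (b - a)   # rising edge
    if b <= x <= c:
        return 1.0                     # flat top of the trapezoid
    if c <= x <= d:
        return (d - x) / (d - c)       # falling edge
    return 0.0

print(triangular(2, 0, 2, 4))      # 1.0 (the peak at x = b)
print(triangular(3, 0, 2, 4))      # 0.5
print(trapezoidal(5, 0, 2, 4, 6))  # 0.5
```

Note that the rising-edge expression 1 - (b - x)/(b - a) equals (x - a)/(b - a), the more common textbook form; the two are algebraically identical.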
Thus, for a 48-year-old person, the degree of membership in the set "Young" is 0, in "Middle-aged" 0.47, and in "Above middle age" 0.20.

Fig. 23. Description of the linguistic variable "A person's age"

Basic operations on fuzzy sets. The basic operations on fuzzy sets from the class F(X) = {μ | μ: X → [0, 1]} of all fuzzy sets of the universal set X are presented below.^5

1. Complement: μ2(x) = ¬μ1(x) = 1 - μ1(x), for all x ∈ X (Fig. 24).
2. Intersection I (minimum; non-interacting variables): μ3(x) = (μ1 ∩ μ2)(x) = min(μ1(x), μ2(x)), for all x ∈ X.
3. Union I (maximum; non-interacting variables): μ3(x) = (μ1 ∪ μ2)(x) = max(μ1(x), μ2(x)), for all x ∈ X.
4. Intersection II (bounded product): μ3(x) = max(0, μ1(x) + μ2(x) - 1), for all x ∈ X.
5. Union II (bounded sum): μ3(x) = min(1, μ1(x) + μ2(x)), for all x ∈ X.
6. Intersection III (algebraic product): μ3(x) = μ1(x) · μ2(x), for all x ∈ X.
7. Union III (algebraic sum): μ3(x) = μ1(x) + μ2(x) - μ1(x) · μ2(x), for all x ∈ X.
8. Difference: μ3(x) = max(0, μ1(x) - μ2(x)), for all x ∈ X.
9. Concentration: μ3(x) = μ1(x)², for all x ∈ X.

^5 Hereinafter, operations that are the same for all three bases were highlighted in the original figures.

Fig. 24. Graph of the "complement" operation on the function M
Fig. 25. Graphs of intersection I (A) and union I (B) of the functions M and M1
Fig. 26. Graphs of intersection II (A) and union II (B) of the functions M and M1
Fig. 27. Graphs of intersection III (A) and union III (B) of the functions M and M1
Fig. 28. Graph of the difference of the functions M and M1
Fig. 29. Graph of the concentration of the function M1

In contrast to Boolean algebra, the law of the excluded middle does not hold in F(X).
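Operations 1-9 act pointwise on membership degrees, so each can be written as a one-line function of the degrees μ1(x) and μ2(x). A sketch, including a check that the law of the excluded middle indeed fails:

```python
# Pointwise fuzzy-set operations 1-9 from the text, applied to membership
# degrees m1, m2 in [0, 1].

def complement(m1):          return 1 - m1                  # 1. complement
def inter_min(m1, m2):       return min(m1, m2)             # 2. intersection I
def union_max(m1, m2):       return max(m1, m2)             # 3. union I
def inter_bounded(m1, m2):   return max(0.0, m1 + m2 - 1)   # 4. intersection II
def union_bounded(m1, m2):   return min(1.0, m1 + m2)       # 5. union II
def inter_product(m1, m2):   return m1 * m2                 # 6. intersection III
def union_algebraic(m1, m2): return m1 + m2 - m1 * m2       # 7. union III
def difference(m1, m2):      return max(0.0, m1 - m2)       # 8. difference
def concentration(m1):       return m1 ** 2                 # 9. concentration

m1, m2 = 0.7, 0.5
print(inter_min(m1, m2))               # 0.5
print(union_bounded(m1, m2))           # 1.0
print(round(concentration(m1), 2))     # 0.49

# The law of the excluded middle fails: A united with not-A is not the
# whole universe under the maximum union.
print(union_max(m1, complement(m1)))   # 0.7, not 1
```

Rows 2-3, 4-5 and 6-7 correspond to the maximin, bounded and probabilistic bases of fuzzy logic discussed below.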
When constructing union or intersection operations in F(X), one must give up either the law of the excluded middle or the properties of distributivity and idempotency.

Fuzzy objects can be classified by the type of the range of values of the membership function. The following variants are distinguished:
- lattice;
- semigroup;
- ring;
- category.

Consider the case of S-fuzzy sets, defined by a pair (X, μ), where μ: X → S is a mapping from X into a linearly ordered set S. It is natural to impose finiteness and completeness requirements on S. An example of a finite linearly ordered set is the set of linguistic values of the linguistic variable QUALITY = (bad, average, good, excellent).

Table 7. Correspondence between operations on fuzzy sets and logical functions

    N   Operation                               Modifier / connective
    1   Complement                              NOT
    2   Intersection I (minimum)                AND (AND, ..., AND)
    3   Union I (maximum)                       OR (EITHER, ..., OR)
    4   Intersection II (bounded product)       AND
    5   Union II (bounded sum)                  OR
    6   Intersection III (algebraic product)    AND
    7   Union III (algebraic sum)               OR
    8   Difference
    9   Concentration                           VERY

As shown, depending on the way the union and intersection operations are introduced, there are three main theories of fuzzy sets. According to these criteria one distinguishes:
- fuzzy logic with maximin operations (operations 1, 2, 3, 8, 9);
- fuzzy logic with bounded operations (operations 1, 4, 5, 8, 9);
- probabilistic fuzzy logic (operations 1, 6, 7, 8, 9).

The interpretation of truth as a linguistic variable leads to a fuzzy logic with the values "true", "very true", "perfectly true", "more or less true", "not very true", "false", etc., i.e. to the fuzzy logic on which the theory of approximate reasoning is based.

The theory of fuzzy sets finds application in many fields of human knowledge.

Armavir State

Pedagogical University

FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

for students studying in the specialty "Informatics"

Armavir 2004

Published by the decision of the UMS ASPU

Reviewer: Candidate of Physical and Mathematical Sciences, Associate Professor, Head of the Internet Center of the Kabardino-Balkarian State Agricultural Academy

Kozyrev. Fundamentals of artificial intelligence. A teaching aid for students studying in the specialty "Informatics". - Armavir, 2004.

The manual considers the basic concepts of artificial intelligence, the directions and prospects for the development of research in the field of artificial intelligence, and the basics of the logic programming language PROLOG.

The training manual is intended for students studying in the specialty "Informatics", and can also be used by anyone interested in artificial intelligence and logic programming.

Introduction .......................................................... 4
1. Artificial intelligence: subject, history of development, research directions .... 5
1.1. Research directions in the field of artificial intelligence .......... 5
1.2. The main tasks solved in the field of artificial intelligence ........ 6
2. Knowledge systems .................................................. 8
3. Models of knowledge representation ................................. 9
3.1. Semantic networks ................................................ 9
3.2. The frame model ................................................. 10
3.3. The production model ............................................ 11
3.4. The logical model ............................................... 12
4. Expert systems .................................................... 12
4.1. The purpose of expert systems ................................... 12
4.2. Types of tasks solved with the help of expert systems ........... 14
4.3. The structure of expert systems ................................. 15
4.4. The main stages of the development of expert systems ............ 16
4.5. Tools for the development of expert systems ..................... 18
5. PROLOG, a logic programming language .............................. 19
5.1. General information about PROLOG ................................ 19
5.2. Clauses: facts and rules ........................................ 20
5.4. Variables in PROLOG ............................................. 22
5.5. Objects and data types in PROLOG ................................ 23
5.6. The main sections of a PROLOG program ........................... 23
5.7. Backtracking .................................................... 24
5.8. Controlling backtracking: the fail and cut predicates ........... 26
5.9. Arithmetic calculations ......................................... 27
5.10. Recursion ...................................................... 28
5.11. Lists .......................................................... 30
5.12. Standard list processing tasks ................................. 31
Literature ........................................................... 35

Introduction

In recent decades, artificial intelligence has penetrated all spheres of activity, becoming a means of integrating the sciences. Software tools based on artificial intelligence technologies and methods have become widespread throughout the world. All economically developed countries have begun intensive research on the creation of a single information space that provides conditions for joint remote work based on knowledge bases. The course "Fundamentals of Artificial Intelligence" in higher education includes the study of such topics as the representation of knowledge in a formal language, the structure of expert systems and the basic principles of their development, and various goal-search strategies. One of the main threads of the course is the discussion of the implementation of artificial intelligence systems for solving specific applied problems.

As computer support for the course, the Visual Prolog development environment is used. The Prolog programming language, based on the ideas and methods of mathematical logic, was originally created for the development of artificial intelligence applications. Applications such as knowledge bases, expert systems, natural language interfaces, and intelligent information management systems are effectively programmed in the Visual Prolog environment. A high level of abstraction, the ability to represent complex data structures, and the means to model logical relationships between objects make it possible to solve problems in various subject areas.

The teaching aid "Fundamentals of Artificial Intelligence" will help broaden the future informatics teacher's understanding of the areas of application of artificial intelligence theory and of the existing and promising programming languages and hardware structures for creating artificial intelligence systems.

1. Artificial intelligence: subject, history of development, research directions.

Intellectus (Latin): mind, reason, intellect, the thinking abilities of a person. Artificial intelligence (AI) is a field of computer science whose subject is the development of hardware and software tools that allow the user to solve problems traditionally considered intellectual. The theory of artificial intelligence is the science of knowledge: how to extract it, represent it in artificial systems, process it within a system, and use it to solve practical problems. AI technologies are used today in many application areas.

The beginning of research in the field of AI (late 1950s) is associated with the work of Newell, Simon, and Shaw, who investigated the processes of solving various problems. The results of their work were programs such as the Logic Theorist, intended for proving theorems in the propositional calculus, and the General Problem Solver. These works marked the beginning of the first stage of research in the field of AI, associated with the development of programs that solve problems using various heuristic methods.

In this approach, the heuristic method of solving a problem was considered characteristic of human thinking "in general", marked by the emergence of guesses about a way to solve the problem and their subsequent verification. It was contrasted with the algorithmic method used by the computer, which was interpreted as the mechanical execution of a given sequence of steps deterministically leading to the correct answer. The interpretation of heuristic problem-solving methods as a purely human activity led to the emergence and subsequent spread of the term AI.

A. Neurocybernetics.

Neurocybernetics is focused on the hardware modeling of structures similar to the structure of the brain. Physiologists long ago established that the basis of the human brain is a large number of interconnected and interacting nerve cells: neurons. Therefore, the efforts of neurocybernetics were focused on creating elements similar to neurons and combining them into functioning systems. Such systems are usually called neural networks. Recently, neurocybernetics has begun to develop again thanks to a leap in the development of computers. Neurocomputers and transputers have appeared.
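The neuron-like elements mentioned above can be illustrated with a minimal artificial neuron: a weighted sum of inputs passed through a threshold. The weights and threshold below are arbitrary illustrative values, not taken from the text:

```python
# A minimal artificial neuron: fires (returns 1) when the weighted sum of
# its inputs reaches the threshold. Weights and threshold are illustrative.

def neuron(inputs, weights, threshold):
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# With these weights the neuron computes the logical AND of two binary inputs.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```

Neural networks combine many such elements, and learning consists in adjusting the weights; the hardware, software and hybrid approaches below differ only in where this computation is performed.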

Currently, three approaches are used to create neural networks:

hardware: the creation of special computers, expansion boards, and chipsets that implement all the necessary algorithms;

software: the creation of programs and tools designed for high-performance computers; the networks are created in the computer's memory, and all the work is done by its own processors;

hybrid: a combination of the first two, in which part of the computation is performed by special expansion boards (coprocessors) and part by software.

B. Cybernetics of the "black box".

"Black box" cybernetics is based on a principle opposite to that of neurocybernetics. It does not matter how the "thinking" device is structured internally; what matters is that it react to given input influences in the same way as the human brain.

This direction of artificial intelligence was focused on finding algorithms for solving intelligent problems on existing computer models.

Research in the field of artificial intelligence has come a long and thorny way: the first enthusiasm (1960), pseudoscience (1960-65), successes in solving puzzles and games (), disappointment in solving practical problems (), the first successes in solving a number of practical problems (), and mass commercial use in solving practical problems (). But the basis of commercial success is rightfully formed by expert systems, above all real-time expert systems. It is they that allowed artificial intelligence to move from games and puzzles to mass use in solving practically significant problems.

1.2. The main tasks solved in the field
artificial intelligence

Knowledge representation and development of knowledge-based systems

The development of knowledge representation models and the creation of knowledge bases that form the core of expert systems (ES). Recently this area has come to include models and methods for extracting and structuring knowledge, merging with knowledge engineering. Expert systems and tools for their development have achieved the greatest commercial success in the field of artificial intelligence.

Games and creativity.

Intellectual game tasks: chess, checkers, Go. This area is based on one of the early approaches, the labyrinth model plus heuristics.

Development of natural language interfaces and machine translation

Voice control, translation from language to language. The first program in this area was a translator from English into Russian. The first idea, word-by-word translation, turned out to be fruitless. Currently, a more complex model is used, including the analysis and synthesis of natural language messages, which consists of several blocks. For analysis, these are:

The language using the production model is PROLOG.

3.4. Logical model

The description of logical models is based on a formal system with four elements:

M = <T, P, A, B>, where

T is a set of basic elements of various natures, together with appropriate procedures for handling them;

P is a set of syntactic rules. With their help, syntactically correct collections are formed from the elements of T. A procedure P(P) determines whether a given collection is correct;

A is a distinguished subset of the syntactically correct collections, called axioms. A procedure P(A) answers the question of membership in the set A;

B is a set of inference rules. By applying them to the elements of A, one can obtain new syntactically correct collections, to which these rules can be applied again. A procedure P(B) determines, for each syntactically correct collection, whether it is derivable.
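The way the inference rules B generate new correct collections from the axioms A can be sketched as a minimal forward-chaining procedure. The rule format and the fact names below are illustrative assumptions, not part of the formal definition above:

```python
# Illustrative sketch: deriving new facts from axioms A using inference
# rules B. A rule is a pair (set of premises, conclusion); all facts are
# invented for the example.

axioms = {"bird(tweety)", "small(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
]

def derive(facts, rules):
    # repeatedly apply every rule whose premises all hold,
    # until no new fact appears (a fixed point is reached)
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive(axioms, rules)))
# can_fly(tweety) is derivable: it appears only after has_wings(tweety)
# has itself been derived, illustrating repeated application of the rules.
```

In the terminology of the formal system, the derivability procedure P(B) here amounts to checking whether a collection occurs in the fixed point computed by derive.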

4. Expert systems

4.1. Appointment of expert systems

Expert systems (ES) are complex software systems that accumulate the knowledge of specialists in specific subject areas and replicate this empirical experience to advise less qualified users.

The purpose of research on expert systems is to develop programs that, when solving problems from a certain subject area, obtain results not inferior in quality and efficiency to the results obtained by experts.

Expert systems are designed to solve non-formalized practical tasks. An expert system should be developed and used only when doing so is possible and expedient.

Factors indicating the need to develop and implement expert systems:

- a shortage of specialists, who spend significant time helping others;

- the need for a large team of specialists, since no single one of them has sufficient knowledge;

- low productivity, since the task requires a complete analysis of a complex set of conditions, which an ordinary specialist cannot review in the allotted time;

- the presence of competitors who have the advantage of performing the task better.

By function, expert systems can be divided into the following types:

1. Powerful expert systems designed for a narrow circle of users (control systems for complex technological equipment, air defense expert systems). Such systems usually operate in real time and are very expensive.

2. Expert systems designed for a wide range of users. These include medical diagnostic systems, complex training systems. The knowledge base of these systems is not cheap, as it contains unique knowledge obtained from experts. Knowledge gathering and the formation of a knowledge base is carried out by a knowledge gathering specialist - a cognitive engineer.

3. Expert systems with a small number of rules, which are relatively inexpensive. These systems are intended for the general public (for example, systems that simplify hardware troubleshooting). Using them makes it possible to do without highly qualified personnel and to reduce troubleshooting time. The knowledge base of such a system can be supplemented and changed without the help of the system developers. They usually use knowledge from reference manuals and technical documentation.

4. Simple expert systems for individual use. Often made independently. Used in situations to facilitate daily work. The user, having organized the rules into a certain knowledge base, creates his own expert system on its basis. Such systems are used in jurisprudence, commercial activities, and the repair of simple equipment.

The use of expert systems and neural networks brings significant economic benefits. For example: American Express reduced its losses by $27 million a year thanks to an expert system that determines whether it is advisable to grant or refuse a loan to a particular firm; DEC saves $70 million annually with XCON/XSEL, which configures VAX computing systems, and its use reduced the error rate from 30% to 1%; Sira cut pipeline construction costs in Australia by $40 million with a pipeline expert system.

4.2. Types of tasks solved with
expert systems

Data interpretation. Interpretation means determining the meaning of data; the results must be consistent and correct. Examples of ES:

Detection and identification of different types of ocean vessels: SIAP;

Determination of basic personality traits from the results of psychodiagnostic testing in the AUTANTEST and MICROLUSHER systems, etc.

Diagnostics. Diagnostics means detecting a malfunction in a certain system. Examples of ES:

Diagnostics and therapy of coronary vasoconstriction - ANGY;

Diagnostics of errors in hardware and computer software - CRIB system, etc.

Monitoring. The main task of monitoring is continuous interpretation of data in real time and signaling when certain parameters go outside permissible limits. The main problems are missing an alarm situation and the inverse problem of false alarms. Examples of ES:

Control over the operation of power plants: SPRINT; assistance to the dispatchers of a nuclear reactor: REACTOR;

Control of emergency sensors at a chemical plant - FALCON, etc.

Design. Design consists in preparing specifications for creating "objects" with predefined properties. A specification means the entire set of necessary documents: drawings, explanatory notes, etc. Examples of ES:

Design of VAX-11/780 computer configurations in the XCON (or R1) system;

LSI design - CADHELP;

Synthesis of electrical circuits - SYN, etc.

Forecasting. Predictive systems logically infer probable consequences from given situations. Examples of ES:

Weather forecasting: the WILLARD system;

Estimates of the future harvest: PLANT;

Economic forecasts: ECON, etc.

Planning. Planning means finding plans of action for objects capable of performing certain functions. Such ES use models of the behavior of real objects to infer the consequences of the planned activity. Examples of ES:

Robot Behavior Planning - STRIPS,

Industrial order planning - ISIS;

Experiment planning - MOLGEN, etc.

Education. Learning systems diagnose errors in the study of a discipline by computer and suggest correct solutions. They accumulate knowledge about a hypothetical "student" and his characteristic mistakes, and can then diagnose weaknesses in a student's knowledge and find appropriate means to eliminate them. Examples of ES:

Learning the Lisp programming language in the Lisp Teacher system;

PROUST system - teaching Pascal language, etc.

The solutions of expert systems are transparent, that is, they can be explained to the user at a qualitative level.

Expert systems are able to supplement their knowledge in the course of interaction with an expert.

4.3. The structure of expert systems

The structure of expert systems includes the following components:

Knowledge base - the core of the ES; the body of knowledge of the subject area, recorded on a machine medium in a form understandable to the expert and the user (usually in a language close to natural). In parallel with this "human" representation there exists a knowledge base in an internal "machine" representation. It consists of a set of facts and rules.

Facts describe objects and the relations between them. Rules are used in the knowledge base to describe relations between objects; inference is performed on the basis of the relations defined by the rules.

Database - intended for temporary storage of facts and hypotheses; it contains intermediate data or the results of the system's communication with the user.

Inference engine - a reasoning mechanism that operates on knowledge and data in order to obtain new data; for this, a software-implemented mechanism for finding solutions is usually used.

Communication subsystem - serves for conducting a dialogue with the user, during which the expert system asks the user for the facts needed in the reasoning process and allows the user to control the course of reasoning to some extent.

Explanation subsystem - gives the user the ability to monitor the course of reasoning.

Knowledge acquisition subsystem - a program that lets a knowledge engineer create knowledge bases interactively. It includes a system of nested menus, knowledge-representation-language templates, hints ("help" mode), and other service tools that facilitate work with the base.

The expert system operates in two modes:

Knowledge acquisition (definition, modification, addition);

Problem solving.

In the problem-solving mode, the data on the task are processed and, after appropriate encoding, passed to the blocks of the expert system. The results of processing are sent to the advice-and-explanation module and, after conversion into a language close to natural, are issued as advice, explanations, and comments. If an answer is unclear, the user can request an explanation from the expert system.
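To show how a knowledge base of facts and rules supports such question-and-answer behavior, here is a minimal diagnostic expert system sketched in Prolog; the fault names and rules are invented for the example:

```prolog
% Minimal sketch of a diagnostic expert system (rules are invented).
% Facts reported by the user play the role of the database.
symptom(no_power).
symptom(fan_silent).

% Knowledge base: a fault is inferred from observed symptoms.
fault(power_supply) :- symptom(no_power), symptom(fan_silent).
fault(cable)        :- symptom(no_power), symptom(fan_running).

% The query  ?- fault(F).  acts as the inference engine's goal.
% With the facts above the system answers:  F = power_supply
```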

4.4. The main stages of the development of expert systems

The technological process of developing an industrial expert system can be divided into six main stages:

1. Choosing the right problem

Activities leading up to the decision to start developing a specific ES include:

Definition of the problem area and task;

Finding an expert willing to cooperate in solving the problem, and assigning a development team;

Determination of a preliminary approach to solving the problem;

Analysis of costs and benefits from development;

Preparation of a detailed development plan.

2. Development of a prototype system

A prototype system is a truncated version of the expert system, designed to check the correctness of the coding of facts, relations, and expert reasoning strategies.

The prototype must meet two requirements:

The prototype should solve the most typical problems while remaining small.

The time and effort involved in prototyping should be negligible.

The work of the prototype programs is evaluated and verified in order to bring it in line with the real needs of users. The prototype is checked for:

Convenience and adequacy of the input-output interfaces (the nature of the questions in the dialogue, the consistency of the output text of the result, etc.)

The effectiveness of the control strategy (the order of enumeration, the use of fuzzy inference, etc.);

Quality of test cases;

Correctness of the knowledge base (completeness and consistency of rules).

An expert usually works with a knowledge engineer, who helps to structure the knowledge and to define and formalize the concepts and rules needed to solve the problem. If the prototype is successful, the expert, with the knowledge engineer's help, expands its knowledge base about the problem domain. If it fails, it may be concluded that other methods are needed to solve the problem, or that a new prototype should be developed.

3. Development of a prototype to an industrial expert system.

At this stage the knowledge base is significantly expanded and a large number of additional heuristics are added. These heuristics usually increase the depth of the system by providing more rules for subtle aspects of individual cases. Once the basic structure of the ES is established, the knowledge engineer proceeds to develop and adapt the interfaces through which the system will communicate with the user and the expert.

As a rule, a smooth transition from prototypes to industrial expert systems is realized. Sometimes, when developing an industrial system, additional stages are distinguished for the transition: demonstration prototype - research prototype - working prototype - industrial system.

4. Assessment of the system

Expert systems are evaluated in order to check the accuracy of the program and its usefulness. The assessment can be carried out based on various criteria, which we will group as follows:

User criteria (clarity and "transparency" of the system, user-friendliness of interfaces, etc.);

Criteria of invited experts (assessment of the advice and solutions offered by the system, comparison with their own solutions, assessment of the explanation subsystem, etc.);

Criteria of the development team (implementation efficiency, productivity, response time, design, breadth of scope, consistency of the knowledge base, the number of dead ends where the system cannot make a decision, sensitivity of the program to minor changes in the knowledge representation, the weighting factors used in inference, the data, etc.).

5. Docking the system

At this stage the expert system is connected with other software tools in the environment in which it will operate, and the people it will serve are trained. Docking also means developing the links between the expert system and the environment in which it runs.

Docking includes linking the ES with existing databases and other systems at the enterprise, improving time-dependent system characteristics so that it runs more efficiently, and upgrading its hardware if the system operates in an unusual environment (for example, communicating with measuring devices).

6. System support

Transcoding a system to a language like C improves performance and portability, but decreases flexibility. This is acceptable only if the system retains all knowledge of the problem area, and this knowledge will not change in the near future. However, if the expert system is created precisely because the problem area is changing, then it is necessary to support the system in the development environment.

Artificial Intelligence Languages

Lisp and Prolog are the most common languages for solving artificial intelligence problems. There are also less widespread AI languages, such as REFAL, developed in Russia. These languages are less versatile than traditional ones, but they compensate for this with rich facilities for working with symbolic and logical data, which is extremely important for AI tasks. On the basis of AI languages, specialized computers are built (for example, Lisp machines) for solving AI problems. A disadvantage of these languages is their unsuitability for creating hybrid expert systems.

Special software tools

Libraries and add-ons for the AI language Lisp: KEE (Knowledge Engineering Environment), FRL (Frame Representation Language), KRL (Knowledge Representation Language), ARTS, etc., which allow users to work with expert systems at a higher level than ordinary AI languages permit.

"Shells"

"Shells" are empty versions of existing expert systems, that is, ready-made expert systems without a knowledge base. An example of such a shell is EMYCIN (Empty MYCIN), an empty version of the MYCIN expert system. The advantage of shells is that they require no programming work at all to create a finished expert system: all that is needed is domain experts to fill the knowledge base. However, if a given subject area fits poorly with the model used in a particular shell, filling the knowledge base becomes very difficult.

5. PROLOG - a logic programming language

5.1. General information about PROLOG

PROLOG (PROgramming in LOGic) is a logic programming language intended for solving problems in the field of artificial intelligence (creating expert systems and translation programs, natural language processing). It has powerful tools for retrieving information from databases, and the search methods it uses differ fundamentally from traditional ones.

The basic constructions of PROLOG are borrowed from logic. PROLOG is not a procedural but a declarative programming language. It is focused not on working out a solution procedure, but on a systematic, formalized description of the problem such that the solution follows from that description.

The essence of the logical approach is that the machine is offered, as a program, not an algorithm but a formal description of the subject area and of the problem to be solved, in the form of an axiomatic system. The search for a solution by inference in this system can then be entrusted to the computer itself. The programmer's main task is to represent the subject area successfully by a system of logical formulas and by a set of relations over it that describe the task as fully as possible.

Fundamental properties of PROLOG:

1) a built-in inference engine with search and backtracking;

2) a built-in pattern-matching mechanism;

3) simple and easily modifiable data structures;

4) the absence of pointers, assignment, and goto operators;

5) natural recursion.

Stages of PROLOG programming:

1) declaring facts about objects and the relations between them;

2) defining rules for the relationships of objects and the relations between them;

3) formulating a question about objects and the relations between them.

The theoretical basis of PROLOG is a branch of symbolic logic called the predicate calculus.

A predicate is the name of a property or of a relation between objects, followed by a sequence of arguments.

<predicate_name>(t1, t2, ..., tn), where t1, t2, ..., tn are arguments

For example, the fact black (cat) is written using the predicate black, which has one argument. The fact wrote (sholokhov, "QUIET DON") is written using the predicate wrote, which has two arguments.

The number of arguments of a predicate is called its arity and is denoted, for example, black/1 (the predicate black has one argument; its arity is one). Predicates may have no arguments; the arity of such predicates is 0.

The Prolog language grew out of the work of A. Colmerauer on natural language processing and the independent work of R. Kowalski on the applications of logic to programming (1973).

The most famous in Russia is the Turbo Prolog programming system - a commercial implementation of the language for IBM-compatible PCs. In 1988, a much more powerful version of Turbo Prolog 2.0 was released, including an improved integrated programming environment, a fast compiler, and low-level programming tools. Borland distributed this version until 1990, when PDC acquired a monopoly on the use of the compiler source code and further marketing the programming system under the name PDC Prolog.

In 1996 the Prolog Development Center brought the Visual Prolog 4.0 system to market. The Visual Prolog environment uses an approach called "visual programming", in which the appearance and behavior of programs are defined with special graphical design tools, without traditional programming in an algorithmic language.

Visual Prolog includes an interactive visual development environment (VDE - Visual Development Environment) with text and graphic editors, code-generation tools that construct the control logic (Experts), a visual programming interface for language extension (VPI - Visual Programming Interface), a Prolog compiler, a set of include files and libraries, a link editor, and files containing examples and help.

5.2. Clauses: facts and rules

A PROLOG program consists of clauses, which can be facts, rules, or queries.

A fact is a statement that a certain relation holds between specific objects. A fact records a simple relationship between data.

Fact structure:

<relation_name>(t1, t2, ..., tn), where t1, t2, ..., tn are objects

Examples of facts:

studying (ira, university). % Ira is studying at the university

parent (ivan, alexei). % Ivan is the parent of Alexei

programming_language (prolog). % Prolog is a programming language

A set of facts forms a database. Facts record data that the program accepts as true and that require no proof.

Rules are used to establish relationships between objects based on available facts.

Rule structure:

<rule_head> :- <rule_body>   or

<rule_head> if <rule_body>

The left side of an inference rule is called the head of the rule, and the right side its body. The body may consist of several conditions separated by commas or semicolons: a comma denotes the logical AND operation, a semicolon the logical OR. Variables are used in clauses to generalize the inference rules. A variable is valid only within one clause; the same name in different clauses denotes different objects. Every clause must end with a period.

Examples of rules:

mother (X, Y) :- parent (X, Y), woman (X).

student (X) :- studies (X, institute); studies (X, university).

A rule differs from a fact in that a fact is always true, whereas a rule is true only if all the statements making up its body are true. Facts and rules form a knowledge base.
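Putting facts and the mother rule above together, a small knowledge base might look like this (the family facts are illustrative):

```prolog
% Facts: parent(X, Y) means X is a parent of Y.
parent(ivan, alexei).
parent(maria, alexei).
woman(maria).

% Rule from the text: X is the mother of Y if X is a parent of Y
% and X is a woman.
mother(X, Y) :- parent(X, Y), woman(X).

% Query:  ?- mother(X, alexei).
% Result: X = maria
```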

Given a database, one can pose a query (goal) to it. A query is a formulation of the problem the program must solve. Its structure is the same as that of a rule or a fact. Queries are either constant or contain variables.

Constant queries yield one of two answers: "yes" or "no".

For example, there are facts:

knows (lena, tanya).

knows (lena, sasha).

knows (sasha, tanya).

a) Does Lena know Sasha?

query: knows (lena, sasha).

Result: yes

b) Does Tanya know Lena?

query: knows (tanya, lena).

Result: no

If a variable is included in the request, the interpreter tries to find such values ​​for which the request will be true.

a) Whom does Lena know?

query: knows (lena, X).

Result:

X = tanya

X = sasha

b) Who knows Sasha?

query: knows (X, sasha).

Result: X = lena

Queries can be compound, i.e. consist of several simple queries. They are joined by the sign ",", which is read as the logical connective "and".

The simple queries are called subgoals; a compound query is true when every subgoal is true.

To answer whether Lena and Sasha have common acquaintances, you should make a request:

knows (lena, X), knows (sasha, X).

Result:

X = tanya

5.4. Variables in PROLOG

A variable in PROLOG is not regarded as an allocated memory location. It is used to denote an object that cannot be referred to by name; a variable can be considered a local name for some object.

A variable name must begin with a capital letter or an underscore and may contain only letters, digits, and underscores: X, _y, AB, X1. A variable that has no value is called free; a variable that has received a value is called bound (instantiated).

A variable consisting of a single underscore character is called anonymous and is used when its value is irrelevant. For example, given the facts:

parent (ira, tanya).

parent (misha, tanya).

parent (olya, ira).

It is required to find all parents.

Query: parent (X, _).

Result:

X = ira

X = misha

X = olya

The scope of a variable is the clause. Within a clause, the same name refers to the same variable; two different clauses may use the same variable name in entirely different ways.

There is no assignment operator in PROLOG; its role is played by the equality operator =. The goal X = 5 can be interpreted as a comparison (if X has a value) or as an assignment (if X is free).

In PROLOG one cannot write X = X + 5 to increase the value of a variable; a new variable must be used: Y = X + 5.
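Note that Turbo/Visual Prolog evaluates the arithmetic expression on the right of = directly, whereas in standard (ISO) Prolog = performs only unification and arithmetic evaluation requires the built-in operator is; the following sketch shows the standard form:

```prolog
% In standard Prolog, = performs unification, not arithmetic.
% A new variable receives the computed value via 'is'.
increment(X, Y) :-
    Y is X + 5.      % evaluates X + 5 and binds Y to the result

% Query:  ?- increment(10, Y).
% Result: Y = 15
```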

5.5. Objects and data types in PROLOG

The data objects in PROLOG are called terms. A term can be a constant, a variable, or a compound term (structure). The constants are integers and real numbers (0, -1, 123.4, 0.23E-5), as well as atoms.

An atom is any sequence of characters enclosed in quotation marks. The quotes may be omitted if the string begins with a lowercase letter and contains only letters, digits, and underscores (that is, if it can be distinguished from variable notation). Examples of atoms:

abcd, "a + b", "student Ivanov", prolog, "Prolog".

A structure combines several objects into a single whole. It consists of a functor (a name) and a sequence of terms.

The number of components in a structure is called the arity of the structure: data / 3.

A structure can contain another structure as one of its objects.

birthday_date (person ("Masha", "Ivanova"), date (15, april, 1983))

In PROLOG a data type is called a domain. The standard domains are:

integer - whole numbers.

real - real numbers.

string - strings (any sequence of characters enclosed in quotes).

char - a single character enclosed in apostrophes.

symbol - a sequence of Latin letters, digits, and underscores beginning with a lowercase letter, or any sequence of characters enclosed in quotes.

5.6. The main sections of a PROLOG program

As a rule, a PROLOG program consists of four sections.

DOMAINS - the section describing domains (types). It is used if the program employs non-standard domains.

For instance:

name = symbol

PREDICATES - section for describing predicates. The section is used if the program uses non-standard predicates.

For instance:

knows (name, name)

student (name)

CLAUSES - the clauses section. It is here that the clauses are written: facts and inference rules.

For instance:

knows (lena, ivan).

student (ivan).

familiar_student (X, Y) :- knows (X, Y), student (Y).

GOAL - the goal section. The query is written in this section.

For instance:

familiar_student (lena, X).

The simplest program may contain only a GOAL section, for example:

write ("Enter your name: "), readln (Name),

write ("Hello, ", Name, "!").
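Putting the four sections together, a complete program in the style of the examples above might look as follows (a sketch in Turbo/Visual Prolog syntax; the domain name and the facts are illustrative):

```prolog
DOMAINS
    name = symbol

PREDICATES
    knows(name, name)
    student(name)
    familiar_student(name, name)

CLAUSES
    knows(lena, ivan).
    knows(lena, tanya).
    student(ivan).

    % X has a student acquaintance Y if X knows Y and Y is a student.
    familiar_student(X, Y) :- knows(X, Y), student(Y).

GOAL
    familiar_student(lena, X), write("X = ", X), nl.

% Result: X = ivan
```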
