
Fundamentals of Operating Systems: A Lecture Course

An operating system (OS) is a program that allows computer hardware to be used rationally and conveniently. This introductory lecture describes the subject of the course. First we try to answer the question of what an OS is. Then we analyze the evolution of operating systems and describe how the main concepts and components of modern OSes emerged. Finally, we present a classification of OSes in terms of their architectural features and their use of computer resources.

What is an operating system

The structure of a computing system

What does any computing system consist of? First, of what English-speaking countries customarily call hardware: the CPU, memory, monitor, disk devices, etc., united by a trunk connection called a bus. Some information about computer architecture is given in Appendix 1 to this lecture.

Secondly, a computing system consists of software. All software is usually divided into two parts: application and system software. Application software typically includes banking and other business programs, games, word processors, and so on. System software is usually understood as programs that facilitate the operation and development of application programs. It must be said that the division into application and system software is partly conventional and depends on who draws the line. Thus, an ordinary user with no programming experience may consider Microsoft Word a system program, while from a programmer's point of view it is an application. A C compiler is a system program for an ordinary programmer but an application program for a systems programmer. Despite this fuzzy boundary, the situation can be depicted as a sequence of layers (see Fig. 1.1), with the most general part of the system software, the operating system, set apart:

Fig. 1.1. Layers of computer system software

What is an OS?

Most users have experience with operating systems, yet they find it difficult to give the concept a precise definition. Let us briefly review the main points of view.

The operating system as a virtual machine

OS development makes extensive use of abstraction, an important simplification method that lets one concentrate on the interaction of high-level system components while ignoring the details of their implementation. In this sense, an OS is an interface between the user and the computer.

The architecture of most computers at the machine-instruction level is very inconvenient for application programs. For example, working with a disk requires knowledge of the internal structure of its electronic component, the controller: commands for spinning the disk, seeking and formatting tracks, reading and writing sectors, and so on. Clearly, the average programmer cannot take all the hardware details into account (in modern terminology, cannot develop device drivers) and needs a simple high-level abstraction, for example, one that represents the disk's information space as a set of files. A file can be opened for reading or writing, used to retrieve or store information, and then closed. This is conceptually simpler than worrying about the details of moving disk heads or controlling the motor. Similarly, simple and clear abstractions hide from the programmer all the inconvenient details of interrupts, timer operation, memory management, and so on. Moreover, on modern computing systems one can create the illusion of unlimited RAM and an unlimited number of processors. All of this is done by the operating system. Thus, the operating system presents itself to the user as a virtual machine that is easier to deal with than the bare computer hardware.

The operating system as a resource manager

The operating system is designed to manage all parts of a highly complex computer architecture. Imagine, for example, what would happen if several programs running on the same computer tried to print at the same time. We would get a hodgepodge of lines and pages from different programs. The operating system prevents this kind of chaos by buffering print output on disk and organizing a print queue. For multi-user computers, the need for managing and protecting resources is even more obvious. Hence, the operating system, as a resource manager, carries out an orderly and controlled allocation of processors, memory, and other resources among the various programs.

The operating system as a protector of users and programs

If a computer system allows several users to work jointly, the problem of organizing their safe operation arises. The information on disk must be protected so that no one can delete or damage other users' files. One user's programs must not be allowed to interfere arbitrarily with another user's programs. Attempts at unauthorized use of the computing system must be blocked. All of this is done by the operating system as the organizer of safe operation of users and their programs. From this point of view, the operating system resembles a state's security service, entrusted with police and counterintelligence functions.

The operating system as a constantly functioning kernel

Finally, the following definition can be given: an operating system is a program that runs on the computer at all times and interacts with all application programs. This would seem to be a perfectly correct definition, but, as we shall see later, in many modern operating systems only a part of the OS runs at all times; that part is called its kernel.

As we can see, there are many points of view on what an operating system is. It is impossible to give it an adequate rigorous definition. It is easier to say not what an operating system is, but what it is for and what it does. To clarify this, let us consider the history of the development of computing systems.

A Brief History of the Evolution of Computing Systems

We will consider the history of computing rather than of operating systems, because hardware and software evolved together, influencing each other. New technical capabilities led to breakthroughs in the creation of convenient, efficient, and safe programs, while fresh ideas in software stimulated the search for new technical solutions. It is precisely these criteria (convenience, efficiency, and safety) that played the role of natural-selection factors in the evolution of computing systems.

First period (1945–1955). Vacuum-tube machines. No operating systems

We begin our study of the development of computing systems with the advent of electronic computers, omitting the history of mechanical and electromechanical devices.

The first steps in the development of electronic computers were taken at the end of World War II. In the mid-1940s, the first vacuum-tube computing devices were created and the principle of the program stored in the machine's memory was formulated (John von Neumann, June 1945). At that time, the same group of people participated in the design, operation, and programming of a computer. This was research work in the field of computing rather than the regular use of computers as tools for solving practical problems in other applied areas. Programming was done exclusively in machine language. There was no question of operating systems; every task of organizing the computing process was solved manually by each programmer at the control panel. Only one user could be at the console. A program was loaded into the machine's memory at best from a deck of punched cards, and usually by means of a switch panel.

The computing system performed only one operation at a time (either input-output or the actual computation). Programs were debugged from the control panel by examining the state of the machine's memory and registers. At the end of this period the first system software appeared: in 1951–1952 prototypes of the first compilers from symbolic languages appeared (Fortran and others), and in 1954 Nat Rochester developed an assembler for the IBM-701.

A significant part of the time was spent preparing a program for launch, and the programs themselves were executed strictly sequentially. This mode of operation is called sequential data processing. In general, the first period is characterized by the extremely high cost of computing systems, their small number, and their low utilization.

Second period (1955–early 1960s). Transistor-based computers. Batch operating systems

From the mid-1950s the next period in the evolution of computer technology began, associated with a new technical base: semiconductor elements. The use of transistors instead of frequently burned-out vacuum tubes increased the reliability of computers. Machines could now run continuously long enough to be entrusted with practical tasks. Power consumption dropped, cooling systems improved, and computers became smaller. The cost of operating and maintaining computer equipment decreased, and commercial firms began to use computers. At the same time, algorithmic languages developed rapidly (LISP, COBOL, ALGOL-60, PL/1, etc.). The first real compilers, link editors, and libraries of mathematical and utility subroutines appeared, simplifying the programming process. It was no longer necessary to charge the same people with the entire process of developing and using computers; it was during this period that personnel divided into programmers and operators, maintenance specialists and computer designers.

The process of running programs changed. Now the user brought a program with input data as a deck of punched cards and specified the resources it required. Such a deck was called a job. The operator loaded the job into the machine's memory and started it. The resulting output was printed on a printer, and the user received it back after some (rather long) time.

A change in the requested resources suspends program execution, so the CPU was often idle. To increase computer utilization, jobs with similar resource requirements began to be collected together into a batch of jobs.

The first batch processing systems appeared, which simply automated the launch of one program from a batch after another and thereby increased processor utilization. In implementing batch processing systems, a formalized job control language was developed, with which the programmer told the system and the operator what work he wanted to do on the computer. Batch processing systems became the prototype of modern operating systems; they were the first system programs designed to control the computing process.

Third period (early 1960s–1980). Computers based on integrated circuits. The first multitasking OSes

The next important period in the development of computers dates from the early 1960s to 1980. The technical base underwent a transition from individual semiconductor elements such as transistors to integrated circuits. Computer technology became more reliable and cheaper. The complexity and number of tasks solved by computers grew, and processor performance increased.

Increasing the efficiency of processor use was hampered by the low speed of mechanical input-output devices (a fast punched-card reader could process 1200 cards per minute; printers printed up to 600 lines per minute). Instead of reading a batch of jobs from punched cards directly into memory, jobs began to be recorded in advance, first on magnetic tape and later on disk. When input is needed during a job, it is read from disk. Similarly, output is first copied to a system buffer and written to tape or disk, and printed only when the job completes. At first, the actual I/O operations were carried out off-line, that is, on other, simpler, separate computers; later they began to be executed on the same computer that performed the computations, that is, on-line. This technique is called spooling (short for Simultaneous Peripheral Operation On-Line). The introduction of spooling into batch systems made it possible to overlap the real I/O operations of one job with the execution of another job, but it required the development of an interrupt mechanism to notify the processor of the completion of these operations.

Magnetic tapes were sequential-access devices: information was read from them in the order in which it was recorded. The appearance of the magnetic disk, for which the order of reading is unimportant, that is, a direct-access device, led to the further development of computing systems. When processing a batch of jobs on magnetic tape, the order in which jobs started was determined by the order in which they were entered. When processing a batch of jobs on a magnetic disk, it became possible to choose the next job to run. Batch systems began to schedule jobs: depending on the availability of the requested resources, the urgency of the computation, and so on, one job or another was selected to run.

A further increase in processor utilization was achieved through multiprogramming. The idea of multiprogramming is this: while one program performs an I/O operation, the CPU does not sit idle, as in single-program mode, but executes another program. When the I/O operation ends, the CPU returns to the execution of the first program. This idea is reminiscent of the behavior of a teacher and students at an exam. While one student (a program) thinks over the answer to a question (an I/O operation), the teacher (the CPU) listens to another student's answer (a computation). Naturally, this requires several students in the room; similarly, multiprogramming requires keeping several programs in memory at once. Each program is loaded into its own section of RAM, called a partition, and must not affect the execution of the others. (The students sit at separate tables and do not prompt each other.)

The advent of multiprogramming required a real revolution in the structure of the computing system. Hardware support plays a special role here (many hardware innovations appeared at the previous stage of evolution); its most significant features are listed below.

    Implementation of protection mechanisms. Programs must not have independent access to resource allocation, which leads to privileged and unprivileged instructions. Privileged instructions, such as I/O instructions, can be executed only by the operating system, which is then said to run in privileged mode. Transfer of control from an application program to the OS is accompanied by a controlled mode change. In addition, memory protection isolates competing user programs from one another and the OS from user programs.

    The presence of interrupts. External interrupts notify the OS that an asynchronous event has occurred, such as the completion of an I/O operation. Internal interrupts (now usually called exceptions) occur when program execution leads to a situation requiring OS intervention, such as division by zero or an attempted security violation.

    Development of parallelism in the architecture. Direct memory access and the organization of I/O channels made it possible to free the central processor from routine operations.

The operating system's role in organizing multiprogramming is equally important. It is responsible for the following operations.

    Organizing the interface between the application program and the OS by means of system calls.

    Queuing jobs in memory and allocating the processor to one of them, which required processor scheduling.

    Switching from one job to another requires saving the contents of the registers and the data structures needed to continue the job, in other words, its context, so that the computation resumes correctly.

    Since memory is a limited resource, memory-management strategies are needed; that is, the processes of placing, replacing, and fetching information in memory must be organized.

    Organizing the storage of information on external media in the form of files and granting access to a specific file only to certain categories of users.

    Since programs may need to exchange data in authorized ways, they must be provided with means of communication.

    For correct data exchange it is necessary to resolve the conflicts that arise when working with shared resources and to let programs coordinate their actions, i.e., to provide the system with synchronization tools.

Multiprogramming systems made it possible to use system resources (for example, processor, memory, peripherals) more efficiently, but they remained batch systems. The user could not interact directly with a job and had to foresee all possible situations using control cards. Debugging was still time-consuming and required examining multi-page printouts of memory and register contents or using debug printing.

The advent of cathode-ray displays and a rethinking of the possibilities of the keyboard put the solution of this problem on the agenda. Time-sharing systems became a logical extension of multiprogramming systems. In them the CPU switches between tasks not only during I/O operations but simply after a certain time has elapsed. These switches happen so frequently that users can interact with their programs while they run, i.e., interactively. As a result, several users can work simultaneously on one computer system. For this, each user must have at least one program in memory. To relax the limits on the number of active users, the idea of keeping only part of an executable program in working memory was introduced. The main part of the program resides on disk; the fragment that must be executed at the moment is loaded into RAM, and unneeded fragments are written back to disk. This is implemented with the virtual-memory mechanism, whose main advantage is the creation of the illusion of unlimited computer RAM.

In time-sharing systems the user could debug a program effectively in interactive mode and write information to disk directly from the keyboard rather than on punched cards. The advent of on-line files led to the need to develop advanced file systems.

In parallel with the internal evolution of computing systems, their external evolution also took place. Before this period, computing systems were, as a rule, incompatible: each had its own operating system, its own instruction set, and so on. As a result, a program that ran successfully on one type of machine had to be completely rewritten and re-debugged to run on another. At the beginning of the third period the idea arose of creating families of software-compatible machines running under the control of the same operating system. The first family of software-compatible computers built on integrated circuits was the IBM/360 series. Developed in the early 1960s, this family was significantly superior to second-generation machines in price/performance. It was followed by the line of PDP computers, incompatible with the IBM line, of which the PDP-11 was the best model.

The strength of a "single family" was at the same time its weakness. The breadth of the concept (the full range of models, from minicomputers to giant machines; an abundance of peripherals; different environments; different users) gave rise to a complex and cumbersome operating system. Millions of lines of assembly code, written by thousands of programmers, contained many errors, which caused a continuous stream of publications about them and attempts to fix them. OS/360 alone contained more than 1000 known bugs. Nevertheless, the idea of standardizing operating systems took firm hold in users' minds and subsequently received active development.

Fourth period (1980 to the present). Personal computers. Classical, networked, and distributed systems

The next period in the evolution of computing systems is associated with the advent of large-scale integrated circuits (LSI). In these years the degree of integration rose sharply and the cost of chips fell. A computer that did not differ in architecture from the PDP-11 became affordable, in price and ease of operation, to an individual rather than to a department of an enterprise or university. The era of personal computers had arrived. Initially, personal computers were intended for use by a single user in single-program mode, which led to a degradation of the architecture of these computers and of their operating systems (in particular, the need to protect files and memory, to schedule jobs, and so on disappeared).

Computers began to be used not only by specialists, which required the development of "user-friendly" software.

However, the growing complexity and variety of tasks solved on personal computers, and the need to improve their reliability, led to the revival of almost all the features characteristic of the architecture of large computing systems.

In the mid-1980s, networks of computers, including personal computers, running network or distributed operating systems, began to develop rapidly.

In network operating systems, users can access the resources of another networked computer, but they must know of its presence and explicitly request access. Each machine on the network runs its own local operating system, which differs from the operating system of a stand-alone computer by the presence of additional tools (software support for network interface devices and for access to remote resources), but these additions do not change the structure of the operating system.

A distributed system, on the contrary, outwardly looks like an ordinary autonomous system. The user does not and need not know where his files are stored, on the local or a remote machine, or where his programs are executed. He may not even know whether his computer is connected to a network. The internal structure of a distributed operating system differs significantly from that of autonomous systems.

In what follows we will call autonomous OSes classical operating systems.

Having reviewed the stages of development of computing systems, we can identify six main functions that classical OSes performed in the course of their evolution:

    Job scheduling and processor allocation.

    Providing programs with means of communication and synchronization.

    Memory management.

    File system management.

    I/O management.

    Security.

Each of these functions is usually implemented as a subsystem that is a structural component of the OS. In each operating system these functions were, of course, implemented in its own way and to a different extent. They were not originally conceived as components of operating systems but emerged in the course of development, as computing systems became more convenient, efficient, and secure. The evolution of human-made computing systems has followed this path, but no one has yet proven that it is the only possible one. Operating systems exist because, at the moment, their existence is a reasonable way of using computing systems. Consideration of the general principles and algorithms for implementing their functions constitutes the content of most of this course, which describes the listed subsystems in turn.

Basic OS concepts

In the course of evolution, several important concepts emerged that became an integral part of OS theory and practice. The concepts discussed in this section will be encountered and explained throughout the course. Here is a brief description of them.

System calls

Every operating system supports a mechanism that allows user programs to access kernel services. In the operating systems of the most famous Soviet computer, the BESM-6, the corresponding means of "communicating" with the kernel were called extracodes; in IBM operating systems they were called system macros; and so on. In Unix these tools are called system calls.

System calls are the interface between the operating system and a user program. They create, delete, and use various objects, chief among them processes and files. A user program requests a service from the operating system by making a system call. There are libraries of procedures that load machine registers with certain parameters and perform a processor interrupt, after which control is transferred to the handler of this call inside the kernel of the operating system. The purpose of such libraries is to make a system call look like an ordinary subroutine call.

The main difference is that on a system call the task passes into privileged, or kernel, mode. That is why system calls are sometimes also called software interrupts, in contrast to hardware interrupts, which are more often called simply interrupts.

In this mode the kernel code of the operating system runs, and it executes in the address space and in the context of the task that invoked it. Thus the kernel has full access to the memory of the user program, and on a system call it suffices to pass the addresses of one or several memory areas holding the call's parameters and the addresses of one or several memory areas for its results.

In most operating systems a system call is performed by a software interrupt instruction (INT). A software interrupt is a synchronous event that recurs whenever the same program code is executed.

Interrupts

An interrupt (hardware interrupt) is an event generated by a device external to the processor. Through hardware interrupts the hardware either informs the CPU that an event requiring an immediate response has occurred (for example, the user pressed a key) or reports the completion of an asynchronous I/O operation (for example, reading data from disk into main memory has finished). An important type of hardware interrupt is the timer interrupt, generated periodically after a fixed interval of time. Timer interrupts are used by the operating system in process scheduling. Each type of hardware interrupt has its own number, which uniquely identifies its source. A hardware interrupt is an asynchronous event: it occurs regardless of what code the processor is executing at the moment. Hardware interrupt handling must not depend on which process is current.

Exceptions

An exception is an event that occurs as a result of an attempt to execute a program instruction that for some reason cannot be completed. Examples of such instructions are accessing a resource without sufficient privileges or accessing a missing memory page. Exceptions, like system calls, are synchronous events that occur in the context of the current task. Exceptions can be divided into correctable and uncorrectable. Correctable ones include such exceptions as the absence of needed information in RAM. After the cause of a correctable exception is eliminated, the program can continue to run; the occurrence of correctable exceptions during the operation of the operating system is considered normal. Uncorrectable exceptions most often result from errors in programs (for example, division by zero). Usually in such cases the operating system responds by terminating the program that caused the exception.

Files and file systems

Files are intended for storing information on external media; that is, information recorded, for example, on a disk must reside inside a file. A file is usually understood as a named portion of space on a storage medium.

The main purpose of the file system is to hide the peculiarities of I/O and give the programmer a simple abstract model of device-independent files. An extensive category of system calls exists for reading, creating, deleting, writing, opening, and closing files (create, delete, open, close, read, etc.). Users are familiar with such file-system concepts as directory, current directory, root directory, and path. To manipulate these objects the operating system provides system calls. The OS file system is described in lectures 11–12.

Processes, threads

The concept of a process is one of the most fundamental in operating systems. Processes are discussed in detail in lectures 2–7. Threads, or lightweight processes, are also described there.

Architectural features of the OS

So far we have talked about the view of an OS from the outside, about what OSes do. The rest of the course is devoted to how they do it. But we have not yet said anything about what they look like from the inside, about the approaches that exist to their construction.

Monolithic kernel

In essence, an operating system is an ordinary program, so it would be logical to organize it the way most programs are organized, that is, to compose it of procedures and functions. In this case the components of the operating system are not independent modules but constituent parts of one big program. Such a structure is called a monolithic kernel. A monolithic kernel is a set of procedures, any of which can call any other. All procedures run in privileged mode. Thus, a monolithic kernel is an operating-system scheme in which all components are parts of one program, use common data structures, and interact by directly calling one another's procedures. For a monolithic operating system, the kernel coincides with the whole system.

In many operating systems with a monolithic kernel, the kernel is assembled, that is, compiled, separately for each computer on which the operating system is installed. One can then select the list of hardware and software protocols whose support will be included in the kernel. Since the kernel is a single program, recompilation is the only way to add new components or exclude unused ones. The presence of unnecessary components in the kernel is highly undesirable, since the kernel always resides entirely in RAM; in addition, eliminating unnecessary components improves the reliability of the operating system as a whole.

The monolithic kernel is the oldest way of organizing operating systems. An example of systems with a monolithic kernel is most Unix systems.

Even in monolithic systems some structure can be discerned. Just as inclusions of crushed stone can be distinguished in a concrete block, inclusions of service procedures corresponding to system calls can be distinguished in a monolithic kernel. Service procedures run in privileged mode, while user programs run in non-privileged mode. To move from one privilege level to the other, a main service program may be used, which determines which system call was made, checks the correctness of its parameters, and transfers control to the corresponding service procedure with a switch to privileged mode. Sometimes there is also a set of software utilities that help perform the service procedures.

Layered systems

Continuing the structuring, the entire computing system can be divided into a number of smaller levels with well-defined links between them, so that objects at level N may call only objects at level N-1. The lowest level in such systems is usually the hardware; the highest is the user interface. The lower the level, the more privileged the instructions and actions a module at that level can execute. This approach was first used in the creation of the THE (Technische Hogeschool Eindhoven) system by Dijkstra and his students in 1968. That system had the following levels:

Fig. 1.2. The layered structure of the THE system

Layered systems are easy to implement: when using the operations of a lower layer, you do not need to know how they are implemented, only what they do. Layered systems are easy to test: debugging starts from the bottom layer and proceeds layer by layer, so when an error occurs, we can be sure that it is in the layer under test. Layered systems are easy to modify: if necessary, a single layer can be replaced without touching the rest. But layered systems are difficult to design: it is hard to determine the correct order of the layers and what belongs to which layer. Layered systems are also less efficient than monolithic ones. For example, to perform an I/O operation, a user program must pass sequentially through all the layers from the top to the bottom.

Virtual machines

At the beginning of the lecture we spoke of viewing the operating system as a virtual machine, where the user does not need to know the details of the internal structure of the computer: the user works with files rather than with magnetic heads and drive motors, with huge virtual memory rather than limited physical RAM, and cares little whether he is the only user of the machine. Now let us take a slightly different approach. Let the operating system implement a virtual machine for each user, not simplifying life but, on the contrary, complicating it. Each such virtual machine appears to the user as bare metal: a copy of all the hardware of the computing system, including the CPU, privileged and unprivileged instructions, I/O devices, interrupts, and so on. The user is left one on one with this "hardware". When he attempts to access the virtual hardware with a privileged instruction, what actually occurs is a system call to the real operating system, which performs all the necessary actions. This approach allows each user to load his own operating system onto the virtual machine and do with it whatever he pleases.

Fig. 1.3. A virtual machine variant

The first real system of this kind was CP/CMS, now known as VM/370, for the IBM/370 family of machines.

The disadvantage of such operating systems is the reduced efficiency of virtual machines compared with the real computer; they also tend to be very bulky. The advantage is the ability to run programs written for different operating systems on one computing system.

Microkernel architecture

The current trend in operating system development is to move a significant part of the system code to the user level while minimizing the kernel. This approach to kernel construction is called the microkernel architecture of the operating system: most of its components are independent programs, and the interaction between them is provided by a special kernel module called the microkernel. The microkernel runs in privileged mode and provides interaction between programs, processor scheduling, first-level interrupt handling, I/O operations, and basic memory management.

Fig. 1.4. Microkernel operating system architecture

The remaining components of the system communicate with each other by passing messages through the microkernel.

The main advantage of the microkernel architecture is the high degree of modularity of the operating system kernel. This greatly simplifies adding new components. In a microkernel operating system you can load and unload new drivers, file systems, and so on without interrupting its work. Debugging kernel components is also greatly simplified, since a new version of a driver can be loaded without restarting the whole operating system. Since the kernel components do not fundamentally differ from user programs, conventional tools can be used to debug them. The microkernel architecture also improves system reliability, because an error in a non-privileged program is less dangerous than a failure in kernel mode.

At the same time, the microkernel architecture introduces additional overhead associated with message passing, which noticeably affects performance. For a microkernel operating system not to lose in speed to operating systems based on a monolithic kernel, the partitioning of the system into components must be designed carefully, minimizing the interaction between them. Thus, the main difficulty in creating microkernel operating systems is the need for very careful design.

Mixed systems

All of the approaches to operating system construction considered above have their merits and drawbacks. In most cases modern operating systems use various combinations of these approaches. For example, the Linux kernel is a monolithic system with elements of microkernel architecture. When compiling the kernel you can enable dynamic loading and unloading of a great many kernel components, so-called modules. When a module is loaded, its code is loaded at the system level and linked with the rest of the kernel; any functions exported by the kernel can be used inside the module.

Another example of a mixed approach is running an operating system with a monolithic kernel under the control of a microkernel. This is how 4.4BSD and MkLinux are built on the Mach microkernel: the microkernel provides virtual memory management and low-level drivers, while all other functions, including interaction with application programs, are carried out by the monolithic kernel. This approach arose from attempts to exploit the advantages of the microkernel architecture while keeping the well-debugged code of the monolithic kernel.

Elements of microkernel architecture and of a monolithic kernel are most closely intertwined in the Windows NT kernel. Although Windows NT is often called a microkernel operating system, this is not entirely true. The NT microkernel is too large (more than 1 MB) to deserve the prefix "micro". Components of the Windows NT kernel reside in pageable memory and communicate with each other by passing messages, as expected in microkernel operating systems. At the same time, all kernel components work in a single address space and actively use common data structures, which is typical of operating systems with a monolithic kernel. According to Microsoft, the reason is simple: a purely microkernel design is not commercially viable because it is inefficient.

Thus, Windows NT can rightly be called a hybrid operating system.

OS classification

There are several schemes for classifying operating systems. Below is a classification by certain features from the user's point of view.

Multitasking Implementation

By the number of concurrently performed tasks, operating systems can be divided into two classes:

    multitasking (Unix, OS/2, Windows);

    single-tasking (for example, MS-DOS).

A multitasking OS, solving the problems of resource allocation and competition, fully implements the multiprogramming mode in accordance with the requirements of the section "Basic concepts of the OS".

Multitasking that embodies the idea of time sharing is called preemptive: each program is allocated a quantum of processor time, after which control is transferred to another program. The first program is then said to be preempted. The user programs of most commercial operating systems work in preemptive mode.

In some operating systems (Windows 3.11, for example) a user program can monopolize the CPU, that is, work in non-preemptive mode. As a rule, in most systems the code of the OS itself is not subject to preemption. Critical programs, in particular real-time tasks, are not preempted either. This is discussed in more detail in the lecture on processor scheduling.

The examples given show how approximate this classification is. Thus, in MS-DOS it is possible to launch a child task and have two or more tasks reside in memory at the same time. Nevertheless this OS is traditionally considered single-tasking, mainly because of the lack of protection mechanisms and communication capabilities.

Multiuser support

By the number of concurrent users, operating systems can be divided into:

    single user (MS-DOS, Windows 3.x);

    multiuser (Windows NT, Unix).

The most significant difference between these OSes is the presence in multiuser systems of mechanisms protecting each user's personal data.

Multiprocessing

Until recently, computing systems had a single CPU. As a result of demands for increased performance, multiprocessor systems appeared, consisting of two or more general-purpose processors executing instructions in parallel. Multiprocessing support is an important feature of an OS and complicates all resource management algorithms. Multiprocessing is implemented in such OSes as Linux, Solaris, Windows NT, and several others.

Multiprocessor OSes are divided into symmetric and asymmetric. In a symmetric OS the same kernel runs on each processor, and a task can be performed on any processor, that is, processing is completely decentralized. At the same time, all memory is available to each of the processors.

In an asymmetric OS the processors are not equal. There is usually a main processor (master) and subordinate processors (slaves), whose load and the nature of whose work are determined by the master.

Real time systems

The category of multitasking OSes, along with batch systems and time-sharing systems, also includes real-time systems, not mentioned so far.

They are used to control various technical objects or technological processes. Such systems are characterized by a maximum allowable response time to an external event, within which the program controlling the object must be executed. The system must process incoming data faster than the data can arrive, even from several sources simultaneously.

Such severe restrictions affect the architecture of real-time systems: for example, they may lack virtual memory, whose support causes unpredictable delays in program execution. (See also the topics related to process scheduling and virtual memory implementation.)

The classification of OSes given above is not exhaustive. The features of the use of modern OSes are discussed in more detail in [Olifer, 2001].

Conclusion

We have considered different views on what an operating system is, studied the history of operating system development, found out what functions an OS usually performs, and, finally, examined the approaches to building operating systems. The next lecture is devoted to clarifying the concept of "process" and to the issues of process scheduling.

Appendix 1.

Some information about computer architecture

The main hardware components of a computer are main memory, the central processing unit, and peripheral devices. To communicate with each other, these components are connected by a group of wires called a bus (see Fig. 1.5).

Fig. 1.5. Some computer components

Main memory is used to store programs and data in binary form and is organized as an ordered array of cells, each with a unique numeric address. Typically, the cell size is one byte. Typical operations on the main memory are reading and writing the contents of a cell with a specific address.

Various operations on data are carried out by an isolated part of the computer called the central processing unit (CPU). The CPU also has its own storage cells, called registers, which are divided into general-purpose registers and specialized registers. In modern computers a register typically holds 4-8 bytes. General-purpose registers are used for temporary storage of data and results of operations. To process information, data are usually transferred from memory cells to general-purpose registers, the operation is performed by the CPU, and the result is transferred back to main memory.

Specialized registers are used to control the operation of the processor. The most important are the program counter, the instruction register, and the register containing program status information.

Programs are stored as a sequence of machine instructions to be executed by the CPU. Each instruction consists of an operation field and operand fields, that is, the data on which the operation is performed. The entire set of machine instructions is called the machine language.

A program is executed as follows. The machine instruction pointed to by the program counter is read from memory and copied into the instruction register, where it is decoded and then executed. After the instruction has been executed, the program counter points to the next instruction. These actions, called the machine cycle, are then repeated.

Interaction with peripheral devices

Peripheral devices serve for the input and output of information. Each device usually has a specialized computer of its own, called a controller or adapter. When a controller is inserted into a connector on the motherboard, it connects to the bus and receives a unique number (address). After that the controller monitors the signals on the bus and responds to the signals addressed to it.

Any I/O operation involves a dialogue between the CPU and the device controller. When the processor encounters an I/O-related instruction in a program, it executes it by sending signals to the device controller. This is so-called programmed I/O.

In turn, any changes in external devices result in a signal being transmitted from the device to the CPU. From the CPU's point of view this is an asynchronous event that requires its reaction. To detect such an event, between machine cycles the CPU polls a special register containing information about the type of device that generated the signal. If a signal has occurred, the CPU executes a program specific to that device, whose task is to react to the event in an appropriate way (for example, to put a character typed on the keyboard into a special buffer). Such a program is called an interrupt handler, and the event itself an interrupt, because it disrupts the planned work of the processor. After the interrupt has been handled, the processor returns to executing the program. These actions of the computer are called interrupt-driven I/O.

Modern computers can also transfer data directly between a controller and main memory, bypassing the CPU: this is the so-called direct memory access (DMA) mechanism.

1) The real evolution of operating systems was not as smooth and systematic as presented in this review. For example, the first time-sharing system, JOSS, was implemented on the JOHNNIAC tube machine without any hardware support.

Lecture No. 1.

OPERATING SYSTEMS AND ENVIRONMENTS.

An operating system is usually understood as a complex of control and processing programs that, on the one hand, acts as an interface between the computer hardware and the user, and, on the other hand, serves to use the resources of the computing system more efficiently and to organize reliable computations.

Any component of the application software necessarily works under the control of the operating system. The diagram shows the general structure of the software.

It can be seen that no software component, except the operating system itself, has direct access to the computer hardware. Even users interact with their programs through the interface of the operating system: any of their commands pass through the operating system before reaching the application programs. The main functions performed by the operating system are:

1) accepting from the user tasks or commands formulated in the appropriate language, and processing them;

2) accepting and executing program requests to start and suspend other programs;

4) initialization of a program (transfer of control to it), as a result of which the processor executes the program;

5) program identification;

6) ensuring the operation of the file and database management systems, which dramatically increases the efficiency of the software;

7) providing the multiprogramming mode, i.e., executing two or more programs on one processor so that they appear to run simultaneously;

8) organizing and managing all input and output operations;

9) meeting hard time constraints in real time;

10) memory allocation, including, in most modern systems, the organization of virtual memory;

11) scheduling and dispatching tasks;

12) organizing message and data exchange between running programs;

13) protecting programs from the influence of one another and ensuring the preservation of data;

14) provision of services in case of partial system failure;

15) ensuring the operation of the programming systems with which users prepare their programs.

As a rule, all modern operating systems have a file management system. A file management system is intended to provide more convenient access to data organized as files. A number of operating systems make it possible to work with several file systems at once. In this case one speaks of a mounted file system: an additional file system can be attached to the existing one.

There are very simple operating systems that can work without file systems or with only one file system. Any file management system is designed to work with a specific operating system and a specific file system.

For example, the well-known FAT (File Allocation Table) file system has many implementations as a file management system: FAT16 under MS-DOS, Super FAT for OS/2, or FAT for Windows.

To work with files organized in accordance with some file system, an appropriate file management system must be created for each operating system, and it will work only on the operating system for which it was designed. For convenience, additional interface shells can be used with an operating system. Their main purpose is either to extend the capabilities of the operating system or to change the capabilities built into it. Classic examples of interface shells are:

    X Window in Unix-family systems;

    KDE (K Desktop Environment);

    PM Shell;

    Object Desktop.

There are various interface shells for the Windows family of operating systems that replace Explorer; the replacement shell is specified in the system.ini file.

In an operating system, only the interface shell is replaceable; the operating environment is defined by the programming interfaces.

An application reaches the operating system through its programming interface: an API (application programming interface) includes functions for process, memory, and I/O management.

Some operating systems can run programs designed for other operating systems: the corresponding environment is organized within the given machine. For example, in Linux you can create conditions for executing programs written for Windows 98.

Utilities are special programming systems with which you can service the operating system: perform data processing, optimize the data on storage media, and carry out maintenance of the operating system.

The utilities include programs for partitioning a drive into partitions, formatters, and programs for transferring the basic system files of the operating system itself. Utilities can run only on the appropriate operating system.

Lecture No. 2.

THE CONCEPT OF OPERATING ENVIRONMENT.

The operating system manages the computing processes in the computing system, distributes the resources of the computing system among different computing processes, and forms the software environment in which user application programs are executed. Such an environment is called an operating environment.

Any program deals with some input data, which it processes to generate output data, i.e., the results of the computation. In the vast majority of cases the input data arrive in RAM from external (peripheral) devices.

The results of the computation are also output to external devices. Programming I/O operations is among the most difficult tasks. That is why operating system development took the path of singling out the most frequently used operations and creating corresponding modules for them, which could later be used in newly created programs. //Ultimately, a situation arose when, when creating binary machine programs ...//

Programmers need not know all the details of managing the resources of the computing system; instead they refer to a certain software subsystem through the appropriate calls and obtain the necessary services from it. This software subsystem is the operating system, and the set of its service functions forms a basic concept called the operating environment. In other words, the term operating environment means the set of program interfaces that a user program needs in order to call on the operating system for a particular service. The parallel existence of the terms "operating system" and "operating environment" is due to the fact that one operating system can support several operating environments. For example, the operating system OS/2 Warp can run the following programs:

    so-called native programs, created with its 32-bit programming interface in mind;

    16-bit programs designed for first-generation OS/2;

    16-bit programs designed for MS-DOS and PC DOS;

    16-bit programs for the Windows operating environment;

    the Windows 3.x operating shell itself and, within it, programs created for it.

PURPOSE AND FUNCTIONS OF THE OPERATING SYSTEM.

An operating system is a program that controls the execution of user and application programs and acts as an interface between applications and the computer hardware. Its purpose can be divided into three main components:

    convenience: the operating system makes using the computer simple and convenient;

    efficiency: the operating system allows efficient use of computer system resources;

    evolvability: the operating system must allow the development and testing of new applications and system functions without disrupting the normal functioning of the computing system.

OPERATING SYSTEM AS AN INTERFACE BETWEEN USER AND COMPUTER.

The user, as a rule, is not interested in the details of the computer hardware and sees the computer as a set of applications. An application can be written in one of the programming languages. To make the programmer's task easier, there is a set of system programs, some of which are called utilities; they implement frequently used functions that help in creating user programs, working with files, and managing input/output devices. A programmer uses these tools when developing programs, and at run time applications call on the utilities to perform certain functions. The most important of the system programs is the operating system, which hides hardware details from the programmer and provides a convenient interface for executing programs. The operating system may include several interfaces:

    a user interface;

    a programming interface.

For example, Linux offers the user command interfaces in the form of various shells: C shell, Korn shell, Bourne shell, bash.

There are also interfaces of the Midnight Commander type.

There are also graphical interfaces (X Window), which can use various window managers and desktop environments (KDE, GNOME).

As for programming interfaces: in the Windows operating system, programs can access both the operating system itself for the appropriate services and functions, and the graphics subsystem. From the point of view of processor architecture, a program designed to run in Linux uses the same instructions and data formats as a program designed to work in the Windows environment; however, in the first case it calls on one operating environment, in the second on another. Thus, an operating environment is the system software environment in which programs created according to the rules of that environment can be executed.

A typical operating system provides the following services:

    software development. The operating system provides the programmer with a variety of tools and services, such as editors and debuggers. These services are implemented as utility programs that are supported by the operating system although not included in its kernel; such programs are called application development tools;

    program execution. A number of actions are needed to run a program: instructions and data must be loaded into main memory and devices must be initialized. The operating system does all this routine work;

    access to I/O devices. Controlling each I/O device requires its own set of commands or control signals. The operating system provides the user with a uniform interface that hides all these details and allows the programmer to access I/O devices using simple read and write commands;

    controlled access to files. When working with files, control on the part of the operating system involves not only understanding the nature of the I/O device but also knowledge of the data structures recorded in the files. Multiuser operating systems additionally provide mechanisms protecting access to files;

    system access. The operating system controls access to the shared computing system as a whole, as well as to individual system resources. It must protect resources and data from unauthorized use and resolve conflict situations;

    error detection and handling. Various failures occur while a computing system runs. These include internal and external hardware errors, such as memory errors or the failure or malfunction of devices, as well as software errors: arithmetic overflow, an attempt to access a forbidden memory cell, or an application request that cannot be fulfilled. In each of these cases the operating system must take action to minimize the impact of the error on the application. Its reaction may range from a simple error message to an emergency termination of the program;

    accounting for resource usage. A good operating system should have means of accounting for the use of the various resources and of displaying performance parameters. This information is extremely important for further improvement and tuning of the system.

OPERATING SYSTEM AS A RESOURCE MANAGER.

A computer is a set of resources that support the accumulation, storage, movement, and processing of data, as well as the control of these and other functions. It is the operating system that manages the computer's resources and controls its basic functions. However, this control has the following features:

    the functions of the operating system work in the same way as all other software: they are implemented as separate programs or a set of programs running as processes;

    the operating system must transfer control to other processes and wait for the processor to allow it to perform its duties again.

The operating system is, in essence, a set of computer programs; like any other program, it gives commands to the processor. The key difference is the purpose of this program.

The operating system decides how the other system resources are to be used and how processor time is to be divided among other programs; but for those programs to run, the processor must suspend its work on the operating system itself and switch to them.

Thus, the operating system yields the processor so that it can do some useful work, and then regains control just long enough to prepare the processor for the next piece of work.

Part of the operating system resides in RAM (main memory). This part includes the kernel, which contains the most frequently used functions, as well as some other components of the operating system that are currently in use.

The rest of main memory contains other programs and user data. The placement of these data in RAM is managed jointly by the operating system and the processor's memory-management hardware. The operating system decides when an executing program may use the I/O devices it needs and controls access to files.

The processor itself is also a resource, and the operating system must determine how much processor time to devote to the execution of a particular user program. In multiprocessor systems this decision must be made for each of the processors.

OPERATING SYSTEM DEVELOPMENT OPPORTUNITIES.

Most operating systems are constantly evolving, for the following reasons:

a) updates to and the emergence of new types of hardware;

b) new services. New tools for monitoring and evaluating performance can be added to the operating system in order to maintain a high quality of service for users;

c) bug fixes. Every operating system has bugs; from time to time they are discovered and corrected. The need to change operating systems regularly imposes certain restrictions on their design. Obviously, such systems must have a modular structure with clearly defined interfaces between the modules. For large programs, good and complete documentation is essential.

Accepted abbreviations:

    I/O - input/output;

    HW - hardware;

    DB - database;

    RAM - random access memory;

    OS - operating system;

    ROM - read-only memory;

    PC - personal computer;

    SW - software;

    RT - real time;

    CS - control system;

    DBMS - database management system;

    IOD - input/output device;

    FS - file system;

    CPU - central processing unit.

OS classification

The development of computers has led to the development of operating systems. There are now more than 100 OSes.

By purpose, OSes are usually divided into seven types.

1. Mainframe OS

Their I/O capabilities differ from those of a PC. Typically mainframes contain thousands of disks and terabytes of RAM. They are used as powerful web servers, servers for large-scale commercial sites, and servers for business transactions. Mainframe OSes are designed to handle many simultaneous jobs, most of which require a great number of I/O operations. They usually offer three kinds of service:

      batch processing. The system performs standard tasks without users being present. In batch mode, for example, insurance claims are processed and store sales reports are compiled;

      transaction processing. The transaction processing system manages a very large number of small requests (for example, it controls operations in a bank or the booking of flights). Each individual request is small, but the system must respond to thousands of requests per second;

      time sharing. Time-sharing systems allow many remote users to perform their tasks on the same machine, for example, to work with a large database. All these functions are closely related, and a mainframe OS often performs them all. An example of a mainframe OS is OS/390 (from IBM).

2. Server (network) OS

They run on servers, which are either very large PCs, or workstations, or even mainframes. They serve many users simultaneously and allow them to share software and hardware resources. Servers provide work with printers, files, and the Internet. ISPs typically run several servers to support simultaneous network access by many clients. The servers store web site pages and process incoming requests. Typical server OSes are Windows 2000 and Unix; Linux is now also used for these purposes.

3. Multiprocessor OS (clusters)

The most common way to increase the power of a computer is to connect several CPUs into one system. Depending on how the CPUs are connected and how the work is divided, such systems are called parallel computers, multicomputers, or multiprocessor systems. They require special operating systems, which are typically variants of server OSes with special communication capabilities.

4. PC OS

The job of these OSes is to present a user-friendly interface to a single user. They are widely used for word processing, spreadsheets, and Internet access. Vivid examples: Windows 98, Windows 2000, MacOS, Linux.

5. Real-time OS

Time is the main parameter of a real-time (RT) OS. For example, in a production control system, computers operating in RT mode collect data about the industrial process and use it to control machines. Such processes must meet strict timing requirements. Thus, if a car moves along a conveyor, each action must be carried out at a strictly defined moment: if the welding robot welds a seam too early or too late, irreparable harm results. If an action must occur at a specific moment or within a given range of time, one speaks of a hard RT system. There are also soft RT systems, in which occasionally missed deadlines are acceptable; digital audio and multimedia systems fall into this category. OS examples: VxWorks, QNX.

6. Embedded OS

A pocket computer, or PDA (Personal Digital Assistant), is a small computer that fits in a trouser pocket and performs a small set of functions, such as an address book and notepad. OS examples: PalmOS, Windows CE (Consumer Electronics: home appliances).

7. OS for smart cards

The smallest operating systems run on smart cards, which are devices with a CPU. Such operating systems are subject to extremely severe restrictions on CPU power and memory. Some can manage only a single operation, while others on the same smart cards perform complex functions. Some are Java-oriented: the ROM (Read Only Memory) contains an interpreter for the Java Virtual Machine (JVM). Java applets are loaded onto the card and executed by the JVM interpreter. Some of these cards can manage several Java applets at the same time, which leads to multitasking and the need for scheduling. There is also a need for protection. These tasks are usually performed by a very primitive OS.

7.1. Basic concepts of operating systems

One of the components of computer science is software, which is heterogeneous and has a complex structure comprising several levels: system, service, tool, and application software.

At the lowest level are software complexes that perform interface functions (mediating between a person and a computer, between hardware and software, and between simultaneously running programs) and distribute computer resources. Programs at this level are called system programs. All user programs run under the control of software packages called operating systems.

The next level is service software. Programs of this level are called utilities and perform various auxiliary functions. These can be diagnostic programs used in the maintenance of various devices (floppy and hard disks), test programs comprising sets of maintenance routines, archivers, antiviruses, etc. Utilities typically run under the operating system (although they can access hardware directly), so they are considered a higher level. In some classifications, the system and service levels are combined into one class: system software.

Tool software comprises software packages for creating other programs. Writing new programs directly in the language of machine instructions is complex and painstaking, and therefore unproductive. In practice, most programs are written in formal programming languages, which are closer to mathematical notation and therefore easier and more productive to work with; the translation of programs into machine code is carried out by the computer itself using tool software. Tool software is controlled by system programs, so it belongs to a higher level.

Application software is the largest class of programs in terms of volume: these are end-user programs. There are about six thousand different professions in the world and thousands of different hobbies, and most of them now have their own application software products. Application software is also managed by system programs and occupies a higher level.

Summarizing what has been said, we can propose the following software structure (Fig. 7.1).

Fig.7.1. Software classification



The proposed classification of software is largely conditional, since at present the software products of many companies combine elements from different classes. For example, the Windows operating system, while a complex of system programs, contains a block of utilities (defragmentation, disk checking and cleanup, etc.), as well as the WordPad word processor and the Paint graphics editor, which belong to the class of application programs.

The central place in the software structure is occupied by the operating system (OS). It is "a system of programs designed to provide a certain level of efficiency of a digital computing system by automated control of its operation and of the range of services provided to users".

An operating system (English: operating system) is a basic set of computer programs that provides a user interface, control of computer hardware, work with files, input and output of data, and the execution of application programs and utilities.

The OS allows abstraction from hardware implementation details, providing software developers with the minimal necessary set of functions. From the point of view of ordinary users of computer technology, the OS also includes the user-interface programs.

The operating system is the program that loads when you turn on your computer. It carries on a dialogue with the user, manages the computer and its resources (RAM, disk space, etc.), and launches application programs for execution. The operating system provides the user and application programs with a convenient way to communicate (an interface) with the devices of a personal computer.

The most common operating systems are: MS-DOS, OS/2, UNIX, WINDOWS, LINUX, WINDOWS NT, they have different modifications.

Main functions (of the simplest OS):

Standardized access to peripheral devices (I/O devices);

RAM management (allocation between processes, virtual memory);

Controlling access to data on non-volatile media (such as HDD, CD, etc.), usually via a file system;

User interface;

Network operations, protocol stack support

Additional functions:

Parallel or pseudo-parallel execution of tasks (multitasking);

Interaction between processes: data exchange, mutual synchronization;

Protection of the system itself, as well as user data and programs from malicious actions of users or applications;

Differentiation of access rights and multi-user mode of operation (authentication, authorization).

A program that hides the truth about the hardware and presents a simple list of files that can be read and written, that is, the operating system, not only eliminates the need to work directly with disks and provides a simple, file-oriented interface, but also hides much of the tiresome work with interrupts, timers, memory organization, and other low-level elements. In each case, the abstraction offered by the OS is much simpler and easier to handle than the underlying hardware.

From the user's point of view, the OS acts as a virtual machine that is easier and simpler to work with than the hardware of the real computer directly; for programs, the OS provides a number of features that they can invoke through special commands called system calls.

The concept that sees the OS primarily as a user-friendly interface is a top-down view. An alternative bottom-up view gives an idea of ​​the OS as a mechanism for managing all parts of the computer. Modern computers are made up of processors, memory, disks, network hardware, printers, and a myriad of other devices. According to the second approach, the job of the OS is to provide an organized and controlled distribution of processors, memory, and I / O devices among various programs that compete for the right to use them.

Types of operating systems.

The history of OS development goes back many years. Operating systems appeared and developed in the process of improving computer hardware, so these events are historically closely related. The development of computers has led to the emergence of a huge number of different operating systems, of which not all are widely known.

At the top level are mainframe operating systems. These huge machines can still be found in large organizations. Mainframes differ from personal computers in their I/O capabilities: it is not uncommon to see mainframes with a thousand disks and terabytes of data. Mainframes act as powerful web servers and as servers for large enterprises and corporations. Mainframe operating systems are primarily focused on handling many concurrent jobs, most of which require huge amounts of I/O. They typically perform three kinds of operations: batch processing, transaction processing (bulk operations), and time sharing. Batch systems process routine jobs without the participation of interactive users. Transaction processing systems handle a very large number of small requests, such as flight bookings: each individual request is small, but the system must respond to hundreds or thousands of requests per second. Time-sharing systems allow many remote users to perform their tasks on the same machine simultaneously, such as working with a large database. All these functions are closely related, and the mainframe operating system performs them all. An example of a mainframe operating system is OS/390.

Next come server operating systems. Servers are either multiprocessor computers or even mainframes. These operating systems serve many users simultaneously and allow them to share software and hardware resources. Servers also provide the ability to work with printers, files, or the Internet. ISPs typically run multiple servers in order to support many clients accessing the network at the same time. Servers store web site pages and process incoming requests. UNIX and Windows 2000 are typical server operating systems. Linux is now also used for this purpose.

The next category is OS for personal computers. Their job is to provide a user-friendly interface for a single user. Such systems are widely used in daily work. The main operating systems in this category are the operating systems of the Windows platform, Linux and the operating system of the Macintosh computer.

Another type of OS is real time systems. The main parameter of such systems is time. For example, in manufacturing control systems, real-time computers collect industrial process data and use it to control equipment. Such processes must meet strict time requirements. If, for example, a car is moving along a conveyor, then each action must be carried out at a strictly defined moment in time. If the welding robot welds the seam too early or too late, it will cause irreparable damage to the product. VxWorks and QNX systems are real-time operating systems.

Embedded operating systems are used in handheld computers and home appliances. A pocket computer is a small computer that fits in a pocket and performs a small set of functions, such as a phone book and notepad. Embedded systems that control household appliances are not considered computers, but they share the characteristics of real-time systems while also having special size, memory, and power limitations that set them apart as a class of their own. Examples of such operating systems are PalmOS and Windows CE (Consumer Electronics: home appliances).

The smallest operating systems run on smart cards: devices the size of a credit card that contain a central processing unit. Such operating systems are subject to very severe restrictions on processor power and memory. Some of them can manage only one transaction, such as an electronic payment, while other operating systems perform more complex functions.

Classification of operating systems.

Operating systems are classified by:

Number of concurrent users: single-user, multi-user;

The number of processes simultaneously running under the control of the system: single-tasking, multi-tasking;

Number of processors supported: uniprocessor, multiprocessor;

OS code bitness: 8-bit, 16-bit, 32-bit, 64-bit;

Interface type: command (text) and object-oriented (graphic);

Type of user access to the computer: batch processing, time sharing, real time;

Type of resource usage: network, local.

In accordance with the first sign of classification, multi-user operating systems, unlike single-user ones, support simultaneous work on a computer of several users using different terminals.

The second feature divides operating systems into multitasking and single-tasking. Multitasking means support for the parallel execution of several programs existing within one computer system at the same time. Single-tasking operating systems support the execution of only one program at any given moment.

In accordance with the third sign, multiprocessor operating systems, in contrast to single-processor ones, support the mode of distributing resources of several processors for solving a particular task.

The fourth feature classifies operating systems into 8-bit, 16-bit, 32-bit, and 64-bit operating systems. This implies that the bitness of the operating system cannot exceed the bitness of the processor.

In accordance with the fifth feature, operating systems are divided by the type of user interface into object-oriented (as a rule, with a graphical interface) and command-driven (with a text interface). According to the sixth feature, operating systems are divided into systems of:

Batch processing, in which a package (set) of jobs to be executed is formed, entered into the computer, and executed in order of arrival, possibly taking priority into account;

Time sharing, which provides simultaneous interactive access to the computer for several users at different terminals; machine resources are allocated to them in turn, coordinated by the operating system in accordance with a given service discipline;

Real time, which provides a guaranteed response time of the machine to a user request, with control of external events, processes, or objects relative to the computer.

In accordance with the seventh sign of classification, operating systems are divided into network and local. Network operating systems are designed to manage the resources of computers connected in a network for the purpose of sharing data, and provide powerful tools for restricting access to data in the framework of ensuring their integrity and safety, as well as many service options for using network resources.

In most cases, network operating systems are installed on one or more sufficiently powerful server computers dedicated solely to maintaining the network and shared resources. All other operating systems are considered local and can be used on any personal computer, as well as on a separate computer connected to the network as a workstation or client.

The software components of the OS manage the computation and implement functions such as planning and allocation of resources, control of input-output, and data management. The size of the operating system and the number of programs that make it up are largely determined by the type of computers used, the complexity of the operating modes of the computer and the computing system, the composition of the hardware, etc. Using an OS allows you to:

Increase computer throughput, i.e., the total amount of work the computer performs per unit of time;

Reduce system response time, i.e., the interval between the moment a job enters the computer and the moment its results are delivered;

Monitor the performance of hardware and software;

Assist users and operators in their use of hardware and software;

Manage programs and data in the course of computation;

Ensure the adaptability of the computer, i.e., its structural flexibility: the ability to modify it and extend it with new hardware and software.

The operating system is a complex of system and service software. On the one hand, it relies on the computer's basic software, embodied in its BIOS (basic input-output system); on the other hand, it is itself the foundation for software of higher levels: application programs and most service applications. Applications of an operating system are programs designed to run under the control of that system.

The main function of all operating systems is intermediary. It consists in providing several types of interface:

The interface between the user and the software and hardware of the computer (user interface);

Interface between software and hardware (hardware-software interface);

An interface between different types of software (software interface).

Even for a single hardware platform, such as the IBM PC, there are multiple operating systems. The differences between them fall into two categories: internal and external. Internal differences are characterized by the methods used to implement the main functions. External differences are determined by the availability and accessibility of the applications for the system that are needed to meet the technical requirements of a particular workplace.

The main criteria for the approach to choosing an operating system:

Currently there are a large number of operating systems, and the user faces the task of determining which one is better than the others (by certain criteria). Obviously, there are no ideal systems; each has its advantages and disadvantages. When choosing an operating system, the user must understand how a given OS will support the solution of his tasks.

To choose one or another OS, you need to know:

On what hardware platforms and at what speed does the OS work;

What peripheral hardware does the OS support;

How fully the OS satisfies the needs of the user, that is, what are the functions of the system;

How the OS interacts with the user, that is, how visual, convenient, understandable, and familiar its interface is;

Are there informative tips, built-in guides, etc.;

What is the reliability of the system, that is, its resistance to user errors, equipment failures, etc.;

What opportunities does the OS provide for organizing networks;

Does the OS provide compatibility with other operating systems;

What tools the OS provides for developing application programs;

Does the OS support various national languages;

What known application software packages can be used when working with this system;

How information and the system itself are protected in the OS.

1. OS concept. Basic functions of the OS.

An OS is usually understood as a set of control programs that act as an interface between the computer hardware and user programs, and are designed for the most efficient use of computing system resources and the organization of reliable computation. Every software component runs under the control of the OS, and none of them, except the OS itself, has direct access to the hardware.

The main functions of the OS are:

1. Receiving tasks or commands from the user.

2. Receiving and executing program requests to start, pause and stop other programs.

4. Program initiation (transfer of control to it, as a result of which the processor executes the program).

5. Identification of all programs and data.

6. Ensuring the operation of the file management system and the DBMS, which increases the efficiency of all software.

7. Providing a multiprogramming mode, i.e., the execution of two or more programs on one processor, creating the appearance of simultaneous execution.

8. Management of input / output operations.

9. Satisfying hard real-time constraints.

10. Memory allocation, organization of virtual memory.

11. Planning and scheduling of tasks in accordance with the given strategy and service disciplines.

12. Exchange of messages and data between running programs.

13. Protection of programs from mutual interference; data safety.

14. Provision of services in case of system failure.

15. Ensuring the operation of programming systems.

2. Interrupts. Interrupt handling.

Interrupts are a mechanism that makes it possible to coordinate the parallel operation of individual devices of a computing system and to react to special states arising during the operation of the processor. An interrupt is a forced transfer of control from a running program to the system (and through it to the corresponding interrupt handler) that occurs upon a certain event. The main purpose of introducing interrupts is to implement an asynchronous mode of operation and to parallelize the operation of individual devices of the computing complex. The interrupt mechanism is implemented in hardware and software.

The structures of interrupt systems can be very different, but they all have a common feature - an interrupt will certainly lead to a change in the order of execution of instructions by the processor. The interrupt handling mechanism includes the following elements:

1. Establishing the fact of interruption (reception and identification of the interrupt signal).

2. Remembering the state of the interrupted process (the state is determined by the value of the program counter, the contents of the processor registers, and the mode specification: user or privileged).

3. Transferring control in hardware to the interrupt handler: the starting address of the interrupt service routine is loaded into the program counter, and the corresponding registers are loaded from the processor status word.

4. Saving the information about the interrupted program that could not be saved by the hardware actions.

5. Handling the interrupt. This work can be performed by the same routine that received control in step 3, but in operating systems it is most often implemented by calling the corresponding subroutine.

6. Restoring the information related to the interrupted process.

7. Return to the interrupted program.

The first three steps are implemented in hardware, the rest in software.

The main functions of the interrupt mechanism:

1. Recognition or classification of interruption.

2. Transfer of control to the interrupt handler.

3. Correct return to an interrupted program

The transition from the interrupted program to the handler, and back, should be done as quickly as possible. One quick method is to use a table containing a list of all interrupts allowed for the computer and the addresses of the corresponding handlers. For a correct return to the interrupted program, before control is transferred to the handler, the contents of the processor registers are stored either in random-access memory or on the system stack.

Interrupt service. The presence of an interrupt signal does not necessarily cause the running program to be interrupted: the processor may have an interrupt protection system that disables the interrupt system as a whole, or disables or masks individual interrupt signals. Software control of these means allows the OS to regulate the processing of interrupt signals: the processor can process an interrupt immediately upon its arrival, postpone processing for a while, or ignore it completely. Normally, an interrupt is acted upon only after the execution of the current instruction has completed. Because interrupts occur at random times, several interrupts may be pending at once, and they can only be processed sequentially. To process interrupt signals in a reasonable order, they are assigned priorities. Programs managing special mask registers make it possible to implement various service disciplines:

1) With relative priority. Service is not interrupted even if requests with higher priorities arrive; after servicing the current request, the pending request with the highest priority is served. To organize this discipline, it is necessary to mask all other interrupts in the service program for the given request, or simply to disable the interrupt system.

2) With absolute priority. The requests with the highest priority are always serviced first. To implement this discipline, when an interrupt is serviced, all requests with lower priority are masked. Multi-level interruption is then possible, i.e., interruption of an interrupt handler itself. The number of interrupt levels in this mode varies and depends on the priorities of the requests, following a stack principle (LCFS, last come first served): a request with a higher priority may interrupt one with a lower priority. When an interrupt request appears, the interrupt system identifies the signal and, if interrupts are enabled, transfers control to the corresponding interrupt handler.

An interrupt-handling routine consists of service sections: a first section in which the context of the interrupted task is saved, a central section in which the interrupt itself is processed, and a final section in which the context is restored. So that the interrupt system does not respond again to the same interrupt request signal, it automatically disables interrupts, and the interrupt routines must therefore re-enable them at the appropriate moments: interrupts are enabled for the duration of the central section, disabled for the duration of the final section, and enabled again after the context of the interrupted task has been restored. Since these actions must be performed in every interrupt handler, in many operating systems the first section of interrupt handling is allocated to a special software module called the interrupt supervisor.

3. What is the difference between reentrant and serially reusable software modules? How are they implemented?

1. What is a File Management System (FMS)?

Purpose of the FMS.

To organize more convenient access to data organized as files: instead of low-level access, which specifies the physical address of each record, logical access is used, which specifies the file name and the record within it.

A number of operating systems can work with several FMSs; in this case one speaks of mounted file systems. There are also operating systems that work without any FMS: a file management system is not needed by itself; it is designed to work within a specific OS and with a specific file system.

2. External, internal and software interrupts.

Interrupts that occur during the operation of a computing system can be divided into external and internal. External interrupts are caused by asynchronous events that occur outside the interrupted process. Examples: timer interrupts, interrupts from external devices, I/O interrupts, power-failure interrupts, operator-console interrupts, and interrupts from another processor or OS.

Internal interrupts are caused by events that are connected with the operation of the processor and are synchronous with its operations. For example: a violation of addressing (a forbidden or nonexistent address is specified), or access to a missing segment or page when virtual memory is organized; an unused binary combination in the opcode field; division by zero; exponent overflow or underflow; parity errors or malfunctions of various hardware devices detected by the control circuits.

Software interrupts. These interrupts occur upon a corresponding interrupt instruction: on this instruction the processor performs the same actions as for ordinary internal interrupts. This mechanism was introduced specifically so that switching to system program modules occurs not as a mere subroutine call but in exactly the same way as an ordinary interrupt. This ensures automatic switching of the processor into privileged mode with the ability to execute any instruction. The signals that cause interrupts are generated outside the processor or within it, and they can occur simultaneously; the choice of which one to process is based on the priority assigned to each type of interrupt. Priority handling can be built into the hardware or determined by the OS.
