Why can't a workstation be a server? Workstation: what is it in terms of computer systems.

By and large, any organization with more than 7-8 networked computers needs a server: it simplifies administration, improves the reliability of file storage, and so on. Suppose a computer has been freed up, you decide to use it as a server for your enterprise, and your visiting system administrator says he can set it up. We have no doubt that a server operating system can indeed be run on a "household" computer. Yes, this saves a significant amount of money, but is it really a good deal? Let's figure it out.

The choice of hardware for your server should be determined by the tasks you intend to assign to it. For most laypeople, the very word "server" evokes something big: huge machines, heavy boards, numerous indicators and connectors... and incredible performance. More often than not, this is not true at all.

Today there is a wide variety of server form factors, hardware, and software. Sometimes ordinary consumer hardware is pressed into service for server tasks. Whether that approach is adequate can only be judged after considering in detail the functions such a server performs and the reliability requirements placed on it. In any case, this solution is better suited to a home network than to a serious corporate deployment.

The most important characteristic of a server is reliability. This is the key requirement for absolutely any server. Judge for yourself: the failure of this device will most likely leave you without the information your company's business processes depend on: a customer base, an accounting database, an accumulated archive of documents, contracts, or reference materials. A dead server is a blow to the very heart of your enterprise.

Server availability at all times is the second most important condition. Hardware and software should therefore be selected so that server downtime during working hours is minimal, tending to zero.

The third important characteristic of server hardware is serviceability: the ability to perform maintenance quickly, and without compromising the first two criteria.

Obviously, "household" hardware is of little use for meeting these requirements even at a minimal level, even if your system administrator is a jack of all trades. Only server-grade hardware delivers reliability, availability, and fast servicing without stopping services. Any specialist with even minimal experience will tell you that consumer hardware is unsuitable for round-the-clock operation, and that a failed hard drive or power supply cannot be replaced without shutting down a machine that many processes depend on. Server hardware is indispensable in this respect.

"Professional" hardware is expensive. Or rather, EXPENSIVE! You are not paying for super performance, but for reliability, for long-term uninterrupted operation, and for the ability to replace failed components without stopping the system. Server systems also often come with a warranty, and that is worth a lot: replacing a failed component frequently requires exactly the same part, not a similar one from a newer generation. Try finding identical replacement components for consumer hardware released a year and a half ago... For server systems under warranty, the manufacturer undertakes to supply such components in the event of a breakdown.

Let's start with the so-called form factor. The form factor here is the standard that defines the dimensions of the motherboard and its mounting points in the case; the positions of the bus interfaces, I/O ports, processor socket, and RAM slots; and the type of power supply connector.

There are several types of server form factors. Conventional tower servers look like desktop PCs; they accept ATX or EATX motherboards, and standard components can easily be used. But for installations with more than one or two servers, rackmount servers are far more convenient. They are mounted horizontally in 19" rack cabinets, so several servers fit into a single rack. Racks come in different heights and depths.

The components of rack servers are usually non-standard and generally do not overlap with the consumer sector. The height of 19" servers is expressed in U (rack units; 1U = 1.75 inches, often simply called a "unit" in jargon). Servers most commonly come in 1U, 2U, and 4U heights; taller ones exist but are rare and usually tailored to some narrow application.

Many other products are available for rack mounting, including network switches, routers and firewalls, patch panels, studio audio and video units, uninterruptible power supplies (UPS), network attached storage (NAS), telephone exchanges, etc.

There is also a subcategory of rack servers called blade servers. They are much thinner than regular servers and are installed not directly in the rack but in a special enclosure (chassis) pre-installed in the rack.

Blade servers are designed to increase the density of computing units in a limited space. This form factor also simplifies system maintenance somewhat, making cable management more convenient, providing modularity and ease of deployment. Rack servers need power, display cables, networking, etc., while blade servers simply plug into hot-swappable slots.

Let's take a closer look at individual server components and how they differ from "household" hardware. Traditionally, we start with processors. Two firms reign supreme here: Intel and AMD. These companies produce processors for the vast majority of server solutions at every level. The server processor lines have kept their names for a long time: Xeon for Intel and Opteron for AMD. They differ from consumer processors in more flexible power management (scaling with load), extended hardware virtualization support (the ability to host several "virtual" servers on one physical server), better support for parallel workloads, and a number of technologies for monitoring the state of individual processors and cores as well as of complex multiprocessor systems as a whole.

AMD processors are cheaper, but Intel ones are traditionally considered more reliable. Both firms produce processors that run only on specific motherboards; an Intel processor cannot be installed on a board designed for AMD processors.

Next, you need to select a motherboard to match the processor. If you plan to build a multiprocessor system running virtual servers, choose a motherboard that can accept multiple processors.

In addition to multiprocessor support, modern server motherboards offer many other useful features that are fundamentally different from consumer boards. For example, several built-in network interfaces, which can be used both to bridge different networks and as dedicated communication channels for virtual servers hosted on the same hardware. For systems with demanding network-speed requirements, the ability to combine two or more network interfaces into one (link aggregation) can be a lifesaver: it increases both speed (interface bandwidth is summed) and reliability (if one interface fails, the server remains reachable). A number of motherboards support such technologies.
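The claimed gains from combining interfaces are simple to quantify. As a back-of-the-envelope sketch (independent link failures assumed; the function names are our own, not tied to any bonding driver):

```python
def aggregate_throughput(link_speeds_mbps):
    """Ideal combined bandwidth of a bonded link: individual speeds sum."""
    return sum(link_speeds_mbps)

def availability(per_link_availability, n_links):
    """Probability that at least one of n independent links is up."""
    return 1 - (1 - per_link_availability) ** n_links

# Two bonded 1 Gbit/s interfaces:
print(aggregate_throughput([1000, 1000]))   # 2000 Mbit/s in the ideal case
print(round(availability(0.99, 2), 4))      # 0.9999: one "nine" becomes four
```

Real aggregation rarely reaches the ideal sum for a single traffic flow, since most schemes balance whole flows across links, but the redundancy benefit holds regardless.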

Server motherboards also support much larger amounts of RAM. Where many consumer systems are limited to 4 GB, server systems run with 8, 16, or more GB, which is often essential for the normal operation of services and applications. In addition, such boards have six or more memory channels, allowing the server to handle multiple simultaneous tasks more efficiently.

Such boards are often equipped with built-in hardware RAID support. RAID (redundant array of independent disks) is an array of several disks connected by high-speed channels and perceived by the system as a single volume. Depending on the array type, it provides varying degrees of fault tolerance and performance; it serves to increase the reliability of data storage and/or the speed of reading and writing. RAID support is now appearing even on consumer motherboards, but it is only a pale reflection of what server hardware controllers can do.
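The fault-tolerance idea behind one popular array type, RAID 5, can be illustrated in a few lines of Python. This is a toy sketch of XOR parity, not a model of a real controller:

```python
# RAID-5 style redundancy in miniature: the parity block is the XOR of the
# data blocks, so any single lost block can be rebuilt from the survivors.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data blocks on three disks, parity stored on a fourth.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Disk 2 fails: its contents are reconstructed from the other disks + parity.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered)  # b'BBBB'
```

A hardware controller does the same arithmetic on the fly for every stripe, which is why a degraded array keeps serving reads and writes until the failed disk is replaced.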

Also, in addition to the familiar SATA connectors, these boards have connectors for so-called SAS (Serial Attached SCSI) drives, a server-oriented counterpart to SATA that offers higher reliability and performance.

SAS disks, which replaced parallel SCSI server disks, inherited their key characteristics, including a 15,000 rpm spindle speed (the rotation speed of the magnetic platters that hold the data), which allows data to be read faster. In addition, the SAS interface supports multiple simultaneous connections, which the old parallel SCSI bus could not offer.

In addition, almost all modern server motherboards come with a very basic integrated graphics controller with a small amount of dedicated memory. This is justified: applications requiring powerful video cards do not run on servers, and most of the time a server may not even have a monitor connected.

The server's RAM works on exactly the same principle as in ordinary "household" computers. The difference is that server memory has a built-in hardware mechanism for correcting certain classes of errors (ECC), preserving data integrity and saving the system from many problems.

Server power supplies deserve a separate discussion. These professional-sector devices are specially designed for maximum reliability and fast replacement. A regular consumer power supply can ride out minor power disturbances, but professional units handle more serious failures, and they also provide surge protection, partially duplicating the functionality of an uninterruptible power supply (UPS).

In addition, professional power supplies are modular and redundant, typically with two modules, each capable of powering the whole system on its own. If one module fails, the system keeps running on the second, and the failed module can be replaced without shutting down the server.

Thus, it is obvious that the reliability and usability of server hardware is an order of magnitude higher than that of “household” hardware. Using an ordinary computer in this responsible capacity is a lottery in its purest form. Are you ready to take the risk?

Any computer network consists of more than just computers connected by wires. In fact, the network in this case is a very complex information infrastructure, each element of which is designed to ensure the exchange of data between users.

Despite the wide variety of computer networks and network equipment, every computer within a network acts either as a server or as a client.

Server: what is it, what are its features

From the point of view of computer science, a server is a "master" computer that serves the entire network. It provides its computing and information resources to the computers connected to it, that is, to workstations.

At the software level, a server can also mean a special application that responds to requests from client programs, whether on the same machine or across a computer network.
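This request-response relationship can be sketched with Python's standard socket library. The port choice, message, and trivial "echo" protocol below are purely illustrative:

```python
import socket
import threading

# Create the listening socket up front so the client cannot connect
# before the server side is ready.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one_request():
    """The server role: wait for a request and answer it automatically."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

threading.Thread(target=serve_one_request, daemon=True).start()

# The workstation (client) role: generate a request, send it, read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

srv.close()
print(reply)  # b'echo: hello'
```

The same pattern scales from two programs on one machine to a dedicated server box answering thousands of workstations; only the addressing and concurrency machinery change.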

In addition, a server need not be a single machine: it can be a complex of software and hardware in which several computers work together to process user requests more efficiently. Special software connects such server computers to one another in so-called clusters.

The purpose of the servers is usually as follows:

  • Processing and organization of data transfer within the network;
  • Processing of mail messages (in the case of mail servers);
  • Organization of access to all kinds of network resources, including the Internet;
  • Organization of data storage in the network;
  • Interaction between game clients.

Depending on the type of server and the network in which it exists, these functions can be combined and intersected.

The concept of a workstation

The client machine (also known as a workstation) is the user's working computer, served by the server. A workstation must have unhindered access to the network resources provided by the server, provided, of course, that the client has the appropriate permissions.

As a rule, a workstation does not make its own resources available for sharing by other workstations.

Typically, network resources are assigned local drive or port names, for example Z:, E:, I:, or LPTx, COMx.

Any workplace can be represented either as a full-fledged user's working machine, or as a terminal that gives the employee access to network resources. In the second case, the terminal may not even have its own disk storage.

Not only computers, but also peripheral devices can act as clients. For example, a network printer.

One way or another, the workstation is the end point where a person, through network resources, interacts with all the tools needed to solve his tasks.

The difference between a server and a workstation

Of course, there are actually quite a few differences between a server and a workstation, but one is key: a server is designed to answer requests automatically, while a workstation (client) generates and sends those requests to the server and interacts with the user.


AWP composition.

Automated workstation (AWP) of the end user of the information system

Purpose and composition of an AWP. Characteristics of the types of AWP support

An automated workstation (AWP) is a set of information resources and software and hardware tools that provide the user with data processing and the automation of management functions in a specific subject area.

An AWP has a professional, problem-oriented focus and allows the user to offload to the computer typical repetitive operations related to the accumulation, systematization, storage, search, processing, protection, and transmission of data.

The composition of an AWP is determined by:

The specialist's professional orientation;

The level of the management tasks (tactical, strategic, predictive);

The nature of the tasks to be solved (for specialists: regulated document flow, recurring deadlines, a variety of regulatory, reference, and operational information, etc.; for managers: setting strategic goals, planning, choosing sources of funding, developing policy, etc.).

18. Classification of computers.

19. PC structure.

A PC includes three main devices: a system unit, a keyboard, and a monitor. However, to expand the functionality of a PC, various additional peripheral devices can be connected to it: printing devices (printers), various manipulators (mouse, joystick, trackball, light pen), information input devices (scanners, graphics tablets - digitizers), plotters, etc.

These devices are connected to the system unit by cables through special sockets (connectors), usually located on the back of the system unit. If there are free slots on the motherboard, some additional devices can be installed directly inside the system unit, for example, a modem for exchanging information with other PCs over the telephone network. As a rule, PCs have a modular structure (the structure of a modern PC is shown in Fig. 3.1). All modules are connected by a common bus (the system bus).

20. Workstation and server.

In any case, the workstation is the end point of a specialist's interaction with the necessary computer-based tools. Workstations are designed to perform end-user tasks and to interact with the operator.

A server is a remote computer whose task is to answer requests from the end clients connected to it (be they workstations, access terminals, or other servers).

A server can also be understood as a special program that responds to requests from other client programs in a local or global network. In this case, one of the workstations can act as a server, serving requests from other network clients.

Alternatively, a server is understood as a dedicated software and hardware complex consisting of several powerful, specially configured computers designed exclusively for processing requests. That is, it is not merely a specially configured program on one of the network's workstations, but a dedicated high-performance computer, or a whole group of them, occupied solely with answering requests. For such platforms, special hardware configurations are developed that interconnect easily, forming a supercomputer (cluster).

Typical servers are designed for:

  • processing and forwarding mail on the network,
  • processing queries to databases,
  • providing access to web resources,
  • redirecting or distributing traffic on the network (proxy servers),
  • storing and transferring files on the network,
  • ensuring the interaction of game clients.

Other configurations are also possible.

How is a server different from a computer (workstation)?

The main property of a server is issuing automatic responses to requests from connected clients. A workstation is designed to work only with the end user.


21. Classification of computer networks.

After mankind created personal computers, a new approach was needed to organizing data-processing systems, along with new technologies for storing, transmitting, and using information. Somewhat later it became necessary to move from separate computers in centralized data-processing systems to systems capable of distributed data processing. Distributed data processing is processing performed by independent but interconnected computers that together constitute a distributed system. A computer network is a set of computers interconnected by communication channels, forming a single system that meets the requirements of distributed information processing. The main purpose of computer networks is thus the joint processing of data, in which all components of the system participate regardless of their physical location.

Computer networks are classified by the geographical placement of their computers and other components relative to one another:

Global networks unite subscribers located at great distances from one another, from hundreds to tens of thousands of kilometers. Such networks make it possible to pool the information resources of all mankind and to organize instant access to those resources.

Regional networks connect subscribers located at smaller, but still significant, distances. An example of a regional network is the network of a large city or of an individual state.

Local networks unite subscribers located relatively close to one another, most often within one building or several neighboring buildings. These are the networks of enterprises, company offices, firms, and so on.

In addition, global, regional, and local networks can be combined into multi-network hierarchies: powerful systems that process huge volumes of information and provide virtually unlimited access to information resources. Understanding this classification also makes it possible to build exactly the kind of system that satisfies the information needs of an enterprise, office, city, or state.

In general, a computer network consists of three nested subsystems: a network of workstations, a network of servers, and a basic data transmission network. A workstation (also called a client machine, workplace, subscriber station, or terminal) is a computer used by a network subscriber. A network of workstations is a set of workstations plus the communication facilities that connect the workstations to one another and to the servers. A server is a computer that performs general network tasks and provides services to workstations. A server network is the set of servers plus the communication facilities connecting them to the basic network. The basic data transmission network is the set of facilities for transmitting information between servers; it includes communication channels and communication nodes. A communication node is a set of switching and transmission equipment concentrated at one point. Its purpose is to receive data arriving over communication channels and forward it onto the channels leading to subscribers.

22. Types of data channels.

Data transmission channels used in computer networks are classified by several criteria. First, by the form in which information is represented as electrical signals, channels are divided into digital and analog. Second, by the physical nature of the transmission medium, communication channels are wired (usually copper), optical (usually fiber-optic), or wireless (infrared and radio). Third, by the way the medium is shared between messages, channels use time-division (TDM) or frequency-division (FDM) multiplexing.

One of a channel's main characteristics is its capacity (the information transfer rate), determined by the channel bandwidth and the way data is encoded as electrical signals. The information rate is measured by the number of bits of information transmitted per unit of time. Alongside it, one speaks of the baud (modulation) rate, measured in baud, i.e., the number of changes of the discrete signal per unit of time. It is the baud rate that is limited by the line's bandwidth. If each change of the discrete signal encodes several bits, the information rate exceeds the baud rate. Indeed, if n bits are transmitted per baud interval (between adjacent signal changes), the number of signal gradations is 2^n. For example, with 16 gradations and a speed of 1200 baud, one baud corresponds to 4 bits, and the information rate is 4800 bit/s. As the length of a communication line increases, signal attenuation grows and, consequently, the bandwidth and information rate decrease.
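The arithmetic of this example is easy to check in a few lines of Python (the function names are our own):

```python
import math

def bits_per_baud(num_levels):
    """Bits carried per signal change when the signal has num_levels gradations."""
    return int(math.log2(num_levels))

def information_rate(baud_rate, num_levels):
    """Information rate in bit/s from the modulation rate and signal levels."""
    return baud_rate * bits_per_baud(num_levels)

# The example from the text: 16 gradations at 1200 baud.
print(bits_per_baud(16))           # 4 bits per baud
print(information_rate(1200, 16))  # 4800 bit/s
```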

23. Digital and analog channels.

A communication channel is understood as the combination of a propagation medium and the technical transmission facilities between two channel interfaces, or junctions, of type C1 (see Figure 1-1). For this reason, the C1 junction is often called a channel junction.

Depending on the type of transmitted signals, there are two large classes of communication channels, digital and analog.


Fig. 25. Digital and analog transmission channels

A digital channel is a bit path with a digital (pulse) signal at the input and output of the channel. A continuous signal is received at the input of an analog channel, and a continuous signal is also taken from its output (Fig. 25).

Signal parameters can be continuous or take only discrete values. Signals can contain information either at every moment of time (continuous in time, analog signals), or only at certain, discrete times (digital, discrete, pulse signals).

Digital channels include PCM, ISDN, and T1/E1 channels, among many others. Newly built data transmission networks tend to be based on digital channels, which have a number of advantages over analog ones.

Analog channels are the most common, owing to their long history and ease of implementation. A typical example is the voice-frequency (VF) channel, as well as group paths carrying 12, 60, or more VF channels. A PSTN telephone circuit typically includes multiple switches, splitters, group modulators, and demodulators, and for the PSTN this channel (its physical route and a number of its parameters) changes with each new call.

When transmitting data over an analog channel, a device at the channel input must convert the digital data coming from the DTE into analog signals sent into the channel, and the receiver must contain a device that converts the received continuous signals back into digital data. These devices are modems. Similarly, when transmitting over digital channels, data from the DTE must be converted into the form required by the particular channel. This conversion is handled by digital modems, often called ISDN adapters, E1/T1 channel adapters, line drivers, and so on, depending on the channel type or transmission medium.
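The kind of conversion a classic modem performs can be illustrated in miniature with frequency-shift keying (FSK): each bit is sent as a burst of one of two tones, and the receiver decides, per bit slot, which tone correlates better with what it heard. The sample rate, tone frequencies, and baud rate below are arbitrary illustrative values, not those of any real modem standard:

```python
import math

SAMPLE_RATE = 8000              # samples per second
BAUD = 100                      # symbols per second
F0, F1 = 1000, 2000             # Hz: tone for bit 0, tone for bit 1
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD

def modulate(bits):
    """DTE bits -> 'analog' samples: each bit becomes a burst of one tone."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * f * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by checking which tone correlates better in each slot."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        e0 = abs(sum(s * math.sin(2 * math.pi * F0 * n / SAMPLE_RATE)
                     for n, s in enumerate(chunk)))
        e1 = abs(sum(s * math.sin(2 * math.pi * F1 * n / SAMPLE_RATE)
                     for n, s in enumerate(chunk)))
        bits.append(1 if e1 > e0 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
print(demodulate(modulate(message)) == message)  # True
```

Real modems add synchronization, equalization, and far denser constellations, but the digital-to-analog-and-back round trip is the same in principle.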

The term modem is used broadly. It does not necessarily imply modulation as such, but simply denotes certain operations for converting signals coming from the DTE for transmission over the channel in use. Thus, in a broad sense, the terms modem and data circuit-terminating equipment (DCE) are synonymous.

Kazakh-Russian International University

Protsan Alexander Valerievich

AU-401, 4th course

"Automation and control"

Control work on discipline

"Computer systems, networks and telecommunications"

Topic: "Purpose of computer network equipment: workstation, server, modem, hub, network adapter, bridge, gateway, router"

Introduction

To date, there are more than 130 million computers in the world, and more than 80% of them are connected to various information and computing networks, from small local area networks in offices to global networks such as the Internet.

The global trend towards connecting computers in a network is due to a number of important reasons, such as speeding up the transmission of information messages, the ability to quickly exchange information between users, receive and transmit messages (faxes, E-Mail letters, etc.) without leaving the workplace, the ability to instantly receive any information from anywhere in the world, as well as the exchange of information between computers of different manufacturers operating under different software.

The enormous potential that computer networks carry, the new capabilities they bring to the information infrastructure, and the significant acceleration of production processes give us no right to ignore them or to fail to apply them in practice.

Therefore, a sound solution is needed for organizing an IVS (information and computing network) based on the existing computer park and a software suite that meets modern scientific and technical requirements, taking into account growing needs and the possibility of further gradual network expansion as new hardware and software solutions appear.

A LAN is understood as a joint connection of several separate computer workstations (workstations) to a single data transmission channel.

Thanks to computer networks, we have gained the possibility of simultaneous use of programs and databases by several users.

The concept of a local area network - LAN (Local Area Network) - refers to geographically limited (territorial or production-site) hardware and software implementations in which several computer systems are connected to one another by appropriate means of communication.

Through this connection, the user can interact with other workstations connected to this LAN.

In industrial practice, LANs play a very important role.

Through a LAN, the system combines personal computers located at many remote workplaces that share equipment, software and information. Workplaces of employees are no longer isolated and are combined into a single system. Consider the advantages obtained by networking personal computers in the form of an intra-industrial computer network.

Resource sharing

Resource sharing allows resources to be used economically, for example, controlling peripherals such as laser printers from all connected workstations.

Data sharing.

Data sharing provides the ability to access and manage databases from peripheral workstations that need information.

Software sharing

Software sharing provides the possibility of simultaneous use of centralized, previously installed software.

Sharing of processor resources.

When processor resources are shared, their computing power can be used to process data on behalf of other systems in the network. The available resources are not seized directly, but accessed through a special processor available to each workstation.

Multi-user mode

The multi-user properties of the system facilitate the simultaneous use of centralized applications that were previously installed and configured; for example, if a user of the system is working on another task, the current work in progress is moved to the background.

Workstation

A workstation (English: workstation) is a set of hardware and software designed to solve a certain range of tasks.

As a specialist's place of work, a workstation is a full-fledged computer or a computer terminal (input-output devices, separate from and often remote from the host computer) together with the necessary software, supplemented where needed by auxiliary equipment: a printer, external storage on magnetic and/or optical media, a barcode scanner, and so on.

In the domestic literature, the term AWP (automated workstation) was also used, but in a narrower sense than "workstation".

Also, the term "workstation" refers to a computer in the local area network (LAN) in relation to the server. Computers in the local network are divided into workstations and servers. At workstations, users solve applied problems (work in databases, create documents, make calculations). The server services the network and provides its own resources to all network nodes, including workstations.

There are fairly stable configuration patterns for workstations designed for a particular range of tasks, which allows them to be grouped into professional subclasses: multimedia (image, video, and sound processing), CAD, GIS, field work, etc. Each subclass can have its own characteristic and unique components (areas of use are shown in parentheses): a large video monitor and/or multiple monitors (CAD, GIS, stock trading), a high-speed graphics card (cinema and animation, computer games), large data storage (photogrammetry, animation), a scanner (photography), a ruggedized design (armed forces, field work), etc.

Server

A server is a computer dedicated from a group of personal computers (or workstations) to perform a service task without direct human participation. A server and a workstation may have identical hardware configurations, since they differ only in whether a person works at the console.

Some service tasks can run on a workstation in parallel with the user's work. Such a workstation is conventionally called a non-dedicated server.

Servers need a console (usually a monitor/keyboard/mouse) and human participation only at the initial setup stage, during hardware maintenance and emergency management (normally, most servers are controlled remotely). For emergency situations, servers are typically provided with one console kit per group of servers (with or without a switch, such as a KVM switch).

As a result of specialization, a server solution may receive a simplified console (for example, a communication port), or lose it altogether (in this case, initial configuration and emergency management can only be performed via the network, and network settings can be reset to the default state).

Specialization of server hardware proceeds in several directions; each manufacturer decides for itself which way to go. Most specializations increase the cost of the equipment.

Server hardware, as a rule, is equipped with more reliable elements:

  • memory with increased fault tolerance: for example, on i386-compatible computers, memory intended for servers supports error correction (ECC, Error Checking and Correction); on some other platforms, such as SPARC (Sun Microsystems), all memory has error correction.
  • redundancy, including:
    • power supplies (including hot-plug)
    • hard drives (hardware RAID, including hot plug and hot swap; not to be confused with the software RAID of ordinary computers)
  • more carefully designed cooling

Servers (and other equipment) that need to be mounted on some standard chassis (such as 19-inch racks and cabinets) are standardized and supplied with the necessary mounting hardware.

Servers that do not require high performance and a large number of external devices are often reduced in size. Often this decrease is accompanied by a decrease in resources.

In the so-called "industrial version", in addition to the reduced size, the case is stronger and is protected against dust (with replaceable filters), humidity and vibration; the buttons are designed so that they cannot be pressed accidentally.

Structurally, hardware servers come in desktop, floor-standing and rack-mounted versions; the latter provides the highest density of computing power per unit of floor area as well as maximum scalability. Since the late 1990s, so-called blade servers (English blade) have been gaining popularity in highly reliable, highly scalable systems: compact modular devices that reduce the cost of power, cooling, maintenance, etc.

In terms of resources (frequency and number of processors, amount of memory, number and performance of hard drives, performance of network adapters), servers specialize in two opposite directions - increasing resources and reducing them.

Growth of resources is intended to increase the capacity (for example, specialization for a file server) and performance of the server. When the performance reaches a certain limit, further growth is continued by other methods, for example, by parallelizing the task between several servers.

Resource reduction aims to reduce the size and power consumption of servers.

The extreme degree of server specialization is the so-called hardware solution (hardware routers, network disk arrays, hardware terminals, etc.). The hardware of such solutions is built from scratch, or redesigned from an existing computer platform without regard to compatibility, which makes it impossible to use the device with standard software.

Software in hardware solutions is loaded into permanent and/or non-volatile memory by the manufacturer.

Hardware solutions tend to be more reliable than conventional servers, but less flexible and versatile. In terms of price, hardware solutions can be both cheaper and more expensive than servers, depending on the class of equipment.

Recently, a large number of diskless server solutions have appeared, based on computers (usually x86) of the Mini-ITX form factor or smaller, with a specialized build of GNU/Linux on an SSD (ATA flash or a flash card), positioned as "hardware solutions". These solutions do not belong to the hardware class but are ordinary specialized servers. Unlike the (more expensive) hardware solutions, they inherit the problems of the platform and the software they are based on.

Modem

A modem (a contraction of modulator-demodulator) is a device used in communication systems that performs modulation and demodulation. The modulator modulates the carrier signal, that is, changes its characteristics in accordance with the input information signal; the demodulator performs the reverse process. A special case of the modem is the widely used computer peripheral that allows the computer to communicate with another modem-equipped computer over the telephone network (telephone modem) or a cable network (cable modem).

The modem performs the function of the terminal equipment of the communication line. In this case, the formation of data for transmission and processing of the received data is carried out by the terminal equipment, in the simplest case, a personal computer.
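The modulation/demodulation idea can be sketched in a few lines. The example below is a simplified illustration, not any real modem standard: the sample rate, baud rate and the two tone frequencies are assumptions (loosely Bell-202-like). Each bit is encoded as a burst of one of two tones (binary FSK), and the demodulator recovers it by correlating each burst against both tones and picking the stronger.

```python
import math

F0, F1 = 1200, 2200       # assumed "space"/"mark" tone frequencies, Hz
RATE, BAUD = 48000, 1200  # assumed sample rate and symbol rate
SPB = RATE // BAUD        # samples per bit

def modulate(bits):
    """Modulator: represent each bit as a burst of one of two tones."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * n / RATE) for n in range(SPB)]
    return samples

def demodulate(samples):
    """Demodulator: correlate each burst with both tones, pick the stronger."""
    bits = []
    for i in range(0, len(samples), SPB):
        chunk = samples[i:i + SPB]
        def energy(f):
            c = sum(v * math.cos(2 * math.pi * f * n / RATE) for n, v in enumerate(chunk))
            q = sum(v * math.sin(2 * math.pi * f * n / RATE) for n, v in enumerate(chunk))
            return c * c + q * q
        bits.append(1 if energy(F1) > energy(F0) else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(data)) == data
```

A real telephone modem adds carrier negotiation, equalization and error correction on top of this basic idea.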

Types of modems for computers

By execution:

  • external: connected via a COM or USB port, or via a standard RJ-45 connector to the network card; they usually have an external power supply (there are USB modems powered over USB, and LPT modems).
  • internal: installed inside the computer in an ISA, PCI, PCI-E, PCMCIA, AMR or CNR slot.
  • built-in: an internal part of a device such as a laptop or docking station.

According to the principle of work:

  • hardware: all signal-conversion operations and physical exchange protocols are handled by a computing unit built into the modem (for example, a DSP or controller). A hardware modem also contains a ROM holding the firmware that controls it.
  • soft modems, winmodems (English: host-based soft modem): hardware modems without a firmware ROM. The firmware of such a modem is stored in the memory of the computer to which it is connected (or in which it is installed). The modem itself contains only the analog circuitry and converters: ADC, DAC, and an interface controller (for example, USB). It works only if drivers are present that perform all the signal coding, error checking and protocol management in software on the computer's central processor. Initially there were versions only for operating systems of the MS Windows family, hence the second name.
  • semi-software (controller-based soft modem): modems in which some of the modem functions are performed by the computer to which the modem is connected.

By type of connection:

  • Modems for dial-up telephone lines: the most common type of modem.
  • ISDN: modems for digital switched telephone lines.
  • DSL: used to organize dedicated (non-switched) lines over the regular telephone network. They differ from dial-up modems in that they use a different frequency range, and in that the signal travels over the telephone wires only as far as the exchange (PBX). They usually allow the telephone line to be used normally at the same time as data exchange.
  • Cable: used for data exchange over specialized cables, for example over a shared-television cable using the DOCSIS protocol.
  • Cellular: work over cellular communication protocols: GPRS, EDGE, 3G, 4G, etc. They are often made in the form of a USB dongle. Mobile phones are also often used as such modems.
  • Satellite
  • PLC: use data transmission over the wiring of the household electrical network.

The most common at present are:

  • internal soft modem
  • external hardware modem
  • built-in modems in laptops.

Network adapter

A network adapter, also known as a network card, Ethernet adapter or NIC (English: network interface controller), is a peripheral device that allows a computer to communicate with other devices on a network.

Types

According to the constructive implementation, network cards are divided into:

  • internal - separate boards inserted into a PCI, ISA or PCI-E slot;
  • external, connected via USB or PCMCIA interface, mainly used in laptops;
  • built into the motherboard.

On 10-Mbit NICs, 3 types of connectors are used to connect to the local network:

  • 8P8C for twisted pair;
  • BNC connector for thin coaxial cable;
  • 15-pin transceiver connector for thick coaxial cable.

These connectors can be present in different combinations, sometimes even all three at once, but at any given moment only one of them works.

On 100-megabit boards, only a twisted-pair connector (8P8C, erroneously called RJ-45) is installed.

Next to the twisted pair connector, one or more information LEDs are installed to indicate the presence of a connection and the transfer of information.

One of the first mass network cards was Novell's NE1000/NE2000 series, and in the late 1980s there were quite a few Soviet clones of network cards with a BNC connector, which were produced with various Soviet computers and separately.

Network adapter settings

When configuring a network adapter card, the following options may be available:

  • IRQ line number
  • DMA channel number (if supported)
  • base I/O address
  • RAM base address (if used)
  • support for duplex/half duplex auto-negotiation standards, speed
  • support for tagged VLAN packets (802.1q) with the ability to filter packets of a given VLAN ID
  • WOL (Wake-on-LAN) parameters
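One of the listed parameters, Wake-on-LAN, relies on a very simple datagram format: a "magic packet" of six 0xFF bytes followed by the adapter's MAC address repeated 16 times. A minimal sketch (the MAC address used here is made up):

```python
def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN 'magic packet': 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

# The packet is normally sent as a UDP broadcast, e.g. to port 9:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, ("255.255.255.255", 9))
pkt = magic_packet("01:23:45:67:89:ab")
assert len(pkt) == 102  # 6 + 16 * 6 bytes
```

The adapter scans incoming traffic for this byte pattern even while the host is powered down, which is why WOL must be enabled in the adapter (and usually BIOS) settings.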

Depending on the power and complexity of the network card, it can implement computing functions (mainly calculation and generation of frame checksums) in hardware or software (by a network card driver using a central processor).

Server network cards can come with two (or more) network connectors. Some network cards (built into the motherboard) also provide firewall functionality (e.g. nForce).

Functions and characteristics of network adapters

The network adapter (Network Interface Card, NIC), together with its driver, implements the second (data link) layer of the open systems model in the end node of the network, the computer. More precisely, in a network operating system the adapter/driver pair performs only the functions of the physical and MAC layers, while the LLC layer is usually implemented by an operating system module shared by all drivers and network adapters. This is in fact how it should be according to the IEEE 802 protocol stack model. For example, in Windows NT the LLC level is implemented in the NDIS module, which is common to all network adapter drivers regardless of which technology a driver supports.

The network adapter, together with the driver, performs two operations: transmitting and receiving a frame. Transferring a frame from a computer to a cable consists of the following steps (some may be missing, depending on the encoding methods used):

  • Reception of an LLC data frame through an inter-layer interface along with MAC-layer address information. Usually, interaction between protocols inside a computer occurs through buffers located in RAM. Data for transmission to the network is placed in these buffers by higher-level protocols that retrieve them from disk memory or from the file cache using the I / O subsystem of the operating system.
  • Formation of the MAC-layer data frame into which the LLC frame is encapsulated (with the 01111110 flags discarded): filling in the destination and source addresses and calculating the checksum.
  • Formation of code symbols when redundant codes of the 4B/5B type are used. Scrambling of the codes to obtain a more uniform signal spectrum. This stage is not used in all protocols: 10 Mbit/s Ethernet, for example, does without it.
  • Issuing the signals to the cable in accordance with the adopted line code: Manchester, NRZI, MLT-3, etc.
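The last step can be illustrated for the Manchester code. The sketch below assumes the IEEE 802.3 convention (a logical 1 is a low-to-high transition in the middle of the bit interval, a logical 0 is high-to-low) and represents the signal as pairs of half-bit levels:

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a guaranteed
    mid-bit transition (IEEE 802.3 convention assumed):
    1 -> low, high ; 0 -> high, low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halves):
    """Recover bits from half-bit level pairs; a missing mid-bit
    transition means the signal is invalid."""
    bits = []
    for i in range(0, len(halves), 2):
        pair = halves[i:i + 2]
        if pair == [0, 1]:
            bits.append(1)
        elif pair == [1, 0]:
            bits.append(0)
        else:
            raise ValueError("invalid Manchester pair: no mid-bit transition")
    return bits

frame_bits = [1, 1, 0, 1, 0, 0]
assert manchester_decode(manchester_encode(frame_bits)) == frame_bits
```

The guaranteed transition in every bit interval is what lets the receiver recover the clock from the signal itself, at the cost of doubling the required bandwidth.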

Receiving a frame from a cable to a computer includes the following steps:

  • Receiving from the cable signals that encode the bit stream.
  • Isolation of signals against the background of noise. This operation can be performed by various specialized chips or DSP signal processors. As a result, a certain bit sequence is formed in the adapter's receiver, with a high degree of probability coinciding with the one that was sent by the transmitter.
  • If the data was scrambled before being sent to the cable, then it is passed through the descrambler, after which the code symbols sent by the transmitter are restored in the adapter.
  • Frame checksum check. If it is incorrect, then the frame is discarded, and the corresponding error code is transmitted to the LLC protocol through the interlayer interface upwards. If the checksum is correct, then the LLC frame is extracted from the MAC frame and transmitted through the inter-layer interface upstream, to the LLC protocol. The LLC frame is buffered in RAM.
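The checksum check in the last step can be sketched as follows. Ethernet's frame check sequence is a CRC-32; the sketch uses Python's zlib.crc32, which implements the same polynomial, and glosses over details of a real frame such as exact bit and byte ordering:

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Transmitter side: append a CRC-32 frame check sequence
    (same polynomial as Ethernet's FCS)."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def check_fcs(frame: bytes):
    """Receiver side: verify the FCS. A frame with a bad checksum
    is discarded; a good one is passed up to the LLC layer."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != fcs:
        return None          # discard: checksum error reported upward
    return payload           # hand the payload up to LLC

frame = append_fcs(b"LLC frame data")
assert check_fcs(frame) == b"LLC frame data"

corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
assert check_fcs(corrupted) is None
```

On real adapters this check is almost always done in hardware, since it must keep up with the wire speed.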

The distribution of responsibilities between the network adapter and its driver is not defined by standards, so each manufacturer decides this issue on its own. Typically, network adapters are divided into adapters for client computers and adapters for servers.

In adapters for client computers, much of the work is offloaded to the driver, thereby making the adapter simpler and cheaper. The disadvantage of this approach is the high degree of loading of the computer's central processor with routine work on transferring frames from the computer's RAM to the network. The central processor is forced to do this work instead of performing user application tasks.

Therefore, adapters designed for servers usually carry their own processors, which do most of the work of transferring frames from RAM to the network and back. An example of such an adapter is the SMC EtherPower network adapter with a built-in Intel i960 processor.

Depending on which protocol the adapter implements, adapters are divided into Ethernet adapters, Token Ring adapters, FDDI adapters, and so on. Since Fast Ethernet allows the operating speed to be chosen automatically by negotiation with the hub or switch, many Ethernet adapters today support two speeds and carry the prefix 10/100 in their name. Some manufacturers call this property auto-sensing.

The network adapter must be configured before being installed on the computer. When configuring an adapter, you typically specify the IRQ number used by the adapter, the DMA channel number (if the adapter supports DMA mode), and the base address of the I/O ports.

If the network adapter, computer hardware, and operating system support the Plug-and-Play standard, then the adapter and its driver are configured automatically. Otherwise, you must first configure the network adapter, and then repeat its configuration settings for the driver. In general, the details of the procedure for configuring a network adapter and its driver largely depend on the manufacturer of the adapter, as well as on the capabilities of the bus for which the adapter is designed.

Classification of network adapters

As an example of the classification of adapters, we use the approach of 3Com, which has a reputation as a leader in the field of Ethernet adapters. 3Com believes that Ethernet network adapters have gone through three generations in their development.

The first generation adapters were made on discrete logic circuits, as a result of which they had low reliability. They had buffer memory for only one frame, which led to poor performance of the adapter, since all frames were transmitted from the computer to the network or from the network to the computer sequentially. In addition, the configuration of the first generation adapter was done manually, using jumpers. Each type of adapter used its own driver, and the interface between the driver and the network operating system was not standardized.

Second-generation network adapters began to use the multi-frame buffering method to improve performance. In this case, the next frame is loaded from the computer's memory into the adapter's buffer simultaneously with the transfer of the previous frame to the network. In receive mode, after the adapter has fully received one frame, it can begin to transfer this frame from the buffer to the computer's memory at the same time as receiving another frame from the network.

Second-generation network adapters make extensive use of highly integrated chips, which improves the reliability of the adapters. In addition, the drivers for these adapters are based on standard specifications. Second-generation adapters typically ship with drivers that work in both the NDIS (Network Driver Interface Specification) standard developed by 3Com and Microsoft and approved by IBM, and the ODI (Open Driver Interface Specification) standard developed by Novell.

Third-generation network adapters (3Com includes its adapters of the EtherLink III family among them) implement a pipelined frame processing scheme. It lies in the fact that the processes of receiving a frame from the computer's RAM and transmitting it to the network are combined in time. Thus, after receiving the first few bytes of the frame, their transmission begins. This significantly (by 25-55%) increases the performance of the chain RAM - adapter - physical channel - adapter - RAM. Such a scheme is very sensitive to the transmission start threshold, that is, to the number of frame bytes that are loaded into the adapter's buffer before transmission to the network begins. The third generation network adapter self-tunes this parameter by analyzing the operating environment, as well as by calculating, without the participation of a network administrator.

Self-tuning provides the best possible performance for a particular combination of the performance of the computer's internal bus, its interrupt system, and its direct memory access system.

Third-generation adapters are based on application-specific integrated circuits (ASICs), which improve the performance and reliability of the adapter while reducing its cost. 3Com called its frame-pipelining technology Parallel Tasking, and other companies have implemented similar schemes in their adapters. Improving the performance of the "adapter-memory" link is very important for the performance of the network as a whole, since the throughput of a complex frame-processing route that includes, for example, hubs, switches, routers and wide-area links is always determined by the slowest element of that route. Therefore, if the network adapter of a server or client computer is slow, no fast switches will be able to speed the network up.

Network adapters produced today can be considered a fourth generation. These adapters necessarily include an ASIC performing the MAC-level functions, operate at speeds up to 1 Gbit/s, and offer a large set of high-level functions. Such functions may include support for the RMON remote monitoring agent, a frame prioritization scheme, remote computer management functions, etc. In server versions of such adapters, a powerful onboard processor that offloads the central processor is practically mandatory. An example of a fourth-generation network adapter is the 3Com Fast EtherLink XL 10/100.

Network hub

A network hub, or hub (from English hub: a center of activity), is a network device designed to combine several Ethernet devices into a common network segment. Devices are connected using twisted pair, coaxial cable or optical fiber. The term hub (concentrator) also applies to other data-transfer technologies: USB, FireWire, etc.

Currently, hubs are almost never produced - they have been replaced by network switches (switches), which separate each connected device into a separate segment. Network switches are erroneously referred to as "smart hubs".

Principle of operation

The hub operates at the physical layer of the OSI network model and repeats the signal arriving on one port to all the other active ports. If signals arrive on two or more ports simultaneously, a collision occurs and the transmitted data frames are lost. All devices connected to a hub are therefore in a single collision domain. Hubs always operate in half-duplex mode, and all the connected Ethernet devices share the available bandwidth.
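This repeat-to-all-ports behavior can be sketched as a toy model (the port numbering and per-port queues here are purely illustrative):

```python
class Hub:
    """Physical-layer repeater (sketch): a signal arriving on one port is
    repeated to every other active port; simultaneous arrivals collide."""
    def __init__(self, n_ports):
        self.ports = {i: [] for i in range(n_ports)}  # per-port receive queues

    def transmit(self, arrivals):
        """arrivals: {port: frame} for a single time slot."""
        if len(arrivals) > 1:
            return "collision"       # frames are lost; every device sees the collision
        (src, frame), = arrivals.items()
        for p in self.ports:
            if p != src:
                self.ports[p].append(frame)  # repeat to all other ports
        return "ok"

hub = Hub(4)
assert hub.transmit({0: b"hello"}) == "ok"
assert hub.ports[1] == [b"hello"] and hub.ports[0] == []
assert hub.transmit({1: b"a", 2: b"b"}) == "collision"
```

Compare this with a switch, which would deliver each frame only to the port where the destination is known to live, so the two simultaneous frames in the last line would not collide.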

Many hub models have a simple protection against the excessive collisions that a faulty connected device can cause: the hub can isolate the offending port from the shared transmission medium. For this reason, network segments based on twisted pair are much more stable in operation than segments on coaxial cable: in the first case each device can be isolated from the shared medium by the hub, while in the second several devices share one cable segment, and in the event of a large number of collisions the hub can only isolate the segment as a whole.

Recently, hubs have been used quite rarely, instead of them, switches have become widespread - devices that operate at the data link layer of the OSI model and increase network performance by logically separating each connected device into a separate segment, a collision domain.

Characteristics of network hubs
  • Number of ports- connectors for connecting network lines, hubs are usually produced with 4, 5, 6, 8, 16, 24 and 48 ports (the most popular with 4, 8 and 16). Hubs with more ports are significantly more expensive. However, hubs can be cascaded to each other, increasing the number of ports on a network segment. Some have special ports for this.
  • Transfer rate: measured in Mbit/s; hubs are available at speeds of 10, 100 and 1000. Hubs able to change speed, designated 10/100/1000 Mbit/s, are also common. The speed can be switched either automatically or using jumpers or switches. Typically, if even one device is attached to a hub at the lower speed, the hub exchanges data with all ports at that speed.
  • Network media type- usually it is twisted pair or fiber, but there are hubs for other media, as well as mixed ones, for example, for twisted pair and coaxial cable.

Network bridge

A bridge, or network bridge (from English bridge), is network equipment for combining segments of a local network. The network bridge operates at the link layer (L2) of the OSI model, limiting the collision domain (in the case of an Ethernet network). Bridges forward data frames according to the frames' MAC addresses. A formal description of the network bridge is given in the IEEE 802.1D standard.

Differences between switches and bridges

In general, a switch and a bridge are similar in functionality; the difference lies in the internal structure: bridges process traffic using a central processor, while a switch uses a switching matrix (hardware circuitry for switching packets). Bridges are now practically unused (since they require a high-performance processor), except where network segments with a different physical-layer organization are joined, for example between xDSL links, optical links and Ethernet. In SOHO equipment, the transparent switching mode is often referred to as "bridging mode".

Functionality

The bridge provides:

  • limitation of the collision domain
  • withholding of frames addressed to a host in the sender's own segment (such frames are not forwarded)
  • limiting the passage of erroneous frames from domain to domain:
    • runt frames (frames shorter than the 64-byte minimum allowed by the standard)
    • frames with CRC errors
    • frames marked as collision fragments
    • oversized frames (longer than allowed by the standard)

Bridges "learn" the layout of the network segments by building address tables of the form "interface: MAC address", which record, for every network device, the interface (segment) through which it can be reached.
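The learn-filter-flood behavior behind such a table can be sketched as follows (the port numbers and MAC strings are illustrative, and a real IEEE 802.1D bridge also ages table entries out over time):

```python
class LearningBridge:
    """Sketch of transparent-bridge learning (the IEEE 802.1D idea):
    source MACs are learned per port; unknown destinations are flooded."""
    def __init__(self):
        self.table = {}   # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac, n_ports=4):
        self.table[src_mac] = in_port            # learn where the sender lives
        out = self.table.get(dst_mac)
        if out is None:                          # destination unknown: flood
            return [p for p in range(n_ports) if p != in_port]
        if out == in_port:                       # same segment: filter (withhold)
            return []
        return [out]                             # known: forward to one port

br = LearningBridge()
assert br.handle(0, "aa", "bb") == [1, 2, 3]   # dst unknown: flooded
assert br.handle(1, "bb", "aa") == [0]         # "aa" was learned on port 0
assert br.handle(1, "cc", "bb") == []          # "bb" is on port 1 itself: filtered
```

Modern switches implement exactly this logic, only in hardware and per frame at wire speed.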

Bridges increase network latency by 10-30%. This increase in latency is due to the fact that the bridge, when transmitting data, needs additional time to make a decision. The bridge is considered a store-and-forward device because it must parse the frame's destination address field and calculate the CRC checksum in the frame's check sequence field before sending the frame to all ports. If the destination port is currently busy, then the bridge may temporarily hold the frame until the port becomes free.
These operations take some time to complete, which slows down the transfer process and increases latency.

Software implementation

A bridging mode is present in some high-end network equipment and operating systems, where it is used to "logically combine" several ports into a single whole (from the point of view of higher protocols), turning those ports into a virtual switch. In Windows XP/2003 this mode is called a "network bridge" (bridge connections). In the Linux operating system, when interfaces are joined into a bridge, a new brN interface is created (N is a sequence number starting from zero: br0), while the original interfaces go into the down state (from the point of view of the OS). The bridge-utils package, included with most Linux distributions, is used to create bridges.

Gateway

Network gateway

A network gateway is hardware (a gateway router) or software for interfacing computer networks that use different protocols (for example, a local and a global network).

Description

A network gateway converts protocols from one type of physical medium into protocols from another type of physical medium (network). For example, when you connect your local computer to the Internet, you use a network gateway.

Routers are one example of hardware network gateways.

Network gateways run on almost all known operating systems. The main task of a network gateway is protocol conversion between networks; a router by itself receives, forwards and sends packets only among networks that use the same protocols. A network gateway can accept a packet formatted for one protocol (for example, AppleTalk) and convert it into a packet of another protocol (for example, TCP/IP) before forwarding it to another network segment. Network gateways can be hardware, software, or a combination of the two, but are usually software installed on a router or computer. A network gateway must understand all the protocols used by the router. Network gateways are typically slower than network bridges, switches and ordinary routers. A network gateway is a point in a network that serves as an exit to another network. On the Internet, users and the computers that deliver web pages to them are hosts, while the nodes between different networks are network gateways. For example, a server that controls traffic between a company's local network and the Internet is a network gateway.

In large networks, a server acting as a network gateway is usually integrated with a proxy server and firewall. The network gateway is often combined with a router that manages the distribution and conversion of packets on the network.

A network gateway can be a special hardware router or software installed on a regular server or personal computer. Most computer operating systems use the terms described above. Windows computers usually use the built-in network connection wizard, which, according to the specified parameters, establishes a connection to a local or global network on its own. Such systems may also use the DHCP protocol. Dynamic Host Configuration Protocol (DHCP) is a protocol that is commonly used by network equipment to obtain various data that a client needs to work with the IP protocol. Using this protocol, adding new devices and networks becomes simple and almost automatic.
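The basic decision a host makes with its gateway setting can be sketched with Python's ipaddress module (the subnet and gateway addresses below are made up): if the destination lies inside the local subnet, deliver directly; otherwise hand the packet to the default gateway.

```python
import ipaddress

def next_hop(dst, local_net, default_gateway):
    """Return the next-hop address for a destination: direct delivery
    inside the local subnet, otherwise via the default gateway."""
    net = ipaddress.ip_network(local_net)
    if ipaddress.ip_address(dst) in net:
        return dst                 # direct delivery on the local segment
    return default_gateway         # send via the network gateway

# Hypothetical host on 192.168.1.0/24 with gateway 192.168.1.1:
assert next_hop("192.168.1.42", "192.168.1.0/24", "192.168.1.1") == "192.168.1.42"
assert next_hop("8.8.8.8", "192.168.1.0/24", "192.168.1.1") == "192.168.1.1"
```

This is also the information DHCP typically hands out to a client: its own address, the subnet mask, and the default gateway.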

Internet gateway - a software network gateway that distributes and controls access to the Internet among local network clients (users).

Description

An Internet gateway is, as a rule, software designed to provide Internet access from a local network. The program is a working tool for the system administrator, allowing him to control traffic and employees' actions. Typically, an Internet gateway makes it possible to distribute access among users, account for traffic, and restrict the access of individual users or groups of users to Internet resources. An Internet gateway may include a proxy server, a firewall, a mail server, a traffic shaper, an antivirus and other network utilities. It can run either on one of the network's computers or on a separate server. The gateway is installed as software on a machine with an operating system (such as Kerio WinRoute Firewall on Windows) or deployed on a bare computer with an embedded operating system (such as Ideco ICS with embedded Linux).

Software Internet Gateways
  • Microsoft ISA Server
  • Kerio Winroute Firewall
  • Traffic Inspector
  • UserGate
  • Ideco Internet Control Server
  • TMeter

Router

A router (from English route) is a network device that, based on information about the network topology and certain rules, makes decisions about forwarding network-layer packets (layer 3 of the OSI model) between different network segments.

Works at a higher level than the switch and network bridge.

Principle of operation

Typically, the router uses the destination address specified in the data packets and determines from the routing table the path over which the data should be sent. If there is no described route in the routing table for the address, the packet is dropped.

There are other ways to determine the packet forwarding path, such as using the source address, upper layer protocols used, and other information contained in network layer packet headers. Often, routers can translate the addresses of the sender and recipient, filter the transit data flow based on certain rules in order to restrict access, encrypt / decrypt the transmitted data, etc.

Routing table

The routing table contains information on the basis of which the router makes a decision about further forwarding of packets. The table consists of a number of entries - routes, each of which contains the address of the recipient's network, the address of the next node to which packets should be transmitted and some entry weight - a metric. The metrics of the entries in the table play a role in calculating the shortest routes to various destinations. Depending on the router model and the routing protocols used, the table may contain some additional service information. For example:

192.168.64.0/16 [110/49] via 192.168.1.2, 00:34:34, FastEthernet0/0.1

Here 192.168.64.0/16 is the destination network, 110 is the administrative distance, 49 is the route metric, 192.168.1.2 is the address of the next router to which packets for the network 192.168.64.0/16 should be passed, 00:34:34 is the time this route has been known, and FastEthernet0/0.1 is the router interface through which the neighbor 192.168.1.2 can be reached.
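The lookup itself is a longest-prefix match: among all table entries whose network contains the destination address, the most specific (longest) prefix wins. A sketch over a hypothetical table follows; it uses a linear scan for clarity, whereas real routers use trie or TCAM structures:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next-hop address
routes = {
    "0.0.0.0/0":       "10.0.0.1",      # default route
    "192.168.0.0/16":  "192.168.1.2",
    "192.168.64.0/18": "192.168.1.3",
}

def lookup(dst):
    """Longest-prefix match: among all routes containing the destination,
    choose the most specific prefix; no match means the packet is dropped."""
    best = None
    for prefix, nh in routes.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, nh)
    return best[1] if best else None   # None: no route, drop the packet

assert lookup("192.168.70.1") == "192.168.1.3"   # /18 beats /16 and /0
assert lookup("192.168.1.5") == "192.168.1.2"
assert lookup("8.8.8.8") == "10.0.0.1"
```

With a default route present, lookup never fails; without one, a destination outside every prefix would return None and the packet would be dropped, as the text describes.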

The routing table can be compiled in two ways:

  • static routing- when records in the table are entered and changed manually. This method requires administrator intervention every time there is a change in the network topology. On the other hand, it is the most stable and requires a minimum of router hardware resources to serve the table.
  • dynamic routing- when entries in the table are updated automatically using one or more routing protocols - RIP, OSPF, IGRP, EIGRP, IS-IS, BGP, etc. In addition, the router builds a table of optimal paths to destination networks based on various criteria - the number of intermediate nodes, channel bandwidth, data transfer delays, etc. The criteria for calculating optimal routes most often depend on the routing protocol, and are also set by the router configuration. This way of building a table allows you to automatically keep the routing table up to date and calculate the best routes based on the current network topology. However, dynamic routing puts additional load on devices, and high network instability can lead to situations where routers do not have time to synchronize their tables, which leads to conflicting information about the network topology in its various parts and loss of transmitted data.

Often, graph theory is used to build routing tables.
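This can be illustrated directly: treating routers as graph vertices and link metrics as edge weights, Dijkstra's algorithm (the shortest-path computation used by link-state protocols such as OSPF) finds the lowest-cost path to every destination. A minimal sketch with a made-up four-router topology:

```python
import heapq

# Hypothetical topology: router -> {neighbor: link metric}.
TOPOLOGY = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_costs(graph, source):
    """Dijkstra's algorithm: lowest total metric from source to every node."""
    costs = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > costs.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return costs

print(shortest_costs(TOPOLOGY, "A"))
```

From router A the best path to D goes A-B-C-D with total metric 4, even though a direct-looking A-C-D route exists: the algorithm sums the metrics, exactly as the routing protocols listed above do.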

Application

Routers help reduce network traffic by dividing it into collision or broadcast domains and by filtering packets. They are mainly used to combine networks of different types, often incompatible in architecture and protocols - for example, to combine Ethernet LANs and WAN connections using xDSL, PPP, ATM, Frame Relay, etc. A router is also often used to provide access from a local network to the global Internet, performing the functions of address translation and a firewall.

A router can be either a specialized (hardware) device (typical representatives are made by Cisco and Juniper) or an ordinary computer performing the functions of a router. There are several software packages (mostly based on the Linux kernel) with which you can turn a PC into a high-performance and feature-rich router, such as Quagga.


Date added: December 10, 2012 at 09:33
Author of the work: a*******@mail.ru
The type of work: test


Ministry of Education of the Russian Federation

Federal Agency for Education

Penza State University

Test

discipline "Work on the Internet"

on the topic "What is a server? The difference between a server and a workstation (client).
The main advantages obtained by networking computers. Definition of network technologies. Elements of a computer network. The role and place of network technologies in the modern world."

Completed by a student of the group

Saraikina O.N.

Checked by

Kolchugin A.F.

Penza, 2012

Introduction

Today there is hardly a person who has never had occasion to work with a computer. Modern computer technologies are used everywhere: from ordinary retail outlets to research centers.

As confirmation, consider the data published by the Russian Ministry of Communications and submitted in 2009 to the UN electronic database "Millennium Development Goals Indicators":

Diagram 1. Dynamics of growth in the number of personal computers in the world
(per 1000 people)

Therefore, research on topics directly related to information technology is highly relevant. No economist can be truly effective in his work without at least a basic idea of how to work with a computer.

In the course of preparing this work, statistical data from the Federal State Statistics Service, various educational and methodological publications, and articles from the Internet were used.

1 Servers. Basic concepts of servers

Server (from the English "to serve"). Depending on the purpose, there are several definitions of the concept of a server.

1. Server (network) - a logical or physical network node that serves requests to a single address and/or domain name (and adjacent domain names) and consists of one hardware server, or a system of them, running one server program or a system of server programs.

2. Server (software) - software that receives requests from clients (in the client-server architecture).

3. Server (hardware) - a computer (or special computer equipment) dedicated and / or specialized to perform certain service functions.

4. Server (in information technology) - a software component of a computing system that performs service functions at the request of a client, providing it with access to certain resources.

The relationship between these concepts: a server application (server) runs on a computer that is also called a "server", and when considering the network topology such a node is likewise called a "server". In general, a server application may run on an ordinary workstation, or a server application running on a server computer may itself act as a client within the topology under consideration (i.e., not be a server from the point of view of the network topology).

2. Client-server model. A client-server system is characterized by the presence of two interacting independent processes - a client and a server - which, in general, can be executed on different computers, exchanging data over the network.

Processes that implement a service, such as a file system or database service, are called servers. Processes that request services from servers by sending a request and then waiting for a response from the server are called clients. According to this scheme, data processing systems based on DBMS, mail and other systems can be built. We will talk about databases and systems based on them. And here it will be more convenient not just to consider the client-server architecture, but to compare it with another one - the file-server one.
In a file-server system, data is stored on a file server (for example, Novell NetWare or Windows NT Server), and its processing is carried out at the workstations, which, as a rule, run one of the so-called "desktop DBMSs" - Access, FoxPro, Paradox, etc.
The application on the workstation is "responsible for everything": forming the user interface, the logical processing of the data, and the direct manipulation of the data. The file server provides only the lowest-level services - opening, closing and modifying files. Note: files, not databases.

The database management system is located on the workstation.
Thus, several independent and mutually inconsistent processes are engaged in the direct manipulation of data. In addition, to carry out any processing (search, modification, summation, etc.), all the data must be transferred over the network from the server to the workstation (see Fig. 1, Comparison of file-server and client-server models).

Fig.1 Comparison of file-server and client-server models

In a client-server system, there are (at least) two applications - a client and a server, which share between themselves those functions that in the file-server architecture are entirely performed by the application on the workstation. The database server, which can be Microsoft SQL Server, Oracle, Sybase, etc., is responsible for storing and directly manipulating data.

The user interface is built by the client, which can be built using a range of custom tools, as well as most desktop DBMSs. Data processing logic can be executed both on the client and on the server. The client sends requests to the server, usually formulated in SQL. The server processes these requests and sends the result to the client (of course, there can be many clients).

Thus, one process is engaged in the direct manipulation of data. At the same time, data processing takes place in the same place where the data is stored - on the server, which eliminates the need to transfer large amounts of data over the network.
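The division of labor described above can be sketched with Python's standard socket module: a toy "server" process owns the data and answers queries, while the client only formulates requests and displays results. This is an illustrative sketch of the request/response pattern, not a real DBMS protocol:

```python
import socket
import threading

# Toy "database" owned by the server process.
DATA = {"alice": 100, "bob": 250}

def serve_once(sock):
    """Accept one client, answer one query, close the connection."""
    conn, _ = sock.accept()
    with conn:
        key = conn.recv(1024).decode().strip()
        # All data manipulation happens on the server side.
        reply = str(DATA.get(key, "not found"))
        conn.sendall(reply.encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The client only sends a request and renders the answer.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"bob")
answer = client.recv(1024).decode()
client.close()
print(answer)
```

Only the small request ("bob") and the small answer ("250") cross the network; the data set itself never leaves the server - the key advantage over the file-server scheme.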

1.1 Advantages and disadvantages of client-server architecture

Let's look at this architecture from the point of view of business needs. What qualities does the client-server bring to the information system?
Reliability
The database server performs data modification based on the transaction mechanism, which gives any set of operations declared as a transaction the following properties:

  • atomicity - under any circumstances, either all operations of the transaction are performed or none of them are;
  • consistency - data integrity is preserved at the end of the transaction;
  • isolation (independence) - transactions initiated by different users do not interfere with each other;
  • durability (fault tolerance) - after the transaction completes, its results will not be lost.

The transaction mechanism supported by the database server is much more efficient than that found in desktop DBMSs, because the server controls the operation of transactions centrally. In addition, in a file-server system a failure on any workstation can lead to data loss and make the data inaccessible to other workstations, whereas in a client-server system a failure on a client almost never affects the integrity of the data or its availability to other clients.
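The atomicity property can be demonstrated even with an embedded engine such as SQLite from Python's standard library (used here as a stand-in; a real database server enforces the same guarantee centrally for many clients):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 100)")
conn.commit()

# Transfer 30 from alice to bob, but fail halfway through.
try:
    with conn:  # opens a transaction; rolls back automatically on exception
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        raise RuntimeError("simulated failure before the second update")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except RuntimeError:
    pass

# Atomicity: the partial update was rolled back, both balances are intact.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)
```

Without the transaction, the failure would have left alice 30 poorer with bob unchanged - exactly the inconsistent state the mechanism exists to prevent.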

Scalability - the ability of the system to adapt to the growth of the number of users and the size of the database with an adequate increase in the performance of the hardware platform, without replacing the software.

It is well known that the capabilities of desktop DBMSs are seriously limited - roughly five to seven users and 30-50 MB of data. These figures are, of course, averages; in specific cases they can deviate in either direction. Most importantly, these barriers cannot be overcome simply by increasing hardware capabilities.

Database server systems, on the other hand, can support thousands of users and hundreds of GB of information - just give them the right hardware platform.

The database server provides powerful protection of data from unauthorized access that is not possible in desktop DBMSs. Access rights are administered very flexibly - down to the level of individual table fields. In addition, it is possible to prohibit direct access to tables altogether, making users interact with the data through intermediate objects - views and stored procedures. So the administrator can be sure that no overly curious user will read what he is not supposed to read.
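The idea of hiding tables behind views can be sketched in SQL (run here through SQLite for brevity; on a real server the view would additionally carry GRANT/REVOKE permissions while direct access to the table is revoked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("alice", "sales", 5000), ("bob", "sales", 4500), ("carol", "hr", 4800)],
)

# Intermediate object: a view exposing only the non-sensitive columns.
conn.execute("""
    CREATE VIEW employee_directory AS
    SELECT name, department FROM employees
""")

# Ordinary users would be granted access to the view only,
# so the salary column never reaches them.
rows = conn.execute("SELECT * FROM employee_directory ORDER BY name").fetchall()
print(rows)
```

The same technique works with stored procedures: the client calls a named procedure and receives a result, without ever being able to issue an arbitrary query against the base table.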

There are three logical layers in a data application:

  • user interface;
  • logical processing rules (business rules);
  • data management (do not confuse the logical layers with the physical layers, which will be discussed below).

As already mentioned, in a file-server architecture all three layers are implemented in one monolithic application running on a workstation. Therefore, a change in any of the layers inevitably leads to modification of the application and the subsequent updating of its versions on the workstations.

In a two-tier client-server application shown in the figure above, as a rule, all functions for forming a user interface are implemented on the client, all data management functions are implemented on the server, but business rules can be implemented both on the server using server programming mechanisms (stored procedures, triggers, views, etc.) and on the client.

In a three-tier application, a third, intermediate layer appears that implements the business rules - the most frequently changed components of the application (see Fig. 2, Three-level model of a client-server application).


Fig.2 Three-level model of a client-server application


The presence of not one, but several layers allows you to flexibly and cost-effectively adapt the application to changing business requirements.

Let's try to illustrate all of the above with a small example. Suppose an organization has changed its payroll rules (business rules) and needs to update its software.

1) In a file server system, we "simply" make changes to the application and update its versions on the workstations. But this "simple" entails maximum labor costs.

2) In a two-tier client-server system, if the payroll algorithm is implemented on the server (for example, as a stored procedure), we update only that procedure, changing nothing in the client application.

3) In a three-tier system, the payroll algorithm is executed by a business rules server, implemented, for example, as an OLE server, and we update one of its objects without changing anything either in the client application or on the database server.
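The layering in the payroll example can be sketched as three small layers in one file: a data layer, a replaceable business rule (the 13% rate is a made-up example value), and a presentation layer. Changing the rule touches neither of the other two layers:

```python
# Data layer: storage and retrieval only.
EMPLOYEES = {"alice": 5000, "bob": 4500}

def get_gross_salary(name):
    return EMPLOYEES[name]

# Business rules layer: the part that changes most often.
# The 13% tax rate is a made-up example value.
def net_salary(gross):
    return round(gross * (1 - 0.13), 2)

# Presentation layer: user interface formatting only.
def payslip(name):
    gross = get_gross_salary(name)   # ask the data layer
    net = net_salary(gross)          # apply the business rule
    return "%s: gross %s, net %s" % (name, gross, net)

print(payslip("alice"))
```

If the organization changes its payroll rules, only net_salary is replaced; the storage and the interface code stay as they are - which is exactly the maintenance advantage of the three-tier model.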

3. Classification of standard servers
Typically, each server serves one (or several similar) protocols, and servers can be classified according to the type of service they provide.

Universal servers are a special kind of server program that does not provide any services on its own. Instead, generic servers provide service servers with a simplified interface to IPC resources and/or unified client access to various services. There are several types of such servers:

  • inetd (from English "internet super-server daemon") - a standard UNIX system tool, a program that allows you to write TCP/IP servers (and servers for network protocols of other families) that communicate with the client through standard input and output streams (stdin and stdout) redirected by inetd.

  • RPC (from English Remote Procedure Call) - a system for integrating servers in the form of procedures available for calling by a remote user through a unified interface. The interface was invented by Sun Microsystems for its operating system (SunOS, Solaris; a Unix system) and is currently used both on most Unix systems and on Windows.

  • Applied client-server technologies Windows:

(D)COM (English (Distributed) Component Object Model), etc. - allows one program to perform operations on data objects using the procedures of other programs. Originally this technology was intended for "object linking and embedding" (OLE, English Object Linking and Embedding), but in general it allows you to write a wide range of application servers. COM works only within one computer; DCOM is accessible remotely via RPC.

  • ActiveX - an extension of COM and DCOM for creating multimedia applications.

Universal servers are often used to write all sorts of information servers - servers that do not need any specific handling of the network and have no task other than serving clients. For example, ordinary console programs and scripts can act as servers for inetd.
Most of the internal and network-specific Windows services run through universal servers (RPC, (D)COM).
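An inetd-style service is indeed just a program that reads requests from stdin and writes replies to stdout - inetd itself handles the sockets. A minimal sketch (the uppercasing "protocol" is a made-up example):

```python
import io

def handle_request(line):
    """One request, one reply; the 'protocol' here simply uppercases text."""
    return line.strip().upper()

def serve(stream_in, stream_out):
    # inetd connects the client's socket to the program's stdin/stdout,
    # so serving a client is plain line-oriented I/O.
    for line in stream_in:
        stream_out.write(handle_request(line) + "\n")

# Under inetd this would be: serve(sys.stdin, sys.stdout)
# Demonstration with in-memory streams instead of a real client:
out = io.StringIO()
serve(io.StringIO("hello\nworld\n"), out)
print(out.getvalue())
```

This is why even ordinary scripts qualify as inetd services: all network-specific work is delegated to the universal server.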
Network services ensure the functioning of the network, for example, DHCP and BOOTP servers provide initialization of servers and workstations, DNS - translation of names into addresses and vice versa.
Tunneling servers (for example, various VPN servers) and proxy servers provide communication with a network that is not accessible by routing.

AAA and Radius servers provide a single network authentication, authorization and access logging.
Information Services. Information services include both the simplest servers reporting information about the host (time, daytime, motd), users (finger, ident), and monitoring servers, such as SNMP. Most information services operate through universal servers.
Time synchronization servers, such as NTP, are a special type of information service: in addition to informing the client of the exact time, an NTP server periodically polls several other servers to correct its own time. Besides correcting the time itself, the drift rate of the system clock is analyzed and corrected. The correction is carried out by gradually speeding up or slowing down the system clock (depending on the direction of correction) in order to avoid the problems that an abrupt step change of the time can cause.
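The wire format behind this is compact: an NTP client request is a 48-byte UDP packet whose first byte packs the leap indicator, protocol version and mode fields. A sketch building such a packet (construction only; actually sending it to a time server over UDP port 123 is omitted):

```python
import struct

# First byte of the NTP header: LI = 0 (no warning), VN = 3, Mode = 3 (client).
LI, VN, MODE = 0, 3, 3
first_byte = (LI << 6) | (VN << 3) | MODE

# 48-byte NTP packet: the header byte followed by 47 zero bytes
# (stratum, poll, precision and the timestamps stay empty in a request).
packet = struct.pack("!B47x", first_byte)

print(len(packet), hex(packet[0]))
```

To actually query a server, one would send this packet over UDP and unpack the server's transmit timestamp from the reply.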
File servers are servers for providing access to files on the server's disk.

