
Free server virtualization platforms: keep your lab up and running after the production system goes live

Server virtualization can seem like a very challenging task, but these instructions will help lift the veil of secrecy and take the first steps toward mastering it. - Paul Venezia

The benefits of server virtualization are now so significant that there is little doubt about the need to adopt the technology. First of all, server virtualization makes it possible to use computing resources much more efficiently than physical servers do, because several virtual servers run on a single physical machine. You might be surprised how many general-purpose virtual server instances can run simultaneously on just one modern computer.
Another major benefit of virtualization is the ability to move live virtual servers between physical hosts to balance load and perform maintenance. You can also use snapshots of virtual servers to keep current copies of live servers before making any configuration changes (for example, before updating software). If something goes wrong, you revert to the saved snapshot, and the server continues to operate as if no changes had been made. Clearly, such an approach can save a lot of time and effort.

1. Start small on your desktop or laptop

As a rule, virtualization covers entire server rooms, but the technology can also be applied in offices and on a much smaller scale. A single desktop or laptop is enough.
Modern desktop and laptop PCs generally have a huge amount of resources that sit idle during simple daily tasks (reading email or browsing websites). If from time to time you need to use some other operating system (for example, to support applications written for it), you can run a virtual desktop instead of installing that OS on physical hardware.
This option is especially useful when old programs turn out to be incompatible with a new operating environment. A free solution here is VirtualBox.
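For a concrete illustration, here is a minimal sketch of scripting such a desktop VM with VirtualBox's VBoxManage command-line tool, driven from Python. The VM name, memory size, and ISO path are hypothetical placeholders, and it assumes a reasonably recent VirtualBox with VBoxManage on the PATH; adjust everything for your own environment.

```python
# Minimal sketch: create and boot a VirtualBox VM for a legacy OS via VBoxManage.
# Assumes VirtualBox is installed and VBoxManage is on PATH; names and paths are placeholders.
import subprocess

VM_NAME = "legacy-desktop"           # hypothetical VM name
INSTALL_ISO = "/isos/legacy-os.iso"  # hypothetical installer image

def run(*args):
    """Run a VBoxManage subcommand and fail loudly if it returns an error."""
    subprocess.run(["VBoxManage", *args], check=True)

run("createvm", "--name", VM_NAME, "--register")
run("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2", "--nic1", "nat")
run("createmedium", "disk", "--filename", f"{VM_NAME}.vdi", "--size", "20480")  # 20 GB disk
run("storagectl", VM_NAME, "--name", "SATA", "--add", "sata")
run("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "0",
    "--device", "0", "--type", "hdd", "--medium", f"{VM_NAME}.vdi")
run("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "1",
    "--device", "0", "--type", "dvddrive", "--medium", INSTALL_ISO)
run("startvm", VM_NAME, "--type", "gui")  # boots the installer from the attached ISO
```

Scripting the setup this way also makes it easy to recreate the same legacy desktop later or on another machine.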

2. Set up a small and, if possible, free test lab

If you have recently decommissioned servers, you can use them as the basis for a virtualization test lab. The main thing is that they have several gigabit network interfaces and as much RAM as possible. Virtualization places significantly higher demands on RAM than on processor resources, especially if the chosen virtualization method does not use memory-sharing technologies to optimize physical memory usage.
If there are no spare servers, you can purchase a new, inexpensive server for testing (again, with a large amount of RAM). If you have spare parts on hand, try assembling a server from the available components. In the lab, such a machine will be quite sufficient to confirm that the chosen concept is sound, but it should not be used in production.
As for choosing virtualization software, try the available options on the lab system first. Armed with several hard drives, install VMware ESXi, Microsoft Hyper-V, Citrix XenServer, or Red Hat RHEV on each, and boot from them one at a time to find out which system best suits your needs. All of these packages are available as free or trial versions with an evaluation period of 30 days or more.

3. Create your own shared storage

To realize the benefits of a virtualization environment that spans multiple physical servers, you need shared storage. If you want, for example, to be able to move virtual servers between physical hosts, the storage for those virtual servers must sit on a shared device that both hosts can access.
Virtualization tools support various storage protocols: NFS, iSCSI, and Fibre Channel. For lab research or testing, it is enough to add several hard drives, share them using NFS or iSCSI, and point the lab servers at these storage resources. If you want a more complete solution that you can keep under your own control, try open-source storage software (such as FreeNAS). This software offers an easy way to combine a variety of low-cost storage media into a lab or production network.
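As an example of how little is needed for a lab, below is a minimal sketch of exporting a local directory over NFS from a Linux box, driven from Python. The export path and lab subnet are hypothetical placeholders, and it assumes the distribution's NFS server package is already installed.

```python
# Minimal sketch: publish a directory over NFS for lab hypervisor hosts (Linux, run as root).
# Assumes an NFS server package (e.g. nfs-kernel-server) is installed;
# the path and subnet below are hypothetical placeholders.
import pathlib
import subprocess

EXPORT_DIR = "/srv/vmstore"      # hypothetical datastore directory
LAB_SUBNET = "192.168.50.0/24"   # hypothetical lab network

pathlib.Path(EXPORT_DIR).mkdir(parents=True, exist_ok=True)

# Append an export rule; rw + no_root_squash is typical for hypervisor datastores in a lab.
rule = f"{EXPORT_DIR} {LAB_SUBNET}(rw,sync,no_root_squash,no_subtree_check)\n"
with open("/etc/exports", "a") as exports:
    exports.write(rule)

subprocess.run(["exportfs", "-ra"], check=True)               # re-read /etc/exports
subprocess.run(["showmount", "-e", "localhost"], check=True)  # verify the export is visible
```

Once the export is visible, each lab hypervisor can mount it as an NFS datastore and the same virtual machine files become reachable from every host.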

4. Spend enough time on lab research

If you have shared storage resources and at least two physical servers, you can assume you have everything you need to build a complete virtualization platform. When evaluating the capabilities of different software packages, spend at least a week experimenting with each of them. Do not forget to test all the functions that matter to you: live migration of virtual servers, snapshots, deployment and cloning of virtual servers, and high availability.
You may also have a chance to simulate production workloads in the lab environment to get an idea of how the system will perform in the real world. You can, for example, deploy a database (DB) server and run reports against a restored backup of a real dataset, or use a benchmarking tool to evaluate the performance of a web application server. This not only introduces you to the day-to-day behavior of the virtualization platform, but also helps you understand what resources the virtual servers will need when they go to production.
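If you want to benchmark a web application server in the lab as described above, even a very small script will do. The sketch below fires a batch of concurrent HTTP requests at a hypothetical lab URL using only the Python standard library and reports rough latency figures; it is a smoke test, not a substitute for a real load-testing tool.

```python
# Minimal sketch: rough concurrent load test of a lab web server (standard library only).
# The URL, request count, and concurrency are hypothetical lab values.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://lab-web01.example.local/"  # hypothetical lab VM
REQUESTS = 200
CONCURRENCY = 20

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))

print(f"requests: {len(latencies)}")
print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
print(f"max latency: {max(latencies) * 1000:.1f} ms")
```

Running the same batch before and after a live migration or a snapshot revert gives a quick feel for how those operations affect the workload.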

5. Keep the lab up and running after the production system starts up

After all this, you are ready to specify the parameters of the production environment. You have gained an understanding of the management tools and of how the platform behaves under real-world conditions. However, it is too early to dismantle the lab.
When you start purchasing new equipment for your production infrastructure, remember to refer to your lab test results. The virtual servers you plan to deploy should be up to the task.
Once the production system is built, the lab can still be used to test new functionality, updates, and anything else that affects the stability of the production platform.

Today I will explain what virtualization is, what it is for, and what you gain by implementing it. We will look at the concept of a hypervisor and see how VMware organizes it, using its ESXi 5.5 product as an example. The main task of any online business is the availability of its services. Imagine five different services running directly on physical hardware. They all share the same file system and resources, and everything works well. Time passes and they begin to interfere with each other, for various reasons: application updates, or updates to the OS itself. As a result, one misbehaving service can stop working normally, or even drag down the other four. Virtualization helps a business consolidate resources and make each service independent within a single physical server.

Remember the golden rule: one server, one service

Physical infrastructure

Let's take a look at how an application works at the physical level. In the modern world, server hardware is rarely fully loaded: in roughly 90 percent of cases utilization averages 50-60 percent, which means poor use of resources. An example of wasteful use is a DHCP service, which is lightweight by definition and can serve at least 1,000 clients, whether it runs on Windows or Linux. By running it on a powerful server you leave that server underloaded and waste extra energy; and the more such servers you have, the more powerful the cooling system and backup power supply must be, and the more rack units they occupy. When virtualization technology arrived, everything changed, but more on that later. Below is a diagram of how an application works on a physical server.

There is a physical host on which the OS is installed, and the application runs inside it. iSCSI, NFS, or FC storage systems can be connected to the physical host.

This waste could not last long, and businesses got tired of losing money on new equipment. That is where virtualization entered their lives: a magical, incomprehensible word that seemed to come from the future. Virtualization technology helped consolidate server resources, allowing many isolated operating systems, each sincerely believing it runs independently on its own server with its own hardware, to coexist within a single physical server. The client, as a consumer, is not at all interested in what hardware his service runs on, whether it is mail, a database, or his main product; for the business it is more profitable to pack more services into one server and sell them to clients. Virtualization also solved another problem: supporting old applications written for something like Windows 95 when you want to upgrade everything else; you simply create a separate virtual machine for the old application and live in peace. Moving from one piece of hardware to another also became easier, and in most cases the transfer happens on the fly without shutting anything down. So if a physical server breaks down, you can run its virtual machines on another one without any problems.

A real-life example: there was an HP server, four years old, with an expired warranty. One fine day its disks began to fail, and replacing a single disk cost $800. For that money three Samsung 850 EVO SSDs were bought instead; feel the difference. All the SSDs were installed in the server, it was turned into a virtualization host, and the same virtual machines with the same services were moved onto it. Had it remained just a physical server, everything would have been lost. Virtualization saved us a great deal of pain.

History of virtualization

The history of this idea and technology goes back to the distant 1990s, and VMware was among the first to implement it. The company proposed that the resources of one large machine could be divided among many consumers, and this is how the concept of the hypervisor appeared. The hypervisor is the layer between the OS and the hardware that makes virtualization possible. In VMware's implementation this is the VMkernel.

The virtual infrastructure looks like this: there is an ESXi host, on top of it the VMware vSphere hypervisor layer, and on top of that the virtual machines. All FC, NFS, and iSCSI connections go only to the ESXi host, which hands them to the hypervisor, and the hypervisor redistributes everything to the virtual machines that need it.

Below is a picture of what physical architecture and VMware virtualization look like. In a physical architecture, the OS runs directly on top of the hardware. With VMware virtualization things are a little different: the difference is the ESXi hypervisor layer (VMware vSphere), which launches instances of virtual machines and emulates hardware for them. With such an implementation the scheme of access to resources changes, as we will discuss later.

In the world of virtualization there is one thing that never gets emulated, and that is the CPU itself. Neither VMware nor Microsoft virtualization does this. When a virtual machine starts, depending on its settings it receives one or more cores; the virtual machine sees the real CPU type, version, and frequency, and sends its instructions directly to it. That is why correctly distributing CPU cores is so important: so that virtual machines do not interfere with each other.

With regard to network virtualization, the situation is as follows. A physical server with a network adapter uses it exclusively, with all of its bandwidth. In a virtual architecture there is a virtual switch to which the virtualized network adapters are connected, and the virtual switch in turn communicates with the physical network adapter or adapters. The bandwidth is shared among all virtual machines, but priorities can be configured.

It remains to consider the disk subsystem. In the classic situation with a bare-metal host, whatever the OS, the disks are used in exclusive mode. With virtual hard disks it is different: each guest OS thinks it has a real hard drive, but in fact it is a file on shared storage, alongside the files of other virtual machines. And it does not matter which protocol connects the storage to the host.

Our firm is a VMware partner and a Professional Solution Provider

Have you ever noticed that the fleet of servers or workstations in your office or production facility, entrusted with tasks of medium and high criticality, has grown to a huge size? Such a complex constantly requires repair and modernization and drives up staffing costs: the more physical servers there are, the more difficult it is to maintain the fleet as a whole.

Needless to say, a shutdown caused by the failure of even part of the equipment in a production facility or office inevitably entails lost profit and broken obligations to customers and business partners.
Perhaps you have also wondered what would happen if, after 4-5 years of operation, the server hardware fails: the accumulated database, which is of critical business value, may not be lost outright, but it could be unavailable for a long time. Hardware quickly becomes obsolete, an adequate replacement can often be found only on special order, and that takes time. The people who built the system quit or fall ill, and those who replace them need time to study the maintenance procedures and the undocumented quirks of the existing equipment.

Long downtime or complete data loss is very likely!

The longer a system works, the more likely it is to fail, and with failure comes lost profit. Infrastructure failures can be unpredictable; they are a time bomb.

Is it possible to plan a computing infrastructure so that you spend money once and then do not have to think about it for a reasonably long time?

The answer is a virtualization system!

A virtualization system can solve a large share of these problems and minimize possible losses, because:

  • The virtualization system is not tied to any type of hardware.
  • Even with a single virtualization server and no cluster, all virtual machines can, if necessary, easily be transferred to another similar server with minimal downtime.

If your services and applications run on disparate physical servers and personal computers, they can be ported to a virtual platform along with their operating systems. Most operating systems can be migrated to a virtual platform without difficulty.

When computing data is especially critical, virtualization allows you to build failover clusters with a large number of servers inside the cluster, which increases the reliability of the system as a whole. Clustering lets you scale the system for any task in a short time; both expansion and decommissioning are done with minimal downtime.

    Example: You need to launch 50 more virtual machines for an already working project. Depending on the workload of these virtual machines, the required number of virtualization servers is added to the cluster. The virtual machines are deployed and the project is launched. The cost of the project is reduced by 50%, because there is no need to purchase a server for each service.


This system allows you to build clusters with an almost unlimited number of virtualization servers, with full redundancy of every server. If one of the servers fails, its role is taken over by the next one (virtual machines are automatically moved from the failed server without interrupting their work). To end users these actions are transparent; they will not notice any change. Note that the backup servers do not stand idle while the system is fully functional: the load is distributed evenly across them. This also increases the overall speed of the virtualization system and gives users comfortable, fast work in their applications.

It is also worth noting the ease of managing a virtualized environment: fewer people are required to manage the server fleet, and therefore personnel costs are lower.

Configuration and maintenance are carried out from a workstation with the remote management console installed. The console supports fine-grained access rights, both to the settings and to individual virtual machines.

VMware

The American company VMware, which specializes in virtualization and cloud infrastructure solutions, is one of the leaders in its segment.

The company was founded in 1998 by five developers, chief among them the married couple Mendel Rosenblum and Diane Greene. The name VMware combines "virtual machine" (VM) with the second half of the word "software".

The first VMware product (VMware Workstation) was demonstrated in 1999, and its server products followed in 2001. Thanks to these products, by 2003 the company had taken the lead in this area. In 2004 VMware was acquired by EMC and currently operates under its leadership. For 2010 VMware reported revenue of $2.9 billion, and at the moment it ranks 5th among software IT companies.

Over the years VMware has gained more than 250,000 customers, many of them on the Fortune 100 list, and its partner network has grown to about 25,000 companies, including technology partners. On the Russian market, VMware's customers are mainly large companies, banks, and telecommunications operators.

VMware offers solutions that help lower IT costs through the move to cloud computing. With VMware products, companies can improve the efficiency of their IT infrastructure using a more flexible, adaptive, and reliable service delivery model that addresses their business challenges.

VMware products

The range of VMware products sold on the Russian market is not limited to the vSphere 5 entry-level virtualization bundle. The large company absorbs smaller ones and buys open-source projects (Zimbra), and all of this is sold under the VMware brand, combined into a single service infrastructure. Below is a table of products (or product families) with brief descriptions.

Software for creating your own virtualization system

VMware vSphere

A family of products for server virtualization in your own infrastructure. It usually consists of two products: the ESXi hypervisor and the vCenter server.
There are two types of vSphere licenses:
For small businesses and branch offices - vSphere Essentials Kits
For medium and large businesses - vSphere Acceleration Kits

VMware Go

A software product for those who want to start with free virtualization from VMware based on the free ESXi license. It lets you automate some processes and centralize management of the virtual infrastructure. It is certainly not a vCenter server, but for an inexperienced administrator the Go series can be useful. The full feature list is on the product page.
Two versions are offered: VMware Go and VMware Go Pro.

VMware vCloud Product Family

VMware vCloud Director

A software shell layered on top of a virtual infrastructure. The administrator can use it to grant ordinary users (developers, testers) access to virtual machines. Users can then create virtual machines or entire virtual infrastructures themselves and turn them on or off.
This solution can be suitable for providers or large companies.
A large number of VMware products have been adapted for use with vCloud, such as vCenter Chargeback, vCenter vOrchestrator, vApp, vShield.
vCloud Director is licensed for the number of virtual machines that run on it at a time.

VMware vCloud Request Manager

This add-on to a deployed vCloud Director lets users submit requests for the administrator to create new virtual machines for them and to allocate free licenses for various software for the lifetime of their environment. After submitting a request, an employee can track its progress in a graphical interface.
vCloud Request Manager is licensed, like vCloud Director, by the number of virtual machines in the vCloud infrastructure.

End-user and virtual workstation software

VMware vSphere Hypervisor ESXi

The basis of server virtualization as implemented by VMware is a purpose-built operating system, VMware ESXi, also known as the ESXi hypervisor. Its main task is to create and run virtual machines. A detailed description and installation instructions are available on our website.

VMware Server

This program was released from the start in two free variants, one for Windows Server and one for Linux. It allows you to run virtual machines on server platforms and comes with its own management server. Its functionality, of course, cannot be compared with vCenter Standard, but for a small business it is well suited for the price.

VMware Player

A free virtual machine player; a stripped-down free version of VMware Workstation.

VMware View

A set of software components for VDI workstation virtualization. A user can connect to his virtual machines from his workplace (a computer or thin client) or from a mobile device (Android, iOS) using the PCoIP protocol.

VMware ThinApp

Software for creating portable versions of programs. Such programs run inside isolated containers and do not need to be installed, so they leave no traces in the computer's registry after running.

VMware ACE

ACE stands for assured computing environment. It is an extension for VMware Workstation that provides centralized management and an elevated level of security for virtualized end-user environments.

VMware Workstation

The most popular VMware product is by far Workstation. It is installed on a workstation (Windows XP, Vista, 7, or Linux) and lets you create and run virtual machines. It is very convenient for building test benches and development environments. Low cost and ease of use make VMware Workstation ubiquitous.

VMware Fusion

This is VMware Workstation for the Mac; it allows you to run Windows and Linux virtual machines.

VMware Zimbra

A collaboration tool, most similar to MS Exchange Server. A corporate product.

VMware Horizon App Manager
VMware Mobile Virtualization Platform (MVP)

Virtual infrastructure and application management software

VMware vCenter product family

VMware vCenter Server
VMware vCenter Server Heartbeat
VMware vCenter Operations
VMware vCenter Orchestrator
VMware vCenter CapacityIQ
VMware vCenter Site Recovery Manager
VMware vCenter Lab Manager
VMware vCenter Configuration Manager
VMware vCenter Converter
VMware vCenter Application Discovery Manager
VMware vCenter AppSpeed
VMware Studio
VMware vCenter Chargeback
VMware Service Manager

Security Products

VMware vShield Product Family

VMware vShield App
VMware vShield Edge
VMware vShield Endpoint

Application platform

VMware vFabric tc Server
VMware vFabric Hyperic
VMware vFabric GemFire
VMware vFabric Enterprise Ready Server
RabbitMQ

Other

VMware Data Recovery
VMware VMmark
VMware Capacity Planner
Cisco Nexus 1000V
VMware Compliance Checker for PCI
VMware Compliance Checker for vSphere
SUSE Linux Enterprise Server for VMware

Description of VMware Products

The company's entire product portfolio is connected in one way or another with virtualization technologies and their applications. It should be noted that of the three main players in the commercial virtualization market (Citrix, Microsoft, VMware), only VMware specializes exclusively in virtualization, which allows it to stay ahead of its competitors in product functionality.

VMware's flagship products are VMware ESX / ESXi, bare-metal hypervisors. At the moment the latest version of the product is version 4, released in mid-2009. The hypervisor is the basis of server virtualization: it shares resources in such a way that separate, independent environments for multiple operating systems can be created on a single physical server. However, the hypervisor by itself has a rather limited range of capabilities; to realize all the benefits, a solution is required that includes not only virtualization tools but also infrastructure management (vCenter). This complete solution is called vSphere.

An analysis of server utilization shows that for most of the working day the load is about 5-8% of maximum, and outside working hours the servers simply stand idle, heating the air. With VMware vSphere we consolidate the load from several servers onto one physical server (we move not only the applications but also the OS). The performance of modern servers makes the previously popular "one task, one server" concept extremely inefficient, but thanks to virtualization a new one can be used instead: "one task, one virtual machine". This solves the problem of software compatibility, since not all applications can run in a single copy of an operating system. In addition, infrastructures often contain old applications that are no longer compatible with current OS versions, while installing older OS versions is not supported on new hardware. Virtualization solves this problem too: you can even run Windows NT 4.0 or MS-DOS in an ESX virtual machine.

Of course, virtualization requires additional resources from the server hardware, but at the moment that overhead amounts to 1-3% of available capacity, which is a modest price for the benefits the technology provides.

A dedicated product, VMware vCenter, is used for centralized management and monitoring. In addition to monitoring and creating virtual servers, vCenter provides capabilities such as moving virtual machines between physical servers, migrating disk resources, creating snapshots, deploying virtual machines from templates, and other additional functions of VMware vSphere.

Both hypervisor versions (ESX and ESXi) offer the same functionality from the virtual machines' point of view, but their implementations differ. ESX includes a service console for hypervisor management, while ESXi has no such console (as a result it is much smaller, and management is possible only through vCenter, the vSphere Client, or scripts on a management machine). If ESX looks like an operating system to the user, ESXi is more like a motherboard BIOS. Installation and initial configuration of ESXi are very simple, and using the Embedded version (supplied with the server) lets you deploy a virtualization system in a matter of minutes. A free version of ESXi is also available; it has a number of limitations, for example no centralized management and none of the "enterprise" capabilities of vSphere such as vMotion, HA, and DRS.

Application area

Server virtualization products are used in a wide variety of infrastructures, from small businesses to large enterprises.

In small companies, the product minimizes the amount of server hardware while retaining the ability to use various operating systems. With virtualization, all services can be placed on one or two full-fledged servers (instead of several ordinary PCs, as is often the case), solving the issues of both equipment quality and quantity.

For midsize and large enterprises, server virtualization improves service availability through resiliency technologies and the migration of virtual servers between physical hosts. The ability to move virtual servers from one physical server to another without stopping them can significantly increase service availability and simplify maintenance of the entire system. The time to deploy new services is also significantly reduced: you no longer need to wait for a new server to be delivered; it is enough to deploy a new virtual machine and install the necessary software in a few minutes. Because virtual machines do not require specific drivers, firmware updates, and so on, administration tasks are greatly simplified as well.

VMware vSphere has a universal system for monitoring the state of all elements of the environment, both at the level of physical servers and at the level of the enterprise's virtual servers. If the standard monitoring tools are not enough for some reason, a number of third-party applications (for example, Veeam Monitor) offer additional capabilities.

It is also important that the system allows permissions to be distributed among the administrators who use it, a useful feature for large companies with large technical teams.

There is also a technology for "transparent" transition from a physical server to a virtual one (physical-to-virtual, or P2V, migration), which lets you migrate an existing server to a virtual environment with little effort; users will not notice any changes and can continue working without additional modifications.

Basic functionality

VMware vSphere includes a number of features that can fundamentally improve the reliability and manageability of the virtual infrastructure of the enterprise. Support for this functionality depends on the VMware vSphere edition you are using.

Thin Provisioning - presenting virtual servers with more disk space than is actually allocated physically.

VC agent - management of ESX / ESXi servers via VMware vCenter.

Update Manager - manages patches and updates for servers running the ESX / ESXi hypervisors.

VMSafe - the ability to apply advanced security settings and isolate the resources used by virtual machines.

vStorage APIs for Data Protection - a software interface that allows third-party backup systems to work without putting a significant load on the server (it replaces the VCB framework from VMware VI3). It is implemented using virtual machine snapshots.

High Availability - increases the availability of virtual servers by restarting them on a backup physical server if the main one fails. It can also monitor specific services running inside a virtual server and restart it not only on hardware failure but also when the monitored service stops.

Data Recovery - a built-in backup system. It manages the creation and restoration of backups. Data and application consistency is ensured through integration with Microsoft VSS. For Windows guests you can even restore individual files rather than only entire virtual disks.

Hot Add - support for adding resources (network interfaces, memory, etc.) to virtual servers "on the fly", without stopping them. This option requires support from the operating system inside the virtual server.

Fault Tolerance - provides high availability for a virtual server by running it in parallel on a second physical server. If one of them fails, the virtual server continues to run on the second without any interruption of service.

vShield Zones - provides fine-grained security for virtual Ethernet networks at OSI layers 2/3.

vMotion - allows you to migrate virtual servers between physical servers without stopping their work.

Storage vMotion - allows you to move the disks of virtual servers between different storage devices without stopping the virtual servers.

DRS / DPM - two functions that distribute virtual machines between physical servers to use resources as efficiently as possible. If a physical server becomes overloaded, virtual machines are redistributed (using vMotion) to servers with free resources. DPM can power off unused physical servers and power them back on when needed, so power consumption can be significantly reduced during periods of low load.

vNetwork Distributed Switch - the ability to create virtual switches distributed across different ESX servers. You can also purchase the Cisco Nexus 1000V virtual switch, a full-featured software solution from Cisco. This product is managed with the tools familiar to Cisco administrators and integrates fully into environments built on Cisco network equipment.

Host Profiles - the ability to create standard configurations for virtualization servers, allowing centralized management of ESX server settings.

Third-party multipathing - the use of third-party products for load balancing and fault tolerance of the paths connecting servers to storage systems. An example of such a product is EMC PowerPath.

Recently, users have been hearing more and more about the concept of "virtualization". Using it is considered cool and modern, but not every user clearly understands what virtualization is in general or in particular. Let's try to shed some light on the question, focusing on server virtualization systems. These technologies are now the most advanced, since they offer many advantages in terms of security and administration.

What is virtualization?

Let's start with the simplest thing: defining the term itself. We note right away that on the Internet you can find and download manuals on this subject, such as a "Server Virtualization for Dummies" guide in PDF format. But when studying such material, an unprepared user may run into a large number of unfamiliar definitions. So let's try to clarify the essence of the matter, so to speak, on our fingers.

First of all, when considering server virtualization technology, let's focus on the initial concept. What is virtualization? Following simple logic, it is easy to guess that the term describes the creation of an emulated counterpart of some physical or software component; in other words, an interactive (virtual) model that does not exist physically. There are, however, some nuances.

The main types of virtualization and technologies used

The fact is that the concept of virtualization has three main directions:

  • presentation (desktop) virtualization;
  • application virtualization;
  • server virtualization.

The simplest example for understanding the first is the use of so-called terminal servers, which provide users with their computing resources. The user's program is executed on the server, and the user sees only the result. This approach reduces the system requirements for the user's terminal, whose outdated configuration could not cope with the computation on its own.

Such technologies are also widely used for applications; an example is virtualizing a 1C server. The essence is that the program runs on one isolated server and a large number of remote users access it. The software package is updated from a single source, not to mention the higher level of security for the entire system.

Finally, server virtualization implies the creation of an interactive computing environment that completely replicates the configuration of its "hardware" counterparts. What does this mean? That, by and large, on one computer you can create one or more additional virtual ones that run in real time as if they physically existed (server virtualization systems are discussed in more detail a little later).

In this case it does not matter at all which operating system is installed in each such terminal. By and large, it has no effect on the main (host) OS or on the virtual machine. This is similar to computers with different operating systems interacting on a local network, except that the virtual terminals may not even be connected to each other.

Equipment selection

One of the clear and undeniable advantages of virtual servers is the reduction in the material cost of building a fully functional hardware and software structure. For example, suppose there are two programs that each require 128 MB of RAM for normal operation but cannot be installed on the same physical server. What do you do? You can buy two separate servers with 128 MB each and install the programs separately, or you can buy one server with 256 MB of RAM, create two virtual servers on it, and install one application on each.

If it is not yet obvious: in the second case the RAM is used more rationally, and the material costs are significantly lower than when buying two independent machines. And that is not all.
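The same consolidation arithmetic is easy to check before buying anything. The sketch below sums the memory needs of the planned virtual servers plus an allowance for the hypervisor and compares the total against a candidate host; every figure is a hypothetical example, not a recommendation.

```python
# Minimal sketch: does one physical host have enough RAM for the planned virtual servers?
# All sizes are hypothetical example values in megabytes.
planned_vms = {
    "app-a": 128,  # first application
    "app-b": 128,  # second, incompatible application
}
hypervisor_overhead_mb = 32  # rough allowance for the hypervisor itself
host_ram_mb = 320            # candidate host: 256 MB for VMs plus headroom

required = sum(planned_vms.values()) + hypervisor_overhead_mb
print(f"required: {required} MB, available: {host_ram_mb} MB")
print("fits" if required <= host_ram_mb else "does not fit: buy more RAM or another host")
```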

Security benefits

As a rule, a server infrastructure implies several devices for performing different tasks. For security, sysadmins install Active Directory domain controllers and Internet gateways not on one server but on different ones.

In the event of an intrusion attempt, the gateway is always attacked first. If a domain controller is installed on the same server, there is a very high probability of damage to the AD databases, and with targeted actions attackers can take over all of it. Restoring data from a backup is quite troublesome, even if it takes relatively little time.

Looking at the issue from the other side, server virtualization lets you bypass such co-installation restrictions and quickly restore the desired configuration, because the backup can be kept as the virtual machine itself. That said, server virtualization with Windows Server (Hyper-V) is considered less reliable in this respect.

In addition, licensing remains a rather contentious issue. For example, Windows Server 2008 Standard grants the right to run only one virtual machine, Enterprise grants four, and Datacenter an unlimited number (including copies).

Administration issues

The benefits of this approach, beyond security and cost savings, and even when the virtualized servers run Windows Server, should be appreciated first of all by the system administrators who maintain these machines and LANs.

Creating system backups is a very common task. Normally it requires third-party software, and restoring from optical media or even over the network is far slower than the disk subsystem. Cloning a virtual server, by contrast, takes just a couple of clicks, after which a working system can quickly be deployed even on "clean" hardware and run without interruption.

In VMware vSphere, server virtualization lets you create and save so-called snapshots of the virtual machine: special images of its state at a particular point in time. They can be represented as a tree within the machine itself, which makes it much easier to restore the virtual machine to health. You can choose restore points arbitrarily, rolling the state back and then forward again (Windows restore points can only dream of this).
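To show what working with snapshots looks like programmatically, here is a minimal sketch using the pyVmomi SDK (`pip install pyvmomi`) against vCenter or a standalone ESXi host. The host address, credentials, and VM name are hypothetical placeholders, and certificate verification is disabled purely for lab use.

```python
# Minimal sketch: take a snapshot of a VM before a risky change, using pyVmomi.
# Host, credentials, and VM name are hypothetical; cert checks disabled for a lab only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "db-test-01")  # hypothetical VM name

    # memory=False: disk-only snapshot; quiesce=True asks VMware Tools to flush the guest FS.
    task = vm.CreateSnapshot_Task(name="before-upgrade",
                                  description="state prior to software update",
                                  memory=False, quiesce=True)
    # A real script would wait for completion; pyVmomi exposes task.info.state for that.
    print("snapshot task started:", task.info.key)
finally:
    Disconnect(si)
```

The same SDK can walk the snapshot tree and revert to a chosen point, which is exactly the roll-back-and-forward workflow described above.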

Server virtualization programs

When it comes to software, there are plenty of applications for creating virtual machines. In the simplest case, the native tools of Windows are used for server virtualization (Hyper-V is a built-in component).

However, this technology has some drawbacks, so many people prefer packages such as VMware, VirtualBox, QEMU, or even MS Virtual PC. Although the names of these applications differ, the principles of working with them do not (apart from details and a few nuances). With some versions of these applications, Linux servers can also be virtualized, but those systems will not be considered in detail here, since most of our readers still use Windows.

Server virtualization on Windows: the simplest solution

Starting with recent versions of Windows (Windows 8 on the desktop, Windows Server 2008 and later on the server side), a built-in component called Hyper-V has been available, which makes it possible to create virtual machines with the system's own tools, without third-party software.

As in any other application of this level, you can model the future machine by specifying the hard disk size, the amount of RAM, the presence of optical drives, and the desired characteristics of the graphics or sound hardware; in general, everything found in the "hardware" of a conventional server.

But note that the module itself must be turned on: Hyper-V cannot virtualize servers until the component has been enabled in Windows itself.

In some cases it may also be necessary to enable support for the corresponding virtualization technology (Intel VT-x or AMD-V) in the BIOS.
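For reference, enabling the component can also be scripted instead of clicking through "Turn Windows features on or off". The sketch below shells out to the standard DISM tool from Python; it assumes an elevated (administrator) prompt on a Windows edition that includes Hyper-V, and a reboot is still required afterwards.

```python
# Minimal sketch: enable the Hyper-V feature from an elevated prompt via DISM.
# Assumes a Windows edition that ships Hyper-V; a reboot is required afterwards.
import subprocess

result = subprocess.run(
    ["dism", "/online", "/enable-feature", "/featurename:Microsoft-Hyper-V", "/all"],
    capture_output=True, text=True)

print(result.stdout)
if result.returncode != 0:
    # DISM commonly returns 3010 to mean "succeeded, reboot required".
    print("DISM exited with code", result.returncode)
```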

Use of third-party software products

Nevertheless, despite the built-in tools available for virtualizing Windows-based servers, many experts consider this technology somewhat ineffective and even overly complicated. It is often easier to use a ready-made product in which the same actions are performed with automatic parameter selection, and the virtual machine offers greater capability and flexibility in management, configuration, and use.

This means using software products such as Oracle VirtualBox, VMware Workstation (VMware vSphere), and others. For example, a VMware virtualization server can be set up so that the computers created inside the virtual environment operate separately, independently of one another. Such systems can be used for training, software testing, and so on.

It is worth noting separately that when testing software in a virtual machine, you can even run programs infected with viruses: their effects show up only in the guest system and do not affect the main (host) OS.

As for creating a computer inside the virtual environment, in VMware vSphere (as in Hyper-V) server virtualization is wizard-driven, but compared with the Windows tools the process looks a little simpler, because the program can suggest something like templates or automatically calculate the desired parameters of the future machine.

The main disadvantages of virtual servers

But despite all the advantages that server virtualization gives the sysadmin or the end user, such programs have some significant drawbacks.

First, you cannot jump over your own head: a virtual machine uses the resources of the physical server (computer), and not all of them, only a strictly limited share. For the virtual machine to work properly, the underlying hardware must therefore be powerful enough. On the other hand, buying one powerful server is still much cheaper than purchasing several less capable ones.

Second, although it is believed that several servers can be combined into a cluster so that if one fails you can "move" to another, in Hyper-V itself this cannot be achieved, which looks like a clear disadvantage in terms of fault tolerance.

Third, moving resource-intensive DBMSs or systems such as an Exchange Server mailbox server into the virtual space is clearly debatable: noticeable slowdowns will be observed.

Fourth, for such an infrastructure to operate correctly, you cannot use exclusively virtual components. In particular, this applies to domain controllers: at least one of them must be "hardware" and available on the network from the start.

Finally, fifth, server virtualization carries another danger: a failure of the physical host or of the host operating system automatically takes down all the components running on it. This is the so-called single point of failure.

Summary

Nevertheless, despite some disadvantages, the advantages of such technologies are clearly greater. If you look at the question of why server virtualization is needed, there are several main aspects here:

  • reduction in the amount of "iron" equipment;
  • reduction of heat generation and energy consumption;
  • reduction of material costs, including for the purchase of equipment, payment for electricity, acquisition of licenses;
  • simplification of service and administration;
  • the possibility of "migration" of the OS and the servers themselves.

In fact, the advantages of such technologies go even further. Although some drawbacks may seem serious, with proper organization of the entire infrastructure and proper management, in most cases these situations can be avoided.

Finally, for many, the choice of software and the practical implementation of virtualization remain open questions. Here it is better to turn to specialists for help, since this article was concerned solely with a general introduction to server virtualization and the feasibility of adopting such a system.

Welcome to ITsave, a site dedicated to modern IT technologies and to the trends with which developers never tire of surprising us. The main focus of the articles and step-by-step instructions is server virtualization and related topics. We never set out to track all the news; specialized blogs with daily feed updates exist for that. Instead we decided to focus on the basic technologies and products without which modern IT can no longer be imagined.

Server virtualization is the foundation. By and large it does not matter which virtualization platform you choose and use in your organization; what raises questions is the situation where a "specialist's" infrastructure has no virtualized servers at all, with rare exceptions where high availability and load balancing are solved by other means.
VMware vSphere and Microsoft Hyper-V are the world standards in server virtualization; their core functionality can be called very close, as an objective Gartner rating attests.

Reasons for server virtualization

As a rule, idle curiosity about new technologies is not enough to justify buying equipment and licenses. Management needs solid arguments in favor of the proposed solution; here are the most popular of them.

  • Server virtualization ensures the continuity of the company's business processes: the likelihood of downtime of, say, mail or 1C is minimized thanks to a well-thought-out system architecture and virtualization technologies. In the classic setup, where a single operating system runs directly on the server, a hardware failure means a long time to restore the services.
  • Server virtualization increases the safety of company data. Previously, company data was stored separately, on the local drives of different servers; now the architecture of the IT system itself assumes centralized storage, for which a dedicated storage system is purchased, protected from hardware failures and breakdowns by duplication of all of its elements.
  • Expensive server hardware is used more efficiently, so less of it needs to be bought. Consequently, IT capital costs fall in the long term: what used to run on 20 old servers can run on 3 modern servers with virtualization.
  • New equipment comes with a three-year warranty (extendable to 5 years); in the event of a breakdown, a manufacturer's engineer comes to your office and replaces the components free of charge.

For a system administrator, the reasons for starting a server virtualization project may be different, for example building professional skills with a reserve for the future, or reluctance to stay after hours to work on equipment that cannot be serviced during working time. But keep in mind that the director needs to hear the real reasons that will help lead the business to prosperity and further growth; then the likelihood of getting the green light for procurement is greatest.

To justify such projects, which require high initial costs but yield savings in further operation, feasibility studies are drawn up in which two paths of IT development over a five-year period are described and compared: with and without virtualization.
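A feasibility study of this kind boils down to very simple arithmetic. The sketch below compares five-year costs of a purely physical fleet against a consolidated virtualized one; every figure is a hypothetical placeholder used only to show the structure of the calculation, not real pricing.

```python
# Minimal sketch: five-year cost comparison, physical fleet vs. virtualization.
# Every number below is a hypothetical placeholder for illustration only.
YEARS = 5

def five_year_cost(servers, price_per_server, kwh_per_server_year,
                   kwh_price, extra_licenses=0.0):
    capex = servers * price_per_server + extra_licenses
    opex = servers * kwh_per_server_year * kwh_price * YEARS
    return capex + opex

physical = five_year_cost(servers=20, price_per_server=2_000,
                          kwh_per_server_year=2_600, kwh_price=0.15)

virtualized = five_year_cost(servers=3, price_per_server=6_000,
                             kwh_per_server_year=3_500, kwh_price=0.15,
                             extra_licenses=15_000)  # hypervisor and storage licenses

print(f"physical fleet, 5 years:    ${physical:,.0f}")
print(f"virtualized fleet, 5 years: ${virtualized:,.0f}")
print(f"difference:                 ${physical - virtualized:,.0f}")
```

A real study would also account for support contracts, storage, and staff time, but the comparison keeps the same shape.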

Components of a Server Virtualization Project

The most important component, without which the project simply cannot physically take place, is the equipment. It can be divided into the following categories:

  • computing power - the servers themselves, characterized by their CPUs, the amount of RAM, various expansion cards, and so on. In virtualization servers, local hard drives are used only to install the hypervisor.
  • disk subsystem - one or several storage systems. In virtualization projects, all data and virtual machine files are located centrally on the storage system so that any of the cluster servers can access them.
  • switching equipment - divided into equipment for the data network and for the storage network. Through the first, virtual machines communicate with each other and with users; through the second, traffic flows between the servers and the storage system. To prevent the networks from affecting each other negatively, it is recommended to separate them at the physical layer.

In server virtualization projects, the computing and disk parts are separated at the architecture level. The explanation is very simple: this scheme allows failover clusters to be configured, so that the failure of one of the servers does not lead to long downtime for the virtual machines and, consequently, for the applications.

The second component of the project is always licenses. Usually these are licenses for the virtualization platform itself (VMware vSphere, Citrix XenServer, Microsoft Hyper-V) and for the guest operating systems of the virtual machines, mainly Windows Server 2012. Everything depends on the budget and on the company's attitude toward fully legalizing all of its software. It is no secret that licenses for popular virtualization platforms are now easy to find on torrent trackers, and, notably, the copyright holders do little to interfere with this method of distributing their products, although they have every lever of pressure.

The third part of the time and money costs is the implementation work. Here we can single out the initial planning of the architecture and the work items, the subsequent steps to migrate the infrastructure to the virtualization platform, and the configuration of the system itself in accordance with the tasks set for IT. With the appropriate experience and confidence, the customer's own specialists can cope on their own; otherwise, involving professionals is recommended.

Equipment for server virtualization projects

Servers

Servers or, in other words, computing power. They can be divided into the following types:

  • single-processor servers are the most inexpensive option, with prices starting from $1,000. You will not find powerful processors or a large number of RAM slots here. Such servers are usually bought when funding is short or for tasks that do not require significant resources. From the point of view of server and workstation virtualization this is not the best option, because the capacity of such a server will most likely not be enough for a large number of virtual machines.
  • dual-processor servers are the most suitable hardware for virtualization projects. Prices start at $2,000 for a base configuration with one processor installed. The resources of this class of server are usually more than enough to run virtual machines for any task in modern IT, and the licensing of virtualization platforms is also geared toward dual-processor server models.
  • four-socket servers are rarely used, and only for resource-intensive applications such as databases. Prices start at $17,000. To be clear, it makes sense to buy such servers only if your company runs applications for which two processors are not enough.
  • a blade server is an ideal solution for virtualization, mainly because of its architecture. If the project plans to use four or more servers, it makes sense to consider replacing rack models with a blade chassis. A complete blade chassis with three dual-processor blades costs $36,727, and that already includes the switches for the data network and the storage network.

Storage System

A storage system is a separate, independent device, usually built as a 2U chassis, that is connected to the servers over a dedicated network. An expansion card installed in each server connects, directly or through a switch, to the storage controllers. The picture below shows the rear of an entry-level storage system; you can see two controllers operating in active-active failover mode. Failure of one controller does not stop operation.

Virtual machine files are located on the storage system and are available centrally to any of the cluster servers. That is why any existing virtual machine can be started on any server (host) of the cluster, or migrated from host to host without stopping its work. Entry-level storage systems can be divided by the type of connection to the servers (hosts).

  • iSCSI - a protocol for transferring data over a TCP/IP LAN, i.e. through a regular 1 GbE or 10 GbE switch (a minimal host-side discovery sketch follows this list). Many vendors offer port trunking (aggregation) options to improve performance: for example, three gigabit ports can be combined in software into one three-gigabit link. The 10 GbE option is rarely used because of the high cost of the switches.
  • SAS - direct connection of storage systems and servers via a SAS cable or through a dedicated SAS switch, at 6 Gbps.
  • FC - connection of servers (hosts) and storage systems over Fibre Channel optics, at 8 or 16 Gbps depending on the FC HBA cards in the server.
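To make the iSCSI option more concrete, here is a minimal host-side sketch that discovers and logs in to targets with the standard open-iscsi `iscsiadm` utility, driven from Python on a Linux host. The portal address is a hypothetical placeholder, and multipath configuration (the second independent path discussed below) is outside the scope of the sketch.

```python
# Minimal sketch: discover and log in to iSCSI targets on a Linux host (open-iscsi).
# Run as root; the storage portal address is a hypothetical placeholder.
import subprocess

PORTAL = "192.168.60.10"  # hypothetical iSCSI portal on the storage system

# Discover targets advertised by the portal.
discovery = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    capture_output=True, text=True, check=True)
print(discovery.stdout)

# Log in to the discovered targets; new block devices then appear under /dev.
subprocess.run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"], check=True)

# Show active sessions for verification.
subprocess.run(["iscsiadm", "-m", "session"], check=True)
```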

Particular attention in storage systems is paid to fault tolerance; to increase it, all hardware components in the storage system are duplicated. Nevertheless, single-controller options do exist; they cost about $1,000 less than dual-controller ones.

Switching equipment

The task of the switching layer is to connect virtual machines to the internal LAN and to connect the servers to the storage systems. The first part does not cause administrators much difficulty; the second is something new, so let's look at it. As mentioned above, storage systems offer three types of connection: iSCSI, SAS, and FC. The connection architecture must be planned for fault tolerance from the start: each server (host) must reach the virtual machine files over at least two independent paths; only then is a single point of failure excluded.

Power & Cooling

It is not enough to buy all the necessary equipment and connect it correctly; you also need to ensure optimal operating conditions. The server room should be kept at around 18 degrees Celsius, and condensation from the air conditioner must not drip onto the server cabinet. The system must be able to withstand voltage surges and short power outages. In the event of a power outage, virtual machines must shut down according to a previously worked-out plan to avoid data loss.

Implementation

You will agree that all of this work should be carried out by specialists. The more expensive the hardware and software you are going to buy, the more important it becomes to bring in qualified engineers. The following specialists take part in a virtualization project:

  • An architect - a person who helps draw up the technical specification for the virtualization project and then, based on it, selects the right equipment and draws up the work plan.
  • An installer - a specialist who takes the equipment out of its boxes and mounts it in the rack.
  • A virtualization engineer - cables the equipment in the rack, updates device firmware, configures the virtualization software, installs new virtual machines, and migrates workloads to the virtual environment.
  • A network engineer - involved when complex network configuration is required.
  • An electrician - his domain is electrical power and uninterruptible power supplies.

Comparison of virtualization platforms

Experience with server virtualization projects shows that most customers do not need top-end product capabilities: those are designed for infrastructures with hundreds of servers, and using them on a smaller scale is impractical. If you compare different virtualization platforms, there are practically no differences in the features actually used in everyday life, and the choice in favor of buying, say, VMware vSphere in the Enterprise Plus edition is often made under the pressure of marketing and fashion. We can now see many respected large companies doing the opposite and abandoning already purchased VMware licenses in favor of Hyper-V: on the one hand to save on annual technical support, on the other because Microsoft's virtualization system, which effectively comes as a gift with Windows Server, is not much inferior to its eminent competitor in functionality.
