
How to properly install and configure a file server on Windows Server. File server

A file server is a fairly powerful computer connected to a network, most often a local area network (LAN), whose main function is to act as centralized storage for data shared by several computers within the client-server model. File servers come in a range of hardware and software configurations and are sometimes used to back up critical data. A typical file server is configured only to send and receive files and does not run active processes on behalf of users. File servers can also be configured to distribute data over the Internet using FTP (File Transfer Protocol) or HTTP (Hypertext Transfer Protocol).
Any modern computer can be configured to act as a file server: a simple personal computer that shares files across a home network is acting as one. In large organizations, file servers are usually dedicated machines equipped with very large storage arrays. The most specialized form of file server in modern computing consists of machines designed to do nothing else. Such devices provide network-attached storage (NAS) using hardware typically configured to maximize storage and communication performance, with only the most basic I/O and processing capabilities.
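As a minimal illustration of the HTTP distribution mentioned above, Python's standard library can turn any directory into a read-only file server; the directory path and port here are placeholders for illustration, not a production setup:

```python
# Serve the contents of a directory over HTTP (read-only).
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="/srv/share")  # assumed path
server = HTTPServer(("0.0.0.0", 8000), handler)
# server.serve_forever()  # uncomment to run; clients browse http://<server-ip>:8000/
```

A real deployment would sit behind proper authentication; Samba or FTP, as the article notes, are the more common choices on a LAN.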

File servers can run on standard or specialized operating systems; all modern operating systems allow a computer to act as a file server. Linux enjoys significant popularity on file servers because of its reputation for stability and for economic reasons, and Windows and Unix are also often used as file server operating systems. NAS devices can run versions of standard operating systems, but they can also use specialized ones.

File servers are commonly found wherever shared access to data is beneficial. Large networks use file servers to facilitate data exchange between users. Networked systems built around centralized file servers are also easier to secure, since all files reside on central hardware and can be easily backed up.

When the demand for data is particularly high, any file server will experience intermittent performance degradation, and servers connected to the Internet are also vulnerable to attack. DoS and DDoS attacks have been used repeatedly against Internet-connected file servers: in each case the attackers flooded the file server with so many malicious data requests that legitimate requests were lost or suffered unacceptable delays.

A server is a computer that provides its resources (disks, printers, directories, files, etc.) to other network users.

The file server serves the workstations. Nowadays it is usually a fast PC based on a Pentium-class processor running at 500 MHz or higher, with 128 MB of RAM or more. More often than not, the file server performs only these functions, but sometimes in a small LAN the file server is also used as a workstation. The file server must run a network operating system as well as network software; server networking software includes network services and protocols and server administration tools.

File servers can control user access to various parts of the file system. This is usually accomplished by letting the user attach part of the server's file system (or a directory) to the workstation for later use as a local disk.

As the complexity of the functions assigned to servers increases and the number of clients they serve increases, there is an increasing specialization of servers. There are many types of servers.

  • Primary domain controller - a server that stores the user account database and maintains the security policy.
  • Backup (secondary) domain controller - a server that stores a backup copy of the user account database and security policy.
  • Universal server - performs a typical set of data-processing tasks on a local network.
  • Database server - handles queries sent to a database.
  • Proxy server - connects the local area network to the Internet.
  • Web server - serves web content.
  • File server - provides shared access to distributed resources, including files and software.
  • Application server - runs application processes: it interacts with clients, receiving tasks, and with databases, retrieving the data needed for processing.
  • Remote access server - lets branch employees, home-based sales agents, and people on business trips work with network data.
  • Telephone server - organizes telephony services on a local network: voice mail, automatic call distribution, call-cost accounting, and the interface to the external telephone network; besides voice, it can also carry images and fax messages.
  • Mail server - provides a service in response to requests sent by email.
  • Access server - lets users reach resources outside their own networks (for example, users on business trips who want to work with the office network): they connect to the access server over communication links, and it provides the required network resources.
  • Terminal server - consolidates a group of terminals, simplifying reconnection when they are moved.
  • Communication server - acts as a terminal server but also routes data.
  • Video server - optimized for image processing; delivers video materials, tutorials, and video games to users; has high performance and large memory.
  • Fax server - transmits and receives messages in fax communication standards.
  • Data protection server - equipped with a wide range of data security tools, first of all password identification.

When creating a file server, the question of choosing an operating system inevitably arises, and there is something to think about: spend money on Windows Server, or look at free Linux and BSD? In the second case you still have to choose a file system, of which Linux has quite a few. It is impossible to give an unambiguous answer to these questions without comprehensive testing, which we conducted in our test laboratory.

How we tested

It is impossible to cover everything, and so it is in our case: testing every possible file server variant is not feasible, so we limited ourselves to the most common ones. For Windows Server these are versions 2003 and 2008 R2, since the former is still widely used, while the latter is interesting for its technical innovations, in particular its support for the SMB2 protocol and the NTFS file system.

For the Linux platform, Ubuntu 10.04 LTS was chosen. A series of additional tests showed that file server performance is practically independent of the Linux distribution, though it does depend somewhat on the Samba version (3.4.7 in our case). From the whole variety of file systems we selected the most common and popular: ext3, ext4, reiserfs, XFS, and JFS. The FreeNAS distribution (built on FreeBSD 7.2) with UFS was also tested as a representative of the BSD family.
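For reference, sharing the test disk over Samba comes down to a few lines in smb.conf; the share name and mount path below are hypothetical placeholders, not the actual lab configuration:

```ini
[testshare]
   path = /mnt/testdisk      ; where the benchmark disk is mounted (assumed)
   read only = no            ; the benchmark needs write access
   guest ok = yes            ; open access keeps the benchmark setup simple
```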

Windows 7 32-bit was used as the client. XP fans should note that, like it or not, Windows 7 will be the default corporate OS in the coming years.

The test platform consisted of two PCs (Core 2 Duo E8400, P45 chipset, 2 GB of PC2-8500 memory) connected by a gigabit network. Windows 7 was installed on one; the other received the server OS plus an additional 750 GB Western Digital RE3 hard drive used solely for testing. This disk was formatted to the file system under test and configured as a share.

Testing was carried out with the Intel NASPT 1.0.7 package. For each configuration we made five test runs and used the average as the final result.

File operations

Working with files

In write operations, Windows Server leads confidently, outstripping Linux by more than a factor of two. In read operations the gap between Linux and Windows Server 2003 practically closes, but Windows Server 2008 R2 holds its high position, significantly ahead of both Linux and Windows Server 2003.

Among the Linux file systems, reiserfs unexpectedly leads when working with large files; ext4 showed rather poor write results and ext3 poor read results. JFS is the outsider of the test, with an unacceptably low score when writing large files. FreeNAS performed very modestly, at the level of the weakest Linux results.

Working with folders

When working with a large number of small files spread across folders of varying nesting depth, the results are more uniform. Windows systems again lead, albeit by a smaller margin; SMB2 makes itself felt here too, making Windows Server 2008 R2 the undisputed leader with a 40% advantage over Linux.

In Linux the results are fairly even: reiserfs and JFS lead slightly in writing, while in reading there is no clear leader and JFS is the obvious outsider. FreeNAS shows comparable results, slightly ahead in reading and slightly behind in writing.

Working with applications

So, the absolute leader today is Windows Server 2008 R2: the SMB2 protocol gives it a significant advantage, leaving competitors no chance. If your task is to build a high-performance file server for a modern infrastructure, there is effectively no other choice, and the new server OS from Microsoft will certainly justify the money spent on it.

Windows Server 2003 takes second place overall with 76.31%. Given its rather low results in some tasks and its small (10-15%) lead over Linux solutions, deploying new servers on this OS does not seem advisable. The same applies when legalizing software: in that case it makes sense to upgrade to Windows Server 2008 R2 or switch to a Linux solution.

Among the Linux solutions, JFS excepted, the results are fairly uniform, with XFS and reiserfs slightly (3-5%) ahead. JFS is the clear outsider and is strongly discouraged. FreeBSD-based solutions also cannot be recommended for serious use: they trail Linux by 10-15%, not to mention the much larger lag behind the Windows systems.

We hope that our testing will help you make the right decision in choosing an operating and file system for your file server.


One can speculate at length about what a small company's file server should be, but current economic conditions force businesses to look for fast, stable, and cheap solutions. Storing an organization's information in a shared folder has become quite common, but how can it be implemented at minimum cost?

Let's compare the solutions already in use by our regular computer and server maintenance customers. The selection criteria are simple:

  • deployment speed, i.e. the time from proposing the idea to getting a result;
  • the cost of software and hardware, assuming a small office of 7 people with about 100 GB of data each;
  • reliability - the likelihood of service failure and loss of information;
  • security, considered in the context of protecting data against loss or unwanted access;
  • and scalability, both in the number of users and in adding services such as backup.

A Windows share on one of the computers on the network

This is the most popular option among small companies, thanks to the fastest implementation and zero investment. Indeed, access is configured with minimal settings in just a few mouse clicks on any Windows PC in the network, and no special knowledge is needed. However, despite its simplicity and attractiveness, the disadvantages outweigh these significant advantages.

First, the number of people working with a shared folder simultaneously is limited to 5; this is an operating system limitation, and removing it requires Windows Server. Second, with even moderately active use of the shared folder, the responsiveness of the computer acting as the file server will start to annoy its user.

Third, there can be no talk of scalability or fault tolerance; besides, a work computer used for daily tasks is more susceptible to virus infection and to simply breaking down.

Therefore, this option can be viewed exclusively as a demo version of the capabilities of a shared network folder on Windows.

Network Sharing Using NAS Devices

With the development of microelectronics, network devices in the form of "boxes" with a popular set of functions have become widespread. One such solution is NAS (Network Attached Storage): one or several hard drives connected to the network through a network controller and performing the function of network storage, much like a file server.

Modern devices of this kind are reasonably priced (from $200) and offer good functionality:

  • USB ports allow you to connect flash drives or printers for collaboration;
  • various access protocols: FTP, Windows CIFS, Apple AFP;
  • separation of access rights, implemented in a cut-down form but still helpful in protecting data from theft;
  • and several drive bays, which even allow you to create RAID arrays (software ones, but RAID nonetheless).

Among the shortcomings: you cannot install new services such as backup software; only what the firmware provides is available. Support for drives larger than 3 TB and extras such as RAID will cost a pretty penny. The RAM, processor, and other components that determine data-transfer speed cannot be upgraded, so you have to choose the specifications carefully at purchase time, with future use in mind.

Otherwise, the device could well claim a place alongside the simple Windows shared folder thanks to its deployment speed and ease of use.

File server with UNIX operating system

For those who have outgrown the previous options, or are thinking about replacing them, we suggest a full-fledged file server. We will not dwell on the hardware: it can be changed at any time to meet the needs of each specific case. The main question is the operating system, which should eliminate all the disadvantages of the previous options.

Due to the prohibitive cost of licensed Windows Server for a small office, we will not offer it for consideration. Instead, you should pay attention to the UNIX operating system, which is famous for its stability when working on a network. In addition, a UNIX-like operating system has a number of advantages when used as a file server:

  • low hardware requirements, since no graphical interface is needed;
  • support for a large number of equipment, standards and protocols;
  • setting up new services at any time;
  • and most importantly, the price.

Some UNIX-like operating systems are distributed free of charge even for commercial use, which, combined with their stability, makes them almost ideal for a simple file server. Practice shows that when migrating from Windows (option 1, the shared network folder) to UNIX at the same total cost of about $200 (for setup), an organization gains, in addition to stability, a number of advantages:

  • no restrictions on the number of users;
  • use even on outdated equipment;
  • setting up all the necessary services, including ones not related to the file server itself, such as a DBMS;
  • using hardware RAID controllers for large amounts of data;
  • quick commissioning and the same quick upgrade if necessary.

Practice shows that setting up a full-fledged server, although it seems cumbersome, ultimately justifies itself, and due to the use of free server operating systems, it also allows you to get more functionality for less money.

Introduction

Perhaps you have already decided to build your own file server. But why bother with a dedicated file server at all when your PC's desktop hard drives already offer more than 2 TB of storage? Personally, I put my file server together to keep backups separate from my work PC.

Another good reason to set up a network server is to make it easier to access data from multiple computers. For example, if you have a collection of MP3s and want to listen to music from the collection on an HTPC in your living room, then it is best to store your music centrally and listen to it over the network.


Cooler Master 4-in-3 module in the outer bays of the case. It allowed us to use four more hard drives than the chassis would normally support.

Of course, you can store any collection of files on the server without having to copy the data across multiple systems. And if your file server uses a RAID 5 or RAID 6 disk array, it can survive the failure of one hard drive (or even two, with RAID 6) without losing data, unlike information stored on a single desktop hard drive.

Why not NAS?

There are many different types of file servers and storage. The easiest way to store data outside of your computer is to use an external hard drive, which is cheap, fast, and flexible. If your data fits on one hard drive, then this method will be the most inexpensive way to back up your files.

External hard drives are available with different interfaces. The most common is USB 2.0; it is not very fast (480 Mbps), but almost every computer has USB ports. Another popular interface is FireWire, which comes in two speeds, 400 and 800 Mbps; most external FireWire drives use the 400 Mbps interface, which in practice turns out to be faster than USB, though it loses to USB in ubiquity. The most modern (and fastest) external storage interface is eSATA. It operates at 3 Gb/s, matching internal SATA ports, and today provides more bandwidth than any mechanical hard drive can use.
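A rough back-of-the-envelope comparison of those interfaces (raw line rates converted to MB/s; real-world throughput is lower, especially for USB 2.0):

```python
# Approximate payload bandwidth of the external interfaces discussed above.
interfaces = {
    "USB 2.0": 480 / 8,            # 60 MB/s raw; ~35-40 MB/s in practice
    "FireWire 400": 400 / 8,       # 50 MB/s raw, but efficient in practice
    "FireWire 800": 800 / 8,       # 100 MB/s raw
    "eSATA (3 Gb/s)": 3000 / 10,   # 8b/10b line coding: ~300 MB/s payload
}
for name, mb_s in interfaces.items():
    print(f"{name:15s} ~{mb_s:.0f} MB/s")
```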


My old file server. An ordinary case with good airflow.

All of these interfaces, which connect the drive directly to a computer, are examples of direct-attached storage (DAS). DAS's strengths are simplicity, performance, and cost. On the other hand, if the host computer is turned off, you cannot access the files on such storage. Another limitation follows from the direct connection: typically only the host computer can access the stored files, and if you share the drive over the network, the host's performance will drop whenever clients access files on the DAS.

The limitations of direct-attached storage can be bypassed by not connecting the storage to a computer at all and using the network instead, which brings us to network-attached storage (NAS). As long as the NAS is powered on, you can access it from any computer on the network. Most likely you will connect the storage through a gigabit network port (Gigabit Ethernet), which is fast enough for most users; if one gigabit port is not enough, your task probably calls for a high-end device with many gigabit ports, ample storage, and teaming support.

DAS and NAS devices often contain multiple hard drives: some enclosures accept a couple of drives, some even more. An enclosure can support RAID 0 (striping, faster than a single drive), RAID 1 (mirroring, protecting against one drive failure), or RAID 5 (striping with parity, both faster and protected against one drive failure). Some high-end storage even supports RAID 6 arrays, which are similar to RAID 5 but can survive the failure of two drives.
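The capacity trade-off between those levels is easy to sketch (a simplified model for identical drives; real arrays reserve a little extra for metadata):

```python
# Usable capacity of n identical drives under the common RAID levels.
def usable_tb(level: int, n: int, drive_tb: float) -> float:
    if level == 0:
        return n * drive_tb        # striping: all capacity, no redundancy
    if level == 1:
        return drive_tb            # two-drive mirror: half the raw capacity
    if level == 5:
        return (n - 1) * drive_tb  # one drive's worth goes to parity
    if level == 6:
        return (n - 2) * drive_tb  # two drives' worth goes to parity
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_tb(5, 5, 1.5))  # five 1.5 TB drives in RAID 5 -> 6.0 TB usable
```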

However, the RAID enclosures mentioned have their limitations. They are not cheap: a Qnap TS-509 Pro costs about $800 without hard drives, although it does support RAID 5 and 6. With this system, as with most pre-configured storage, you have to use the pre-installed environment, which may not be as flexible as your preferred software. Finally, while some retail NAS devices support expansion, most models are limited to one eSATA port or a pair of USB ports.

Well, let's see if conventional computer hardware can achieve the same goals as NAS storage.

Of course, we mean another solution that is cheaper and more flexible: building your own file server. There is simply no reason you cannot assemble such a server yourself; building a file server is no different from building a regular computer, just as enthusiasts assemble their own systems rather than buying pre-built PCs in a store.


Installed Cooler Master Stacker 4-in-3 module. A great device if you don't change your hard drives often.

Of course, there are many decisions to make when building a file server. Among the most important: how much data you plan to store, how much redundancy you need, and how many hard drives you plan to use. If you plan to store large amounts of information, we recommend minimizing the price per gigabyte rather than buying the most capacious drives available; today the lowest cost per gigabyte is found on 1.5 TB drives. Personally, I like RAID 5 arrays because they can survive a single drive failure. If you plan to use more than eight or ten drives, it is better to build multiple RAID 5 arrays of four or five drives each, or use RAID 6 to protect against the failure of more than one drive.
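The price-per-gigabyte comparison is simple arithmetic; the prices below are made-up placeholders for illustration, not quotes from the article:

```python
# Cost per gigabyte for several drive sizes (hypothetical prices).
drives = {1000: 90, 1500: 110, 2000: 190}   # capacity in GB -> price in $
best = min(drives, key=lambda cap: drives[cap] / cap)
for cap, price in sorted(drives.items()):
    print(f"{cap} GB at ${price}: ${price / cap:.3f}/GB")
print(f"best value: the {best} GB drive")
```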

Case


Hard drives are installed in the case in the appropriate mounts. Notice the 120 mm fan blowing across the hard drives; it is equally important that the front panel of the case lets enough cool air through.

You will need a case large enough to hold all of your hard drives. However, if you have already bought a case that is too small, nothing stops you from later moving the system to a larger model.

The chassis must provide sufficient cooling for the hard drives; today you can buy a variety of case models that satisfy this condition. For the first file server I took a simple case: a 120 mm fan cooled the hard drives at the front, and a 120 mm exhaust fan sat at the back. To this I added a Cooler Master 4-in-3 module with its own 120 mm fan to cool the hard drives. The module is perfect for adding drives; the only drawback is that you have to remove the entire module to replace a single one.

For the second file server, I chose two Supermicro hot-swap SATA drive enclosures, each holding five hard drives. They cost much more than the Cooler Master module, but they also provide more: the Supermicro enclosures used a very loud 92 mm fan (which I slowed down with a fan controller), raised an alarm if the fan stopped or the temperature rose too high, and showed activity for each hard drive. Most conveniently, the enclosures allowed changing hard drives without opening the case, and, if the operating system supports hot-swapping, without even shutting down the computer.

Network interface


Asus CUR-DLS motherboard, two Pentium III 933s and 1.1 GB of ECC memory.

A Gigabit Ethernet network interface will not hurt the file server, as it speeds up network operations. Support for jumbo frames also helps, provided your Ethernet switch and network adapter can both handle them (most new devices can).

Initially, Ethernet had a maximum frame size of 1500 bytes, which was sufficient at 10 Mbps network speeds. When gigabit speeds arrived with the Gigabit Ethernet standard, the overhead of small packet sizes became significant, so the industry agreed on a de facto standard for larger packets, settling on 9000 bytes. You can transfer the same amount of data as with standard-size packets, but with a sixth as many packets, and proportionally less protocol overhead.

In practice, jumbo frames can save CPU resources and increase throughput when network performance is the limiting factor in file transfers. If your switch does not support jumbo frames, the packets simply will not get through, so the feature will have to be disabled.
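The six-fold reduction is easy to verify, ignoring the per-frame header bytes for simplicity:

```python
# Frames needed to move 1 GB of payload with standard vs. jumbo frames
# (header overhead ignored for simplicity).
payload_bytes = 10**9
standard_frames = payload_bytes // 1500   # classic Ethernet payload size
jumbo_frames = payload_bytes // 9000      # common jumbo frame size
print(standard_frames, jumbo_frames, standard_frames // jumbo_frames)
# → 666666 111111 6
```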

Gigabit switches are no longer expensive: an 8-port model can be bought for about $40. Most modern motherboards come with onboard Gigabit Ethernet, but if yours does not, it is better to buy a PCI-X or PCI Express (PCIe) network card rather than a 32-bit PCI card. We have had very good results with PCI-X NICs from Intel and Broadcom.

Power Supply


The interior of the case. Of course, it doesn't look so pretty with four PATA cables, seven hard drives, a DVD drive, and power cords.

The internal components must be sufficiently cooled, and the less heat is generated inside, the less you have to exhaust. It is therefore better to choose economical hard drives that consume less energy than standard models. The same goes for processors: an economical CPU reduces the system's power consumption and heat output. We recommend doing both.

In addition, we recommend choosing an efficient power supply that complies with the 80 PLUS standard; 80+ Bronze (82%) and 80+ Silver (85%) units are available at reasonable prices. It is also important to select the correct power supply capacity. Hard drives consume the most power while spinning up their platters, and a good hard disk controller uses staggered spin-up to minimize this effect; however, we have not yet come across chipset-integrated controllers that support it.

Both of my servers use power supplies with over 80% efficiency. The first server is based on two 933 MHz Pentium III processors, six 250 GB hard drives, and a separate operating system drive; peak power consumption during boot is 214 W, and consumption at 100% CPU load is 95 W. The second server uses two low-power 2.8 GHz Xeon processors and six 750 GB hard drives plus an operating system drive; peak consumption during boot is 315 W, idle is 164 W, and 100% CPU load is 260 W.

Unless you install more than six hard drives in the array or use a very hot CPU, you do not need a PSU rated above 400 W. The PSU must of course provide enough power on the various voltage rails the computer needs, but buying a 750 W or higher model would be a waste of money, and such a unit would also run less efficiently than a 400 W model at these loads.
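As a rough illustration of why efficiency matters (treating the first server's 214 W peak as the load, purely for the sake of the arithmetic):

```python
# Approximate wall-socket draw for a given load at different PSU efficiencies.
load_w = 214  # peak figure for the first server above
for eff in (0.70, 0.82, 0.85):  # generic unit, 80+ Bronze, 80+ Silver
    print(f"{eff:.0%} efficiency: ~{load_w / eff:.0f} W from the wall")
```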

Memory

Most enthusiasts don't spend much time thinking about memory reliability; they are more interested in clock speeds and latencies, which in this scenario matter less than reliability. When data arrives at a file server or is transferred to client computers, it is first stored in RAM, and data on disk is also cached in memory. The best off-the-shelf file servers use error-correcting code (ECC) memory, while the cheapest are built on conventional memory. In my opinion, it hardly makes sense to build a high-performance file server and still not use ECC memory.


Supermicro MV8 controller card inserted in a PCI-X slot.

Memory is rarely a source of persistent errors, but occasional errors do occur: IBM estimates that 1 GB of memory suffers a random error about once a week, caused by alpha particles in the memory packaging and by cosmic rays. ECC memory has an additional mechanism that detects and corrects such errors: standard ECC can detect all 2-bit errors in a 64-bit word and correct 1-bit errors. There are also higher-end ECC schemes, such as IBM's Chipkill memory.

Errors in memory areas that will be overwritten before being read, or in unused areas, cause no problems, but a memory error that affects data processing is another matter. Serious server motherboards, such as those from Tyan and Supermicro, can log memory errors; less expensive boards like the Asus CUR-DLS and Asus NCCH-DL in my servers support ECC memory but do not log the errors.

There are chipsets that do not support ECC memory at all, and motherboards based on those chipsets will not support ECC memory either. We recommend that you only use ECC motherboards and install ECC memory in them. If you are seriously concerned about memory errors, then it is best to choose a motherboard that supports the IBM Chipkill technology, which detects and fixes many multi-bit errors and can even continue to work if one memory chip fails.
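The detect-and-correct mechanism can be illustrated with the classic Hamming(7,4) code, a toy-sized version of the principle ECC modules apply to whole 64-bit words (this sketch is for illustration only, not how a DIMM is actually wired):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit can be located and corrected, the core idea behind ECC memory.

def encode(d):  # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):  # c: 7-bit codeword with at most one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    bad = s1 + 2 * s2 + 4 * s3       # 1-based position of the flipped bit, 0 if none
    if bad:
        c[bad - 1] ^= 1              # repair it
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                         # simulate a cosmic-ray bit flip
print(correct(word) == data)         # True: the error was found and fixed
```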

Buses


120mm exhaust fan behind a black grill.

Most older motherboards have 32-bit PCI slots, which hang off a common bus and share its bandwidth. Look at the chipset diagram of such a board and you will see the Ethernet controller and the IDE and SATA controllers all attached to the PCI bus. Add up the bandwidth of the disks and Ethernet and you run into the theoretical limit of 133 MB/s. The system will work, of course, but the file server will be slowed down.

There are many older server motherboards with PCI-X slots (not to be confused with PCI Express). These slots are more interesting because they use a bus separate from the 32-bit PCI bus, so hard disk controllers installed in PCI-X slots get unimpeded I/O bandwidth.

My first file server used an Asus CUR-DLS motherboard with 64-bit 33 MHz (266 MB/s) PCI-X slots. The second was built on an Asus NCCH-DL with 64-bit 66 MHz PCI-X slots supporting 533 MB/s of bandwidth, faster than my six SATA drives. The controller card can handle bus speeds up to 133 MHz, which can give up to 1066 MB/s on newer motherboards.

If your platform supports PCI Express, then even a single-lane slot will be sufficient for a home file server: 266 MB/s of bandwidth is quite good.

There is another potential bottleneck to consider: the connection between the southbridge and northbridge on your motherboard. Although the Asus NCCH-DL is equipped with 64-bit 66 MHz PCI-X slots, the link between the bridges runs at only 266 MB/s, which in theory should limit I/O bandwidth. Fortunately, problems rarely arise in practice, and newer chipsets usually use faster inter-bridge interfaces.

Controller


Supermicro hard drive enclosures. They need only two power connections each. I added a fan controller to each enclosure to slow the fan down.

Many modern motherboards come with six SATA 3 Gb/s ports; older models may have fewer and may use the slower 1.5 Gb/s SATA standard. So there is a good chance you will have to add a controller card to the system.

A variety of controller cards with different interfaces is available. For new systems, PCI Express cards are the most popular, and the interface provides plenty of bandwidth; the older PCI-X interface is adequate for older systems. On a budget, a 32-bit PCI card will work, although it will limit performance.

Cards come in two flavors: plain host bus adapters (HBAs) and RAID controllers. In Linux terminology, RAID cards fall into two groups: FakeRAID and true RAID. If the card performs the XOR parity calculations itself, it can be considered a true RAID controller; otherwise it offloads those calculations to the CPU through software drivers.
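To make the FakeRAID/true-RAID distinction concrete, here is a minimal sketch of the parity calculation in question. RAID 5 parity is nothing more than a byte-wise XOR across the data blocks of a stripe; a true RAID controller does this in dedicated hardware, while FakeRAID and software RAID do exactly the same computation on the host CPU (the tiny block contents below are made up for illustration):

```python
# RAID 5 parity: XOR all data blocks of a stripe, byte by byte.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks in one stripe (tiny 4-byte blocks for illustration).
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
parity = xor_blocks(d0, d1, d2)

# If one drive dies, XOR-ing the surviving blocks with the parity block
# reconstructs the missing data.
assert xor_blocks(d1, d2, parity) == d0
```

The same XOR also explains a rebuild: XOR-ing the surviving blocks with the parity block regenerates the contents of the failed drive.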

Our new server uses a Supermicro SAT2-MV8 card with eight SATA 3Gb/s ports. It is a PCI-X controller that can run at up to 133 MHz, and it is a very nice card with good software support. We chose it because our motherboard lacks SATA 3Gb/s ports but does have PCI-X slots.

We also purchased a simple Rosewill HBA card with four SATA 1.5Gb/s ports. It uses a 32-bit PCI interface, although it can run at 33 or 66 MHz, and it supports the JBOD configuration required for software RAID. Our Asus NCCH-DL board has an onboard Promise PDC20319 controller - another simple HBA - but it does not support JBOD, so it was useless here.


We used two PCI Promise PATA cards. They sit on a dedicated bus with no other devices attached. Click on the picture to enlarge.

It is also a good idea to check Linux support for your controller (if you plan to run Linux on the file server). Find out which storage chip the card uses and verify that it is supported under Linux; if the card manufacturer itself provides a Linux driver, so much the better.

Hard drives

We recommend SATA hard drives. They are available in large capacities and are quite affordable, and the SATA architecture is point-to-point, so the interface bandwidth does not have to be shared with other devices. I built my first file server on parallel ATA (PATA) drives, with two drives per channel. The problem is that if one drive fails, the controller will most likely mark both drives on that channel as failed and hang. A decent PATA RAID controller avoids this by supporting only one drive per channel. And of course, with PATA you have to put up with a mess of ribbon cables - one of the reasons the industry moved to the SATA interface.

CPU


Asus NCCH-DL. Two low-power Xeon processors (SL7HU) run at 2.6 GHz. Click on the picture to enlarge.

For a file server you hardly need a super-fast CPU, but it is not a bad idea to install more than one processor. One CPU will be busy computing the parity information that RAID 5 requires; if you choose RAID 6, the processor must perform a second, more complex parity calculation, which consumes even more CPU resources.
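A rough sketch of why RAID 6 is heavier on the CPU: in addition to the XOR parity P, it computes a second syndrome Q as a weighted sum over the Galois field GF(2^8), which is the scheme the Linux kernel's raid6 code uses. The helper names and the tiny two-byte blocks below are mine, for illustration only:

```python
# RAID 6 stores two parity blocks per stripe: P (plain XOR, as in RAID 5)
# and Q, a weighted sum over GF(2^8). Computing Q is the extra work that
# makes RAID 6 more CPU-intensive than RAID 5.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the 0x11D polynomial used by Linux raid6."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def pq_parity(blocks):
    """Compute the P and Q syndromes for one stripe, byte by byte."""
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for idx, block in enumerate(blocks):
        g = 1
        for _ in range(idx):          # g = 2^idx in GF(2^8)
            g = gf_mul(g, 2)
        for i, byte in enumerate(block):
            p[i] ^= byte              # RAID 5-style XOR parity
            q[i] ^= gf_mul(g, byte)   # GF(2^8)-weighted parity
    return bytes(p), bytes(q)

d = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]
p, q = pq_parity(d)
```

With P alone you can recover one lost block; P and Q together let you recover any two, at the price of a Galois-field multiply per byte instead of just an XOR.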

Our first file server used two 933 MHz Pentium III processors, and we saw 100% CPU utilization while rebuilding the RAID array, so we recommend something faster. The second file server used two 2.8 GHz Xeon processors with Hyper-Threading, and we never saw both CPUs pegged at 100%.

A 2 GHz dual-core AMD processor is likely to be sufficient. Of course, newer processors work more efficiently, so if you have a more modern platform on hand you can save energy and get better performance at the same time.

If I were buying a file server processor today, I would probably go with a slow, cheap AMD Phenom II. The processor itself is inexpensive, motherboards for it are very reasonably priced, it runs cool, and its motherboard chipsets usually support ECC and Chipkill memory.


My new file server, based on a Cooler Master Stacker. On the front you can see two Supermicro hot-swap SATA cages, each of which holds up to five hard drives. Click on the picture to enlarge.

UPS

Regardless of the hardware you choose, use a UPS to protect the system from power outages. You can buy a cheap UPS, but a good-quality uninterruptible power supply pays off in the long run. At a minimum, the UPS should let you shut down the file server cleanly before the battery runs out, which requires three to five minutes of runtime. Most UPSs also provide surge protection, which is a nice bonus.

Prices

Of course, the price range is wide: the cost of a file server depends on how much storage you need and on what spare hardware you already have on hand. Below is an estimate for a typical enthusiast file server.

  • Case: $150 for a model similar to my Cooler Master Stacker 810. When choosing, consider how many hard drives it can hold.
  • PSU: $50 for a 350-watt 80 PLUS certified model.
  • Hard drives: six 1TB drives at approximately $80 each.
  • Operating system hard drive: free if you have a 10GB drive on hand.
  • DVD drive: $20.
  • Motherboard: $100 for a used dual-Opteron board with 2-4GB of ECC memory, if you don't have hardware on hand. You could even start with a dual Pentium III board, which can be found for pennies. Expect to pay more than $150 for a new motherboard with a warranty.
  • Memory: $50.
  • CPU: $100.
  • SATA controller: $100.

The total comes to around $420 to $620, plus $540 for hard drives. For that money you get a file server with a 5TB RAID 5 array that can easily be expanded to eight or more drives. If you build the server yourself, you can probably reuse a variety of old components. The result is cheaper than most four- or five-bay NAS models, and it will run faster and offer far more flexibility.

Software


The interior of the new file server. Click on the picture to enlarge.

So, the file server is assembled. For testing, we recommend Knoppix Linux, a live system that boots from CD or DVD, which lets you check whether Linux recognizes all of your hardware. Windows almost always has well-tested drivers from the manufacturer; not all manufacturers offer Linux drivers, however, so you often have to rely on drivers written by Linux enthusiasts.

Of course, vendors with more Linux experience do provide drivers; for example, Intel supplies drivers for all of its 802.11 wireless controllers directly. We recommend buying hardware from manufacturers who support it under Linux.

Hardware that is a few years old is almost always well supported by the Linux community; any driver bugs have most likely been found and fixed.

It is also possible that the most recent Linux distributions support your hardware while a slightly older Knoppix release does not; this often happens with brand-new hardware. So burn the latest Knoppix image to disc, set the BIOS to boot from CD, and the computer will start Knoppix.

Another useful tool is the memtest86+ boot test. I usually run it for 24 hours to make sure the system is stable and the memory is error-free; there is no point installing an OS and software on an unstable system.

Operating system


Rear of the case, with a 120mm exhaust fan. Click on the picture to enlarge.

Several operating systems support software RAID. Microsoft Windows Server supports RAID 5, and you can even configure Windows XP to support RAID 5.

However, we do not recommend Windows, for several reasons. First, it is expensive: Windows Server 2008 prices start at around $999. Second, Windows' software RAID options are not as advanced as those of other operating systems. Finally, Windows is (in the author's opinion) a less secure and reliable OS, which matters for a file server.

There are several ways to assess reliability and security, and you can find quite a few reports, some of them funded by the vendors themselves. One good report dates from 2004; although old, its main points still hold. For the top 40 vulnerabilities, Microsoft's severity rating was 54.67, while Red Hat Linux's was 17.96. If you plan to use Windows for a file server, read the report first.

Alternatively, you can choose one of the BSD variants: OpenBSD, FreeBSD, and others. They are free and reasonably reliable and secure, but their biggest drawback is that their RAID support is not as modern as Linux's.

OpenSolaris is also free, reliable, and secure, but its hardware support is very limited. On the other hand, it gives you ZFS, by far the most sophisticated, reliable, and stable file system, including support for RAID 5- and RAID 6-style redundancy (RAID-Z and RAID-Z2). The OS is not as popular as Linux, but if you are familiar with it, it is a perfectly decent choice for a file server.

Finally, there is Linux, which is also free, reliable, and secure. It has excellent hardware support and offers RAID 5, RAID 6, RAID 10, and almost any other RAID level. Linux evolves quickly: new hardware gains support almost immediately, and new software features are added regularly. Most upgrades do not even require a reboot, so Linux systems can run continuously for months or even years.

There are many different Linux distributions. Some, like Red Hat, provide better long-term support than others. Others, like Fedora (also backed by Red Hat), aim to integrate new software quickly. Ubuntu's main advantage is user friendliness, which is why it is the most popular distribution.

We chose Mandriva Linux because new releases appear every two years, support lasts for several years, and the distribution includes everything we need. Any decent Linux distribution will do, though. A very good Mandriva tutorial is also available, and we recommend reading it before installing Linux for the first time.
