
Network File System (NFS): how it all started

The Network File System (NFS) protocol is an open standard that gives users remote access to file systems. Centralized file systems built on it make daily tasks such as backups and virus scanning easier, and consolidated disk partitions are easier to maintain than many small, scattered ones.

In addition to providing centralized storage, NFS has proven to be very useful for other applications, including diskless and thin clients, network clustering, and collaborative middleware.

A better understanding of both the protocol itself and the details of its implementation makes practical problems easier to deal with. This article is devoted to NFS and consists of two logical parts: first the protocol itself and the goals set during its development are described, and then its implementation in Linux and Solaris.

WHERE IT ALL STARTED...

The NFS protocol was developed by Sun Microsystems and published on the Internet in 1989 as RFC 1094, "NFS: Network File System Protocol Specification". It is interesting to note that Novell's strategy at the time was to improve its file services further. Until the open source movement gained momentum, Sun was not eager to reveal the secrets of its networking solutions, but even then the company understood the importance of interoperability with other systems.

RFC 1094 contained two original specifications. By the time of its publication Sun was already developing the next, third version of the specification, which is set out in RFC 1813, "NFS Version 3 Protocol Specification". Version 4 of the protocol is defined in RFC 3010, "NFS Version 4 Protocol".

NFS is widely used on all types of UNIX hosts, in Microsoft and Novell networks, and in IBM solutions such as AS400 and OS/390. Largely unknown outside the networking realm, NFS is arguably the most widely used platform-independent network file system.

UNIX WAS THE GENERATOR

Although NFS is a platform-independent system, UNIX is its ancestor. In other words, the hierarchical architecture and methods of accessing files, including the structure of the file system, the ways in which users and groups are identified, and how files are handled, are all very similar to the UNIX file system. For example, the NFS file system, being identical in structure to the UNIX file system, is mounted directly on it. When working with NFS on other operating systems, user identities and file permissions are mapped.

NFS

The NFS system is designed to be used in a client-server architecture. The client accesses the file system exported by the NFS server through a mount point on the client. Such access is usually transparent to the client application.

Unlike many client/server systems, NFS uses Remote Procedure Calls (RPC) to exchange information. Typically, the client establishes a connection to a known port and then, in accordance with the protocol, sends a request to perform a certain action. In the case of a remote procedure call, the client creates a procedure call and then sends it to the server for execution. A detailed description of NFS will be presented below.

As an example, suppose a client has mounted the usr2 directory on the local root filesystem:

/root/usr2/ -> remote:/root/usr/

If a client application needs a resource in this directory, it simply asks the operating system for it by file name, and the NFS client provides access. For example, consider the simple UNIX cd command, which "knows nothing" about network protocols. The command

cd /root/usr2/

will place the working directory on the remote file system without "even knowing" (the user doesn't need to know either) that the file system is remote.

Upon receiving the request, the NFS server will check whether the given user has the right to perform the requested action and, if the answer is positive, will perform it.

LET'S GET TO KNOW BETTER

From the client's point of view, mounting a remote file system locally with NFS consists of several steps. As already mentioned, the NFS client submits a remote procedure call to be executed on the server. Note that in UNIX the client is a single program (the mount command), while the server is actually implemented as several programs with at least the following minimal set: the port mapper service, the mount daemon, and the NFS server.

The client's mount command first contacts the server's port mapper service, which listens for requests on port 111. Most implementations of the client's mount command support several versions of NFS, which increases the likelihood that the client and server will find a common protocol version. Negotiation starts with the newest version, so the version chosen is automatically the newest one supported by both the client and the server.

(This material is focused on the third version of NFS, since it is the most common at the moment. The fourth version is not yet supported by most implementations.)

The server's port mapper service responds by reporting the port on which the mount daemon for the supported protocol is running. The client's mount program first establishes a connection to the server's mount daemon and then sends it the mount command via RPC. If this procedure completes successfully, the client application connects to the NFS server (port 2049) and, using one of the 20 remote procedures that are defined in RFC 1813 and listed in Table 1, accesses the remote file system.

The meaning of most commands is intuitive and presents no difficulty for system administrators. The following listing, produced with tcpdump, illustrates the lookup and read calls issued when the UNIX cat command reads a file named test-file:

10:30:16.012010 eth0 > 192.168.1.254.3476097947 > 192.168.1.252.2049: 144 lookup fh 32.0/224145 "test-file"
10:30:16.012729 eth0 < 192.168.1.252.2049 > 192.168.1.254.3476097947: reply ok 128 lookup fh 32.0/224307 (DF)
10:30:16.013124 eth0 > 192.168.1.254.3492875163 > 192.168.1.252.2049: 140 read fh 32.0/224307 4096 bytes @ 0
10:30:16.013650 eth0 < 192.168.1.252.2049 > 192.168.1.254.3492875163: reply ok 108 read (DF)

NFS has traditionally been implemented over UDP. However, some versions of NFS support TCP (TCP support is defined in the protocol specification). The main advantage of TCP is a more efficient retransmission mechanism in unreliable networks. (In the case of UDP, if an error occurs, then the complete RPC message, consisting of several UDP packets, is retransmitted. With TCP, only the corrupted fragment is retransmitted.)
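For example, on clients that allow the transport to be chosen at mount time, the protocol version and transport can usually be forced with mount options. A minimal sketch, assuming the server and export from the examples above (the mount point is an assumption, and option names vary slightly between implementations):

# mount -t nfs -o vers=3,tcp 192.168.1.252:/home /mnt/nfs
# mount -t nfs -o vers=3,udp 192.168.1.252:/home /mnt/nfs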

ACCESS TO NFS

NFS implementations typically support four methods of granting access rights: through user/file attributes, at the share level, at the host level, and as a combination of these methods.

The first method relies on the built-in UNIX system of file permissions for an individual user or group. To simplify maintenance, user and group identification should be consistent across all NFS clients and servers. Security must be carefully considered: NFS can inadvertently grant access to files that was not intended when they were created.

Shared resource access allows you to restrict rights to only certain actions, regardless of file ownership or UNIX privileges. For example, working with the NFS file system can be limited to read only. Most implementations of NFS allow you to further restrict access at the level of shared resources to specific users and/or groups. For example, the Human Resources group is allowed to view information and nothing more.

Host-level access allows a file system to be mounted only on specific hosts, which is generally a good idea, since NFS file systems could otherwise be mounted from any host that supports NFS.

Combined access simply combines the above types (for example, share-level access with access granted to a specific user) or allows users to access NFS only from a specific host.
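As an illustration, a combined policy on a Linux server might look like the following /etc/exports sketch (the directories, host name, and network here are assumptions, not part of any configuration described in this article):

/projects hrhost.example.com(ro)
/home/staff 192.168.1.0/255.255.255.0(rw,root_squash)

The first line limits one share to a single host with read-only access, while the second allows read/write access from one network but still maps root to an unprivileged user.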

NFS PENGUIN STYLE

The Linux-related material presented here is based on a Red Hat 6.2 system with kernel version 2.4.9, which ships with version 0.1.6 of the nfs-utils package. Newer versions exist: at the time of this writing, the most recent nfs-utils release is 0.3.1. It can be downloaded from the nfs-utils project site.

The nfs-utils package contains the following binaries: exportfs, lockd, mountd, nfsd, nfsstat, nhfsstone, rquotad, showmount, and statd.

Unfortunately, NFS support is sometimes confusing for Linux administrators, as the availability of a particular feature is directly dependent on the version numbers of both the kernel and the nfs-utils package. Fortunately, things are improving in this area now: the latest distribution kits include the latest versions of both. For previous releases, section 2.4 of the NFS-HOWTO provides a complete list of system functionality available for each combination of kernel and nfs-utils package. The developers maintain the backward compatibility of the package with earlier versions, paying a lot of attention to security and fixing software bugs.

NFS support must be enabled at kernel compile time. If required, support for NFS version 3 must also be added to the kernel.
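As a quick sanity check on a running system, you can usually confirm that NFS support is compiled in or loaded as a module (standard commands; module names may differ between kernels):

# grep nfs /proc/filesystems
# modprobe nfs (client support, if built as a module)
# modprobe nfsd (server support, if built as a module)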

For distributions that support linuxconf, it is easy to configure NFS services for both clients and servers. However, this quick way of setting up NFS with linuxconf does not show which files were created or edited, which is important for the administrator to know in order to understand the situation if the system fails. The architecture of NFS on Linux is loosely based on the BSD version, so the necessary support files and programs are easy to find for administrators coming from BSD, SunOS 2.5, or earlier NFS implementations.

The /etc/exports file, as in earlier versions of BSD, specifies the filesystems that NFS clients are allowed to access. In addition, it contains a number of additional features related to management and security issues, providing the administrator with a tool for fine-tuning. This is a text file consisting of entries, blank lines, or commented lines (comments start with #).

Let's say we want to give clients read-only access to the /home directory on the Lefty host. This would correspond to the following entry in /etc/exports:

/home (ro)

Here we need to tell the system which directories we are going to make available using the rpc.mountd mount daemon:

# exportfs -r
exportfs: No hostname specified in /home (ro), type *(ro) to avoid warning
#

When run, the exportfs command warns that /etc/exports does not restrict access to a particular host, and creates a corresponding entry in /var/lib/nfs/etab from /etc/exports; the etab file lists which resources are exported and can be viewed with cat:

# cat /var/lib/nfs/etab
/home (ro,async,wdelay,hide,secure,root_squash,no_all_squash,subtree_check,secure_locks,mapping=identity,anonuid=-2,anongid=-2)

Other options listed in etab include the defaults used by NFS. Details will be described below. To grant access to the /home directory, the appropriate NFS services must be started:

# portmap
# rpc.mountd
# rpc.nfsd
# rpc.statd
# rpc.rquotad

At any time after the mount daemon (rpc.mountd) has started, the individual file systems available for export can be examined by viewing the contents of the /proc/fs/nfs/exports file:

# cat /proc/fs/nfs/exports
# Version 1.0
# Path Client(Flags) # IPs
/home 192.168.1.252(ro,root_squash,async,wdelay) # 192.168.1.252
#

The same can be viewed using the showmount command with the -e option:

# showmount -e
Export list for lefty:
/home (everyone)
#

Going a bit ahead, the showmount command can also be used to determine all mounted filesystems, or in other words, to find out which hosts are NFS clients for the system running the showmount command. The showmount -a command will list all client mount points:

# showmount -a
All mount points on lefty:
192.168.1.252:/home
#

As noted above, most NFS implementations support various versions of this protocol. The Linux implementation allows you to limit the list of NFS versions that will run by specifying the -N option for the mount daemon. For example, to start NFS version 3, and only version 3, enter the following command:

# rpc.mountd -N 1 -N 2

Fastidious users may find it inconvenient that on Linux the NFS daemon (rpc.nfsd) still listens for version 1 and version 2 packets, even though the desired effect of not supporting the corresponding protocols is achieved. Let us hope that the developers of future versions will make the necessary corrections and achieve greater consistency between the package components with respect to the different protocol versions.

"SWIMMING WITH PENGUINS"

Access to the Lefty file system exported by the Linux NFS server configured above depends on the client operating system. The setup style for most operating systems of the UNIX family matches either the original SunOS and BSD systems or the newer Solaris. Since this article focuses on both Linux and Solaris, let us look at the configuration of a Solaris 2.6 client from the point of view of connecting to the Linux NFS server described above.

Thanks to features built into Solaris 2.6, it is easy to configure it to act as an NFS client. This requires only one command:

# mount -F nfs 192.168.1.254:/home /tmp/tmp2

Assuming the previous mount command succeeded, then the mount command with no options will output the following:

# mount
/ on /dev/dsk/c0t0d0s0 read/write/setuid/largefiles on Mon Sep 3 10:17:56 2001
...
/tmp/tmp2 on 192.168.1.254:/home read/write/remote on Mon Sep 3 23:19:25 2001

Let's analyze the tcpdump output on the Lefty host after the user has entered the ls /tmp/tmp2 command on the Sunny host:

# tcpdump host lefty and host sunny -s512
06:07:43.490583 sunny.2191983953 > lefty.mcwrite.n.nfs: 128 getattr fh Unknown/1 (DF)
06:07:43.490678 lefty.mcwrite.n.nfs > sunny.2191983953: reply ok 112 getattr DIR 40755 ids 0/0 sz 0x000001000 (DF)
06:07:43.491397 lefty.mcwrite.n.nfs > sunny.2191983954: reply ok 120 access c0001 (DF)
06:07:43.492296 sunny.2191983955 > lefty.mcwrite.n.nfs: 152 readdirplus fh 0.1/16777984 1048 bytes @ 0x000000000 (DF)
06:07:43.492417 lefty.mcwrite.n.nfs > sunny.2191983955: reply ok 1000 readdirplus (DF)

We see that the Sunny host requests a file handle (fh) for ls, to which the Lefty host responds OK and returns the directory attributes. Sunny then checks permission on the contents of the directory (access fh) and receives a permission reply from Lefty. The Sunny host then reads the full contents of the directory using the readdirplus procedure. These remote procedure calls are described in RFC 1813 and were mentioned at the beginning of this article.

Although the sequence of commands for accessing remote file systems is very simple, a number of circumstances can cause the system to mount incorrectly. Before mounting a directory, the mount point must already exist, otherwise it must be created using the mkdir command. Usually the only cause of errors on the client side is the lack of a local mount directory. Most of the problems associated with NFS, however, owe their origin to a mismatch between the client and the server, or incorrect server configuration.
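For example, following the earlier Solaris client example, the mount point would be created first (a minimal sketch):

# mkdir -p /tmp/tmp2
# mount -F nfs 192.168.1.254:/home /tmp/tmp2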

The easiest way to troubleshoot problems on a server is from the host where the server is running. However, when someone else administers the server instead of you, this is not always possible. A quick way to ensure that the appropriate server services are properly configured is to use the rpcinfo command with the -p option. From the Solaris Sunny host, you can determine which RPC processes are registered on the Linux host:

# rpcinfo -p 192.168.1.254
   program vers proto   port  service
    100000    2   tcp    111  rpcbind
    100000    2   udp    111  rpcbind
    100024    1   udp    692  status
    100024    1   tcp    694  status
    100005    3   udp   1024  mountd
    100005    3   tcp   1024  mountd
    100003    3   udp   2049  nfs
    100021    1   udp   1026  nlockmgr
    100021    3   udp   1026  nlockmgr
    100021    4   udp   1026  nlockmgr
#

Note that version information is also provided here, which is quite useful when the system requires support for various NFS protocols. If any service is not running on the server, then this situation should be corrected. If the mount fails, the following rpcinfo -p command will tell you that the mountd service on the server is down:

# rpcinfo -p 192.168.1.254
   program vers proto   port  service
    100000    2   tcp    111  rpcbind
    ...
    100021    4   udp   1026  nlockmgr
#

The rpcinfo command is very useful for finding out if a particular remote process is active. The -p option is the most important of the switches. See the man page for all the features of rpcinfo.
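Besides -p, rpcinfo can also send a null call to a specific RPC program to confirm that it is answering; for instance (the server address is the one used in the examples above):

# rpcinfo -u 192.168.1.254 nfs 3 (probe NFS version 3 over UDP)
# rpcinfo -t 192.168.1.254 mountd (probe mountd over TCP)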

Another useful tool is the nfsstat command. With its help, you can find out whether clients are actually accessing the exported file system, as well as display statistical information according to the protocol version.
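Typical invocations look like the following (the exact counters and output layout differ between systems):

# nfsstat -s (server-side RPC and NFS statistics)
# nfsstat -c (client-side statistics)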

Finally, another fairly useful tool for determining the causes of system failures is tcpdump:

# tcpdump host lefty and host sunny -s512
tcpdump: listening on eth0
06:29:51.773646 sunny.2191984020 > lefty.mcwrite.n.nfs: 140 lookup fh Unknown/1 "test.c" (DF)
06:29:51.773819 lefty.mcwrite.n.nfs > sunny.2191984020: reply ok 116 lookup ERROR: No such file or directory (DF)
06:29:51.774593 sunny.2191984021 > lefty.mcwrite.n.nfs: 128 getattr fh Unknown/1 (DF)
06:29:51.774670 lefty.mcwrite.n.nfs > sunny.2191984021: reply ok 112 getattr DIR 40755 ids 0/0 sz 0x000001000 (DF)
06:29:51.775289 sunny.2191984022 > lefty.mcwrite.n.nfs: 140 lookup fh Unknown/1 "test.c" (DF)
06:29:51.775357 lefty.mcwrite.n.nfs > sunny.2191984022: reply ok 116 lookup ERROR: No such file or directory (DF)
06:29:51.776029 sunny.2191984023 > lefty.mcwrite.n.nfs: 184 create fh Unknown/1 "test.c" (DF)
06:29:51.776169 lefty.mcwrite.n.nfs > sunny.2191984023: reply ok 120 create ERROR: Permission denied (DF)

The above listing, captured while the command touch test.c was being executed, shows the following sequence of actions: first touch looks up the file test.c, then it reads the attributes of the directory, looks up test.c once more, and after these unsuccessful attempts tries to create the file test.c, which also fails with "Permission denied".

If the file system is mounted, most of the common errors are related to ordinary UNIX permissions. Using consistent user IDs, for example via NIS or NIS+ on Sun systems, avoids having to set world permissions on all file systems. Some administrators practice "open" directories, where read permission is given to "the whole world". However, this should be avoided for security reasons; security concerns aside, it is still a bad practice, because users rarely create data with the intention of making it readable by everyone.

Access by a privileged user (root) to NFS-mounted file systems is treated differently. To avoid granting the privileged user unlimited access, requests from root are treated as if they came from the user "nobody". This powerful mechanism limits root's access to files that are globally readable or writable.
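On a Linux server this behavior is controlled by export options; a minimal sketch, assuming the /home export used earlier (the anonuid/anongid values mirror the defaults shown in etab above):

/home 192.168.1.0/255.255.255.0(rw,root_squash,anonuid=-2,anongid=-2)

Replacing root_squash with all_squash would map every remote user, not just root, to the anonymous account; no_root_squash disables the mapping entirely and should be used with care.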

NFS SERVER, SOLARIS VERSION

Configuring Solaris to act as an NFS server is as easy as it is with Linux, although the commands and file locations are somewhat different. When Solaris boots and run level 3 is reached, NFS services start automatically and all file systems are exported. To start the mount daemon manually, enter the command:

#/usr/lib/nfs/mountd

To start the NFS server, type:

# /usr/lib/nfs/nfsd

Starting with version 2.6, Solaris no longer uses an exports file to specify which file systems to export. File systems are now exported with the share command. Suppose we want to allow remote hosts to mount /export/home. To do this, enter the following command:

# share -F nfs /export/home
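To make the export persistent across reboots, the same share command is normally placed in /etc/dfs/dfstab, which Solaris reads at boot; the shareall command re-reads it immediately (a sketch, adjust the path to your environment):

# cat /etc/dfs/dfstab
share -F nfs /export/home
# shareall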

SECURITY MEASURES

SECURITY IN LINUX

Some NFS-related services on Linux have an additional mechanism for restricting access through control lists or tables. Internally, this mechanism is implemented by the tcp_wrapper library, which builds access control lists from two files: /etc/hosts.allow and /etc/hosts.deny. An exhaustive overview of the tcp_wrapper rules is beyond the scope of this article, but the basic principle is the following: matching is done first against /etc/hosts.allow and then against /etc/hosts.deny. If no matching rule is found, access to the requested system service is granted. To close this loophole and provide a very high level of security, you can add the following entry to the end of /etc/hosts.deny:

ALL: ALL

After that, /etc/hosts.allow can be used to permit particular services. For example, the /etc/hosts.allow file that I used while writing this article contained the following lines:

lockd: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
portmap: 192.168.1.0/255.255.255.0
rquotad: 192.168.1.0/255.255.255.0

This restricts access at the host level before any access is granted at the application level. On Linux, application-level access is controlled by the /etc/exports file. It consists of entries in the following format:

Export directory (space) host|network(options)

An "exported directory" is a directory that the nfsd daemon is allowed to process a request for. "Host|network" is the host or network that has access to the exported file system, and "options" determines what restrictions the nfsd daemon imposes on the use of this shared resource - read-only access or user id mapping .

The following example grants the entire mcwrite.net domain read-only access to /home/mcwrite.net:

/home/mcwrite.net *.mcwrite.net(ro)

More examples can be found in the exports man page.

NFS SECURITY IN SOLARIS

In Solaris, access to NFS is granted in a similar way to Linux, but in this case restrictions are set using options of the share command with the -o switch. The following example shows how to enable read-only mounting of /export/mcwrite.net on any host in the mcwrite.net domain:

# share -F nfs -o ro=.mcwrite.net /export/mcwrite.net

The share_nfs man page details how to grant access using control lists on Solaris.

INTERNET RESOURCES

NFS and RPC have not been without "holes". Generally speaking, NFS should not be used over the Internet, and you should not punch holes in firewalls to allow NFS traffic of any kind. All RPC and NFS patches should be monitored carefully, and numerous sources of security information can help. The two most popular sources are Bugtraq and CERT:

The first can be checked regularly for relevant information, or you can subscribe to its periodic newsletter. The second provides information that is perhaps not as prompt as other sources, but in a fairly complete form and without the hint of sensationalism typical of some security sites.

So what next? How do you watch the movies and listen to the music files you have downloaded? Do you really have to burn them to discs and carry them to a computer with a GUI, or copy them over slow SFTP? No! NFS comes to the rescue! No, this is not a series of racing games, but the Network File System.
The Network File System (NFS) is a network file system that allows users to access files and directories located on remote computers as if those files and directories were local. The main advantage of such a system is that individual workstations can use less of their own disk space, since shared data is stored on a separate machine and is available to other machines on the network. NFS is a client/server application. That is, an NFS client must be installed on the user's system, and an NFS server must be installed on computers that provide their disk space.

Installing and configuring an NFS server (192.168.1.2)

1. Installation. Connect to the server machine over SSH, or simply enter the following in its console:

sudo apt-get install nfs-kernel-server nfs-common portmap

This will install the NFS server as well as the required portmap package.

2. Configuration. To define the list of directories we want to share and who we want to share them with, edit the file /etc/exports:

sudo nano /etc/exports
/data 192.168.1.1/24(rw,no_root_squash,async)

In the above example, we opened a directory on the server /data and its subdirectories to be shared with all computers with IP: 192.168.1.1 - 192.168.1.255 with read and write permissions.

Another example:

/home/serg 192.168.1.34(ro,async)

This example makes the user serg's home directory read-only available to the computer with IP 192.168.1.34. All other computers on the network will not have access to this directory.

Available options:

  • ro - read-only access. May be omitted, since it is the default;
  • rw - gives clients write permission;
  • no_root_squash - by default, the root user on the client machine does not have root access to directories opened on the server; this option removes that limitation. For security reasons it is better not to use it;
  • noaccess - denies access to the specified directory. This is useful if you have previously granted all network users access to a directory and now want to restrict access to a subdirectory to only some users.

Now you need to restart nfs-kernel-server:

sudo /etc/init.d/nfs-kernel-server restart

If after that you want to change something in the file /etc/exports , then in order for the changes to take effect, just run the following command:

sudo exportfs -a

That's it: the NFS server is installed and configured. Now we can move on to the NFS client.
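As a quick check that the export really is active, the following can be run on the server itself (standard nfs-utils tools):

sudo exportfs -v
showmount -e localhost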

Installing and configuring the NFS client

1. Installation. We execute the following in the terminal of the computer that will connect:

sudo apt-get install portmap nfs-common

2. Configuration. First, let's create a directory where the remote folder will be mounted:

cd ~
mkdir data

You can mount in two ways - each time manually or by writing mount options to a file /etc/fstab.

Method 1. Mounting manually
Create a text file on the desktop or in some other folder:

nano ~/Desktop/nfs-server-connect

We write to it:

#!/bin/bash
sudo mount -t nfs -o ro,soft,intr 192.168.1.2:/data ~/data

Making it executable:

chmod +x ~/Desktop/nfs-server-connect

Now when I need to connect to the server, I run this script in the terminal so that I can enter the password for sudo.

Method 2: Adding to /etc/fstab
Open /etc/fstab:

sudo nano /etc/fstab

And add a line at the end of the file:

192.168.1.2:/data ~/data nfs rw,hard,intr 0 0

Attention! Replace 192.168.1.2:/data with your server's IP or name and the path to the shared directory, and replace ~/data with the absolute path to your mount point (the ~ shorthand is not expanded in /etc/fstab). Mount options can be changed.

The hard option binds the directory on the client rigidly to the server: if the server goes down, processes accessing the mount may hang and your computer may appear to freeze. The soft option, as its name implies, is not so categorical.

After saving the file, you can mount the remote folder.

NFS (Network File System) is mainly designed for sharing files and folders between Linux/Unix systems and was developed by Sun Microsystems in 1980. It allows you to mount file systems over the network, so that remote hosts can interact with them as if they were mounted locally on the same system. Via NFS we can set up file sharing between a Unix system and a Linux system, and vice versa.

Benefits of NFS

  1. NFS provides local access to remote files.
  2. It uses the standard client/server architecture for file sharing between all *NIX-based machines.
  3. With NFS it is not necessary for both machines to run the same operating system.
  4. With NFS we can set up a centralized storage solution.
  5. Users get their data regardless of their physical location.
  6. New files become available to clients automatically, with no manual update.
  7. Newer versions of NFS support ACLs and a pseudo file system under root.
  8. It can be protected with firewalls and Kerberos.

NFS Services

The services are started in System V style. The NFS server package includes three facilities, provided by the portmap and nfs-utils packages.

  1. portmap: maps calls made from other machines to the correct RPC service (not required with NFSv4).
  2. nfs: translates remote file-sharing requests into requests on the local file system.
  3. rpc.mountd: this service is responsible for mounting and unmounting file systems.

Important configuration files for NFS

  1. /etc/exports: the main NFS configuration file; all exported files and directories are defined here on the NFS server.
  2. /etc/fstab: to mount an NFS directory on your system without rebooting, we make an entry in /etc/fstab.
  3. /etc/sysconfig/nfs: NFS configuration file that controls which ports RPC and the other services listen on (see the example below).
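For example, on Red Hat-style systems the RPC helper daemons can be pinned to fixed ports in /etc/sysconfig/nfs, which makes firewalling much easier (a sketch; the variable names are the ones typically found in this file, and the port numbers are arbitrary assumptions):

# vi /etc/sysconfig/nfs
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769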

Setting up and mounting NFS on a Linux server

To set up an NFS mount, we will need at least two Linux/Unix machines. In this tutorial we will use two servers.

  1. NFS server: nfsserver.example.ru with IP - 192.168.0.55
  2. NFS client: nfsclient.example.ru with IP - 192.168.0.60

Installing NFS Server and NFS Client

We need to install the NFS packages on our NFS server as well as on the NFS client machine. We can install them with "yum" (Red Hat Linux) or with "apt-get" (Debian and Ubuntu).

# yum install nfs-utils nfs-utils-lib
# yum install portmap (not required with NFSv4)

# apt-get install nfs-utils nfs-utils-lib

Now run services on both machines.

# /etc/init.d/portmap start
# /etc/init.d/nfs start
# chkconfig --level 35 portmap on
# chkconfig --level 35 nfs on

After installing the packages and starting the services on both machines, we need to set up both machines to share files.

Setting up an NFS server

Let's set up the NFS server first.

Setting up an export directory

# mkdir /nfsshare

Now we need to make an entry in "/etc/exports" and restart the services to make our directory shared across the network.

# vi /etc/exports
/nfsshare 192.168.0.60(rw,sync,no_root_squash)

In the example above, a directory named "nfsshare" under / is shared with the client IP "192.168.0.60" with read and write (rw) privileges; you can also use the client's hostname instead of its IP address.

NFS Options

Some other options that can be used in "/etc/exports" for file sharing are as follows.

  1. ro: gives read-only access to the shared files; the client can only read them.
  2. rw: allows the client both read and write access within the shared directory.
  3. sync: the server acknowledges requests to the shared directory only after the changes have been committed to disk.
  4. no_subtree_check: disables subtree checking. When the shared directory is a subdirectory of a larger file system, NFS checks every parent directory above it to verify its permissions and details. Disabling subtree checking can improve the reliability of NFS but reduces security.
  5. no_root_squash: allows the root user on the client to connect to the shared folder with root privileges.

For more options for "/etc/exports", it is recommended to read the exports man page.

Configuring an NFS Client

After setting up the NFS server, we need to mount the shared directory or partition on the client server.

Mounting shared directories on an NFS client

Now, on the NFS client, we need to mount that directory to access it locally. To do this, we first need to find out which shares are available on the remote (NFS) server.

# showmount -e 192.168.0.55
Export list for 192.168.0.55:
/nfsshare 192.168.0.60

Mounting an accessible directory on NFS

To mount the shared NFS directory, we can use the following mount command.

# mount -t nfs 192.168.0.55:/nfsshare /mnt/nfsshare

The above command mounts the shared directory at "/mnt/nfsshare" on the client server. You can verify it with the following command.

# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.0.55:/nfsshare on /mnt type nfs (rw,addr=192.168.0.55)

The above mount command attaches the NFS shared directory to the NFS client only temporarily. To mount the NFS directory permanently, so that it survives reboots, we need to make an entry in "/etc/fstab".

# vi /etc/fstab

Add the following new line as shown below.

192.168.0.55:/nfsshare /mnt nfs defaults 0 0
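After saving the file, you can check the new entry without rebooting (standard commands):

# mount -a
# mount | grep nfs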

Testing the Behavior of an NFS Installation

We can test our NFS server setup by creating a test file on the server side and checking for its existence on the NFS client side, or vice versa.

Server side nfsserver

We have created a new text file named "nfstest.txt" in the shared directory.

# cat > /nfsshare/nfstest.txt
This is a test file to test the working of NFS server setup.

Client side nfsclient

Change to the shared directory on the client server and you will find the shared file there, without any manual refresh or service reload.

# ll /mnt/nfsshare
total 4
-rw-r--r-- 1 root root 61 Sep 21 21:44 nfstest.txt
# cat /mnt/nfsshare/nfstest.txt
This is a test file to test the working of NFS server setup.

Removing an NFS mount

If you want to unmount this shared directory after you are done with file sharing, you can simply unmount that particular directory with the "umount" command. See the example below.

# umount /mnt/nfsshare

You can see that the mount has been removed on the file system.

# df -h -F nfs

You will see that these shared directories are no longer available.

Important commands for NFS

Some more important commands for NFS .

  1. showmount -e: shows the shares available on the local machine;
  2. showmount -e <server-ip or hostname>: lists the shares available on the remote server;
  3. showmount -d: lists all subdirectories;
  4. exportfs -v: displays the list of exported files and options on the server;
  5. exportfs -a: exports all shares listed in /etc/exports, or a given name;
  6. exportfs -u: unexports all shares listed in /etc/exports, or a given name;
  7. exportfs -r: refreshes the server's export list after /etc/exports has been changed.

That is all about NFS mounting for now.

The essence of the problem: at one time, Samsung began producing TVs that support DLNA, a technology developed by leading makers of household appliances around the "digital home" principle. This technology makes it possible to integrate a TV into a local home network and exchange media content between the TV and a computer, in particular to watch movies stored on the computer over the local network or via WiFi on the TV. However, the multimedia solution Samsung offers to implement this technology leaves much to be desired, to put it mildly. For instance, films viewed over the network in the TV's built-in media player cannot be rewound in most cases. In addition, when watching movies over the network, unlike watching them from a flash drive or a portable hard drive connected to the TV via USB, the continuous playback function (the blue button on the remote control) is not supported. Finally, the need to run Samsung PC Share Manager on the computer every time, and to make corrections after every deletion or addition of video files on the disk, is a little annoying.

Enabling the NFS (Network File System) protocol will help us not only to eliminate the existing problems with watching movies on the TV over the local network, but also to increase the data transfer rate (which can be an important factor when watching large HD movies). Once we have installed and configured the NFS server, the TV will treat our computer as if a portable hard drive were connected to it through the USB port (the only difference being the data transfer speed, which is determined by the bandwidth of your local network or WiFi connection).

NFS is a network protocol organized on a server-client basis. We will have a computer as a server, and a TV as a client. We have already covered the inclusion of NFS support on the TV in the previous section during the setup and installation of the SamyGO Auto application on the TV. If you remember, in the settings of the SamyGO Auto configurator, we checked the box next to the NFS section and also registered the IP address of the NFS server (192.168.xxx.xxx), that is, the address of our computer:
In this section, we will look at installing and configuring an NFS server on our computer. There are many different programs on the Internet for installing and configuring an NFS server. We will use the haneWIN NFS Server application (it is shareware, and after a certain period it requires registration of a serial number, but, as you understand, there are always craftsmen on the Internet who can solve this problem). So let's get started:

Note: sometimes the Windows firewall, or a firewall built into your antivirus, can block the operation of the NFS server. To prevent this, allow network access in the Windows firewall (or whichever firewall you use) for two applications: nfsd.exe and pmapd.exe (they are located in the server installation folder, C:\Program Files\nfsd).


Finally, let's turn on the TV and make sure our NFS server is running. In the previous section, when we installed the SamyGO Auto program on the TV, we specified the parameter for autorun in it. Therefore, when you turn on the TV, it should automatically detect our NFS (this does not happen immediately, but approximately 20 seconds after turning on the TV). So, turn on the TV, then go to the media player and see a new device there - NFS Server.

If you pay attention, there is a USB connection icon next to NFS Server. This is what we talked about earlier: your TV now treats the computer as a hard drive or USB flash drive. You can go to the Movie section and enjoy watching movies over the network. You no longer need to run Samsung PC Share Manager on the computer. Just add a movie to the movie folder on the computer and it will automatically "appear" in the TV's media player.

In the next section, we will talk about how to record TV programs to a USB flash drive or, since we now have NFS, then to the movie folder on the computer.


Network File System (NFS) is a protocol for network access to file systems that allows remote file systems to be mounted and used as if they were local.
It was originally developed by Sun Microsystems in 1984 and is based on Sun RPC (Remote Procedure Call). NFS is independent of the server and client file system types, and there are many implementations of NFS servers and clients for various operating systems. The current version, NFS v4, supports various authentication mechanisms (in particular Kerberos and LIPKEY via the RPCSEC GSS protocol) and access control lists (of both POSIX and Windows types).
NFS provides clients with transparent access to the files and file system of the server. Unlike FTP, the NFS protocol accesses only the parts of a file that a process actually touches, and its main advantage is that it makes this access transparent. Because of this, any client application that can work with a local file can work with an NFS file just as well, without any changes to the program itself.
NFS clients access files on an NFS server by sending RPC requests to the server. This can be implemented using normal user processes - namely, the NFS client can be a user process that makes specific RPC calls to the server, which can also be a user process.

Versions
NFSv1 was for internal use only for experimental purposes. Implementation details are defined in RFC 1094.
NFSv2 (RFC 1094, March 1989) originally ran entirely over UDP.
NFSv3 (RFC 1813, June 1995). File handles in version 2 are a fixed-size array of 32 bytes; in version 3 they are a variable-size array of up to 64 bytes. A variable-length array in XDR is encoded as a 4-byte count followed by the actual bytes. This reduces the size of the file handle in implementations such as UNIX, where only about 12 bytes are required, while allowing non-UNIX implementations to exchange additional information.
Version 2 limits the number of bytes per READ or WRITE RPC procedure to 8192 bytes. This limit does not apply in version 3, which in turn means that over UDP the only limit is the size of an IP datagram (65535 bytes). This allows large read and write packets to be used on fast networks.
File sizes and start byte offsets for READ and WRITE routines now use 64-bit addressing instead of 32-bit, which allows you to work with larger files.
The file's attributes are returned in every call that can affect the attributes.
Writes (WRITE) can be asynchronous, whereas in version 2 they had to be synchronous.
One procedure has been removed (STATFS) and seven have been added: ACCESS (check file access permissions), MKNOD (create a special UNIX file), READDIRPLUS (return the names of files in a directory along with their attributes), FSINFO (return static information about a file system), FSSTAT (return dynamic file system information), PATHCONF (return POSIX.1 information about a file), and COMMIT (commit previously made asynchronous writes to stable storage).
At the time of the introduction of version 3, developers began to use TCP more as a transport protocol. While some developers were already using TCP for NFSv2, Sun Microsystems added TCP support in NFS version 3. This made using NFS over the Internet more feasible.
NFSv4 (RFC 3010, December 2000, RFC 3530, revised April 2003), influenced by AFS and CIFS, included performance improvements, high security, and emerged as a complete protocol. Version 4 was the first version developed in conjunction with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over development of the NFS protocols. NFS version v4.1 was approved by the IESG in January 2010 as RFC 5661. An important new feature in version 4.1 is the specification of pNFS - Parallel NFS, a mechanism for parallel NFS client access to data from multiple distributed NFS servers. The presence of such a mechanism in the network file system standard will help build distributed "cloud" storage and information systems.

NFS structure
The NFS structure includes three components at different levels:
The application layer (NFS proper) consists of the remote procedure calls (RPCs) that perform the required operations on files and directories on the server side.
The functions of the presentation layer are performed by the XDR (eXternal Data Representation) protocol, which is a cross-platform data abstraction standard. The XDR protocol describes a unified, canonical form of data representation that does not depend on the computer system architecture. When transmitting packets, the RPC client converts the local data into canonical form, and the server does the opposite.
The RPC (Remote Procedure Call) service, which lets the client request remote procedures and have them executed on the server, provides the session-layer functions.

Connecting network resources
The procedure for connecting a network resource via NFS is called "exporting". A client can ask the server for the list of resources it exports; the NFS server itself does not broadcast the list of its exported resources.
Depending on the options specified, an exported resource can be mounted read-only, a list of hosts allowed to mount it can be given, the use of secure RPC (secureRPC) can be required, and so on. One of the options determines the mounting method: "hard" or "soft".
With a "hard" mount, the client will try to mount the file system no matter what. If the server is down, this will cause the entire NFS service to hang, as it were: processes accessing the file system will go into a state of waiting for RPC requests to finish executing. From the point of view of user processes, the file system will look like a very slow local disk. When the server is returned to a working state, the NFS service will continue to function.
With a soft mount, the NFS client will make several attempts to connect to the server. If the server does not respond, the system issues an error message and stops attempting to mount. From the point of view of the logic of file operations, when a server fails, a soft mount emulates a local disk failure.
The choice of mode depends on the situation. If the data on the client and server must be synchronized during a temporary service failure, then a "hard" mount is preferable. This mode is also indispensable in cases where the mounted file systems contain programs and files that are vital for the client to work, in particular for diskless machines. In other cases, especially when it comes to read-only systems, the soft mount mode seems to be more convenient.
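In practice the mode is selected with mount options on the client; a minimal sketch (the server name, export, and timeout values are assumptions):

# mount -t nfs -o hard,intr server:/export /mnt/data (hard mount, interruptible)
# mount -t nfs -o soft,retrans=3,timeo=30 server:/export /mnt/data (soft mount, gives up after a few retries)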

Sharing in a mixed network
NFS is ideal for UNIX-based networks, as it comes with most versions of this operating system. Moreover, NFS support is implemented at the UNIX kernel level. The use of NFS on Windows client computers creates certain problems associated with the need to install specialized and rather expensive client software. In such networks, the use of resource sharing tools based on the SMB/CIFS protocol, in particular Samba software, seems to be more preferable.

Standards
RFC 1094 NFS: Network File System Protocol Specification (March 1989)
RFC 1813 NFS Version 3 Protocol Specification (June 1995)
RFC 2224 NFS URL Scheme
RFC 2339 An Agreement Between the Internet Society, the IETF, and Sun Microsystems, Inc. in the matter of NFS V.4 Protocols
RFC 2623 NFS Version 2 and Version 3 Security Issues and the NFS Protocol’s Use of RPCSEC_GSS and Kerberos V5
RFC 2624 NFS Version 4 Design Considerations
RFC 3010 NFS version 4 Protocol
RFC 3530 Network File System (NFS) version 4 Protocol
RFC 5661 Network File System (NFS) Version 4 Minor Version 1 Protocol

