Data operations
Unlike operations with attributes, operations with data have a fixed size of 8 KB
(the size of an NFS data block; the recently announced NFS+ protocol allows data
blocks of up to 4 GB, but this does not change the nature of data operations).
Furthermore, while each file has exactly one set of attributes, the number of
8 KB data blocks in a file can be large (potentially reaching several million).
On most types of NFS servers, data blocks are not normally cached, so servicing
data requests entails significant resource consumption. In particular, handling
data requires much greater network bandwidth: each data transfer takes six
maximum-size Ethernet packets (two for FDDI). The possibility of overloading the
network is therefore a far more important factor when considering data operations.
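As a back-of-the-envelope check of those packet counts, here is a minimal
sketch, assuming an 8 KB block, standard MTUs of 1500 bytes (Ethernet) and
4352 bytes (FDDI), and IP fragmentation with a 20-byte header; RPC and UDP
overhead is ignored:

    import math

    def packets_per_block(mtu, block=8192, ip_header=20):
        # IP fragment payloads must be multiples of 8 bytes
        payload = (mtu - ip_header) // 8 * 8
        return math.ceil(block / payload)

    print(packets_per_block(1500))  # Ethernet: 6 packets
    print(packets_per_block(4352))  # FDDI: 2 packets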
Somewhat surprisingly, attribute operations, not data operations, dominate in
the majority of existing systems. If a client wants to use a file stored on a
remote NFS file server, it issues a sequence of lookup operations to locate the
file in the remote directory hierarchy, followed by a getattr operation that
retrieves the access mask and other attributes of the file; finally, a read
operation fetches the first 8 KB of data. For a typical file located four or
five levels deep in the remote hierarchy, simply opening the file requires five
or six NFS operations. Since most files are fairly short (on average less than
16 KB), reading an entire file takes fewer operations than finding and opening
it. Recent studies at Sun have found that since the days of BSD 4.1 the average
file size has grown from about 1 KB to a little over 8 KB. To choose the correct
NFS server configuration, the expected workload should first be assigned to one
of two classes, according to which kind of operation dominates the intended NFS
service: attribute-intensive or data-intensive.
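The arithmetic behind the five-or-six-operations claim can be sketched as
follows; the helper below is hypothetical, not a real NFS client API, and
simply counts the calls described above:

    def nfs_ops_to_open_and_read(path_depth, file_size, block=8192):
        # One lookup per path component, plus one getattr for permissions
        attr_ops = path_depth + 1
        # One read per 8 KB block (at least one)
        data_ops = max(1, -(-file_size // block))
        return attr_ops, data_ops

    # A small file five levels deep: 6 attribute operations, 1 data operation
    print(nfs_ops_to_open_and_read(path_depth=5, file_size=8192))

At this depth, any file shorter than about 40 KB takes more attribute
operations than data operations, which is why attribute traffic dominates for
typical (under 16 KB) files.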
Comparison of applications
Different applications generate different mixes of NFS operations. In general,
applications that use many small files can be characterized as
attribute-intensive. Perhaps the best example is classic software development.
Large software systems usually consist of thousands of small modules, and each
module contains an include file, a source code file, object files, and some
type of archive file (such as SCCS or RCS). Most of the files are small, often
ranging from 4 to 100 KB. Since the requester normally blocks while an NFS
request is being serviced, the performance of such applications is bounded by
the speed with which the server handles attribute requests. Data operations
account for less than 40% of the total. Most servers with very
attribute-intensive workloads need only moderate network bandwidth: Ethernet
bandwidth (10 Mbit/s) is usually adequate. Most home-directory servers fall
into the attribute-intensive category, since they mostly store small files. In
addition, because the files themselves are small, clients can hold their data
in the file system cache, eliminating the need to re-read them from the server.
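A quick sanity check of the 40% figure, with assumed but representative numbers
from the discussion above:

    # A 10 KB file located five directories deep (figures assumed):
    attr_ops = 5 + 1                        # five lookups plus one getattr
    data_ops = 2                            # two 8 KB reads cover 10 KB
    share = data_ops / (attr_ops + data_ops)
    print(f"data operations: {share:.0%}")  # 25%, well under 40%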
Applications that work with very large files fall into the data-intensive
category. This category includes, for example, geophysical applications, image
processing, and electronic CAD. For these applications, the usual NFS usage
scenario on a workstation or computer is: read a very large file, process it
for a long time (minutes or even hours), and finally write back a smaller
result file. Files in these applications often reach 1 GB in size, and files
larger than 200 MB are the rule rather than the exception. When large files are
being handled, data-related requests dominate. For data-intensive applications,
sufficient network bandwidth is always critical. For example, the nominal
transfer rate of Ethernet is 10 Mbit/s.
This speed seems quite high, but 10 Mbit/s is only 1.25 MB/s, and even that
rate cannot be achieved in practice because of protocol exchange overhead and
the limited speed of each of the interacting systems. As a result, the real
throughput limit of Ethernet is approximately 1 MB/s. Even that rate is
achievable only under nearly ideal conditions, when the entire Ethernet
bandwidth is devoted to transferring data between just two systems.
Unfortunately, such an arrangement is rarely practical, even though it is in
fact often the case that only a small number of clients request data at the
same time. When there are many active clients, network saturation is
approximately 35%, which corresponds to an aggregate throughput of about
440 KB/s.
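The arithmetic here is simple enough to spell out; the 35% saturation figure
comes from the text above, the rest is unit conversion:

    nominal = 10_000_000 / 8          # 10 Mbit/s = 1.25e6 bytes/s (1.25 MB/s)
    practical_limit = 1_000_000       # ~1 MB/s after protocol overhead
    saturation = 0.35                 # typical ceiling with many active clients
    aggregate = nominal * saturation  # 437,500 bytes/s, i.e. roughly 440 KB/s
    print(f"{aggregate / 1000:.0f} KB/s")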
The very nature of data-intensive clients drives the planning of the system
configuration. It usually determines the choice of network medium and often
dictates the type of server. In many cases, deploying data-intensive
applications calls for re-cabling the network. As a general rule, in a
data-intensive environment somewhat more than half of all NFS operations are
tied to the transfer of user data. As a representative of attribute-intensive
workloads, the classic choice is the Legato mix, in which 22% of all operations
are reads and 15% are writes.
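These two rules of thumb suggest a simple classifier. The function below is a
hypothetical helper: it treats the combined read and write share as the
data-operation share, with the more-than-half threshold taken from the text:

    def classify_workload(read_pct, write_pct):
        # Data-intensive if data operations exceed half of all operations
        data_share = read_pct + write_pct
        return "data-intensive" if data_share > 50 else "attribute-intensive"

    # The Legato mix (22% reads, 15% writes) lands on the attribute side:
    print(classify_workload(22, 15))  # attribute-intensive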
A typical example of NFS use
Above all, the usage patterns of most applications show that clients load the
server very unevenly. Consider a session with a typical application. The user
usually first loads the application binary; then the part of the code
responsible for the dialogue with the user runs, and the user specifies the
required data set. The application reads the data set from disk (possibly a
remote one). The user then interacts with the application, manipulating the
data in main memory; this phase lasts for most of the application's running
time, until at the end the modified data set is saved to disk. Most (but not
all) applications follow this general scheme of work, often with repeated
phases.
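To make the unevenness concrete, here is a toy timeline of the NFS traffic such
a session might generate; every duration and size below is invented purely for
illustration:

    # Phase, duration in seconds, bytes moved over NFS (all assumed)
    session = [
        ("load application binary", 5,    2_000_000),
        ("read data set",           10,   20_000_000),
        ("interactive work in RAM", 3600, 0),          # no NFS traffic
        ("write back result",       5,    5_000_000),
    ]
    for phase, seconds, nbytes in session:
        print(f"{phase:25s} {nbytes / seconds / 1000:7.0f} KB/s")

The bursts at the start and end of the session, separated by a long quiet
phase, are exactly the pattern visible in the logs described below.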
The following figures illustrate typical NFS loads. Figure 4.2 shows a fragment
of a SunNetManager log for a 486/33 PC running MS-DOS. The bursty nature of the
client's requests is very clear: over short intervals the visible peaks reach
as high as 100 operations per second, but the average load is small, about 7
operations per second, and the typical load is perhaps about 1 operation per
second. The graph is plotted with a one-second measurement interval so that the
operation rate can be seen at fine granularity.
Figure 4.3 shows a fragment of a SunNetManager log for a diskless client, a
SPARCstation ELC with 16 MB of memory, running various office-automation tools.
The relatively flat load reflected in this chart is typical of most such
clients (Lotus 1-2-3, Interleaf 5.3, the OpenWindows DeskSet, e-mail with very
large files). Although there are a few moments when the rate reaches 40-50
operations per second, they are all short (1-5 seconds). The resulting
time-averaged load is much lower: in this case substantially below 1 operation
per second, even without counting the idle nighttime hours. The measurement
interval for this graph is 10 minutes. Note that this is a diskless system with
relatively little memory; the load from clients with large local disks and more
RAM will be even lower. Finally, Figure 4.4 shows how the random nature of
requests from different clients smooths out the load on the server. The chart
shows the load placed on a server by twenty diskless clients with 16 MB of
memory over ten days.