Operating Systems
PC operating systems manage real memory using a simple two-tier model of I/O, in which main memory and input/output files are managed separately. In practice this leads to an even lower steady-state load on the I/O subsystem.
For example, when a PC running Windows starts Lotus 1-2-3, the entire 123.exe is copied into main memory. The full 1.5 megabytes of code is copied even if the user then issues the quit command without performing any other function. During execution the client issues no further file I/O requests, since all of the binary code is resident in memory. Even if Windows swaps this code out, it is swapped to the local disk, which eliminates network traffic. In contrast, a Solaris-based system invoking the application's quit function copies into memory only that function and the functions required for initialization. The remaining functions are paged into memory later, as they are actually used, which yields a considerable saving of main memory and spreads the demands on the I/O subsystem over time. If the client runs short of memory, pages can be discarded and later restored from their original source (the network server), although this places an additional load on the server. As a result, the I/O load that PC clients impose on a server is much more bursty than the load imposed by workstation clients running the same application. Another feature of the PC user base is that the files these clients use are significantly smaller than the corresponding files used on workstations. Very few PC applications can be described as "data-intensive" (see Section 3.1.3), mainly because memory management in PC operating systems is awkward and limited in capacity.
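The contrast drawn above between whole-image loading and demand paging can be sketched in Python, using mmap as a stand-in for the pager (the file name and sizes here are illustrative, not taken from the real 123.exe):

```python
import mmap
import os
import tempfile

# Create a 1.5 MB file standing in for an application binary
# (an illustrative stand-in for 123.exe).
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)
with open(path, "wb") as f:
    f.write(b"\x00" * (1536 * 1024))

# Whole-image loading (the Windows model): the entire file is read
# into memory up front, even if only a tiny part is ever used.
with open(path, "rb") as f:
    image = f.read()              # all 1.5 MB copied immediately

# Demand paging (the Solaris model): mmap establishes a mapping
# without copying anything; physical pages are faulted in only
# when they are actually touched.
fd = os.open(path, os.O_RDONLY)
mapped = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
first_byte = mapped[0]            # faults in only the first page

mapped.close()
os.close(fd)
os.remove(path)
```

Only the pages actually referenced through the mapping generate I/O, which is why a session that does nothing but quit transfers far less than the full binary under demand paging.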
The attribute-intensive nature of this environment dictates the choice of configurations oriented toward random-access loads.
While the fastest PCs can now challenge entry-level workstations in CPU performance, the PC is a much less demanding network client than the typical workstation. In part this is because the vast majority of PCs are still based on the slower 386 (or even 286) processors, and slower processors tend to run less demanding applications for less demanding users.
Moreover, even running at full speed, slower processors simply generate requests less rapidly than workstations, because the internal buses and network adapters of such PCs are poorly optimized compared with the corresponding devices of larger machines. For example, standard ISA Ethernet adapters available in 1991 could sustain a data transfer rate of only about 700 KB/s (compared with rates above 1 MB/s achieved by virtually all 1991 workstations), and some fairly common interface cards could deliver only around 400 KB/s. Some PCs, including portables, use Ethernet interfaces that actually attach through the parallel port. While such a connection saves a bus slot and is quite convenient, it is among the slowest Ethernet interfaces, since many parallel ports are limited to transfer rates of 500-800 Kbit/s (60-100 KB/s). Of course, as the user base shifts to 486-based PCs equipped with 32-bit DMA network adapters, these distinctions blur, but it is useful to remember that the vast majority of PC-NFS clients (especially in our country) belong to the older, less demanding class. A PC based on a 33 MHz 486DX and equipped with a 32-bit Ethernet interface is shown in Figure 4.2.
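The unit conversion behind the parallel-port figures is simple to check: a rate in kilobits per second divided by eight gives kilobytes per second.

```python
def kbit_to_kbyte(kbit_per_s):
    """Convert a transfer rate from Kbit/s to KB/s (8 bits per byte)."""
    return kbit_per_s / 8

# The parallel-port limits quoted above:
low = kbit_to_kbyte(500)    # 62.5 KB/s, i.e. roughly 60 KB/s
high = kbit_to_kbyte(800)   # exactly 100 KB/s
```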
The NFS Client
In UNIX systems such as Solaris, the NFS client subsystem is the equivalent of the disk subsystem: it provides a service to the virtual memory manager, and in particular to the file system, on the same basis as a disk service, except that the service is implemented over the network. This may seem obvious, but it has a definite influence on how NFS clients and servers work. In particular, the virtual memory manager sits between the client applications and NFS. Application file system requests are cached by the client's virtual memory system, reducing the client's I/O requirements.
This can be seen in Figure 4.5. For most applications, more memory on the client means less load on the server and better overall (that is, client/server) system performance. This is especially true for diskless clients, which must use NFS as the backing store for anonymous memory. The virtual memory caching mechanisms delay, and sometimes eliminate, NFS operations. Consider, for example, a diskless workstation running 1-2-3. If the application's binaries and data are located remotely, the system pages the executable 1-2-3 binary into memory over NFS as needed. The data are then also loaded into memory via NFS. For most typical 1-2-3 configurations, the files on the workstation will be cached in memory and will stay there for a considerable time (minutes rather than seconds).
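The client-side caching just described can be modeled as a toy read-through cache (a deliberate simplification, not the actual Solaris page cache): the first access to a page crosses the network, while repeat accesses are served from client memory.

```python
class ToyNfsClientCache:
    """Toy model of an NFS client's in-memory page cache."""

    def __init__(self, server_pages):
        self.server = server_pages   # page number -> bytes "on the server"
        self.cache = {}              # pages cached in client memory
        self.server_reads = 0        # NFS read operations actually issued

    def read_page(self, n):
        if n not in self.cache:      # cache miss: go over the network
            self.server_reads += 1
            self.cache[n] = self.server[n]
        return self.cache[n]         # cache hit: no network traffic

client = ToyNfsClientCache({0: b"code", 1: b"data"})
client.read_page(0)
client.read_page(0)                  # repeat access: served from memory
client.read_page(1)
# Three page accesses, but only two reads ever reached the server.
```

More client memory means more pages stay cached, which is exactly why adding memory to clients reduces load on the server.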
If a temporary file is opened and remains open, the open itself is performed on both the client and the server, but all updates to the file are normally cached on the client for a short time before being sent to the server. Under UNIX file semantics, when a file is closed all changes must be written to the backing store, in this case the NFS server. Alternatively, cached writes may be flushed to the backing store by the fsflush daemon (Solaris 2.x) or updated (Solaris 1.x). As with ordinary disk I/O, cached NFS data remain in memory until the memory is needed for some other purpose. When a write is issued to the server, the server must place the data in stable storage before acknowledging it. The client, however, behaves a little differently. If the user returns to cached data (for example, if in our example some pages of 1-2-3 text are processed again), the access is satisfied directly from the client's virtual memory instead of issuing requests to the server. Of course, when the client does not have enough memory to make room for new data, modified pages are quickly written back to the server, and unmodified pages are simply discarded. Beginning with Solaris 2.3, Sun offers a new facility called a
cached file system (CacheFS), a file system that replicates and caches data. Under the standard NFS protocol, file blocks are fetched from the server directly to the client and manipulated in memory, and the data are written back to the server. The CacheFS software sits between the NFS client code and the NFS server access methods. When blocks of data are obtained by the NFS client software, they are cached in a dedicated area on a local disk. The local copy of a file is called the front file, and the copy on the server the back file. Any subsequent access to the cached file refers to the copy on the local disk rather than the copy located on the server. For obvious reasons, such an organization can significantly reduce the load on the server.

Unfortunately, CacheFS is not a comprehensive means of reducing server load. First, because it creates copies of the data, the system must take certain measures to keep those copies consistent. In particular, the CacheFS subsystem periodically checks the attributes of the back file (the checking frequency is user-configurable). If the back file has been modified, the front file is purged from the cache, and the next access to the (logical) file causes it to be fetched from the server again and re-cached. Unfortunately, most applications work with a file as a whole rather than with particular blocks of data. For example, vi, 1-2-3 and ProEngineer read and write their data files in their entirety, regardless of the user's actual purpose. (In general, programs that access files with mmap(2) do not touch the whole file, whereas programs that use read(2) and write(2) generally do.) As a result, CacheFS usually caches entire files. NFS file systems that are subject to frequent change are therefore not very good candidates for CacheFS: their files will be continually cached and purged, which ultimately increases total network traffic compared with simply working through NFS. The problem of keeping the cached data consistent between client and server arises in the other direction as well: when a client modifies a file, the front file is invalidated and the back file is updated accordingly. A subsequent read of the file will fetch and re-cache it. If file updates are standard practice, this again produces more traffic than standard NFS. CacheFS is a relatively new facility and, unfortunately, very few measurements of its behavior in real use have been made. Nevertheless, the very idea of the CacheFS protocol leads to the following recommendations: CacheFS should be used for file systems that are mostly read, such as shared file systems holding application binaries. CacheFS is particularly useful for sharing data across relatively slow networks, such as WANs connected by lines slower than T1. CacheFS is also useful in high-speed networks interconnected by routers that introduce delay.
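The front-file/back-file organization and its consistency check can be sketched as follows (a toy model, not the real CacheFS implementation; a modification-time comparison stands in for the periodic attribute check):

```python
import os
import shutil
import tempfile

class ToyCacheFS:
    """Toy model of CacheFS front/back files on a local disk."""

    def __init__(self, back_dir, cache_dir):
        self.back_dir = back_dir      # "server" directory (back files)
        self.cache_dir = cache_dir    # local cache directory (front files)
        self.server_fetches = 0       # whole-file fetches from the server

    def read(self, name):
        back = os.path.join(self.back_dir, name)
        front = os.path.join(self.cache_dir, name)
        # Attribute check: a missing or stale front file is purged
        # and the file is fetched from the server again.
        if (not os.path.exists(front)
                or os.path.getmtime(front) < os.path.getmtime(back)):
            self.server_fetches += 1
            shutil.copy2(back, front)  # copy2 preserves the timestamp
        with open(front, "rb") as f:   # served from the local disk
            return f.read()

back_dir = tempfile.mkdtemp()
cache_dir = tempfile.mkdtemp()
app = os.path.join(back_dir, "app")
with open(app, "wb") as f:
    f.write(b"v1")

fs = ToyCacheFS(back_dir, cache_dir)
fs.read("app")
fs.read("app")                        # second read hits the front file

with open(app, "wb") as f:            # the back file changes on the server
    f.write(b"v2")
t = os.path.getmtime(app)
os.utime(app, (t + 1, t + 1))         # make the change visible to the check

refetched = fs.read("app")            # front file purged and refetched
```

A file system whose back files change constantly would purge and refetch on nearly every access, which is why frequently modified NFS file systems make poor CacheFS candidates.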