
Data operations

Unlike attribute operations, data operations move blocks of data up to 8 KB in size (the block size used by NFS; the recently announced NFS+ protocol allows data blocks of up to 4 GB, but this does not change the nature of data operations). Furthermore, while each file has only one set of attributes, the number of 8 KB blocks in a file can be large (potentially running into the millions). On most types of NFS servers, data blocks are usually not cached, so servicing data requests involves significant resource consumption. In particular, handling data requires far more network bandwidth: each 8 KB data transfer takes six packets on Ethernet (two on FDDI). The possibility of overloading the network is therefore a far more important factor for data operations. Somewhat surprisingly, the majority of existing systems are dominated by attribute operations rather than data operations. If a client wants to use a file stored on a remote NFS file server, it issues a sequence of lookup operations to locate the file in the remote directory hierarchy, followed by a getattr operation to obtain the access mask and other attributes of the file; finally, a read operation retrieves the first 8 KB of data. For a typical file located four or five levels deep in the remote hierarchy, simply opening the file requires five or six NFS operations. Since most files are fairly short (on average less than 16 KB), reading the entire file takes fewer operations than finding and opening it. Recent studies at Sun found that since the days of BSD 4.1 the average file size has grown from about 1 KB to a little over 8 KB. To configure an NFS server correctly, the expected workload must first be assigned to one of two classes: attribute-intensive or data-intensive.
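The open-versus-read arithmetic above can be sketched in a few lines. This is only an illustrative model under the stated assumptions (an 8 KB NFS transfer size, one lookup per path component, one getattr per open); the function name is made up.

```python
BLOCK_SIZE = 8 * 1024  # standard NFS (v2) transfer size in bytes

def ops_to_open_and_read(depth, file_size):
    """Return (open_ops, read_ops) for a file `depth` directory levels deep."""
    lookups = depth                      # one lookup per path component
    getattrs = 1                         # fetch permissions and other attributes
    reads = -(-file_size // BLOCK_SIZE)  # ceiling division: one read per block
    return lookups + getattrs, reads

# A typical short file (~8 KB) five levels deep: opening it takes
# more NFS operations than reading all of its data.
open_ops, read_ops = ops_to_open_and_read(depth=5, file_size=8 * 1024)
print(open_ops, read_ops)  # → 6 1
```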

Comparison of applications

Each application has its own mix of NFS operations. In general, applications that use many small files can be characterized as attribute-intensive. Perhaps the best example is classic software development. Large software systems usually consist of thousands of small modules, each module comprising an include file, a source code file, an object file, and some kind of archive file (such as SCCS or RCS). Most of the files are small, typically between 4 and 100 KB. Since the NFS request mix for these applications is dominated by attribute operations, throughput is determined by how quickly the server can service attribute requests; data operations account for less than 40% of the total. Most servers with highly attribute-intensive loads require only moderate network bandwidth: Ethernet bandwidth (10 Mbit/s) is usually adequate. Most home-directory servers fall into the attribute-intensive category, since they mainly store small files. Moreover, because these files are small, they also give clients the opportunity to cache the file data, eliminating repeated reads from the server. Applications that work with very large files are classified as data-intensive. This category includes, for example, geophysics, image processing, and electronic CAD applications. The usual scenario for these applications on NFS workstations or compute servers is: read a very large file, process it for a long time (minutes or even hours), and finally write back a smaller result file. Files in these applications often reach 1 GB, and files larger than 200 MB are the rule rather than the exception. When large files are handled, operations associated with data requests dominate. For data-intensive applications, network bandwidth is always critical. For example, the nominal Ethernet transfer rate is 10 Mbit/s.
That rate may seem high, but 10 Mbit/s is only 1.25 MB/s, and even that speed cannot be achieved in practice because of protocol overhead and the limited speed of each of the communicating systems. As a result, the realistic upper limit for Ethernet is approximately 1 MB/s, and even that is attainable only under nearly ideal conditions, with the entire Ethernet bandwidth devoted to a transfer between two systems. Unfortunately, such an arrangement is rarely realistic; in practice many clients often request data simultaneously. With many active clients, the network saturates at about 35% utilization, which corresponds to an aggregate throughput of 440 KB/s. The very nature of data-intensive clients therefore drives system configuration planning: it usually determines the choice of network medium and often dictates the type of server. In many cases, deploying data-intensive applications makes it necessary to recable the network. It is generally believed that in a data-intensive environment somewhat more than half of all NFS operations are devoted to transferring user data. As a representative attribute-intensive load, the classic Legato mix is usually cited, in which 22% of all operations are reads and 15% are writes.
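The Ethernet figures quoted above follow from simple arithmetic, sketched here; the 35% saturation point is the rule of thumb from the text, and the function name is illustrative.

```python
def ethernet_budget(raw_mbit=10, saturation=0.35):
    """Return (raw, usable) Ethernet throughput in bytes per second."""
    raw_bytes = raw_mbit * 1_000_000 / 8   # 10 Mbit/s is 1.25 MB/s on paper
    usable = raw_bytes * saturation        # practical aggregate at saturation
    return raw_bytes, usable

raw, usable = ethernet_budget()
# 1.25 MB/s nominal, roughly 440 KB/s usable in aggregate.
print(raw / 1e6, round(usable / 1e3, 1))  # → 1.25 437.5
```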

A typical example of NFS use

Above all, observation of most applications shows that clients load the server very unevenly. Consider a session with a typical application. The user must usually first load the application binary and execute the part of the code that handles the dialogue with the user, who specifies the data set to be used. The application then reads the data set from disk (possibly a remote one). The user then interacts with the application, manipulating the data in main memory; this phase continues for most of the application's running time, until at the end the modified data set is saved to disk. Most (though not all) applications follow this universal scheme, often with repeating phases. The following figures illustrate typical NFS loads. Figure 4.2 shows a SunNetManager log fragment for a 486/33 PC running MS-DOS. The bursty nature of client load is very clear: over short intervals there are visible peaks as high as 100 operations per second, but the average load is small, about 7 operations per second, and perhaps a more typical load is about 1 operation per second. The graph is plotted at one-second measurement intervals so that the operation rate is visible at fine granularity. Figure 4.3 shows a SunNetManager log fragment for a diskless client, a SPARCstation ELC with 16 MB of memory, running various office-automation tools. The relatively flat load reflected in this chart is typical of most clients (Lotus 1-2-3, Interleaf 5.3, the OpenWindows DeskSet, e-mail, and similar applications). While there are a few bursts of 40-50 operations per second, they are all short (1-5 seconds). The average resulting load is much lower: in this case substantially below 1 operation per second, even without counting the idle nights. The graph uses a measurement interval of 10 minutes. Note that this is a diskless system with relatively little memory; the load from clients with large disks and plenty of RAM will be even lower.
Finally, Figure 4.4 shows how the random nature of requests from different clients smooths the load on the server. The chart shows the load placed on a server by twenty diskless clients, each with 16 MB of memory, over ten days.
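The smoothing effect can be illustrated with a toy simulation, assuming nothing beyond the bursty per-client pattern described above; all the rates and probabilities here are invented for illustration.

```python
import random
import statistics

random.seed(42)

def bursty_client(n_seconds, burst_prob=0.02, burst_rate=50, idle_rate=0.5):
    """Per-second NFS op counts for one client: mostly idle, rare bursts."""
    return [burst_rate if random.random() < burst_prob else idle_rate
            for _ in range(n_seconds)]

clients = [bursty_client(3600) for _ in range(20)]
aggregate = [sum(vals) for vals in zip(*clients)]   # server sees the sum

def rel_spread(series):
    """Relative variability: standard deviation over mean."""
    return statistics.pstdev(series) / statistics.mean(series)

# Each client's load is extremely bursty; the sum over twenty
# independent clients is proportionally much steadier.
print(rel_spread(clients[0]) > rel_spread(aggregate))  # → True
```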

Operating Systems

Real-memory personal computer operating systems use a simple two-tier I/O model, in which main memory and file I/O are managed separately. In practice this leads to a burstier load on the I/O subsystem. For example, when a PC runs Lotus 1-2-3 under Windows, the entire 123.exe binary is copied into main memory - the full 1.5 MB of code - even if the user then immediately issues the quit command without performing any other function. During execution such a client issues no further file I/O requests, since all the binary code is memory-resident. Even if Windows swaps this code out, it goes to the local disk, which avoids network traffic. By contrast, on a Solaris-based system, invoking the application copies into memory only those functions required to perform initialization. The remaining functions are paged into memory later, on actual use, which yields considerable memory savings and spreads the load on the I/O subsystem over time. If the client does not have enough memory, pages can be discarded and later restored from the original source of the program code (the network server), which places an additional load on the server. As a result, the I/O load a PC client puts on the server is much more bursty than that of workstation clients running the same application. Another characteristic of a PC user base is that the files used by these clients are significantly smaller than the corresponding files used on workstations. Very few PC applications can be called "data-intensive" (see Sect. 3.1.3), mainly because memory management in PC operating systems is awkward and limited in capacity. The attribute-intensive nature of this environment dictates configurations aimed at random access.
While the fastest current PCs may well challenge entry-level workstations in CPU performance, a PC is a much less demanding network client than a typical workstation. In part this is because the vast majority of PCs are still based on the slower 386 (or even 286) processors, and slower processors tend to run less demanding applications and users. Moreover, slower processors, even running at full speed, simply generate requests less rapidly than workstations, because the internal buses and network adapters of such PCs are not as well optimized as the corresponding devices in larger machines. For example, the ISA Ethernet adapters available in 1991 could sustain data transfer rates of only about 700 KB/s (compared with rates above 1 MB/s achieved by all 1991 workstations), and some fairly common interface cards could deliver only around 400 KB/s. Some PCs, portables in particular, use Ethernet interfaces that actually connect through the parallel port. While such a connection saves a bus slot and is quite convenient, it makes the Ethernet interface one of the slowest, since many parallel ports are limited to data rates of 500-800 Kbit/s (60-100 KB/s). Admittedly, as the user base shifts to 486-based PCs equipped with 32-bit DMA network adapters, these distinctions are becoming blurred, but it is useful to remember that the vast majority of installed PC-NFS clients are of the older, less demanding kind. The PC shown in Figure 4.2 was based on a 33 MHz 486DX and equipped with a 32-bit Ethernet interface.

The NFS client

In UNIX systems such as Solaris, the NFS client subsystem is the equivalent of a disk subsystem: it provides services to the virtual memory manager and, in particular, to the file system on the same basis as a disk service, except that the service is implemented over the network. This may seem obvious, but it has certain consequences for how NFS clients and servers work. In particular, the virtual memory manager sits between the client applications and NFS. File system requests issued by applications are cached by the client's virtual memory system, reducing the client's I/O demands. This can be seen in Figure 4.5. For most applications, more memory on the client means less load on the server and better overall (that is, client/server) system performance. This is especially true for diskless clients, which must use NFS as the backing store for anonymous memory. The caching mechanisms of virtual memory delay, and sometimes eliminate, NFS work. Consider, for example, a diskless workstation running 1-2-3. If the data and the application binaries are located remotely, the system will page the executable 1-2-3 code into memory over NFS as needed. The data will then also be loaded into memory via NFS. For a typically configured workstation, most of the 1-2-3 files will be cached in memory and stay there for a considerable time (minutes rather than seconds). If a temporary file is opened and remains open, the open itself is visible to both client and server, but all updates to the file are usually cached on the client for a short time before being written to the server. The semantics of UNIX files require that when a file is closed, all changes be written to the external storage device, in this case the NFS server. Alternatively, cached writes may be flushed to external storage by the fsflush (Solaris 2.x) or updated (Solaris 1.x) daemons. As with ordinary disk I/O, cached NFS data remains in memory until the memory is needed for some other purpose.
When a write is issued to the server, the server must place the data in stable storage before replying. The client, however, behaves a little differently. If the user returns to cached data - for example, if in our example 1-2-3 again works on some pages of the spreadsheet - then instead of issuing requests to the server, the accesses are satisfied directly from the client's virtual memory. Of course, when the client runs short of memory, recently modified pages are written back to the server to make room for new data, and unmodified pages are simply discarded. Beginning with Solaris 2.3, Sun offers a new facility called the Cache File System (CacheFS). Under the standard NFS protocol, files are fetched block by block directly from the server to the client and manipulated in memory, and the data is written back to the server. The CacheFS software sits between the NFS client code and its access methods. When blocks of data are received by the NFS client software, they are cached in a dedicated area on the local disk. The local copy of the file is called the front file, and the copy on the server the back file. Any subsequent access to a cached file uses the copy on the local disk rather than the copy on the server. For obvious reasons, such an arrangement can significantly reduce the load on the server. Unfortunately, CacheFS is not a universal means of reducing server load. First, because it duplicates data, the system must spend some effort keeping the copies coherent. In particular, the CacheFS subsystem periodically checks the attributes of the back file (the frequency of checking is user-tunable). If the back file has been modified, the front file is purged from the cache, and a subsequent access to the (logical) file causes it to be fetched from the server again and re-cached. Unfortunately, most applications work with a whole file rather than with particular blocks of data.
For example, vi, 1-2-3, and ProEngineer read and write their data files in their entirety, regardless of what the user actually intends. (In general, programs that access files with mmap(2) do not treat the file as a whole, while programs that use read(2) and write(2) generally do.) As a result, CacheFS usually caches entire files. NFS file systems that undergo frequent changes are therefore not very good candidates for CacheFS: files will be continually cached and purged, which ultimately increases overall network traffic compared with plain NFS. The task of keeping the cached data coherent between client and server raises a further problem: when a client modifies a file, the front file is invalidated and the back file updated accordingly. A subsequent read of the file will fetch and then re-cache it. If file updates are routine, this leads to more traffic than standard NFS. CacheFS is a relatively new facility, and unfortunately very little measurement of its behavior in real use has been done. Nevertheless, the very idea of the CacheFS scheme leads to the following recommendations. CacheFS should be used for file systems that are mostly read, such as shared file systems holding application binaries. It is particularly useful for sharing data across relatively slow networks, such as WANs connected by lines slower than T1. It is also useful on high-speed networks interconnected by routers that introduce delay.
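The front-file/back-file invalidation logic can be sketched as a toy model. None of these names correspond to the real CacheFS interfaces; a simple version counter stands in for the back file's attributes.

```python
class ToyFileCache:
    """Toy sketch of attribute-based cache invalidation, CacheFS-style."""

    def __init__(self, server):
        self.server = server   # maps name -> (version, data), the "back" files
        self.front = {}        # name -> (version, data), local "front" copies
        self.fetches = 0       # how often we had to go to the server

    def read(self, name):
        version, data = self.server[name]      # check the back file's attributes
        cached = self.front.get(name)
        if cached and cached[0] == version:
            return cached[1]                   # served from the local cache
        self.fetches += 1                      # miss, or stale front file purged
        self.front[name] = (version, data)     # refetch and re-cache
        return data

server = {"app.bin": (1, b"v1")}
cache = ToyFileCache(server)
cache.read("app.bin"); cache.read("app.bin")
print(cache.fetches)            # → 1 (the second read hit the cache)

server["app.bin"] = (2, b"v2")  # file modified on the server
cache.read("app.bin")
print(cache.fetches)            # → 2 (front copy invalidated, refetched)
```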

Configuring an NFS server

To gather sufficient and accurate information for designing an NFS server configuration, the following questions should be answered:
- Will the load be attribute-intensive or data-intensive?
- Will clients use the caching file system (CacheFS) to reduce their demand?
- How many fully active clients will there be on average?
- What types of client systems will be used, and under what operating systems do they run?
- How large are the file systems to be shared?
- Do requests from different clients hit the same files (for example, include files), or do they go to different files?
- What are the number and type of the planned networks? Is the current network configuration suitable for this type of traffic?
- Does the proposed server configuration have enough CPU power to handle the traffic arriving on the attached networks?
- If wide-area networks (WANs) are involved, do the media and routers have low enough latency and high enough bandwidth to make NFS practical?
- Are the disk drives and SCSI host adapters matched in performance?
- Is software such as Online: DiskSuite needed to balance the write load adequately across the disk drives?
- If NFS writes are frequent, does the configuration include NVSIMMs?
- Does the proposed backup strategy match the type, number, and SCSI bus placement of the backup devices?
Perhaps the most important requirement for an NFS server configuration is adequate network bandwidth and availability. In practice this translates into a requirement on the number and type of networks and interfaces. As noted earlier, the most important factor in choosing a network configuration is the dominant type of NFS application. For data-intensive applications, relatively few networks are needed, but those networks must have high bandwidth, such as FDDI or CDDI. The requirements can also be met with 100baseT (100 Mbit/s Ethernet) or ATM (Asynchronous Transfer Mode, 155 Mbit/s) networks.
Most attribute-intensive applications get by with a less expensive infrastructure, but may require a larger number of networks. The network decision is relatively easy. If an individual client needs a sustained rate in excess of 1 MB/s, or several simultaneously operating clients need aggregate bandwidth in excess of 1 MB/s, the application requires high-speed networks. The 1 MB/s figure is in fact artificially inflated, because it describes a speed that is guaranteed not to be exceeded. It is usually considered more prudent to assume an Ethernet delivers approximately 440 KB/s, rather than the 1 MB/s limit. (Users typically perceive Ethernet as "unresponsive" at about 35% network load; the 440 KB/s figure corresponds to 35% load on a line with a capacity of 1.25 MB/s.) If no client requires such bandwidth in normal operation, a slower network medium such as Ethernet or Token Ring may be perfectly adequate. Such a medium provides sufficient speed for the lookup and getattr operations that dominate attribute-intensive applications, and for the relatively light data traffic associated with them. High-speed networks are most useful for large groups of data-intensive clients because of the lower cost of the infrastructure, rather than for maximum throughput between any one client and the server. The reason lies in the current state of the NFS protocol, which works with blocks 8 KB long and prefetches only 8 KB ahead (that is, one client may have server operations outstanding for 16 KB of data). The overall effect of this arrangement is that the maximum data transfer speed between a client and a server communicating over an FDDI ring is approximately 2.7 MB/s. (This speed is achieved by placing the line "set nfs:nfs_async_threads = 16" in the /etc/system file on the client. Clients running SunOS 4.1.x must run 12 biod daemons, not the default eight.)
That is only three times the maximum Ethernet speed, although FDDI is ten times faster. (NFS is an application-level protocol - level 7 in the OSI model. Lower-level protocols such as TCP and UDP can sustain much higher speeds on the same hardware. Most of the time goes into waiting for responses and other application-level processing. Other application-level protocols that do not require an immediate response and/or confirmation can transfer data at much higher rates over the same medium.) The peak speed over 16 Mbit/s Token Ring is approximately 1.4 MB/s. The recently announced new version of the protocol, NFS+, eliminates this limitation by allowing much larger blocks. NFS+ permits transfer blocks of almost arbitrary size: the client and server negotiate the maximum block size for each mounted file system, and the block size can grow to 4 GB. The main advantage of 100-Mbit networks with conventional versions of NFS is that the network can support many simultaneous data transfers without degradation. When a server sends data to Ethernet clients at 1 MB/s, that transfer consumes 100% of the available network bandwidth, and attempts to push more data through the network reduce the throughput for all users. The same client and server can exchange data at 2.7 MB/s over an FDDI ring, yet on such a high-speed network that transfer consumes only about 21% of the available bandwidth; the network can carry five or six such streams at once without serious degradation. The situation can be likened to a highway: in light traffic, a two-lane road with a 90 km/h speed limit is almost as good as an eight-lane superhighway with a 120 km/h limit, but in heavy traffic the superhighway is far less prone to congestion.
FDDI is also slightly (about 5%) more efficient than Ethernet and Token Ring for bulk data transfer, because each packet can carry more payload (4500 bytes, compared with 1500 bytes for Ethernet and 2048 bytes for Token Ring). Sending an 8 KB block of data requires processing only two packets, compared with five or six for Token Ring and Ethernet. All of these considerations, however, apply only to environments with intensive data transfer: attribute requests are small (about 80-128 bytes) and travel in a single packet regardless of the type of network in use. If the existing wiring in the enterprise precludes the use of fiber-optic FDDI, there are standards for FDDI over copper (CDDI), which make it possible to run the network over the existing twisted-pair plant. While ATM has not yet become a widely used technology, it may in the future become the main vehicle for data-intensive traffic, because it provides higher data rates (the currently defined rates are 155 Mbit/s, 622 Mbit/s, and 2.4 Gbit/s) and uses a point-to-point topology in which each client-to-hub link can run at the full speed of the medium.
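The packet counts quoted above follow from the payload sizes, as a quick check; the payload figures are the approximate ones from the text, and real framing overhead can add a packet.

```python
import math

PAYLOAD = {"FDDI": 4500, "Token Ring": 2048, "Ethernet": 1500}  # bytes/packet
BLOCK = 8 * 1024  # one NFS data block

# How many packets each medium needs to carry an 8 KB block.
for medium, payload in PAYLOAD.items():
    print(medium, math.ceil(BLOCK / payload))
# FDDI needs only 2 packets per block; Ethernet needs 6.
```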

NFS and wide-area networks

In real life, situations arise in which the NFS client and server are located on different networks connected by routers. The network topology can greatly affect the performance of NFS as perceived by the user, and the quality of the service the server provides. The effectiveness of NFS service across an internetwork should be carefully monitored; but it is at least known that NFS can be successfully configured to run over wide-area topologies. Perhaps the most important issue in this situation is operation latency: the time that elapses between issuing a request and receiving the response. On a local network, latency matters little, since the relatively short distances involved cannot introduce significant transmission delays. On a wide-area network, however, substantial delays arise simply in carrying packets from one point to another. Packet delay has several components. Router delay: routers spend a noticeable (and often substantial) amount of time deciding how to route packets from one network to another. Note that most wide-area links (even a line between two adjacent buildings) involve at least two routers. Figure 4.6 shows the topology of a typical university campus, in which there are usually three or even four routers between client and server. Transmission delay: the physical medium used to carry packets across a wide-area network can often add its own significant delay on top of the (usually larger) router delays. Satellite links, for example, are often associated with very long delays. Transmission errors: a wide-area network can be an order of magnitude more prone to transmission errors than most local networks. These errors cause significant retransmission of data, which both increases operation latency and reduces the effective capacity of the network. On networks highly prone to transmission errors, the NFS data block size is often set to 1 KB instead of the normal 8 KB.
This reduces the amount of data that must be retransmitted when an error occurs. If the error rate is acceptable, file service over a wide-area network is feasible. Most such network configurations use high-speed synchronous serial point-to-point links, which connect one or more local networks at each end. In the United States such serial lines usually run at 1.544 Mbit/s (T1 lines) or 56 Kbit/s. European carriers offer slightly higher speeds: 2.048 Mbit/s (E1 lines) or 64 Kbit/s, respectively. Even faster links exist: leased lines known as T3 offer transmission speeds up to 45 Mbit/s (5.3 MB/s). Today the majority of T3 lines are used for data transmission. At first glance these lines appear much slower than the local networks they connect. However, a fast serial line (T1) provides usable bandwidth much closer to the real capacity of a local network than the nominal figures suggest. This is because a serial line can be run at nearly 100% utilization without incurring excessive overhead, whereas an Ethernet usually begins to suffer at around 440 KB/s (3.5 Mbit/s), which is only about twice the bandwidth of a T1 line. For this reason, file service over high-speed serial links is possible and can move data at an acceptable speed; such an arrangement is particularly useful for transferring data between remote offices. Attribute-intensive NFS applications can work successfully over a wide-area network, provided operation latency is not critical. On a wide-area network, short packets cross each segment promptly (the capacity is adequate), although routing and media delays often add considerable latency to each operation. Conclusions: for wide-area NFS service, T1, E1, or T3 serial lines are suitable; for most NFS work, lines at 56 or 64 Kbit/s are usually not fast enough. With NFS over a wide-area network, the problems are latency and routing delays.
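The T1-versus-Ethernet comparison above reduces to this arithmetic; the ~100% and 35% utilization figures are the assumptions stated in the text, and the function name is illustrative.

```python
def usable_kb_per_s(raw_mbit, utilization):
    """Usable throughput in KB/s for a link of `raw_mbit` Mbit/s."""
    return raw_mbit * 1_000_000 / 8 * utilization / 1000

t1 = usable_kb_per_s(1.544, 1.00)       # T1 run at nearly full utilization
ethernet = usable_kb_per_s(10.0, 0.35)  # Ethernet at its ~35% comfort limit

# A T1 delivers about 193 KB/s usable, only about half of
# Ethernet's practical ~440 KB/s despite the 6x nominal gap.
print(round(t1), round(ethernet))  # → 193 438
```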
Network capacity is usually not the problem. To reduce traffic over the wide-area link significantly, clients can use the caching file system (CacheFS), unless the traffic is dominated by NFS writes. Given these considerations, the following empirical rules can be used to determine the proper type and number of networks. If the application load is dominated by data operations, choose FDDI or another high-speed network. If laying fiber-optic cable is impossible for logistical reasons, consider implementing FDDI over twisted pair; when installing a new system, bear in mind that ATM networks use the same cabling as FDDI. Configure one FDDI ring for every 5-7 clients that are fully active in the NFS sense and working intensively with data. Remember that very few data-intensive applications generate NFS requests continuously: typical data-intensive applications, such as electronic design automation and Earth-resources imaging systems, can often run with as many as 25-40 clients per ring. In data-intensive environments where the existing cable plant forces the use of Ethernet, configure a separate Ethernet network for every two fully active clients, with no more than 4-6 clients per network. If the application load is attribute-intensive, Ethernet or Token Ring is sufficient. In an attribute-intensive environment, configure one Ethernet network for every 8-10 fully active clients, and never put more than 20-25 clients on one Ethernet regardless of expectations, because activity from that many clients causes sharp degradation. As a common-sense reference point, an Ethernet can support 250-300 NFS operations per second on the SPECsfs (LADDIS) benchmark, even with a high collision rate, but it should not be run above 200 NFS operations per second in steady state.
Configure one Token Ring network for every 10-15 fully active clients in an attribute-intensive environment. If necessary, as many as 50-80 clients can be placed on a Token Ring network, thanks to this network type's excellent resistance to degradation under heavy load (compared with Ethernet). For systems that provide several classes of service to users, a mixed network configuration makes sense. For example, both FDDI and Token Ring suit a server that supports an imaging application (data-intensive) as well as a group of PCs running a financial-analysis application (probably attribute-intensive). Because many computers are general-purpose systems that allow a great increase in the number of attached peripheral devices, it is almost always possible to configure the system so that the main limiting factor is the processor. Under NFS, processor power is spent directly on processing the IP, UDP, RPC, and NFS protocol layers, as well as on managing the devices (disks and network adapters) and manipulating the file system (experience suggests that CPU consumption grows roughly in proportion to the NFS load). Sun, for example, recommends the following empirical rules for configuring NFS servers. If the client environment is dominated by attribute-intensive work and there are fewer than 4-6 Ethernet or Token Ring networks, a uniprocessor system is sufficient as the NFS server. For systems with one or two smaller networks, the processor power of an entry-level machine such as the SPARCserver 4 is enough. For very attribute-intensive work across many networks, a dual-processor system such as the SPARCstation 20 Model 502, or a dual-processor configuration of the SPARCserver 1000 or SPARCcenter 2000, is recommended. If the environment is data-intensive, configure two SuperSPARC processors with high-speed SuperCache per high-speed network (such as FDDI).
If existing wiring constraints dictate the use of Ethernet in such an environment, it is recommended to configure one SuperSPARC processor for every four Ethernet or Token Ring networks.
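The empirical rules above can be collected into a small sizing helper. The clients-per-network ranges are the ones quoted in the text; the helper itself, including its names and the choice to size on the low end of each range, is purely illustrative.

```python
# (workload, medium) -> (low, high) fully active clients per network,
# as quoted in the text.
RULES = {
    ("data", "FDDI"): (5, 7),
    ("data", "Ethernet"): (2, 6),       # 2 fully active, 4-6 total at most
    ("attr", "Ethernet"): (8, 10),      # never exceed 20-25 in any case
    ("attr", "Token Ring"): (10, 15),
}

def networks_needed(workload, medium, active_clients):
    """Networks required, sizing pessimistically on the low end of the range."""
    low, high = RULES[(workload, medium)]
    return -(-active_clients // low)    # ceiling division

print(networks_needed("attr", "Ethernet", 25))  # → 4
print(networks_needed("data", "FDDI", 12))      # → 3
```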

Configuration of disk subsystems and load balancing

Disk configuration, like network configuration, is determined by the type of client load. The throughput of disk drives varies widely with the access pattern. Random access is almost always uncacheable, and virtually every I/O requires the disk to seek (a mechanical movement that significantly reduces throughput). With sequential access, particularly sequential reads, far less mechanical movement of the disk arm is needed per operation (typically one seek per cylinder, roughly every 1 MB), which yields much higher throughput. Experience shows that most applications running in data-intensive environments access data sequentially, even on servers that supply data to many clients, and as a rule the operating system works hard to preserve the sequentiality of access to its devices. So if services are to be provided for data-intensive applications, the configuration should be chosen for a sequential workload. For example, Sun's 2.9 GB disk was its fastest drive for sequential applications: it could deliver data through the file system at 4.25 MB/s. It was also Sun's densest drive and the most convenient for storing large amounts of data. The ratio of its data rate to the SCSI bus speed (peak bus bandwidth 20 MB/s) determines the optimal disk subsystem configuration: 4-5 active 2.9 GB disks per host adapter (DWI/S). If additional storage capacity is needed, attaching more disks to each host adapter is perfectly acceptable, but it will not increase the performance of the disk subsystem. Sun's 2.9 GB disks are mounted in a rack-mount chassis (up to six drives per chassis), and each chassis can be connected to two independent SCSI host adapters. Using this capability is recommended when configuring servers that serve data-intensive clients. For maximum storage capacity, up to 12 drives can be configured on a single DWI/S adapter.
However, maximum performance is reached with only 4-5 active drives. In a sequential-access environment it is easy to calculate how many disks are required for peak periods: every fully active client may demand up to 2.7 MB/s from the disk subsystem (this assumes a high-speed network of 100 Mbit/s or faster). A good first approximation is three 2.9 GB disks for every fully active client. This ratio holds even though each drive can transfer at more than 4 MB/s while a client asks for only 2.7 MB/s, because two active clients sharing one disk would cause constant head movement back and forth between cylinders (or file systems), sharply lowering throughput. To balance the load across drives, and to accelerate certain kinds of data access, special software such as Online: DiskSuite 2.0 can be used (Sec. 4.3.4.3). If the network is Ethernet or 16 Mbit Token Ring, one disk per fully active client suffices. With NFS+ the ratio changes considerably, since NFS+ provides per-client throughput close to the speed of the network. Unlike data-intensive environments, virtually all file access in attribute-intensive environments is random. When files are small, access to metadata dominates access to data: fetching directory entries, inode entries and the first few indirect blocks (seeks are required to collect all the pieces of file-system metadata) precedes each block of user data. As a result the disk arm spends far more time "hunting" between the various pieces of file-system metadata than actually transferring user data, so the selection criteria for an attribute-intensive NFS environment differ materially from those for a data-intensive one. Since the time of a random I/O is dominated by head positioning, the total throughput of a disk in this mode is much lower than under sequential access.
For example, a standard disk drive of 1993 vintage could run at 3.5-4 MB/s in sequential access mode, but under random access delivered only 60-72 operations per second, i.e. roughly 500 KB/s. Under these conditions the SCSI bus is far less busy, so many more drives can be placed on it before bus contention becomes an issue. Moreover, one goal of configuring such a system is to maximize the number of disk arms, since that is the limiting factor of the disk subsystem. Fortunately, the very nature of attribute-intensive applications implies relatively modest storage requirements (compared with data-intensive applications), so it is often useful to configure two or even four smaller drives instead of one large one. While this configuration is slightly more expensive per megabyte of storage, its performance is better: for instance, two 1.05 GB disks cost about 15% more than one 2.1 GB disk but provide more than twice the random I/O throughput, and roughly the same relationship holds between the 535 MB and 1.05 GB drives (see Table 4.2). Thus for an attribute-intensive environment it is better to configure many small disks attached to a moderate number of SCSI host adapters. The 1.05 GB disk has excellent firmware that minimizes SCSI bus load, and the 535 MB disk has similar characteristics. The recommended configuration is 4-5 fully active 535 MB or 1.05 GB drives per SCSI bus, although six or seven disks can operate without causing serious bus contention. Attribute-intensive systems that nevertheless require the 2.9 GB disks (for capacity or other server reasons) perform best with 8 fully active disks on a fast/wide SCSI bus, although 9 or 10 drives can be fitted with only a small degradation of I/O response time.
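The "roughly 500 KB/s" random-access figure is simply the operation rate multiplied by the 8 KB NFS block size; a quick check of that arithmetic:

```python
# Random-access throughput of a 1993-era disk: 60-72 ops/s at 8 KB per op.
BLOCK_KB = 8
for ops in (60, 72):
    print(ops, "ops/s ->", ops * BLOCK_KB, "KB/s")
# 60-72 ops/s gives 480-576 KB/s, i.e. "about 500 KB/s",
# versus 3.5-4 MB/s for the same drive in sequential mode.
```

The order-of-magnitude gap between the two modes is why the arm count, not the platter capacity, is the limiting resource in attribute-intensive configurations.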
As with data-intensive configurations, placing more drives on the SCSI bus adds storage capacity but yields no further gain in performance. It is difficult to give firm recommendations on the number of disk arms an attribute-intensive environment requires, since the load varies widely. In such an environment server response time depends on how quickly attributes can be found and returned to the client. Experience shows that it is often useful to configure at least one disk arm for every two fully active clients to minimize these delays, although latency can also be reduced with additional main memory for caching frequently used attributes. For this reason lower-capacity drives are often preferable: for example, eight 535 MB drives are better than two 2.1 GB drives. One common problem on NFS servers is poor load balancing across disk drives and disk controllers. For example, a three-disk configuration is often used to support a number of diskless clients: the first disk holds the operating system and the server's application binaries; the second, the root file systems and swap areas of all the diskless clients; and the third, the diskless clients' home directories. This configuration is balanced along logical rather than actual physical lines. In such an environment the disk holding the swap areas is usually far busier than either of the other two: it is loaded close to 100% almost all the time while the other two average around 5%. Similar skew often arises in other environments as well. To spread access transparently across several drives, striping and/or mirroring supported by software such as Online: DiskSuite can be used successfully. (Disk concatenation provides only minimal load balancing, and only when the drives are relatively full.)
In a data-intensive environment, striping with a small interlace size increases usable drive bandwidth as well as distributing the service load. Striping markedly improves sequential read and write performance; a good first approximation for the interlace size is 64 KB divided by the number of disks in the stripe. In an attribute-intensive environment, which is characterized by random access, the default interlace (one disk cylinder) is the most appropriate. Although the mirroring feature in DiskSuite is primarily intended to provide resilience against failure, a side effect of its use is improved access times and reduced disk load, since reads can be served by any of the two or three copies of the same data; this is especially true in read-dominated environments. Writes to a mirrored disk are normally slower, since each logical write actually requires two or three physical operations. The computer industry currently recommends loading each disk drive to no more than 60-65%. (Disk load can be found with iostat(1).) In practice, data is rarely laid out in advance so as to achieve the recommended load; several iterations are usually needed, involving moving and reorganizing the data. Moreover, a layout that is balanced today changes with time, sometimes dramatically: a data distribution that worked very well when installed may give very poor results a year later. When optimizing data placement across an existing set of drives, many other second-order considerations also apply.
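The read/write asymmetry of mirroring can be captured in a toy cost model: reads are served by whichever copy is least busy, while writes must reach every copy. The function below is purely illustrative (it counts physical operations only and ignores seek effects):

```python
# Toy cost model for a mirrored volume:
# reads hit one copy, writes fan out to every copy.
def physical_ops(logical_reads, logical_writes, copies=2):
    return logical_reads + logical_writes * copies

# A read-dominated load pays little for a 2-way mirror...
print(physical_ops(90, 10))   # 110 physical ops for 100 logical ops
# ...while a write-heavy load pays much more.
print(physical_ops(10, 90))   # 190 physical ops for 100 logical ops
```

This is why the text recommends mirroring most strongly for read-dominated environments and quotes only a modest write penalty for typical multi-user mixes.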

The fastest zones of the disk.

Many disk drives now supplied by computer companies benefit from an encoding technique called zone bit recording. This encoding exploits the geometry of a rotating disk to pack more data onto the parts of the surface farther from the center: the cylinders with low logical addresses (corresponding to the outer cylinders of the disk) hold more data than the cylinders with high addresses, usually by up to 50%. This chiefly affects the throughput of sequential access (for example, the 2.1 GB disk is specified with a transfer-rate range of 3.5-5.1 MB/s), but it also affects random-access performance: data on the outer cylinders not only passes under the read/write heads faster (hence the higher transfer rate), but those cylinders are simply larger, so a given amount of data occupies fewer of the large cylinders, leading to fewer head movements. The ground rules for configuring disks can be summarized as follows:
- In a data-intensive environment, configure 3 to 5 fully active 2.9 GB disks per fast/wide SCSI host adapter. Include at least three drives per fully active client on an FDDI network, or one drive per fully active client on Ethernet or Token Ring.
- In an attribute-intensive environment, configure roughly 4-5 fully active 1.05 GB or 535 MB disks per SCSI host adapter. Include at least one drive for every two fully active clients (in any network environment).
- Additional drives can be attached to each host adapter without substantial degradation of performance, as long as the number of normally active drives on the SCSI bus does not exceed the guidelines above.
- To balance access across multiple disks, software such as Online: DiskSuite 2.0 is recommended.
- If possible, use the fastest zones of the disk.
Since many UNIX systems (e.g., Solaris) cache file data in virtual memory, most users tend to configure NFS servers with very large main-memory subsystems. But examination of typical NFS client file usage shows that data is in fact retrieved from the server's cache relatively rarely, so additional main memory is usually unnecessary. On a typical server the disk space allotted to clients far exceeds the size of main memory. A repeated request for a block of data could be satisfied from memory rather than from disk; unfortunately, most clients work with their own files and rarely use shared files. Moreover, most applications read a file into memory in its entirety and then close it, and the client rarely goes back to the original file. If no other client uses the same file before the cached data is overwritten, the cached data is never used again. However, no extra overhead is incurred by caching data that subsequently goes unused: if memory is needed to cache other pages, the new data is simply written over the old, and there is no need to flush the old pages to disk, since the memory being reclaimed already has an up-to-date copy on disk. There is certainly little cost in keeping old, unused pages in memory: when the amount of free memory falls below 1/16 of total memory, pages not used in the recent past become available for reuse. The biggest exception to this empirical rule is the temporary file, which is often opened early and closed only at the end of the job. Since the file remains open on the client, its data is mapped (and cached) in the client's memory, and the client's virtual memory subsystem uses the server as backing store for the temporary file.
If the client does not have enough physical memory to keep these pages cached, some or all of the data will be paged out or replaced by new contents during subsequent operations, and repeated references to the data will cause NFS reads to be issued to recover it. In this case the server's ability to cache the data is a definite advantage. Write operations cannot take advantage of the UNIX server's caching, because the NFS protocol requires that every write be fully committed to the remote disk before it is acknowledged, to guarantee a consistent state even in the event of a server failure. The operation is called a synchronous write in the sense that the logical disk operation completes in full before it is confirmed. NFS write requests therefore do not benefit from the buffering and write-behind that UNIX normally applies to writes. (Note that the client can, and usually does, cache writes in the same way as any other disk I/O; the client in any case expects its writes to be committed to "disk", whether that disk is local or remote.) Perhaps the simplest and most useful rule of thumb is the so-called "five-minute rule". It is widely used in configuring database servers, whose much greater complexity makes cache sizing very hard to determine. The current ratio between memory and disk-subsystem prices shows that it is cost-effective to cache data that is referenced more often than once every five minutes. In conclusion, a few simple empirical rules for choosing the memory configuration of an NFS server: if the server mainly provides user data to many clients, configure a modest amount of memory — about 32 MB for a small workgroup and about 128 MB for a large one. In SMP configurations, always provide at least 64 MB per processor.
Attribute-intensive applications usually benefit from increased system memory somewhat more than data-intensive applications do. If space is allocated on the server for temporary files and clients will work very intensively with those files (a good example is Verilog from Cadence), the server's memory should roughly equal the total size of the active temporary files in use on the server. For example, if a client's temporary file is about 5 MB and the server is expected to serve 20 fully active clients, there should be (20 clients x 5 MB) = 100 MB of memory for this purpose (128 MB is of course the most convenient target, which is easily configured). Often, however, such temporary files can be placed in a local directory such as /tmp, which yields significantly higher client performance as well as greatly reduced network traffic. If the server's main task is to store only executables, its memory should roughly equal the size of the heavily used binaries — and do not forget the libraries! For example, a server expected to store /usr/openwin for a workgroup must have enough memory to cache Xsun, cmdtool, libX11.so, libxview.so and libXt.so. This use of NFS differs from the user-data server model in that the same files are supplied to all clients, so the server is genuinely able to cache the data. Typically clients do not use every page of every binary, so it is wise to configure only enough memory to hold the frequently used programs and libraries. The memory can be sized using the five-minute rule: 16 MB for the operating system, plus enough memory to cache the data that will be referenced more often than once every five minutes. Because NFS servers do not run user processes, a large swap space is not needed: for servers, a swap space of about 50% of main memory is more than enough.
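The temporary-file sizing example works out as follows; the round-up to a "convenient" memory size is the text's own 128 MB suggestion:

```python
# Server memory for active temporary files: clients x per-client temp file.
clients, temp_mb = 20, 5
needed = clients * temp_mb
print(needed)  # 100 MB of cache needed

# Round up to the next convenient memory configuration.
configs = [32, 64, 128, 256]
print(min(c for c in configs if c >= needed))  # 128
```

The same pattern (sum the active working set, round up to a standard configuration) applies to the binaries-server case as well.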
Note that much of this is the exact opposite of what everyone expects. Disk operations inherently involve mechanical head movement, so they are slow to complete. UNIX therefore normally buffers writes in main memory and lets the issuing process continue while the operatingating system performs the physical write later. The synchronous-write rule of NFS means that NFS writes are normally very slow — much slower than writes to a local disk. When a client issues a write, the server must update on disk both the data and all the associated file-system metadata. For a typical file, up to 4 disk operations must be completed: one each for the data itself, for the directory-entry information, for the inode recording the date of last modification, and for an indirect block; if the file is large, a second indirect block must also be updated. Before acknowledging the NFS write request, the server must perform all these updates and ensure that they have actually reached the disk. An NFS write can therefore often take 150-200 milliseconds (three or four synchronous writes of more than 40 milliseconds each), compared with the usual 15-20 milliseconds for a write to a local disk. To accelerate NFS writes, servers can use non-volatile memory (NVRAM). This option rests on the fact that the NFS protocol merely requires written data to be committed to stable storage, not necessarily to disk: until the server returns the acknowledgement confirming the write, it may keep the data however it likes. PrestoServe and NVSIMM exploit exactly this semantics. With these devices installed on the server, the NVRAM device driver intercepts synchronous write requests to disk; instead of being sent straight to the drive, the data is written into stable memory and the request is acknowledged as complete — far faster than waiting for the mechanical transfer of the data to the disk.
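The 150-200 ms figure is simply the per-write synchronous disk cost multiplied out; a quick check with the numbers from the paragraph:

```python
# An NFS write commits data + metadata: 3-4 synchronous disk ops,
# each taking a little over 40 ms on this class of hardware.
per_op_ms = 40
print(3 * per_op_ms, "-", 4 * per_op_ms, "ms per NFS write")  # 120 - 160 ms
# i.e. on the order of 150-200 ms once per-op times exceed 40 ms,
# versus 15-20 ms for a buffered write to a local disk.
```

NVRAM removes the mechanical component from each of those synchronous operations, which is where its 2-4x write speedup comes from.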
A short time later the data is actually written to disk. Since a logical NFS write involves three or four synchronous disk operations, NVRAM substantially improves NFS write bandwidth. Depending on the circumstances (the file system, other outstanding disk requests, the size and location of the writes, and so on), NVRAM accelerates NFS writes by a factor of 2-4. For example, typical NFS write throughput under Solaris 2 is about 450 KB/s; with NVRAM it rises to about 950 KB/s, and even higher if the network is faster than Ethernet. NVRAM does not improve read operations at all. From the point of view of the disk subsystem or the NFS clients, the PrestoServe and NVSIMM options are functionally equivalent. The main difference is that NVSIMM is more efficient, because it requires less manipulation of the data: the PrestoServe board physically resides on the SBus, so data must be copied to it across the peripheral bus, whereas NVSIMMs sit directly in main memory, the data to be written is not copied across the peripheral bus at all, and the copy is a fast memory-to-memory operation. For these reasons NVSIMM is preferable when both the PrestoServe and NVSIMM options are available. Because write acceleration matters so much, Sun recommends using NVRAM in virtually all systems providing general-purpose NFS service. The only exception to this rule is servers providing read-only file service; the most typical example is a server storing binaries for a large group of clients (known within Sun as a /usr/dist or softdist server). Because the NVSIMM/PrestoServe device driver itself resides on a disk in the root file system, NVRAM acceleration cannot be applied to the root file system itself: the NVRAM driver must be able to flush modified buffers to disk before any other process becomes active.
If the root file system were accelerated, it could be left "dirty" (inconsistent) after a system crash, and the driver might then fail to load at boot. Another important consideration when comparing servers with and without NVRAM is that this acceleration generally reduces maximum throughput by about 10% (a system using NVRAM must manage the NVRAM cache and keep the copies in the cache and on disk coherent). However, response time improves by around 40%. For example, the maximum throughput of a SPARCserver 1000 on the LADDIS benchmark without NVSIMMs is 2108 operations per second with a 49.4 ms response time; with NVSIMMs the system can perform only about 1928 operations per second, but the average response time drops to about 32 ms. This means that clients perceive an NVRAM-equipped NFS server as much faster than a server without NVRAM, even though total throughput is slightly lower. Fortunately, the 10% penalty is rarely a problem, since the maximum throughput of most systems far exceeds typical loads, which lie in the range of 10-150 operations per second per network.
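The LADDIS comparison above reduces to two ratios; a quick check of the roughly 10% throughput loss and 35-40% response-time gain:

```python
# SPARCserver 1000 on LADDIS, figures from the text:
no_nvram = {"ops": 2108, "rt_ms": 49.4}
nvram    = {"ops": 1928, "rt_ms": 32.0}

thru_loss = 1 - nvram["ops"] / no_nvram["ops"]
rt_gain   = 1 - nvram["rt_ms"] / no_nvram["rt_ms"]
print(f"throughput loss {thru_loss:.0%}, response-time gain {rt_gain:.0%}")
# prints: throughput loss 9%, response-time gain 35%
```

Both ratios are consistent with the text's rounded "about 10%" and "around 40%" figures.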

Backup and fault-tolerance problems.

Backing up file systems and providing fault tolerance on an NFS server pose much the same problems as on any other system. Some suggestions for backup and fault tolerance can be summarized as follows:
- Simple, relatively small backups can be made with one or two tape drives. Their placement on the SCSI buses makes little difference if they are not active during the system's working hours.
- Making fully consistent backups requires locking the file system against modification. Such operations require special software, such as the product Online: Backup 2.0. As in the previous case, the placement of the backup devices on the SCSI buses makes little difference if the copying is done outside working hours.
- Mirrored file systems survive complete drive failures and, in addition, allow continuous access even during a fully consistent backup. Mirroring costs a small amount of disk write throughput (up to 7-8% for random access and 15-20% for sequential access; on a multi-user system, which most servers are, these figures can be expected to drop by about half). Mirroring automatically improves read throughput. When building mirrored file systems, each half of the mirror should be configured on a separate SCSI bus.
- If backups must be made during normal working hours, the copying device should be configured either on its own SCSI bus or on the same SCSI bus as the mirror half that has been detached (and is therefore inactive), to avoid response-time problems.
- If fast file-system restoration is needed in an attribute-intensive environment, NVRAM should be included in the configuration.
- In a data-intensive environment, high-speed mechanical devices such as tape stackers and jukebox storage devices are worth investigating.
Estimating the load on a future system is not very precise work, but a good approximation can often be obtained in advance. There are two basic approaches.
The preferred method is to measure the parameters of an existing system. This gives some confidence in the accuracy of the load estimate, at least for the current time, although it naturally cannot guarantee that the future load will match the present one. The alternative is a rough calculation, useful when the user has no system available to measure. To build a reasonably precise configuration, two things must be known: the NFS operation mix and the total required throughput. The mix shows whether the workload is attribute-intensive or data-intensive. There are many mechanisms for measuring existing systems. The simplest is nfsstat(8), which reports the mix of operations. Since its counters can be reset to zero with the -z flag, nfsstat can also be used to measure throughput from a shell script: the count of NFS calls shows how many requests were served in a given interval, and hence the rate at which NFS operations are being processed. Note that under heavy load the sleep command may actually "sleep" considerably longer than the requested 10 seconds, leading to inaccurate (i.e. overstated) figures; in such cases better tools must be used. Many such tools exist, for example SunNetManager, NetMetrix from Metrix, and SharpShooter from AIM Technology. They make it possible to measure the system's throughput and operation mix under real load. Computing the average throughput usually requires some further processing of the data, for which a variety of tools can be used (awk(1), or a spreadsheet such as WingZ or 1-2-3). If no existing system can be measured, a rough estimate can often be made from the intended use of the system. Making the estimate requires an understanding of how the data will be manipulated by the clients.
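The sampling idea — read the nfsstat call counter twice and divide the difference by the interval — can be sketched as follows. This is an illustrative Python rendering, not the original shell script, and the sample output format below is a simplified stand-in for real nfsstat output, which varies between releases and should be checked on the target system:

```python
import re

def total_calls(nfsstat_output):
    """Pull the server 'calls' counter out of nfsstat -s style output.

    Assumes a 'calls  badcalls' header with the counts on the next line;
    real nfsstat output may differ.
    """
    m = re.search(r"calls\s+badcalls\s*\n\s*(\d+)", nfsstat_output)
    return int(m.group(1))

def ops_per_second(before, after, interval_s):
    return (total_calls(after) - total_calls(before)) / interval_s

# Two hypothetical snapshots taken 10 seconds apart:
snap1 = "Server rpc:\ncalls    badcalls\n123400   7\n"
snap2 = "Server rpc:\ncalls    badcalls\n124600   7\n"
print(ops_per_second(snap1, snap2, 10))  # 120.0 ops/s
```

As the text warns, the real interval between samples should be measured rather than assumed, since a loaded system may sleep longer than requested.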
This method is fairly accurate when the workload falls into the data-intensive category. Some judicious estimates can usually be made for attribute-intensive environments as well, but several factors make such estimates somewhat less accurate. The first step is to build a request model for a fully active client, which requires an understanding of the client's behavior. If the load is data-intensive, it makes sense simply to sum the anticipated read and write operations and take that figure as the per-client load. Attribute operations can generally be neglected in a workload dominated by data operations: on the one hand they represent only a small percentage of all operations, and on the other they demand only a minimal amount of server work compared with fetching data. For example, consider a client workstation running a program that searches for a given temperature in a certain volume of fluid. The model data set for this task is 400 MB, normally read in 50 MB chunks; each chunk is fully processed before the application moves on to the next. Processing each chunk takes approximately five minutes of CPU time, and the resulting files, about 1 MB in size, are saved to disk. Suppose the network environment is FDDI. The peak NFS load arises while the client is reading a 50 MB chunk: at the maximum rate of 2.5 MB/s, the client will be fully active for about twenty seconds, issuing 320 read operations per second. Since one run of the program takes about 40 minutes (2400 seconds) and requires (400 + 1) MB x 125 ops/MB = 50,125 operations, the average rate is about 20 ops/s. The server will be serving requests at the peak rate (320 ops/s) for about 20 seconds out of every five minutes, i.e. about 7% of the time.
Three useful pieces of information can be drawn from this exercise: the average request rate of an active client (20 ops/s), the peak request rate (320 ops/s), and the fraction of time for which the peak rate is demanded. From these an estimate of the overall request rate can be formed. If the system is configured for 10 clients, the average rate is 200 ops/s. (This rate should not be compared with LADDIS results, since the operation mix is very different.) The probability that two clients will demand peak service at once is about 0.07 x 0.07 = 0.0049, or about 0.5%, and three clients will demand peak service simultaneously only about 0.034% of the time. From this information it is prudent to draw the following conclusions: Since the probability that three clients will be active simultaneously is well under 1%, the maximum load will not exceed 2-3 times the individual peak load. Only one network is required, since the maximum anticipated load is only 3 x 2.5 MB/s = 7.5 MB/s, far below the maximum bandwidth of FDDI (12.5 MB/s). Since at most two or three clients will be fully active at any moment, at least 3 to 6 disk drives are needed (although with a 400 MB model file it is quite likely that more than six disks will be needed just for data storage), along with at least two SCSI host adapters. Because the system uses a high-speed network, a server with two SuperSPARC/SuperCache processors should be used. Since a very large file cache is unlikely to be useful, the server needs only a minimal amount of main memory; 128 MB is sufficient. If the disk farm is relatively small, say about 16 GB, a SPARCstation 10 Model 512 could cope with the task quite well: one SBus slot is needed for the FDDI interface, and the remaining three slots can hold SCSI host adapters, for a total of four FSBE/S-type interfaces, each connecting drives with a total capacity of 4.2 GB.
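The whole estimate for this example can be reproduced in a few lines; the constants are those used in the text (8 KB per operation, i.e. 125 ops/MB):

```python
OPS_PER_MB = 125          # 8 KB NFS operations per megabyte

data_mb   = 400           # model data set read per run
result_mb = 1             # results written back per run
run_s     = 40 * 60       # one run takes about 40 minutes

total_ops = (data_mb + result_mb) * OPS_PER_MB
print(total_ops)                          # 50125 ops per run
print(round(total_ops / run_s, 1))        # 20.9, the ~20 ops/s average

peak_ops = int(2.5 * OPS_PER_MB)          # reading at 2.5 MB/s over FDDI
print(peak_ops)                           # 312, i.e. the ~320 ops/s peak

p_active = 0.07                           # peak demanded ~7% of the time
print(round(p_active**2, 4))              # 0.0049: two clients at peak at once
print(round(p_active**3, 6))              # 0.000343: three clients at peak at once
```

The coincidence probabilities fall off so fast that sizing for two to three simultaneous peaks, as the text does, is already conservative.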
However, this application might be better served by a SPARCserver 1000, which offers greater memory capacity: a system with two system boards permits a configuration with seven SCSI host adapters and 28 GB of disk storage (one 4.2 GB multi-disk pack per FSBE/S board, not counting the four internal 535 MB disks). If still larger disk capacity is needed, a SPARCcenter 2000 with two system boards can be configured, giving six DWI/S interfaces and up to 12 chassis of 2.9 GB drives — about 208 GB of storage. All of the proposed systems can be configured without NVSIMMs occupying SBus slots, and all easily sustain the required two processors. NVSIMMs are not very important here because the proportion of writes is so small (less than 1:400, or 0.25%). Note that in choosing configurations for data-intensive applications it is generally not very useful to compare the estimated request rate with the server's SPECsfs_097 rating, since the operation mixes are so different that the loads cannot be compared. Fortunately, estimates of this kind are usually fairly accurate.

Estimating an attribute-intensive load

The previous example assumed that the NFS load from attribute operations was negligible compared with data operations. If that is not the case — for example, in a software-development environment — some assumptions must be made about the expected NFS operation mix. In the absence of other information, the so-called Legato mix can be taken as a sample; SPECsfs_097 (also known as LADDIS) uses precisely this mix, in which data operations account for 22% reads and 15% writes. Consider a client workstation whose most intensive job is recompiling a software system of 25 MB of source code. We know that the workstation can compile the system in about 30 minutes, generating about 18 MB of intermediate object code and binaries. From this we can conclude that the client will write 18 MB to the server and read at least 25 MB (probably more, since almost a third of the source consists of header files included by many source modules). A caching file system can be used to avoid rereading these files; suppose CFS is used. During the "build", 33 MB of real data must be moved, i.e. 33 MB x 125 ops/MB = 4125 data operations in 30 minutes (1800 seconds), which is approximately 2.3 ops/s. (This assumes each operation moves 8 KB of data, so transferring 1 MB requires 125 operations.) Because this workload is attribute-intensive, a substantial number of attribute operations missing the cache must also be estimated. Assuming the mix matches the Legato mix, the overall rate can be obtained by scaling: from the writes alone we have (18 MB x 125 ops/MB) / 15% / 1800 seconds, or 8.33 ops/s.
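Assuming the Legato proportions (15% writes), the rate estimate above works out like this:

```python
OPS_PER_MB = 125            # 8 KB per NFS data operation

build_s  = 30 * 60          # one rebuild takes about 30 minutes
data_mb  = 33               # real data moved during the build, per the text
write_mb = 18               # object code and binaries written back

data_ops = data_mb * OPS_PER_MB
print(data_ops, round(data_ops / build_s, 1))   # 4125 ops, 2.3 ops/s of data traffic

# Scale the write side up by its share of the Legato mix (15% writes)
# to estimate the total NFS rate, attribute operations included.
WRITE_SHARE = 0.15
total_rate = write_mb * OPS_PER_MB / WRITE_SHARE / build_s
print(round(total_rate, 2))                     # 8.33 ops/s per client
```

Scaling a known component of the load by its expected share of the mix is the general trick here; it works with any assumed mix, not only Legato's.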
The resulting combination of reads and writes is quite similar to the Legato mix, though it might not be in other cases — for example, if source-code browser files were also stored (browser source files are often 4-6 times the size of the source code). In this case we have no way of estimating the peak rate. If 20 workstations operate in the mode described above, we can conclude the following: Even under the incredible assumption that all 20 workstations are fully active at all times, the overall request rate is 8.33 ops/s x 20 clients, or 166 ops/s, below the maximum of about 200 ops/s that Ethernet supports. Cautious people would configure two networks for such a load, but if logistical considerations rule that out in advance, a single network will probably suffice. Since the load is light, a SPARCstation 10 Model 40 is more than adequate (even in the very worst case with two networks), and the processing capacity of a SPARCclassic is also usually sufficient. Although the total volume of data is very small (25 MB of source code and 18 MB of object code; even 20 complete copies amount to only 660 MB), the recommended disk configuration is two 535 MB disks. With CFS even one disk may suffice, since the header files are read from the server only once (the clients cache them). With one or two drives, a single SCSI bus is quite enough. Since the volume of data is very small and most of it will be read and sent to many clients repeatedly, it certainly makes sense to configure enough memory to cache all of it: 16 MB for the base operating system plus a 25 MB cache for the source code suggests a configuration of 48-64 MB. Because writes are frequent in this environment, NVSIMMs or PrestoServe would be a significant benefit. The final choice can be either an entry-level SPARCstation 10 or a well-configured SPARCclassic.
As a sanity check, note that the maximum rate of 166 ops/sec is only about 70% of what a SPARCclassic delivers on the LADDIS benchmark (236 ops/sec); remember that the 166 ops/sec figure assumed all 20 clients were fully active at all times, although real usage logs show this never actually happens. The required peak load is less than half of what a SPARCstation 10 Model 40 shows on LADDIS (411 ops/sec). Comparison with LADDIS figures is appropriate here because this is an attribute-intensive situation, and LADDIS itself uses an attribute-intensive mix. Table 4.3 gives LADDIS results for various Sun NFS servers running Solaris 2.3; slightly (about 5%) higher rates are achieved with FDDI. If no system is available for measurement and the application's behavior is not well understood, the load can be estimated from a similar application, as shown in Tables 4.4-4.6. These figures give some indication of measured NFS loads, but they do not promise an exact picture of what to expect from a particular workload. In particular, note that the data in these tables represent the maximum anticipated loads from real clients, since the figures reflect only the periods when the clients were actively issuing NFS requests. As noted above in Section 3.1.4, systems are almost never fully active at all times. A notable exception to this rule is compute servers, which really are continuously busy batch machines. For example, the workload of a 486/33 running Lotus 1-2-3 is shown in Table 4.2 and Figure 4.2. Although the peak load shown in the table is 80 ops/sec, the figure makes it clear that the load exceeds 10 ops/sec less than 10% of the time, and the five-minute average is far below 10 ops/sec. Averaged over a longer period, the load is approximately 0.1 ops/sec. Most workstations of the SPARCstation 2 or SPARCstation ELC class average about 1 op/sec, and most more powerful clients, such as the SPARCstation 10 Model 51, Model 512, HP 9000/735 or RS/6000-375, average 1-2 ops/sec.
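The headroom comparison against the LADDIS ceilings can be expressed directly; a minimal sketch using the two benchmark figures quoted in the text (236 and 411 ops/sec):

```python
# LADDIS throughput ceilings quoted in the text for the candidate servers.
LADDIS_OPS = {
    "SPARCclassic": 236,
    "SPARCstation 10 Model 40": 411,
}

def utilization(peak_ops, server):
    """Fraction of the server's LADDIS throughput the peak load consumes."""
    return peak_ops / LADDIS_OPS[server]

for server in LADDIS_OPS:
    print(server, f"{utilization(166, server):.0%}")
# SPARCclassic: ~70% utilized; SPARCstation 10 Model 40: ~40% utilized
```

Either server leaves headroom even under the worst-case assumption that all clients are simultaneously active, which is why the text concludes both configurations are viable.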
Of course, these figures vary significantly depending on the individual user and application.
Copyright 2004-2011 OwlCom Software .
