
NFS and wide-area networks

In real life, situations arise in which the NFS client and server are located on different networks joined by routers. Network topology can greatly affect the performance of NFS service as perceived by the user, so the effectiveness of NFS service across an internetwork should be carefully monitored. It is nevertheless quite possible to configure the network and the applications successfully in a wide-area NFS topology.

Perhaps the most important issue in this situation is operation latency: the time that elapses between issuing a request and receiving the response. On a LAN, latency matters less, because the relatively short distances involved cannot introduce significant transmission delays. On a wide-area network, delays arise simply in carrying packets from one point to another. Packet delay has several components:

- Router delay: routers spend measurable (and often substantial) time routing packets from one network to another. Note that most wide-area links (even a line between two adjacent buildings) involve at least two routers. Figure 4.6 shows a typical university campus topology, in which there are usually three or even four routers between client and server.

- Transmission-medium delay: the physical medium used to carry packets across a wide-area network can add its own significant delay on top of the router delay. Satellite bridges, for example, are often associated with very long delays.

- Transmission errors: a wide-area network can be an order of magnitude more sensitive to transmission errors than most LANs. These errors cause significant retransmission of data, which both increases operation latency and reduces the effective capacity of the network. On networks that are highly error-prone, the NFS transfer block size is often set to 1 KB instead of the usual 8 KB.
This reduces the amount of data that must be retransmitted when an error occurs. If the error rate can be kept at an acceptable level, file service over a wide-area network is feasible.

Most wide-area configurations use high-speed synchronous serial point-to-point links, connected to one or more LANs at each end. In the United States such serial lines usually run at 1.544 Mbit/s (T1 lines) or 56 Kbit/s. European carriers offer slightly higher speeds: 2.048 Mbit/s (E1 lines) or 64 Kbit/s respectively. Even faster links exist: leased lines known as T3 offer transmission speeds up to 45 Mbit/s (5.3 MB/s). Today the majority of T3 lines are used for data transmission.

At first glance these lines appear far slower than the LANs they connect. However, a fast serial line (T1) provides bandwidth much closer to the real capacity of a LAN than the raw numbers suggest. The reason is that a serial line can be driven at almost 100% of its capacity without incurring excessive overhead, whereas Ethernet usually begins to suffer at about 440 KB/s (3.5 Mbit/s), only about twice the bandwidth of a T1 line. For this reason, file service over high-speed serial links is possible and can move data at an acceptable rate; it is particularly useful for transferring data between remote offices.

Attribute-intensive NFS applications can work successfully over a wide-area network, provided operation latency is not critical. Over a wide-area network, short packets pass through each segment promptly (at high throughput), although routing and medium delays still often add considerable operation latency.

Conclusions: T1, E1, or T3 serial lines are suitable for wide-area NFS service. For most NFS use, 56 and 64 Kbit/s lines are not fast enough. NFS over a wide-area network suffers from latency and routing problems.
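The bandwidth comparison above is simple arithmetic. The sketch below uses the figures quoted in the text; the near-100% serial utilization is the text's assumption, and the variable names are mine:

```python
# Back-of-the-envelope comparison of usable bandwidth: a T1 serial line
# versus a 10 Mbit/s Ethernet, using the figures quoted in the text.

T1_RAW_BPS = 1.544e6           # raw T1 line rate
ETHERNET_RAW_BPS = 10e6        # raw 10 Mbit/s Ethernet rate

t1_usable_bps = T1_RAW_BPS * 1.00   # serial links run near full capacity
ether_usable_bps = 440e3 * 8        # Ethernet starts to suffer near 440 KB/s

ratio = ether_usable_bps / t1_usable_bps
print(f"usable Ethernet / usable T1 = {ratio:.1f}x")   # roughly 2x, as the text says
```

The raw-rate ratio is about 6.5x, but the usable-rate ratio comes out near 2x, which is why a T1 link can provide acceptable file service between remote offices.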
Network bandwidth is usually not the problem. To reduce wide-area traffic significantly, the client-side Cache File System (CFS) can be used, unless the traffic is dominated by NFS writes. Given these considerations, the following empirical rules can be used to determine the appropriate type and number of networks:

- If the applications are data-intensive, choose FDDI or another high-speed network. If laying fibre-optic cable is impossible for logistical reasons, consider implementing FDDI over twisted pair. When planning a new system, bear in mind that ATM networks use the same cabling as FDDI.

- Configure one FDDI ring for every 5-7 clients that are fully active in the NFS sense and work intensively with data. Remember that very few data-intensive applications generate NFS requests continuously. Typical data-intensive applications, such as electronic design automation and earth-resource imaging systems, can often run with as many as 25-40 clients per ring.

- In data-intensive systems where the existing cable plant forces the use of Ethernet, configure a separate Ethernet for every two fully active clients, and no more than 4-6 clients per network.

- If the applications are attribute-intensive, Ethernet or Token Ring is sufficient. In an attribute-intensive environment, configure one Ethernet for every 8-10 fully active clients. Never exceed 20-25 clients per Ethernet regardless of expected activity, because of the sharp degradation that occurs when many clients are active at once. As a point of reference, common sense suggests an Ethernet can support 250-300 NFS operations per second on the SPECsfs_97 (LADDIS) benchmark, even with a high collision rate; do not exceed 200 NFS operations per second in steady state.
- Configure one Token Ring network for every 10-15 fully active clients in an attribute-intensive environment. If necessary, 50-80 clients can be attached to a Token Ring, thanks to this network type's excellent resistance to degradation under heavy load (compared with Ethernet).

For systems that provide several classes of service to users, a mixed network configuration makes sense. For example, both FDDI and Token Ring suit a server that supports imaging applications (data-intensive) as well as a group of PCs running a financial-analysis application (probably attribute-intensive).

Because most computers are general-purpose systems that readily accept a large number of attached peripheral devices, it is almost always possible to configure a system so that the processor is the main limiting factor. Under NFS, processor power is spent directly on processing the IP, UDP, RPC, and NFS protocols, as well as on managing devices (disks and network adapters) and manipulating the file system (these consume CPU time roughly in the order listed). For example, Sun recommends the following empirical rules for configuring NFS servers:

- If the environment is attribute-intensive and there are fewer than 4-6 Ethernets or Token Rings, a uniprocessor system is sufficient for work as an NFS server. For systems with one or two smaller networks, the processor power of an entry-level machine such as the SPARCserver 4 is enough. For very attribute-intensive environments with many networks, a dual-processor system such as the SPARCstation 20 Model 502, or a dual-processor configuration of the SPARCserver 1000 or SPARCcenter 2000, is recommended.

- If the environment is data-intensive, configure two SuperSPARC processors with high-speed SuperCache for each network (such as FDDI).
- If existing wiring constraints dictate the use of Ethernet in such an environment, configure one SuperSPARC processor for every 4 Ethernets or Token Rings.
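The empirical rules above lend themselves to a small sizing calculator. The sketch below is a minimal illustration of those rules of thumb; the function names, the choice of midpoint values, and the rounding are mine, not from the text:

```python
import math

# Fully active clients supported per network, per the empirical rules above
# (midpoints chosen where the text gives a range).
CLIENTS_PER_NETWORK = {
    ("fddi", "data"): 6,        # 5-7 data-intensive clients per FDDI ring
    ("ethernet", "data"): 2,    # a separate Ethernet per 2 fully active clients
    ("ethernet", "attr"): 9,    # 8-10 attribute-intensive clients per Ethernet
    ("tokenring", "attr"): 12,  # 10-15 attribute-intensive clients per ring
}

def networks_needed(clients: int, medium: str, workload: str) -> int:
    """Networks of the given type needed for `clients` fully active clients."""
    per_net = CLIENTS_PER_NETWORK[(medium, workload)]
    return math.ceil(clients / per_net)

def cpus_needed(networks: int, medium: str, workload: str) -> int:
    """SuperSPARC processors, per the Sun rules of thumb quoted above."""
    if workload == "data":
        # Two processors per high-speed network; one per 4 Ethernets/Token Rings.
        return 2 * networks if medium == "fddi" else math.ceil(networks / 4)
    # Attribute-intensive: uniprocessor up to ~4-6 networks, dual beyond that.
    return 1 if networks <= 5 else 2

# Example: 20 fully active data-intensive clients on FDDI.
nets = networks_needed(20, "fddi", "data")
print(nets, "FDDI rings,", cpus_needed(nets, "fddi", "data"), "processors")
```

The point of the sketch is only that the rules compose mechanically: client count fixes the network count, and the network count and workload type fix the processor count.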

Configuration of disk subsystems and load balancing

Disk configuration, like network configuration, is determined by the type of client. Disk drive throughput varies widely with the access pattern. Random access is almost always uncacheable and requires a real disk seek for every I/O (a mechanical movement that sharply reduces throughput). With sequential access, particularly sequential reads, far less mechanical head movement is needed per operation (usually one seek per cylinder, roughly 1 MB), which yields much higher throughput.

Experience shows that most applications run in data-intensive environments access data sequentially, even on servers that provide data to many clients, and as a rule the operating system already works hard to order access to its devices. So if services are to be provided for data-intensive applications, a configuration suited to a sequential environment should be chosen. For example, Sun's 2.9 GB disk was the fastest Sun drive for sequential applications: it could deliver data through the file system at 4.25 MB/s. It was also Sun's densest drive, and the most convenient for storing large amounts of data. Its high data rate relative to the SCSI bus speed (peak bus bandwidth of 20 MB/s) determines the optimal disk-subsystem configuration: 4-5 active 2.9 GB disks per host adapter (DWI/S). If additional storage capacity is needed, attaching more disks to each host adapter is perfectly acceptable, but it will not increase disk-subsystem performance. Sun's 2.9 GB disks ship in rack-mountable chassis (up to six drives per chassis), and each chassis can be connected to two independent SCSI host adapters; this arrangement is recommended when configuring servers that serve data-intensive clients. For maximum storage capacity, up to 12 drives can be configured on a single DWI/S adapter.
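The 4-5 disk figure falls straight out of the numbers above. A minimal sketch, using the per-disk rate and bus bandwidth quoted in the text (the variable names are mine):

```python
SEQ_DISK_RATE_MBPS = 4.25   # 2.9 GB drive, sequential, through the file system
BUS_PEAK_MBPS = 20.0        # peak SCSI bus bandwidth quoted in the text

# How many drives running flat-out saturate one bus: 20 / 4.25 ≈ 4.7,
# which is why 4-5 active drives per host adapter is the sweet spot.
# Drives beyond that add capacity but no throughput.
disks_to_saturate = BUS_PEAK_MBPS / SEQ_DISK_RATE_MBPS
print(f"{disks_to_saturate:.1f} active drives saturate the bus")
```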
Maximum performance, however, is reached with only 4-5 drives. In a sequential-access environment it is straightforward to calculate how many disks are required for peak periods: each fully active client can demand up to 2.7 MB/s from the disk subsystem (this assumes networks of 100 Mbit/s or faster). A good first approximation is one 2.9 GB disk for every three fully active clients. This ratio holds even though each drive can transfer more than 4 MB/s while each client asks for only 2.7 MB/s: with two active clients on the same disk, the head would constantly seek back and forth between cylinders (or file systems), sharply lowering throughput. To balance load across drives, and to accelerate certain kinds of data transfer, software such as Online: DiskSuite 2.0 can be used. If the network environment uses Ethernet or 16 Mbit/s Token Ring, one disk per fully active client is sufficient. If NFS+ is used, these ratios change greatly, since NFS+ gives each client individual throughput on the order of the network medium's speed.

Unlike data-intensive environments, virtually all file access in attribute-intensive environments is random. When files are small, data access is dominated by fetching directory entries, inode entries, and the first few indirect blocks (a seek is required to reach each piece of metadata), plus each block of user data. As a result, the disk head spends far more time "hunting" between different pieces of file-system metadata than actually reading user data, and the selection criteria for attribute-intensive NFS environments differ materially from those for data-intensive environments. Since the time for a random I/O operation is dominated by the disk seek, total disk throughput in this mode is much lower than with sequential access.
For example, a standard disk drive manufactured in 1993 could run at 3.5-4 MB/s in sequential access mode, but in random access mode delivered only 60-72 operations per second, a rate of about 500 KB/s. Under these conditions the SCSI bus is far less busy, so many more drives can be placed on it before bus contention becomes an issue. In addition, one of the goals of system configuration here is the most sensible number of disk drives, since the number of disk arms is the limiting factor in the disk subsystem. Fortunately, the very nature of attribute-intensive applications means their disk-capacity requirements are relatively modest (compared with data-intensive applications). Under these conditions it is often useful to configure two or even four smaller drives instead of one large one. Although such a configuration is somewhat more expensive per megabyte, its performance is better. For instance, two 1.05 GB disks cost about 15% more than one 2.1 GB disk, but provide more than twice the random-I/O throughput. Roughly the same relationship holds between the 535 MB and 1.05 GB drives (see Table 4.2).

Thus, for attribute-intensive environments it is better to configure many small disks attached to a moderate number of SCSI host adapters. The 1.05 GB disk has excellent firmware that minimizes SCSI bus load, and the 535 MB disk has similar characteristics. The recommended configuration is 4-5 fully active 535 MB or 1.05 GB drives per SCSI bus, although six or seven disks can work without causing serious bus contention. In attribute-intensive systems that nevertheless require 2.9 GB disks (for capacity or server reasons), the best performance is achieved with 8 fully active disks per fast/wide SCSI bus, though 9 or 10 drives can be fitted with only a small degradation in I/O response time.
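The random-access arithmetic above is easy to check. A minimal sketch using the 1993-era figures from the text; the 8 KB transfer size is my assumption, and the "two small drives versus one large" comparison simply doubles the arms:

```python
RANDOM_IOPS = 66          # 60-72 random operations per second, mid-range value
BLOCK_KB = 8              # assumed transfer size of 8 KB per operation

random_rate_kbps = RANDOM_IOPS * BLOCK_KB   # about 500 KB/s, as the text states

# Two 1.05 GB drives vs one 2.1 GB drive: same capacity, twice the disk arms,
# so roughly twice the random-I/O throughput for ~15% more money.
one_big_iops = RANDOM_IOPS
two_small_iops = 2 * RANDOM_IOPS
cost_premium = 0.15

print(f"random throughput per drive ≈ {random_rate_kbps} KB/s")
print(f"{two_small_iops / one_big_iops:.0f}x the random I/O for {cost_premium:.0%} more money")
```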
As with data-intensive systems, configuring more drives per SCSI bus provides additional storage capacity but yields no further gain in performance. It is difficult to give firm recommendations on the number of disk arms an attribute-intensive environment requires, since the load varies widely. In such an environment, server response time depends on how quickly attributes can be found and returned to the client. Experience shows that it is often useful to configure at least one disk drive for every two fully active clients to minimize seek delays; latency can also be reduced with additional main memory, which allows frequently used attributes to be cached. For this reason lower-capacity drives are often preferable: for example, it is better to use eight 535 MB drives than two 2.1 GB drives.

One common problem on NFS servers is poor load balancing across disk drives and disk controllers. For example, to support a set of diskless clients, a three-disk configuration is often used: the first disk holds the operating system and the server's application binaries; the second, the root and swap file systems of all the diskless clients; and the third, the diskless clients' home directories. This configuration is balanced by logical function, not by the actual physical load on the disks. In such an environment the drive holding the diskless clients' swap areas is usually far busier than either of the other two: it is loaded close to 100% of the time, while the other two average about 5%. This kind of skewed distribution also arises in other environments.

To spread access transparently across several disk drives, striping and/or mirroring, supported by software such as Online: DiskSuite, can be used successfully. (Disk concatenation provides minimal load balancing, and only when the drives are relatively full.)
In a data-intensive environment, striping with a small interlace increases disk throughput as well as distributing the load. Striping significantly improves sequential read and write performance. A good initial value for the interlace is 64 KB divided by the number of disks in the stripe. In an attribute-intensive environment characterized by random access, the default interlace (one disk cylinder) is the most appropriate.

Although the disk mirroring in DiskSuite is primarily designed for resilience against failure, a side effect of its use is improved access time and reduced disk load, since two or three copies of the same data are available. This is especially true in read-dominated environments. Writes to a mirrored disk are usually slower, since each logical write actually requires two or three physical operations.

The computer industry currently recommends a maximum load of 60-65% for each disk drive. (Disk load can be measured with iostat(1).) In practice, the distribution of data is rarely planned in advance so that the recommended loads are achieved; several iterations are usually required, involving moving and reorganizing the data concerned. Moreover, the load distribution across disks changes over time, sometimes dramatically: a data layout that performed very well at installation time can give very poor results a year later. And when optimizing the layout of data on an existing set of disk drives, there are many other second-order considerations.
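The striping and load-check arithmetic above can be sketched as follows. The helper names are mine, the 64 KB starting point and the 60-65% threshold come from the text, and the iostat-style busy percentages are made-up examples matching the diskless-client scenario described earlier:

```python
def stripe_interlace_kb(n_disks: int) -> float:
    """Initial interlace for a sequential, data-intensive stripe: 64 KB / n."""
    return 64.0 / n_disks

def overloaded(busy_percent: dict[str, float], limit: float = 65.0) -> list[str]:
    """Disks whose utilization (e.g. from iostat) exceeds the recommended limit."""
    return [disk for disk, busy in busy_percent.items() if busy > limit]

# A four-disk stripe starts out with a 16 KB interlace per member.
print(stripe_interlace_kb(4), "KB interlace")

# The unbalanced diskless-client example: the swap disk runs near 100% busy
# while the other two idle at about 5%, so only it exceeds the 60-65% limit.
busy = {"sd0": 5.0, "sd1": 100.0, "sd2": 5.0}
print("overloaded:", overloaded(busy))
```

In practice the busy figures would come from repeated iostat runs, and data would be moved off any drive the check flags, iterating until all drives sit under the threshold.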

Copyright 2004-2011 OwlCom Software.
