OwlCom Software
Let's Owl manage your files

The best zones of disk drives.

Many disk drives now supplied by computer companies benefit from an encoding scheme called "zone-bit recording" (ZBR). This scheme exploits the geometry of a rotating disk to pack more data onto the tracks that lie farther from the center of the platter. The practical effect is that the number of sectors at lower block addresses (which correspond to the outer cylinders of the disk) exceeds the number at higher addresses, typically by about 50%. This encoding primarily affects sequential access performance (for example, a 2.1 GB disk may specify a transfer-rate range of 3.5-5.1 MB/s), but it affects random access performance as well. Data located on the outer cylinders not only passes under the read/write head faster (and therefore transfers at a higher rate), but those cylinders are also simply larger: a given amount of data can be spread across fewer of the large outer cylinders, which means fewer mechanical head movements.

Ground rules for configuring disk drives can be summarized as follows:
- In a data-intensive environment, configure 3 to 5 fully active 2.9 GB disks for each fast/wide SCSI host adapter. Provide at least three disk drives for each fully active client on an FDDI network, or one drive for each fully active client on Ethernet or Token Ring.
- In an attribute-intensive environment, configure about 4-5 fully active 1.05 GB or 535 MB disks for each SCSI host adapter. Provide at least one drive for every two fully active clients (in any network environment).
- Additional drives can be attached to each host adapter without substantial loss of performance, as long as the number of normally active drives on the SCSI bus does not exceed the guidelines above.
- To balance the load across multiple disks, software such as Online: DiskSuite 2.0 can be recommended.
- If possible, use the fastest zones of the disk.
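The effect of zone-bit recording on sequential transfers can be illustrated with a small calculation. This is only a sketch; the 3.5 and 5.1 MB/s rates are the ones quoted above for a 2.1 GB disk, and the 100 MB transfer size is an arbitrary illustration:

```python
# Sequential transfer time for the same data placed in the slowest
# (inner) versus the fastest (outer) zone of a ZBR disk.
def transfer_seconds(megabytes, mb_per_sec):
    return megabytes / mb_per_sec

inner = transfer_seconds(100, 3.5)  # inner-zone rate quoted in the text
outer = transfer_seconds(100, 5.1)  # outer-zone rate quoted in the text
print(f"inner zone: {inner:.1f} s, outer zone: {outer:.1f} s")
```

Reading the same 100 MB from the outer cylinders takes roughly 30% less time, which is why the last rule above says to use the fastest zones when possible.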
Because many UNIX systems (for example, Solaris) cache file data in virtual memory, most users tend to configure NFS servers with very large main memory subsystems. However, observation of typical NFS client file usage shows that data is actually retrieved from the server's buffer cache relatively rarely, so the extra main memory is usually unnecessary. On a typical server, the disk space allotted to clients far exceeds the amount of main memory in the system. Repeated requests for the same data blocks could be satisfied from memory rather than from disk, but unfortunately most clients work with their own files and rarely use shared files. Moreover, most applications read an entire file into memory and then close it; the client rarely returns to that file again. If no other client uses the same file before its blocks are overwritten in the cache, the cached data is never used again. Even so, no additional overhead is incurred by caching data that is never reused: if memory is needed to cache other data, the new data is simply written into those pages. There is no need to flush the old pages to disk, since the data they hold is already on disk; the only cost is keeping old, unused pages in memory for a while. When the amount of free memory falls below 1/16 of total memory, pages that have not been used in the recent past become available for reuse.

The biggest exception to this empirical rule is the temporary file, which is often opened at the start of a job and closed at the end of it. Because the file remains open on the client, the data associated with it is mapped (and cached) in the client's memory, and the client's virtual memory subsystem uses the server as backing store for the temporary file.
If the client does not have enough physical memory to keep these pages cached, some or all of the data will be paged out or replaced by new contents during subsequent operations, and repeated references to that data will cause NFS reads to be issued to recover it. In this case the server's ability to cache the data is a definite advantage.

Write operations cannot take advantage of the UNIX server's cache in the same way, because the NFS protocol requires that every write be fully committed to the remote disk before it is acknowledged, in order to guarantee a consistent state even in the event of a server crash. The operation is called a synchronous write in the sense that the logical disk operation is fully completed before it is confirmed. Thus NFS write requests do not benefit from the buffering and write-behind that UNIX normally applies to writes. (Note that the client can, and usually does, cache writes in the same manner as any other disk I/O; in any case the client must be able to verify that writes flushed to "disk" have completed, whether the disk is local or remote.)

Perhaps the simplest and most useful empirical rule for sizing memory is the so-called "five-minute rule". This rule is widely used in configuring database servers, whose much greater complexity makes it very difficult to determine cache sizes analytically. The current ratio between memory prices and disk subsystem prices indicates that it is cost-effective to cache data that is referenced more often than once every five minutes.

In conclusion, a few simple empirical rules for choosing the memory configuration of an NFS server: if the server essentially supplies private data to many clients, configure the minimum amount of memory, which is usually 32 MB for small workgroups and about 128 MB for large groups. In SMP configurations, always provide at least 64 MB per processor.
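The five-minute rule can be sketched as a break-even computation. The prices below are placeholders of my own choosing, not figures from the text, so only the shape of the formula matters:

```python
# Break-even reference interval for caching: keep a page in RAM when it is
# re-read more often than once per this many seconds.
def breakeven_seconds(disk_price, disk_ops_per_sec, mem_price_per_mb, page_kb=8):
    cost_per_op_per_sec = disk_price / disk_ops_per_sec      # $ per (op/s) of disk
    pages_per_mb = 1024 // page_kb
    cost_per_cached_page = mem_price_per_mb / pages_per_mb   # $ to hold one page
    return cost_per_op_per_sec / cost_per_cached_page

# Hypothetical mid-1990s prices: a $2000 disk doing 60 ops/s, $30 per MB of RAM.
print(round(breakeven_seconds(2000, 60, 30)))
```

With those placeholder prices the break-even interval comes out at a few minutes, which is the spirit of the rule; as RAM gets cheaper relative to disk operations, the interval grows.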
Attribute-intensive applications usually benefit from extra memory somewhat more than data-intensive applications do. If space is allocated on the server for temporary files and clients will work very intensively with those files (Verilog from Cadence is a good example), the server should be configured with memory roughly equal to the amount of active temporary file data on the server. For example, if each client's temporary file is about 5 MB and the server is expected to serve 20 fully active clients, there should be 20 clients x 5 MB = 100 MB of memory (128 MB is, of course, the most convenient target, and is easily configured). Often, however, such temporary files can be placed in a local directory such as /tmp, which gives significantly higher client performance and also significantly reduces network traffic.

If the server's main task is to store only executable files, configure server memory approximately equal to the set of heavily used binaries. Do not forget the libraries! For example, a server intended to store /usr/openwin for a workgroup must have enough memory to cache Xsun, cmdtool, libX11.so, libxview.so, and libXt.so. This NFS application differs significantly from the usual server model in that it supplies the same files to all of its clients and is therefore able to cache that data effectively. Typically, clients do not use every page of every binary, so it is wise to configure only enough server memory to hold the frequently used programs and libraries. That memory can be sized by the five-minute rule: 16 MB for the operating system plus enough memory to cache the data that is referenced more often than once every five minutes.

Because NFS servers do not run user processes, a large swap space is unnecessary: swap space of about 50% of main memory is more than enough.
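The temporary-file sizing rule from the example above reduces to a one-line calculation. This is a sketch of my own; the helper name is invented, and I have added the 16 MB operating-system allowance mentioned in the five-minute-rule paragraph to the text's 100 MB figure:

```python
def temp_file_memory_mb(active_clients, temp_file_mb, os_mb=16):
    """Server memory needed to hold every active client's temporary file,
    plus a base allowance for the operating system itself."""
    return os_mb + active_clients * temp_file_mb

# 20 fully active clients with ~5 MB temporary files each -> 116 MB,
# for which 128 MB is the convenient configurable target.
print(temp_file_memory_mb(20, 5))
```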
Note that NFS write behavior is the exact opposite of what most people expect. Disk operations are inherently tied to mechanical head movement, so they are slow. Usually UNIX buffers writes in main memory and lets the issuing process continue, while the operating system physically writes the data to disk later. The synchronous write principle of NFS means that NFS writes are normally very slow, much slower than writes to a local disk. When a client issues a write request, the server is required to update the data on disk along with all associated file system metadata. For a typical file, up to 4 disk operations must be completed: the operation must update the data itself, the directory information, the inode recording the date of last modification, and an indirect block; if the file is large, a second indirect block must be updated as well. Before acknowledging the NFS write request, the server must perform all of these updates and ensure that they have actually reached the disk. An NFS write operation can therefore often take 150-200 milliseconds (three or four synchronous writes at more than 40 milliseconds each), compared with the usual 15-20 milliseconds for a write to a local disk.

To accelerate NFS writes, servers can use non-volatile memory (NVRAM). This option relies on the fact that the NFS protocol merely requires the data of an NFS write to be committed to stable storage, not necessarily to the disk itself. As long as the server can return data whose write it has acknowledged, it may keep that data any way it likes. The PrestoServe and NVSIMM options exploit exactly this semantics. When these devices are installed in a server, the NVRAM device driver intercepts synchronous write requests to disk. Instead of being sent directly to the drive, the data is written into stable memory and the request is confirmed as complete. This is much faster than waiting for the mechanical transfer of the data to disk.
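The latency figures quoted above follow directly from the number of synchronous disk updates per logical write; a minimal sketch:

```python
# Each NFS write must commit its data, directory, inode and indirect-block
# updates synchronously before the server may acknowledge it.
def nfs_write_ms(sync_disk_updates, ms_per_update):
    return sync_disk_updates * ms_per_update

# 3-4 synchronous updates at ~40 ms each land in the 120-160 ms range,
# consistent with the quoted 150-200 ms, versus 15-20 ms for a buffered
# local-disk write.
print(nfs_write_ms(3, 40), nfs_write_ms(4, 40))
```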
A short time later, the data is written to disk in the background. Since a logical NFS write performs three or four synchronous disk operations, the use of NVRAM significantly improves NFS write throughput. Depending on conditions (the state of the file system, other outstanding disk requests, the size and location of the writes, and so on), NVRAM accelerates NFS writes by a factor of 2-4. For example, typical NFS write throughput under Solaris 2 is about 450 KB/s; with NVRAM it rises to about 950 KB/s, and even higher if the network environment is faster than Ethernet. NVRAM does not improve read operations at all.

From the point of view of the disk subsystem or the NFS clients, the PrestoServe and NVSIMM options are functionally equivalent. The main difference is that NVSIMMs are more efficient, because they require less manipulation of the data. The PrestoServe board physically resides on the SBus, so data must be copied to it across the peripheral bus. By contrast, NVSIMMs are installed directly in main memory: data being written to disk is not copied to the NVSIMM over a peripheral bus, but can be copied very quickly by the processor. For these reasons NVSIMMs are preferable in situations where both the NVSIMM and PrestoServe options are available.

Because of the importance of this write acceleration, Sun recommends the use of NVRAM in virtually all systems providing general-purpose NFS service. The only exception to this rule is servers that provide read-only file service; the most typical example is servers that store binary code for a large group of clients (known within Sun as /usr/dist or softdist servers). Because the NVSIMM/PrestoServe device driver must reside on a disk in the root file system, NVRAM acceleration cannot be applied to the root file system itself: the NVRAM driver must be able to flush modified buffers to disk before any other process becomes active.
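The throughput figures quoted above imply an acceleration at the low end of the claimed 2-4x range:

```python
def speedup(before_kb_s, after_kb_s):
    return after_kb_s / before_kb_s

# Solaris 2 NFS write throughput from the text:
# ~450 KB/s without NVRAM, ~950 KB/s with it.
print(round(speedup(450, 950), 1))
```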
If the root file system were accelerated, it could be left "dirty" (unflushed) after a system crash, and the driver could then fail to load at boot.

Another important consideration when comparing servers with and without NVRAM is that this acceleration generally reduces maximum throughput by about 10%. (A system using NVRAM must manage the NVRAM cache and keep the copies in the cache and on disk coherent.) However, response time improves by around 40%. For example, the maximum throughput of a SPARCserver 1000 on the LADDIS benchmark without NVSIMMs is 2108 operations per second with a 49.4 ms response time. With NVSIMMs the system can perform only about 1928 operations per second, but the average response time drops to about 32 ms. This means that clients perceive an NFS server equipped with NVRAM as much faster than a server without it, even though total throughput is slightly lower. Fortunately, the 10% figure is rarely a problem, since the maximum throughput of most systems far exceeds typical loads, which fall in the range of 10-150 operations per second per network.
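The SPARCserver 1000 LADDIS numbers above quantify the throughput-for-latency tradeoff:

```python
# Throughput and response time with and without NVSIMM, from the text.
ops_plain, rt_plain = 2108, 49.4   # ops/s and ms without NVSIMM
ops_nvram, rt_nvram = 1928, 32.0   # ops/s and ms with NVSIMM

throughput_loss = (ops_plain - ops_nvram) / ops_plain   # roughly 9%
response_gain = (rt_plain - rt_nvram) / rt_plain        # roughly 35%
print(f"throughput -{throughput_loss:.0%}, response time -{response_gain:.0%}")
```

The ~9% throughput loss matches the "about 10%" quoted, and the ~35% latency improvement is close to the "around 40%".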

Backup and fault-tolerance problems.

Backing up file systems and providing fault tolerance on an NFS server are similar to the same problems on any other system. Some suggestions for backup and fault tolerance can be summarized as follows:
- Simple, relatively small backups can be made with one or two tape drives. The placement of these drives on the SCSI buses makes little difference if they are not active during the system's working hours.
- Making fully consistent backups requires locking the file system against modification. Such operations require special software, such as the product Online: Backup 2.0. As in the previous case, the placement of the backup devices on the SCSI buses makes little difference if the copying is performed outside working hours.
- Mirrored file systems let the system survive complete drive failures and, in addition, allow continuous access to data even during a fully consistent backup. Mirroring costs a very small amount of disk throughput on writes (up to 7-8% for random access and 15-20% for sequential access; in a multiuser system, which describes the majority of servers, one can expect these figures to drop by about half). Mirroring automatically improves read throughput. When setting up mirrored file systems, each mirror should be configured on its own SCSI bus.
- If backups must be made during the system's normal working hours, the copying device must be configured either on its own SCSI bus or on the SCSI bus of a mirror that has been switched offline (detached and inactive), to avoid response-time problems on the target bus.
- When fast file system recovery is required in an attribute-intensive environment, NVRAM should be included in the configuration. In a data-intensive environment, consider high-speed mechanical devices such as tape stackers and mass storage devices.

Estimating the load on a future system is not very precise work, but an advance approximation is often good enough. There are two basic approaches.
The preferred method is to measure the parameters of an existing system. This gives some confidence in the accuracy of the load estimate, at least for the present, although it naturally cannot guarantee that the future load will be equivalent to the current one. The alternative is a rough calculation, which is useful when there is no system available to measure. To build a sufficiently accurate configuration you need to know two things: the NFS operation mix and the total required throughput. The mix shows whether the load on the system is attribute-intensive or data-intensive.

There are many mechanisms for measuring existing systems. The simplest is the nfsstat(8) command, which reports the mix of operations. Since these statistics can be reset to zero with the -z flag, nfsstat can also be used to measure throughput with a shell script. The NFS calls counter shows how many calls were served in a given interval, and hence the rate at which NFS operations are being processed. Note that under heavy load the sleep command may actually "sleep" considerably longer than the requested 10 seconds, producing inaccurate (that is, overstated per-interval) figures; in such cases a better method must be used. There are many such tools, for example SunNetManager, NetMetrix from Metrix, and SharpShooter from AIM Technologies. These tools make it possible to observe the system's throughput and operation mix under real load. Calculating average throughput usually requires some further processing of the data, for which a variety of tools can be used (awk(1), or a spreadsheet such as WingZ or 1-2-3).

If measurement of a current system is not possible, a rough estimate can often be made from the intended use of the system. Making such an estimate requires an understanding of how the clients will manipulate the data.
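The caveat about sleep drifting under load can be handled by timing the interval yourself rather than trusting its nominal length. A sketch; the counter-sampling function here is a stand-in (on a real server it would parse the calls counter out of nfsstat output):

```python
import time

def call_rate(sample_calls, interval_s=10.0):
    """Average NFS calls/sec over one interval, dividing by the measured
    elapsed time rather than the nominal sleep length (which drifts
    under heavy load)."""
    t0, c0 = time.monotonic(), sample_calls()
    time.sleep(interval_s)
    t1, c1 = time.monotonic(), sample_calls()
    return (c1 - c0) / (t1 - t0)

# Stand-in sampler for demonstration: pretends the counter advanced by 500.
counts = iter([10_000, 10_500])
print(round(call_rate(lambda: next(counts), interval_s=0.1)))
```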
This method is reasonably accurate if the load falls into the data-intensive category. Sensible estimates can usually be made for attribute-intensive environments as well, but a number of factors make such estimates somewhat less accurate. The first step in such an estimate is to determine the request pattern of a fully active client, which requires an understanding of the client's behavior. If the load is data-intensive, it makes the most sense simply to sum the anticipated read and write operations and take that number as the per-client load. Attribute operations can generally be neglected in a workload dominated by data operations: on the one hand they represent only a small percentage of all operations, and on the other hand they demand a minimum of work from the server compared with fetching data.

For example, consider a client workstation running a program that searches for a given temperature in a certain volume of fluid. The model data set for this task is 400 MB, normally read in 50 MB chunks. Each chunk is fully processed before the application moves to the next one. Processing each chunk takes approximately five minutes of CPU time, and the resulting files, about 1 MB in size, are saved to disk. Suppose the network environment is FDDI. The peak NFS load arises while the client reads a 50 MB chunk: at a maximum rate of 2.5 MB/s the client will be fully active for about twenty seconds, demanding 320 read operations per second. Since one run of the program takes about 40 minutes (2400 seconds) and requires (400 + 1) MB x 125 ops/MB = 50,125 operations, the average rate is about 20 ops/sec. The server will be serving requests at the peak rate (320 ops/sec) for about 20 seconds out of every five minutes, or about 7% of the time.
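The figures in this example can be reproduced step by step, using the text's own conventions (8 KB per NFS operation for the peak rate, 125 ops/MB for the totals):

```python
OPS_PER_MB = 125                    # 8 KB NFS operations per megabyte

peak_ops = 2.5 * 1024 / 8           # 2.5 MB/s over FDDI in 8 KB reads -> 320 ops/s
total_ops = (400 + 1) * OPS_PER_MB  # 400 MB read plus 1 MB written per run
avg_ops = total_ops / 2400          # one run takes ~40 minutes (2400 s)
duty = 20 / 300                     # 20 s of peak activity per 5-minute cycle

print(peak_ops, total_ops, round(avg_ops), round(duty, 3))
```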
Three useful pieces of information can be drawn from this exercise: the average request rate of an active client (20 ops/sec), the peak request rate (320 ops/sec), and the probability that peak service will be required. From this an estimate of the overall request rate can be formed. If the system will be configured with 10 clients, the average rate is 200 ops/sec. (This rate should not be compared with LADDIS results, because the operation mix is very different.) The probability that two clients will demand peak service simultaneously is about 0.07 x 0.07 = 0.0049, or about 0.5%, and three clients will require peak service only about 0.034% of the time. From this information it is prudent to draw the following conclusions:
- Since the probability that three clients will be active simultaneously is much less than 1%, the maximum load can be taken as 2-3 individual peak loads.
- Only one network is required, since the maximum anticipated load is only 3 x 2.5 MB/s = 7.5 MB/s, far below the maximum FDDI bandwidth (12.5 MB/s).
- Since only two or three clients will be fully active at any moment, at least 3 to 6 disk drives are needed (although with a 400 MB model file it is very likely that more than six disks will be needed just to store the data), along with at least two SCSI host adapters.
- Because the system uses a high-speed network, a server with two SuperSPARC/SuperCache processors should be used.
- Since a very large file cache is unlikely to be useful, the server needs only a minimal amount of main memory; 128 MB is sufficient.
If the disk farm is relatively small, for example about 16 GB, a SPARCstation 10 Model 512 could cope with this task very well: one SBus slot is needed for the FDDI interface, and the remaining three slots can hold SCSI host adapters, for a total of 4 FSBE/S interfaces, each connecting disk drives with a total capacity of 4.2 GB.
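Treating each client's roughly 7% peak duty cycle as independent gives the coincidence probabilities used above. This is the text's pairwise back-of-envelope; an exact treatment over all 10 clients would use the binomial distribution:

```python
p_peak = 0.07            # each client is at peak ~7% of the time

p_two = p_peak ** 2      # two given clients at peak simultaneously (~0.5%)
p_three = p_peak ** 3    # three at once (~0.034%)
print(f"{p_two:.2%} {p_three:.3%}")
```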
However, this application might be better served by a SPARCserver 1000, which offers greater memory capacity: a system with two system boards allows a configuration with seven SCSI host adapters and 28 GB of disk storage (one 4.2 GB multi-disk unit on each FSBE/S board, not counting the four internal 535 MB disks). If still larger disk capacity is needed, a SPARCcenter 2000 with two system boards can be configured with six DWI/S interfaces, each driving up to 12 chassis-mounted 2.9 GB drives, for about 208 GB of disk storage. All of the proposed systems can take NVSIMMs without consuming SBus slots, and all easily support the required two processors. NVSIMMs are not very important here, because the proportion of writes is so small (less than 1:400, or 0.25%). Note that when choosing system configurations for data-intensive applications it is generally not very useful to compare the estimated request rate with the server's SPECsfs_097 rating, since the mixes are so different that the loads cannot be compared. Fortunately, estimates of this kind are usually accurate enough anyway.

Estimating attribute-intensive loads

In the previous example it was assumed that the load of NFS attribute operations was negligibly small compared with data operations. If that is not the case, for example in a software development environment, some assumptions must be made about the expected NFS operation mix. In the absence of other information, the so-called Legato mix can be taken as a model; it is this mix that SPECsfs_097 (also known as LADDIS) uses, and in it data operations account for 22% reads and 15% writes.

Consider a client workstation whose most intensive job is recompiling a software system consisting of 25 MB of source code. We know that the workstation can compile the system in about 30 minutes. The compilation generates about 18 MB of intermediate object code and binaries. From this information we can conclude that the client will write 18 MB to the server and read at least 25 MB (perhaps more, since almost a third of the source consists of header files that are included by many source modules; to avoid re-reading these files, a caching file system can be used, so suppose CFS is in use). During a "build," 33 MB of real data must be moved, or 33 MB x 125 ops/MB = 4,125 data operations in 30 minutes (1800 seconds), which is approximately 2.3 ops/sec. (This assumes 8 KB of data per operation, so sending 1 MB of data requires 125 operations.) Because this load is attribute-intensive, a substantial number of attribute operations must be added to the estimate. Assuming the mix matches the Legato mix, the overall rate can be derived from the writes alone: (18 MB x 125 ops/MB) / 15% / 1800 seconds, or 8.33 ops/sec.
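The two rates in this example come out of the following arithmetic (8 KB per operation, and scaling the write operations up by their 15% share of the Legato mix):

```python
OPS_PER_MB = 125                      # 8 KB operations per megabyte

data_rate = 33 * OPS_PER_MB / 1800    # 33 MB moved during a 30-minute build
write_ops = 18 * OPS_PER_MB           # 18 MB of object code written
total_rate = write_ops / 0.15 / 1800  # writes are 15% of the Legato mix

print(round(data_rate, 1), round(total_rate, 2))
```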
The combination of reads and writes here is quite similar to the Legato mix, but that might not be the case if, for example, source-code browser files were involved (browser source files are often 4-6 times the size of the source code itself). In this case we have no way of estimating the peak rate. If there are 20 workstations operating in the mode described above, we can conclude the following:
- Even under the incredible assumption that all 20 workstations are fully active at all times, the overall request rate is 8.33 ops/sec x 20 clients, or 166 ops/sec, below the maximum of about 200 ops/sec that Ethernet supports. Cautious people will configure two networks, but if logistical considerations rule that out in advance, a single network will probably suffice.
- Since the load is light, a SPARCstation 10 Model 40 is more than adequate (even in the worst case there are only two networks); the processing capacity of a SPARCclassic is also usually sufficient.
- Since the total volume of data is very small (25 MB of source code and 18 MB of object code; even 20 complete copies amount to only 660 MB), the recommended disk configuration is two 535 MB disks. Given that CFS is used, even one disk may suffice, since header files are rarely read from the server (the clients cache them). With one or two drives, a single SCSI bus is quite sufficient.
- The volume of data is very small and most of it will be read and sent to many clients repeatedly, so it is certainly worth configuring enough memory to cache all of it: 16 MB for the basic operating system plus a 25 MB cache for the source code gives a configuration of 48-64 MB.
- Because writes are infrequent in this environment, NVSIMMs or PrestoServe are not significant.
The final choice of server can be either an entry-level SPARCstation 10 or a well-configured SPARCclassic.
As a common-sense check, note that the maximum rate of 166 ops/sec is about 70% of what a SPARCclassic shows on the LADDIS benchmark (236 ops/sec), and about half of what a SPARCstation 10 Model 40 shows on LADDIS (411 ops/sec); remember, too, that the 166 ops/sec figure assumed all 20 clients fully active at all times, while real usage logs show that this never happens. Comparison with LADDIS figures is appropriate here because the situation is attribute-intensive, and LADDIS results reflect an attribute-intensive mix. Table 4.3 gives LADDIS results for various Sun NFS servers under Solaris 2.3; somewhat (5%) higher rates are achieved with FDDI.

If there is no system available to measure and the application's behavior is not well understood, the load can be estimated from similar applications, as shown in Tables 4.4-4.6. These figures give some indication of measured NFS loads, but they do not claim to predict the load of any particular task. In particular, note that the data in these tables represent the maximum anticipated load from real clients, since the figures reflect only the periods when the clients were actively issuing NFS requests. As noted above in Section 3.1.4, systems are almost never fully active all the time. A notable exception to this rule is compute servers, which really do work continuously as batch machines. For example, the activity of a 486/33 running 1-2-3 is shown in Table 4.2 and Figure 4.2: although the table reports a peak load of 80 ops/sec, the figure makes clear that the sustained load is less than 10% of that rate, and the five-minute average is much less than 10 ops/sec. Averaged over a longer period, this PC's load is approximately 0.1 ops/sec. Most workstations of the SPARCstation 2 or SPARCstation ELC class average about 1 op/sec, and even the fastest clients, such as the SPARCstation 10 Model 51, Model 512, HP 9000/735, or RS6000/375, average 1-2 ops/sec.
Of course, these figures vary significantly depending on the individual user and application.
Copyright © 2004-2011 OwlCom Software.
