Chapter 8 - File System Information and Statistics
This chapter describes the statistics about the server's VINES file system activity that you can view through VNSM. These statistics show file-related I/O activity on the server.
Viewing File System Statistics
To view file system statistics, select SHOW file system statistics from the VINES Network Summary menu. The File System Statistics screen appears.
Press ESC to return to the VINES Network Summary menu.
Total counts apply to all file system statistics shown on the screen.
Before you attempt to interpret the statistics on the screen, it is important to know the difference in meaning between opens on files and open files. The terms are defined as follows:
Opens on Files - Counts the number of successful file open requests, but not the number of open files. When multiple users or services have open access to the same file, this value increments for each user or service.
Open Files - Counts the number of files that are open. Each open file counts as one, no matter how many users have the same file open simultaneously. This value can be larger than the Opens on Files value because it includes operating system files that are internal to VINES.
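The distinction between the two counters can be sketched with a toy model. The class and names below are illustrative only, not the VINES implementation:

```python
# Toy model of the two counters (illustrative only; not the VINES implementation).
class FileStats:
    def __init__(self):
        self.opens_on_files = 0   # increments once per successful open request
        self.open_files = {}      # file name -> number of holders; one entry per open file

    def open(self, name):
        self.opens_on_files += 1
        self.open_files[name] = self.open_files.get(name, 0) + 1

    def close(self, name):
        self.open_files[name] -= 1
        if self.open_files[name] == 0:
            del self.open_files[name]

stats = FileStats()
stats.open("REPORT.DOC")   # user A opens the file
stats.open("REPORT.DOC")   # user B opens the same file
print(stats.opens_on_files)    # 2 opens on files
print(len(stats.open_files))   # 1 open file
```

Two users opening the same file produce two opens on files but only one open file, which is the distinction the definitions above draw.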
The statistics that the File System Statistics screen displays are described in the sections that follow.
Total Cache Size
The amount (in kilobytes) of server memory used by the VINES filesystem to cache frequently accessed data. You can set the total cache size from the server console. See Chapter 15 for more information.
The VINES file system cache is used by file services to perform file I/O operations. All other processes on the server use UNIX cache.
Cache space is divided into temporary holding areas, called cache buffers, for recently accessed data from disk files. When a process such as a service asks the kernel (server operating system) to read or write data that resides in cache space, the kernel can complete the request quickly. The kernel does not have to read the data from its disk file or perform a complete write of the file to disk. This improves overall server performance.
To determine whether your server has a sufficient amount of cache space or an adequate cache buffer size, use the percentage of cache hits statistic. This statistic is described later in this chapter.
Keep in mind that increasing cache space on the server reduces the amount of available memory in which services can run.
Cache Buffer Size
The amount (in kilobytes) of server memory that makes up a cache buffer. The total file system cache is divided into cache buffers. You can set the cache buffer size from the server console. See Chapter 15 for more information.
You can determine the number of cache buffers by dividing Total Cache Size by Cache Buffer Size as follows:
Total Cache Size
---------------------------
Cache Buffer Size
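For example, with a 2048KB total cache and 8KB buffers (figures chosen for illustration, not taken from any particular server):

```python
total_cache_kb = 2048   # Total Cache Size, in KB (example value)
buffer_size_kb = 8      # Cache Buffer Size, in KB (example value)

# Number of cache buffers = Total Cache Size / Cache Buffer Size
num_buffers = total_cache_kb // buffer_size_kb
print(num_buffers)  # 256 cache buffers
```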
Keep in mind that reducing the cache buffer size can produce unfavorable results. The server must assign cache buffers every time a file needs them, which takes up processor time. As the number of buffers increases, the amount of processor time used for file operations also increases. Do not decrease the cache buffer size if the file services on the server handle large files.
Max Open Files
The largest number of open files at a single point in time.
Current Open Files
The number of currently open files.
Current Record Locks
The number of current record locks.
Max Opens on Files
The largest number of open operations on files at a single point in time.
Current Opens on Files
The number of current opens on files.
% Cache Hits
This statistic indicates the percentage of requests for cache buffers that the kernel was able to satisfy from existing cache buffers, without reading new data from disk. For most servers, 85 percent or higher is good. If you need extremely fast access to data, your goal should be 95 percent or higher.
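The percentage is the ratio of requests satisfied from existing cache buffers to total cache buffer requests. A minimal sketch, with example figures:

```python
cache_requests = 10000    # total cache buffer requests (example figure)
times_unavailable = 900   # requests that required a disk read (example figure)

hits = cache_requests - times_unavailable
hit_pct = 100.0 * hits / cache_requests
print(round(hit_pct, 1))  # 91.0 -> above the 85 percent guideline
```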
If you reduce cache space to provide more memory for services to run in, try not to let your cache hit ratio go below 85 percent. Keep in mind that a low cache hit ratio is preferable to a lot of paging or swapping.
Always remember that your two main goals are a cache hit ratio of 85 percent or better and a Swavg value of 0.01 or less. If you have to choose between swapping or paging and a low cache hit ratio, the low cache hit ratio is the better alternative. Add physical memory only when neither alternative is acceptable.
If you think that services have more memory than they need, increase cache space until your percentage of cache hits is acceptable. Use the Swavg statistic to make sure that your services still have enough memory. When Swavg is greater than 0.01, stop increasing cache space.
See Chapter 3 for more information on the Swavg statistic.
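The tuning rule in the preceding paragraphs can be summarized as a simple decision sketch. The thresholds come from this chapter; the function name and the 512KB adjustment step are hypothetical:

```python
def recommend_cache_kb(hit_pct, swavg, cache_kb, step_kb=512):
    """Suggest a cache-size change per this chapter's guidelines.

    hit_pct:  percentage of cache hits
    swavg:    average swapping/paging rate (see Chapter 3)
    cache_kb: current Total Cache Size, in KB
    """
    if swavg > 0.01:
        # Services are short of memory: stop growing the cache, give some back.
        return cache_kb - step_kb
    if hit_pct < 85:
        # Memory to spare and a low hit ratio: grow the cache.
        return cache_kb + step_kb
    return cache_kb  # both goals met; leave the cache alone

print(recommend_cache_kb(hit_pct=80, swavg=0.005, cache_kb=2048))  # 2560 (grow)
print(recommend_cache_kb(hit_pct=90, swavg=0.02, cache_kb=2048))   # 1536 (shrink)
```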
In addition to increasing cache space to increase the percentage of cache hits, you can also make existing cache space more efficient by reducing cache buffer size, or increasing network throughput between users and file services. All memory that is not used for processes should be allocated to cache.
For example, the percentage of cache hits has decreased to 80 percent. You do not want to increase cache space further, so you reduce the cache buffer size from 8KB to 4KB. This action may increase your percentage of cache hits to 85 percent.
You can also try a combination of increasing the cache space and reducing the cache buffer size to increase the percentage of cache hits. For example, you estimate that the services on your server have about 1MB of extra memory in which to run, and you want to use some of that memory as cache space so that your percentage of cache hits increases from 75 percent to 85 percent. You increase cache space by 1MB and also reduce the cache buffer size.
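The combined effect of the two adjustments is a much larger pool of cache buffers. The 8KB-to-4KB reduction and the 1MB increase come from the examples above; the 2048KB starting cache size is an assumed figure:

```python
# Effect of the two adjustments on the number of cache buffers.
cache_kb, buf_kb = 2048, 8      # assumed starting point: 2MB cache, 8KB buffers
print(cache_kb // buf_kb)        # 256 buffers

cache_kb += 1024                 # add 1MB of cache space
buf_kb //= 2                     # reduce the buffer size from 8KB to 4KB
print(cache_kb // buf_kb)        # 768 buffers - three times as many
```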
Network congestion can reduce the percentage of cache hits. The file service stores data in cache buffers temporarily before transferring it to the user. The client must acknowledge receipt of each SPP message that contains this data before the file service can send more. Congested data links, such as slow-speed serial lines, can delay acknowledgment, causing data to become backed up while waiting for transmission. This may result in file service data consuming excessive amounts of cache. Figure 8-2 illustrates this problem.
The source of the network congestion can lie anywhere in the path between the file services and the users, as illustrated in Figure 8-3.
Keep in mind that a large pool of users who access file services over slow PC Dial-in lines or X.29 Dial-in lines can also reduce the percentage of cache hits.
If users access file services over serial lines, monitor the percentage of cache hits on the servers where the services reside. If the percentage of cache hits goes below 85 percent, consider increasing the amount of cache space, reducing cache buffer size, or reconfiguring your network to increase throughput between users and file services.
# Times Unavailable
The number of cache buffer requests that the system was not able to service without reading new data from disk.
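This counter and the percentage of cache hits describe the same activity from opposite sides; the relationship below is inferred from the two definitions in this chapter:

```python
cache_requests = 10000      # total cache buffer requests (example figure)
times_unavailable = 1500    # "# Times Unavailable" (example figure)

# Requests not serviced from cache are exactly the misses, so:
hit_pct = 100.0 * (cache_requests - times_unavailable) / cache_requests
print(hit_pct)  # 85.0 -> right at the 85 percent guideline
```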
Max Record Locks
The largest number of simultaneous record locks.