I/O caches (disk caches) are an important part of many I/O hierarchies, and they are also useful for studying the I/O behavior of applications. The I/O cache simulator in TIME is well suited to examining the referencing behavior of applications: it can track the type of data requested, the request sizes, and the hit rate for each type. Request types can be selectively cached to evaluate how requests interact in the cache and to isolate the spatial and temporal components of the reference behavior.
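The bookkeeping described above can be sketched as a small simulator. The following is a minimal illustration, not the actual TIME implementation: an LRU cache model that records hits and misses per request type and can be configured to admit only selected types (the class and parameter names are hypothetical).

```python
from collections import OrderedDict

class TypedCacheSim:
    """Toy LRU I/O cache model (hypothetical sketch, not TIME itself):
    tracks hits/misses per request type and optionally caches only
    selected types so per-type interactions can be examined."""

    def __init__(self, capacity_blocks, cacheable_types=None):
        self.capacity = capacity_blocks
        self.cacheable = cacheable_types  # None => cache every type
        self.lru = OrderedDict()          # block -> None, in LRU order
        self.stats = {}                   # type -> [hits, misses]

    def access(self, block, req_type):
        """Reference one block; returns True on a cache hit."""
        hits_misses = self.stats.setdefault(req_type, [0, 0])
        if block in self.lru:
            hits_misses[0] += 1
            self.lru.move_to_end(block)   # refresh LRU position
            return True
        hits_misses[1] += 1
        # Selective caching: only admit configured request types.
        if self.cacheable is None or req_type in self.cacheable:
            self.lru[block] = None
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict least recently used
        return False

    def hit_rate(self, req_type):
        hits, misses = self.stats.get(req_type, (0, 1))
        return hits / (hits + misses)
```

Replaying a trace through two such instances, one caching all types and one excluding, say, executable requests, shows how one type's presence changes another type's hit rate.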
Knowing how different types of data use an I/O cache can increase the effectiveness of the cache. The type of each request is determined from system-call-level information. Because the trace is based on file names rather than disk block numbers, filesystem maintenance data, executable data, and application data are all visible.
Each data type has its own reference behavior, and a given cache size supports each type to a differing degree. Properly exploiting these properties increases the hit rate in the I/O cache and reduces the number of references that miss the cache and go to disk.
I/O caches reduce I/O latency by eliminating disk accesses. Conventional approaches reduce latency by increasing the cache hit rate, but caches with the same hit rate and the same disk can have different miss latencies and different disk load characteristics. An alternate way to reduce latency is to increase the efficiency of the load offered to the disk. The disk requests from both non-volatile and volatile cache configurations are evaluated; neither configuration efficiently utilizes the cache to offer a low-latency load to the disk, and ways to improve their load characteristics are explored. The policies implemented in the cache affect both the total number of disk requests and their distribution in time. Figure 2 shows how typical management policies for non-volatile and volatile cache configurations affect the number of disk requests.
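As a toy illustration of this point (not the paper's measured configurations), write policy alone changes how many requests reach the disk. A write-through policy, typical when the cache is volatile and dirty data cannot safely be held, forwards every write; a non-volatile cache can hold dirty blocks and coalesce repeated writes to the same block before flushing.

```python
def disk_writes_write_through(writes):
    """Write-through (volatile cache): every write is forwarded to
    disk immediately, so each write in the stream costs one disk
    request."""
    return len(writes)

def disk_writes_write_back(writes):
    """Write-back with coalescing (non-volatile cache): dirty blocks
    accumulate in the cache, repeated writes to a block coalesce, and
    only one disk request per dirty block is eventually issued.
    (Idealized: assumes no eviction forces an early flush.)"""
    return len(set(writes))

# Hypothetical write stream of block numbers.
stream = [7, 7, 7, 12, 7, 12, 99]
wt = disk_writes_write_through(stream)  # 7 disk requests
wb = disk_writes_write_back(stream)     # 3 disk requests
```

The write-back figure is a lower bound; in practice eviction pressure and flush deadlines push the count back toward the write-through figure, which is why the time distribution of the requests matters as well as their number.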