Locality of reference

From Citizendium
Revision as of 13:34, 20 February 2007 by imported>Markus Baumeister (spelling fix and minor nitpicking)

Locality of reference is a commonly observed pattern in the memory accesses made by a computer program over time. Simply stated, memory accesses that happen close to one another in time tend to target addresses that lie close to one another in space (memory address). Locality of reference is the primary motivation for memory caches, which load a range of memory addresses at a time, on the assumption that the extra addresses will be accessed soon after.

Locality of reference can be exploited by a computer's memory controller for drastic improvements in memory access times. In general, whenever a memory access takes place, the memory controller reads a larger, aligned block of memory (a cache line) containing the target address into the cache. In the common case, subsequent memory accesses will target addresses that this same read has already brought into the cache.

Thought Experiment: Fetch-Execute Cycle

When a typical computer executes a program, it repeatedly reads the next instruction from memory and then executes it. Those instructions are usually placed at sequential memory addresses, with exceptions for branches arising from control structures such as loops, conditionals, and function or method invocations.


Thought Experiment: Array Algorithms

Suppose we have an algorithm that selects the largest number in an array. One straightforward way to accomplish this (indeed, the optimal approach for an unsorted flat array) is to iterate over the elements of the array in order, checking whether each one is the largest seen so far. Thus, at time T=0 we check element 0, at T=1 we check element 1, and so on. Without a cache, the processor would need to wait on main memory to fetch the array element during each iteration. But if the processor employs a cache, we can achieve a speed-up.

Once the algorithm attempts its first read, the memory controller fetches not just that element but the entire cache line containing it. The processor must wait for that first element before it can proceed, but once the line is in the cache, the neighboring elements it contains can be read without touching main memory. As a result, most iterations avoid a memory stall entirely.

