Stack frame: Difference between revisions

From Citizendium
Revision as of 08:26, 20 April 2007

In computer science, a stack frame is a memory management strategy used in some programming languages to create and destroy temporary (automatic) variables. Among other things, use of a stack makes it possible for programming languages to support recursive calling of subroutines. Programs which utilize a stack frame build a stack which grows with each function invocation, and which contains space for actual parameters, local variables, temporary locations, and (in some architectures) information about the calling context such as the memory address of the calling subroutine.

Many architectures provide special hardware and operating system support for stack frames. Although many different programming languages make use of stack frames, not all compilers pack the stack (i.e., organize the contents of each stack frame) in the same way; this may make it difficult to call between different programming languages. For example, the adoption of the Pascal programming language was handicapped because it used the stack differently from C and C++, making it difficult for Pascal programs to call routines written in C or C++.

Principles of Operation

An illustration of the contents of a stack frame. Shown here are two frames (a function that has called another function). Blue arrows represent pointers. Note that parameters and locals can be addressed as FP ± k.


To use a stack frame, a program keeps two pointers, often called the Stack Pointer (SP), and the Frame (FP) or Base Pointer (BP). SP always points to the "top" of the stack, and FP always points to the "top" of the frame. Additionally, the processor maintains a program counter (PC) which points to the next instruction to be executed. Whenever a function call occurs, the following steps take place in roughly this order:

  1. The caller saves local variables and temporaries, by pushing them onto the stack.
  2. The caller pushes the callee's actual parameters onto the stack.
  3. The caller branches to the callee, pushing PC onto the stack (on most architectures, this is a single instruction called CALL). When on the stack, the saved PC is called the return address.
  4. The callee pushes the value of FP onto the stack.
  5. The callee copies SP to FP.
  6. The callee adjusts SP, creating storage locations for local variables and local temporaries on the stack.

Steps 4--6 above are referred to as the function prologue, since they are the beginning of every function.

Within the body of the callee function, formal parameters and local variables can all be accessed at an address relative to the frame pointer. Because of this, a function may recurse, and automatically create a different storage location for each of its local variables.


Upon exit from the function, those steps are performed in reverse:

  1. The callee restores SP, and in doing so destroys the storage locations reserved for locals and temporaries.
  2. The callee restores FP, and in doing so returns to the previous frame.
  3. The callee branches back to the caller by popping PC off of the stack (on most architectures, this is a single instruction called RETURN).
  4. The caller removes the actual parameters from the stack.
  5. The caller restores local variables and temporaries, by popping them from the stack.

Steps 1--3 are referred to as the function epilogue, since they are at the end of every function.

Depending on the Application Binary Interface (ABI) of the particular architecture and platform, these steps may be performed in slightly different order. For instance, some systems will save temporaries in two groups---those saved by the caller, and those by the callee. Similarly, on some platforms, the program counter may be saved onto a different stack. Many other variants are possible.


Another notable variant of the stack frame is the use of canary values to thwart buffer overflow attacks.

Operating system management of stack growth

In a computer operating system, a special program, sometimes called a linking loader, launches programs by creating a process which contains a very large virtual memory address space. In each process, there may be one or more threads, and each thread will have its own stack. Each stack is typically assigned a fairly large region in the virtual address space of the enclosing process. If a stack grows to fill its currently allocated address space, the operating system may be able to allocate additional chunks of contiguous virtual memory space--if contiguous address space is still unused. But if too many subroutine calls occur (due perhaps to an infinite loop or to recursive calling that continues too long), the stack may not be able to continue to grow and grab more address space. At this point, the operating system will terminate the process with an error.

Limitations

Stack frames are not the most efficient way to manage short-lived values, such as formal parameters, local variables, and intermediate computations. On architectures that support it, it is faster to store these in registers than in memory locations. Unfortunately, this approach is often limited by separate compilation: a compiler that cannot see both caller and callee at once must fall back on a fixed calling convention that spills values to the stack.

Stack frames assume that the lifetime of variables is delimited by the function which declares them, which is to say that after a function has returned, its local variables no longer exist. This is not the case for languages which support closures. For instance, in the C programming language, the following function

int *badFunction(void)
{
  int localVar = 4;
  return &localVar;  /* WRONG: localVar's storage is reclaimed when the function returns */
}

can produce undefined results, since it returns a pointer to a memory location that will be reused by other functions. After a few function calls, it is unlikely that the return value will point to the integer 4. It may seem like other high-level languages, such as Java or C#, allow this, but they do not; their local variables are actually pointers to objects, and so one can return the object, but cannot return a pointer to the local variable.

However, some languages, such as Ruby or Standard ML, support closures. In Ruby, this example

def goodFcn()
 i = 4
 return Proc.new { i }
end

returns a closure that can access the local variable i, even after goodFcn() has returned. As a result, languages such as these must employ more memory management infrastructure than a stack frame.

Security and "Stack Smashing" Attacks

Main Article: buffer overflow attack

A stack smashing attack is a special case of a buffer overflow attack. Stack smashing attacks take advantage of the stack frame by assuming two things:

  1. Both program data (variables) and control flow information (saved program counter) are stored on the stack, and
  2. The stack grows from high memory addresses towards low memory addresses (this is the case on many, if not most, architectures). In other words, when data is copied to sequential, increasing addresses, it will eventually reach control flow information on the stack.

Because of this, it is sometimes possible to craft a special input which will trick a poorly written program into voluntarily overwriting its control flow information, thus taking control of the executing program. This is of particular importance in machines which provide services over a network.

See Also