Memory address




From Wikipedia, the free encyclopedia

In computer science, a memory address is an identifier for a memory location, at which a computer program or a hardware device can store data and later retrieve it. Generally, an address is a binary number drawn from a finite, monotonically ordered range, and it uniquely identifies one location in the memory.

In modern byte-addressable computers, each address identifies a single byte of storage; data too large to fit in a single byte may occupy a sequence of consecutive addresses. Some microprocessors were instead designed to be word-addressable, so that the addressable storage unit was larger than a byte: both the Texas Instruments TMS9900 and the National Semiconductor IMP-16, for example, used 16-bit words. How much memory can be addressed depends on the width of the address bus.

In a computer program, an absolute address (sometimes called an explicit address or specific address) is a memory address that uniquely identifies a location in memory.[citation needed] This differs from a relative address, which is not unique and specifies a location only as an offset from somewhere else (the base address). Virtual memory adds a further level of indirection.


Contents of a memory location

Each memory location, in both ROM and RAM, holds a binary number of some sort. How it is interpreted (its type, meaning, and use) depends only on the context of the instructions that retrieve and manipulate it. Each coded item has a unique physical position, described by another number, the address of that single word, much as each house on a street has a unique number. A pointer is such an address, itself stored in some other memory location.

So memory can be thought of as holding data, instructions, or both; this is the von Neumann architecture. One can view memory simply as a collection of numbers, as data (text, binary numeric values), or as instructions themselves. This uniformity was introduced in the 1950s and is usually credited to von Neumann, though some would be inclined to credit Alan Turing.

Some early programmers encouraged this practice as a way to save memory when it was expensive: the Manchester Mark 1 had spare space in its words (its processor ignored a small section in the middle of a word), and that space was often exploited as extra data storage. Self-replicating programs such as viruses also exploit the blurring, treating themselves sometimes as data and sometimes as instructions. Self-modifying code is generally deprecated nowadays, as it makes testing and maintenance disproportionately difficult relative to the saving of a few bytes, and it can give incorrect results because of assumptions the compiler or processor makes about the machine's state. But it is still used sometimes, deliberately and with great care.

Instructions in a storage location are interpreted in context by the system's main processing unit. Data is first read into, or written from, an internal memory structure called a processor register, where the next instruction can manipulate it together with other data and move it into other registers or memory locations.


Registers are the small storage locations inside the CPU that feed the part known as the ALU (arithmetic logic unit), which responds to instructions and uses combinational logic to add, subtract, shift, or multiply (and so on) the contents of its data registers.

Word size versus address size

The word size is characteristic of a given computer architecture: it denotes the number of bits that a CPU can process at one time. Historically it has been a multiple of four or eight bits (nibbles and bytes, respectively), so sizes of 4, 8, 12, 16, 24, 32, 48, 64, and larger came into vogue with technological advances.

Very often, when referring to the word size of a modern computer, one is also describing the size of the address space on that computer. For instance, a computer said to be "32-bit" also usually allows 32-bit memory addresses; a byte-addressable 32-bit computer can address 2^32 = 4,294,967,296 bytes of memory, or 4 gibibytes (GiB). This seems logical and useful, as it allows one memory address to be efficiently stored in one word.

But this does not always hold. Computers often have memory addresses wider or narrower than their word size. For instance, almost all 8-bit processors, such as the MOS Technology 6502, supported 16-bit addresses; otherwise they would have been limited to a mere 256 bytes of memory. The 16-bit Intel 8088 had only an 8-bit external memory bus on early IBM PCs, while both it and the 16-bit Intel 8086 supported 20-bit addressing, allowing access to 1 MiB rather than 64 KiB of memory. Popular Intel Pentium processors since the introduction of Physical Address Extension (PAE) support 36-bit physical addresses, while generally having only a 32-bit word.

The distinction between words and bytes has also shifted over the years; the 36-bit DEC PDP-10, for example, could pack five 7-bit bytes into a single word. Knuth, in his seminal book The Art of Computer Programming, likewise defines the byte of his abstract MIX machine only loosely, as a unit holding at least 64 distinct values.

A modern byte-addressable 64-bit computer, with proper OS support, can address 2^64 bytes (or 16 exbibytes), which as of 2009 is considered practically unlimited.

Virtual memory versus physical memory

Virtual memory is now almost universally used on desktop machines. It maps physical memory to different (virtual) addresses using page tables, so the operating system can allocate and rearrange physical memory as it deems most efficient without halting the program for a long reclamation pass. Some literature[citation needed] suggests that, on the whole, garbage collection is the most efficient of memory-reclamation strategies, but because it is non-deterministic it is not ideal if it cuts in to 'tidy up' memory just as the program is controlling the firing of a missile or a landing on the moon.

The physical blocks of memory (typically 4 KiB chunks) are mapped to virtual addresses by a virtual memory subsystem in the kernel, supported by the processor hardware, though this was formerly done entirely in software. The purpose of virtual memory is to abstract memory allocation, allowing physical space to be allocated as best suits the hardware (usually in good-sized blocks) while still appearing contiguous from a program's or compiler's perspective. Virtual memory is supported by some operating systems (for example Linux and Windows, but not MS-DOS). One may think of virtual memory as a filter, or an alternate set of memory addresses, that lets programs (and, by extension, programmers) read from memory without requiring the data to be at any particular place. Programs use these contiguous virtual addresses, rather than the real, often fragmented, physical addresses, to store instructions and data. When the program actually executes, the virtual addresses are translated by the processor into real memory addresses. Logical address is a synonym for virtual address.

Virtual memory also effectively allows the address space to be larger than, or rather extend beyond, the amount of real (physical) memory available; the computer can move rarely accessed pages into secondary storage (similar, at some level, to the behavior of a CPU cache) and use the real memory (RAM) for new or active tasks. The virtual address space might thus contain, say, twice as many addresses as main memory, with the extra addresses mapped to hard disk space in the form of a swap file or page file (the default on Windows XP and later Microsoft operating systems). Pages are copied back into main memory (swapped in) as soon as they are needed. These movements typically happen as a background process, and in that sense are transparent to programs; however, if due care is not taken, they can lead to thrashing.


