Memory (computers)

In computing, memory refers to the physical devices used to store programs (sequences of instructions) or data (e.g. program state information) on a temporary or permanent basis for use in a computer or other digital electronic device. The term primary memory is used for high-speed storage that the processor can access directly (i.e. RAM), as distinct from secondary memory, which consists of devices for program and data storage that are slower to access but offer higher capacity. Primary memory that is held on secondary memory and loaded on demand is called “virtual memory”.

The term “storage” is often (but not always) used to refer to traditional secondary memory such as tape, magnetic disks and optical discs (CD-ROM and DVD-ROM). The term “memory” is often (but not always) associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary memory but also for other purposes in computers and other digital electronic devices.

There are two main types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory (sometimes used as secondary, sometimes as primary computer memory) and ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of volatile memory are primary memory (typically dynamic RAM, DRAM) and fast CPU cache memory (typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity per unit area than DRAM).

Semiconductor memory is organized into memory cells or bistable flip-flops, each storing one binary bit (0 or 1). The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered memory, since they only store one word and do not include an addressing mechanism.
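
As an illustration of the address-to-capacity relationship (a minimal C sketch, not tied to any particular hardware), an N-bit address can select 2^N distinct words:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* An N-bit address can select 2^N distinct words. For example,  */
        /* a 16-bit address reaches 65,536 words; if each word is 8 bits */
        /* wide, that is 64 KiB of addressable memory.                   */
        for (unsigned n = 8; n <= 32; n += 8) {
            uint64_t words = (uint64_t)1 << n;   /* 2^n */
            printf("%2u-bit address -> %llu addressable words\n",
                   n, (unsigned long long)words);
        }
        return 0;
    }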

History

Detail of the back of a section of ENIAC, showing vacuum tubes

In the early 1940s, memory technology mostly permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits which were held in the vacuum tube accumulators.

The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. A delay line was a glass tube filled with mercury and plugged at each end with a quartz crystal; the quartz crystals acted as transducers, and bits of information were stored as sound waves propagating through the mercury. To remain efficient, delay line memory was limited to a capacity of up to a few hundred thousand bits.

Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, the first random-access computer memory. The Williams tube proved more capacious than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and less expensive, but it was frustratingly sensitive to environmental disturbances.

Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.

Developments in technology and economies of scale have made possible so-called Very Large Memory (VLM) computers.[1]

The term “memory” when used with reference to computers generally refers to Random Access Memory or RAM.

Volatile memory

Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as power is connected and is easy to interface to, but it uses six transistors per bit. DRAM is more complicated to interface to and control, and it needs regular refresh cycles to prevent its contents being lost. However, DRAM uses only one transistor and a capacitor per bit, allowing it to reach much higher densities and, with more bits on a memory chip, be much cheaper per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is used for CPU cache memories. SRAM is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM, TTRAM, A-RAM and ETA RAM.

Non-volatile memory

Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (see ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disks, floppy discs and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards. Forthcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, SONOS, RRAM, Racetrack memory, NRAM and Millipede.

Management of memory

Proper management of memory is vital for a computer system to operate correctly. Modern operating systems have complex systems for managing memory. Failure to manage memory correctly can lead to bugs, slow performance, and in the worst case, takeover by viruses and malicious software.

Nearly everything a computer programmer does requires him or her to consider how to manage memory. Even storing a number in memory requires the programmer to specify how the memory should store it.
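
For instance (a minimal C sketch; the specific types shown are only illustrative), choosing a type tells the compiler how many bytes to reserve for a number and how to interpret them:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int8_t   small = 100;      /* 1 byte, signed, holds -128..127              */
        uint32_t count = 100000u;  /* 4 bytes, unsigned                            */
        double   ratio = 0.5;      /* floating point, typically 8 bytes (IEEE 754) */

        printf("int8_t: %zu byte, uint32_t: %zu bytes, double: %zu bytes\n",
               sizeof small, sizeof count, sizeof ratio);
        return 0;
    }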

Memory management bugs

Improper management of memory is a common cause of bugs.

  • In arithmetic overflow, a calculation results in a number larger than the allocated memory permits. For example, a signed 8-bit integer allows the numbers −128 to +127. If its value is 127 and it is instructed to add one, the computer cannot store the number 128 in that space. Such a case will result in undesired operation, such as the number wrapping around to −128 instead of reaching +128 (see the sketch after this list).
  • A memory leak occurs when a program requests memory from the operating system and never returns the memory when it’s done with it. A program with this bug will gradually require more and more memory until the program fails as it runs out.
  • A segmentation fault results when a program tries to access memory that it has no permission to access. Generally a program doing so will be terminated by the operating system.
  • A buffer overflow occurs when a program writes data past the end of its allocated space, continuing into memory that has been allocated for other purposes. This may result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. Buffer overflows are thus the basis of many software vulnerabilities and can be maliciously exploited.
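
The first two bugs above can be demonstrated directly. The following is a minimal C sketch (the wraparound shown is the typical two's-complement result; strictly speaking, the out-of-range conversion is implementation-defined):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        /* Arithmetic overflow: a signed 8-bit integer holds -128..127.    */
        int8_t value = 127;
        value = (int8_t)(value + 1);   /* 128 does not fit; on two's-complement
                                          machines this typically wraps to -128 */
        printf("127 + 1 stored in 8 bits = %d\n", value);

        /* Memory leak: the buffer below is requested but never freed.     */
        /* Doing this repeatedly would gradually exhaust available memory. */
        char *leaked = malloc(1024);
        (void)leaked;                  /* the missing free(leaked) is the bug */

        return 0;
    }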

Early computer systems

In early computer systems, programs typically specified the memory location to write to and what data to put there. This location was a physical location on the actual memory hardware. The slow processing of such computers did not allow for the complex memory management systems used today. Also, as most such systems were single-task, sophisticated memory management was not required as much.

This approach has its pitfalls. If the location specified is incorrect, the computer will write the data to some other part of memory. The results of an error like this are unpredictable. In some cases, the incorrect data might overwrite memory used by the operating system. Computer crackers can take advantage of this to create viruses and malware.

Virtual memory

Virtual memory is a system where all physical memory is controlled by the operating system. When a program needs memory, it requests it from the operating system. The operating system then decides what physical location to place the memory in.

This offers several advantages. Computer programmers no longer need to worry about where the memory is physically stored or whether the user’s computer will have enough memory. It also allows multiple types of memory to be used. For example, some memory can be stored in physical RAM chips while other memory is stored on a hard drive. This drastically increases the amount of memory available to programs. The operating system will place actively used memory in physical RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving memory from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.
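
As a small illustration (a POSIX-specific C sketch; the printed address is a virtual address chosen by the operating system, not a physical location):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* The pointer returned by malloc is a virtual address; the operating     */
        /* system decides which physical page of RAM (or space on disk) backs it. */
        char *block = malloc(4096);
        if (block == NULL) {
            return 1;
        }
        printf("virtual address of block: %p\n", (void *)block);
        printf("system page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
        free(block);
        return 0;
    }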

Virtual memory systems usually include protected memory, but this is not always the case.

Protected memory

Protected memory is a system where each program is given an area of memory to use and is not permitted to go outside that range. Use of protected memory greatly enhances both the reliability and security of a computer system.

Without protected memory, it is possible that a bug in one program will alter the memory used by another program. That other program will then run with corrupted memory, with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs; this is done by viruses and malware to take over computers.

Protected memory assigns programs their own areas of memory. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated. This way, only the offending program crashes, and other programs are not affected by the error.

Protected memory systems almost always include virtual memory as well.

Footnotes

  1. ^ For example: Stanek, William R. (2009). Windows Server 2008 Inside Out. O'Reilly Media. p. 1520. ISBN 9780735638068. Retrieved 2012-08-20 from http://books.google.com/books?id=SbxixF4iAEcC: “[…] Windows Server Enterprise supports clustering with up to eight-node clusters and very large memory (VLM) configurations of up to 32 GB on 32-bit systems and 2 TB on 64-bit systems.”



This article uses material from the Wikipedia article Memory (computers), which is released under the Creative Commons Attribution-Share-Alike License 3.0.