
"RAM" redirects here.
Random access memory (also hyphenated as random-access memory), usually known by its acronym RAM,
is a class of media used in computers for data storage and retrieval. A
RAM device is designed to allow data to be read or written in any
order—that is, "at random." In addition, the speed at which a set of
data can be accessed is independent of its location in the device.
In
today's computers, the main memory takes the form of a RAM device,
which is usually an integrated circuit containing millions of "memory
cells." This memory stores software programs as well as other data that
are actively being used. The most common types of RAM are volatile, meaning that the
stored information is lost when the power is switched off.
RAM stands in
contrast to sequential access memory (SAM), where the reading or
writing functions are carried out sequentially. For example, a magnetic
tape stores data that can be accessed in only a sequential manner.
Historical highlights
An early type of widespread writable
random access memory was the magnetic core memory, developed in
1949-1951. It was used in most computers until the development of the
static and dynamic integrated RAM circuits in the late 1960s and early
1970s. Before that, computers used relays, delay lines, or various kinds
of vacuum tube arrangements to implement "main" memory functions (that
is, hundreds or thousands of bits), some of which were random access,
some not. Latches built out of vacuum tube triodes, and later out of
discrete transistors, were used for smaller and faster memories such as
registers and (random access) register banks. Prior to the development
of integrated read-only memory (ROM) circuits, permanent (or read-only) random access memory was often constructed using semiconductor diode matrices driven by address decoders.
Overview
Types of RAM
Various
types of RAM have been developed. The most common type of semiconductor
RAM is called "dynamic random access memory," or DRAM. In this case,
each memory cell is formed by pairing a transistor with a capacitor, and
each bit of data is stored as an electric charge in a capacitor. There
are millions of memory cells in each memory chip. In a DRAM chip, the
content (or charge) at every location is held for a fraction of a second
and needs to be refreshed repeatedly.
A second type of
semiconductor RAM is called "static random access memory," or SRAM.
Here, a bit of data is stored in the state of an electronic "flip-flop."
An SRAM chip retains its contents at all locations as long as the power
supply is present.
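The refresh requirement can be illustrated with a small software model. The sketch below is purely conceptual (real refresh is carried out by the memory controller in hardware, and the decay rate and refresh interval here are arbitrary); it simply shows that leaking charge must be rewritten before it falls below the read threshold:

    /* Conceptual model of DRAM refresh: each "cell" holds a charge that
     * leaks over time, so stored bits must be rewritten periodically.
     * This is an illustration only, not how real hardware is programmed. */
    #include <stdio.h>

    #define CELLS 8

    int main(void) {
        int    bits[CELLS] = {1, 0, 1, 1, 0, 0, 1, 0};
        double charge[CELLS];                 /* charge level, 1.0 = fully charged */

        for (int i = 0; i < CELLS; i++)
            charge[i] = bits[i] ? 1.0 : 0.0;

        for (int tick = 1; tick <= 10; tick++) {
            for (int i = 0; i < CELLS; i++)
                charge[i] *= 0.9;             /* leakage: charge decays every tick  */

            if (tick % 4 == 0)                /* periodic refresh: read and rewrite */
                for (int i = 0; i < CELLS; i++)
                    charge[i] = (charge[i] > 0.5) ? 1.0 : 0.0;
        }

        /* With the refresh step, the original bits survive; without it,
         * every cell would decay below the read threshold and read as 0. */
        for (int i = 0; i < CELLS; i++)
            printf("cell %d reads %d (wrote %d)\n", i, charge[i] > 0.5, bits[i]);
        return 0;
    }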
Many other types of memory can be classified
as RAM as well, including most types of read-only memory (ROM) and a
kind of flash memory called NOR flash. RAM of the read-only
type uses a metal mask to permanently enable or disable selected
transistors, instead of storing a charge in them. Some types of RAM have
circuitry to detect and/or correct random faults, called memory errors, in the stored data.
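As a simplified illustration of such error detection, the sketch below stores a single parity bit alongside a byte and notices when one bit has flipped; real ECC memory works in hardware and typically uses stronger codes (for example, SECDED Hamming codes) that can also correct the error:

    /* Single-bit parity as a simplified illustration of memory error
     * detection; real ECC DRAM uses stronger codes (e.g. SECDED Hamming). */
    #include <stdio.h>
    #include <stdint.h>

    /* Even parity over the 8 data bits of a byte. */
    static int parity(uint8_t b) {
        int p = 0;
        while (b) { p ^= (b & 1); b >>= 1; }
        return p;
    }

    int main(void) {
        uint8_t stored = 0x5A;            /* data written to memory          */
        int     check  = parity(stored);  /* parity bit stored alongside it  */

        stored ^= (1 << 3);               /* a "random fault" flips one bit  */

        if (parity(stored) != check)
            printf("memory error detected\n");
        else
            printf("data reads back clean\n");
        return 0;
    }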
As both SRAM and DRAM are volatile,
other forms of computer storage, such as disks and magnetic tapes, have
been used as "permanent" storage in traditional computers. Many newer
products instead rely on flash memory to maintain data between sessions
of use: examples include PDAs, small music players, mobile phones,
synthesizers, advanced calculators, industrial instrumentation and
robotics, and many other types of products; even certain categories of
personal computers, such as the OLPC XO-1, Asus Eee PC, and others, have
begun replacing the magnetic disk with so-called flash drives (similar to
fast memory cards equipped with an IDE or SATA interface).
There
are two basic types of flash memory: the NOR type, which is capable of
true random access, and the NAND type, which is not. The former is
therefore often used in place of ROM, while the latter is used in most
memory cards and solid-state drives because of its lower price.
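The difference in access granularity can be sketched with two toy read interfaces (purely illustrative; real flash chips are driven through a controller and a command protocol, and the page size below is only a typical value):

    /* Conceptual contrast between NOR and NAND flash access granularity.
     * NOR can be read at an arbitrary byte address (true random access,
     * suitable for executing code in place); NAND is read a page at a time. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 2048                        /* a typical NAND page size */

    static uint8_t nor_array[65536];              /* stand-in for a NOR chip  */
    static uint8_t nand_array[64][PAGE_SIZE];     /* stand-in for a NAND chip */

    /* NOR-style read: any single byte, at any address. */
    static uint8_t nor_read(uint32_t addr) {
        return nor_array[addr];
    }

    /* NAND-style read: the whole page containing the byte must be fetched. */
    static void nand_read_page(uint32_t page, uint8_t *buf) {
        memcpy(buf, nand_array[page], PAGE_SIZE);
    }

    int main(void) {
        uint8_t page_buf[PAGE_SIZE];
        uint32_t addr = 5000;

        uint8_t a = nor_read(addr);                   /* one byte, directly   */

        nand_read_page(addr / PAGE_SIZE, page_buf);   /* fetch the whole page */
        uint8_t b = page_buf[addr % PAGE_SIZE];       /* then index into it   */

        printf("%u %u\n", a, b);
        return 0;
    }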
Memory hierarchy
Many
computer systems have a memory hierarchy consisting of CPU registers,
on-die SRAM caches, external caches, DRAM, paging systems, and virtual
memory or swap space on a hard drive. This entire pool of memory may be
referred to as "RAM" by many developers, even though the various
subsystems can have very different access times, violating the original
concept behind the random access term in RAM. Even within a
hierarchy level such as DRAM, the specific row, column, bank, rank,
channel, or interleave organization of the components makes the access
time variable, although not to the extent that access to rotating storage media or
tape is. (Generally, the memory hierarchy is ordered by access
time, with the fast CPU registers at the top and the slow hard drive at
the bottom.)
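The effect of the hierarchy can sometimes be observed from ordinary software. The rough sketch below performs the same number of array accesses on a small array (which fits in on-die cache) and on a large one (served mostly from DRAM); the sizes are illustrative and the results vary widely between machines:

    /* Rough sketch of observing the memory hierarchy from software:
     * summing a small array (fits in cache) versus a large array (mostly
     * served from DRAM). Timings are machine dependent and only indicative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Sum every element of the array 'passes' times; return elapsed CPU time. */
    static double sum_passes(const int *a, size_t n, int passes) {
        volatile long long s = 0;
        clock_t t0 = clock();
        for (int p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++)
                s += a[i];
        (void)s;
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void) {
        size_t small_n = 4 * 1024;              /* ~16 KB: fits in on-die cache   */
        size_t large_n = 64 * 1024 * 1024;      /* ~256 MB: served mostly by DRAM */
        int *small = calloc(small_n, sizeof(int));
        int *large = calloc(large_n, sizeof(int));
        if (!small || !large) return 1;

        /* The same total number of element accesses in each run. */
        double t_small = sum_passes(small, small_n, 16384);
        double t_large = sum_passes(large, large_n, 1);

        printf("small (cached) array: %.3f s\n", t_small);
        printf("large (DRAM) array:   %.3f s\n", t_large);
        free(small);
        free(large);
        return 0;
    }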
In many modern personal computers, the RAM comes in easily upgraded units called memory modules or DRAM modules,
about the size of a few sticks of chewing gum. These can quickly be
replaced should they become damaged or too small for current purposes.
As suggested above, smaller amounts of RAM (mostly SRAM) are also
integrated in the CPU and other ICs on the motherboard, as well as in
hard-drives, CD-ROMs, and several other parts of the computer system.
The
overall goal of using a memory hierarchy is to obtain the highest
possible average access performance while minimizing the total cost of
the entire memory system.
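This trade-off is often summarized as an average access time. The short example below uses made-up figures for cache latency, DRAM latency, and hit rate, purely to illustrate the calculation:

    /* Average memory access time with illustrative, made-up numbers:
     * a fast cache in front of slower DRAM gives near-cache average latency
     * at near-DRAM cost per byte, which is the point of the hierarchy. */
    #include <stdio.h>

    int main(void) {
        double cache_ns = 1.0;      /* hypothetical cache access time        */
        double dram_ns  = 60.0;     /* hypothetical DRAM access time         */
        double hit_rate = 0.95;     /* hypothetical fraction served by cache */

        double avg = cache_ns + (1.0 - hit_rate) * dram_ns;
        printf("average access time: %.1f ns\n", avg);   /* 1 + 0.05*60 = 4 ns */
        return 0;
    }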
Swapping
If a computer runs
low on RAM during periods of intensive use, it can perform
an operation known as "swapping." When this occurs, the computer
temporarily uses hard drive space as additional memory. Constantly
relying on this type of backup memory is called thrashing, which is
generally undesirable because it lowers overall system performance. In
order to reduce the dependency on swapping, more RAM can be installed.
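As a practical illustration, on Linux the kernel reports swap usage through /proc/meminfo; the small sketch below reads it (this interface is Linux-specific, and other operating systems expose the information differently):

    /* Linux-specific sketch: report swap usage by reading /proc/meminfo. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("/proc/meminfo"); return 1; }

        char line[256];
        long swap_total = 0, swap_free = 0;
        while (fgets(line, sizeof line, f)) {
            sscanf(line, "SwapTotal: %ld kB", &swap_total);
            sscanf(line, "SwapFree: %ld kB", &swap_free);
        }
        fclose(f);

        printf("swap in use: %ld kB of %ld kB\n",
               swap_total - swap_free, swap_total);
        return 0;
    }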
Other uses of the "RAM" term
Other
physical devices with read/write capability can have "RAM" in their
names: for example, DVD-RAM. "Random access" is also the name of an
indexing method: hence, disk storage is often called "random access"
because the reading head can move relatively quickly from one piece of
data to another, and does not have to read all the data in between.
However the final "M" is crucial: "RAM" (provided there is no additional
term as in "DVD-RAM") always refers to a solid-state device.
RAM disks
Software
can "partition" a portion of a computer's RAM, allowing it to act as a
much faster drive, called a RAM disk. Unless the memory used
is non-volatile, a RAM disk loses the stored data when the computer is
shut down. However, volatile memory can retain its data when the
computer is shut down if it has a separate power source, usually a
battery.
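On Linux, a RAM-backed tmpfs filesystem is typically mounted at /dev/shm; the rough sketch below writes the same buffer there and to an ordinary file and compares the elapsed time. The path and buffer size are illustrative, and write caching by the operating system blurs the comparison:

    /* Linux-specific sketch: write the same buffer to a RAM-backed tmpfs
     * file (/dev/shm) and to an ordinary file, and compare elapsed time.
     * OS write caching blurs the comparison; timings are only indicative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static double write_file(const char *path, const char *buf, size_t n) {
        double t0 = now_sec();
        FILE *f = fopen(path, "wb");
        if (!f) { perror(path); exit(1); }
        fwrite(buf, 1, n, f);
        fclose(f);
        return now_sec() - t0;
    }

    int main(void) {
        size_t n = 64 * 1024 * 1024;                  /* 64 MB test buffer */
        char *buf = calloc(n, 1);
        if (!buf) return 1;

        printf("tmpfs (RAM-backed): %.3f s\n",
               write_file("/dev/shm/ramdisk_test", buf, n));
        printf("ordinary file:      %.3f s\n",
               write_file("./ramdisk_test", buf, n));

        remove("/dev/shm/ramdisk_test");
        remove("./ramdisk_test");
        free(buf);
        return 0;
    }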
Shadow RAM
Sometimes, the contents of a ROM
chip are copied to SRAM or DRAM to allow for shorter access times (as
ROM may be slower). The ROM chip is then disabled while the initialized
memory locations are switched in on the same block of addresses (often
write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
As
a common example, the BIOS in typical personal computers often has an
option called "use shadow BIOS" or similar. When enabled, functions
relying on data from the BIOS's ROM will instead use DRAM locations
(most can also toggle shadowing of video card ROM or other ROM
sections). Depending on the system, this may or may not result in
increased performance. On some systems the benefit may be hypothetical
because the BIOS is not used after booting in favor of direct hardware
access. Of course, somewhat less free memory is available when shadowing
is enabled.
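The idea can be loosely mimicked in ordinary software, as in the conceptual sketch below: a table held in read-only storage (a const array standing in for the ROM) is copied once into RAM, and all later reads go to the RAM copy. Real shadowing is set up by the chipset, which remaps a block of addresses from ROM to RAM; the table contents here are arbitrary.

    /* Conceptual analogy to shadowing: data that lives in slow read-only
     * storage (a const table standing in for ROM) is copied once into
     * ordinary RAM, and all later accesses go to the RAM copy. Real shadow
     * RAM is set up by the chipset remapping addresses, not by user code. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    static const uint8_t rom_image[256] = { 0x55, 0xAA, 0x90 /* ... */ };

    static uint8_t shadow[256];              /* RAM copy of the ROM contents */
    static const uint8_t *active = rom_image;

    static void enable_shadowing(void) {
        memcpy(shadow, rom_image, sizeof shadow);  /* copy ROM into RAM     */
        active = shadow;                           /* redirect later reads  */
    }

    int main(void) {
        enable_shadowing();
        printf("byte 0 via shadow copy: 0x%02X\n", active[0]);
        return 0;
    }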
Recent developments
Several new types of non-volatile
RAM, which will preserve data while powered down, are under
development. The technologies used include carbon nanotubes and the
magnetic tunnel effect. In summer 2003, a 128 KB (128 × 2^10 bytes)
magnetic RAM (MRAM) chip manufactured with 0.18 µm technology was introduced. In June
2004, Infineon Technologies unveiled a 16 MB (16 × 2^20 bytes) prototype,
again based on 0.18 µm technology. In 2004, Nantero built a functioning carbon
nanotube memory prototype, a 10 GB (10 × 2^30 bytes) array. Whether
some of these technologies will be able to eventually take a significant
market share from either DRAM, SRAM, or flash-memory technology,
however, remains to be seen.
Since 2006, "Solid-state drives"
(based on flash memory) with capacities exceeding 64 gigabytes and
performance far exceeding traditional disks have become available. This
development has started to blur the distinction between traditional
random access memory and "disks," dramatically reducing the difference
in performance. Research is also being done on plastic magnets,
which switch magnetic polarity in response to light.
Memory wall
The
"memory wall" is the growing disparity between the speed of the central
processing unit (CPU) and that of the memory outside the CPU chip. An
important reason for this disparity is the limited communication
bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved
at an annual rate of 55 percent, while memory speed improved at only 10
percent. Given these trends, it was expected that memory latency would
become an overwhelming bottleneck in computer performance.
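Those growth rates compound, so the gap widens quickly; a short calculation using the quoted 55 percent and 10 percent annual figures illustrates this:

    /* Compounding the quoted annual improvement rates (55% for CPUs,
     * 10% for memory) shows how the processor-memory gap widens over time. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        for (int years = 0; years <= 14; years += 7) {
            double cpu = pow(1.55, years);
            double mem = pow(1.10, years);
            printf("after %2d years: CPU x%7.1f, memory x%4.1f, gap x%6.1f\n",
                   years, cpu, mem, cpu / mem);
        }
        return 0;
    }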
Currently,
CPU speed improvements have slowed significantly, partly because of
major physical barriers and partly because current CPU designs have
already hit the memory wall in some sense. Intel summarized these causes
in its white paper "A Platform 2015 Workload Model."
The RC delays in signal transmission were also noted in a report
titled "Clock Rate versus IPC: The End of the Road for Conventional
Microarchitectures," which projects a maximum of 12.5 percent average
annual CPU performance improvement between 2000 and 2014. The data on
Intel processors clearly show a slowdown in performance improvements in
recent processors. However, Intel's newer Core 2 Duo processors (codenamed
Conroe) show a significant improvement over the previous Pentium 4
processors; thanks to a more efficient architecture, performance has
increased while the clock rate has actually decreased.
Security concerns
Contrary
to simple models (and perhaps common belief), the contents of modern
SDRAM modules are not lost immediately when the computer is shut down;
instead, the contents fade away, a process that takes only seconds at
room temperature but can be extended to minutes at low temperatures. It
is therefore possible for an attacker to recover an encryption key
if it was stored in ordinary working memory (i.e., in the SDRAM modules).
This is sometimes referred to as a cold boot attack.