APS News

August/September 2003 (Volume 12, Number 8)

Physics and Technology Forefronts

Non-Volatile Storage For Information Access

By Charles C. Morehouse

Figure 1: The gap in access time and cost between storage and memory.

Figure 2: Examples of probe storage (Atomic Resolution Storage).

Most of us are confronted with the need to store and retrieve information every day. Our expectation of having access to important information is with us constantly, and it is growing all the time. As complex and sophisticated tools became more widely used, particularly during the industrial revolution of the 18th and 19th centuries, means of "storing" and "retrieving" information for use by the tools themselves emerged.

Tools Needing Storage
The first storage device meant for use by a machine alone was the music box of the late 18th century. These machines used a handcrafted metal cylinder as the medium for "storing" a musical tune.

At the beginning of the 19th century, a purely industrial use of memory appeared in the pioneering programmable mechanical loom invented by Joseph Jacquard, which stored its weaving pattern on thin cardboard cards. Charles Babbage used punched cards as input to calculations in his Analytical Engine, a mechanical computing machine he first described in 1837. Punched cards were first used in a large data-analysis project in the 1890 U.S. Census, on an electromechanical machine invented by Herman Hollerith. Hollerith's machine was the forerunner of an entire industry based on punched cards for data storage and analysis.

As mechanical, electromechanical, and fully electronic machines developed in the 20th century, memory had to develop in parallel. Paper cards and other perforated media, such as paper and Mylar tape, had a long life (nearly a century). Tinfoil was the first medium used in audio recording, but it gave way to wax cylinders and, later, sturdier plastics.

Magnetic recording of data was first demonstrated by Valdemar Poulsen in the late 19th century, using a magnetizable steel wire as the medium. Finally, in the 1930s, magnetic recording was demonstrated on thin plastic tape coated with magnetic particles.

Computers became more capable, smaller, less power-hungry, and much faster during the middle of the 20th century. The need for fast data storage, both persistent and temporary, drove the development of a number of technologies during the 1940s and 1950s: CRT storage, delay lines, magnetic cores, and magnetic drums and disks. The ENIAC, the first general-purpose electronic computer, originally held its numbers in vacuum-tube accumulators; a magnetic core memory was added to it only in 1953.

The characteristics of a useful information storage scheme vary with the intended use: how long the information will be needed, how large a capacity is required, and how fast the information needs to be accessed and transferred.

Paths to nonvolatile storage
As machines have come to dominate the processing of information, the machines themselves have driven the development (and even the naming) of storage and memory technologies. It is primarily the increasing speed of computation that has forced the development of new technologies: fast computation requires fast data access, and it also drives up the required capacity of stored data and the need for rewritable data storage.

Digital computers carry out almost all of the financial and technical computation performed in the world today. Millions of customers conducting millions of modest-sized transactions create the need for high-speed computation and large amounts of storage. Digital imaging and digital document creation have likewise created a need for large data stores, and transforming and distributing these documents and images requires high-capacity storage as well as high-speed computation and communications. So a dual need is established: high-speed data access and very-large-capacity storage. These two demands are satisfied by different technological solutions.

Landscape of nonvolatile storage
A typical storage device uses a semipermanent scheme for storing information, and the information is usually maintained even if power to the device is lost. We call this kind of storage nonvolatile. Storage devices typically have large capacity, and their design is a compromise that maintains a low price at the expense of access time (the time required to reach the first bit of desired information). The compromise is achieved by using a minimum number of the complex, expensive components (recording heads, drive motors, etc.) and a large amount of the inexpensive commodity (primarily the recording medium). Rigid disks, flexible disks, magnetic tapes, CD-ROMs, CD-RWs, DVD-ROMs, and DVD+RWs are examples of such devices.
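
The economics of this compromise can be seen with a toy cost model. The short Python sketch below is illustrative only: the fixed cost and per-gigabyte medium cost are hypothetical numbers, chosen simply to show that spreading a few expensive components over more and more cheap medium drives the price per gigabyte toward the cost of the medium alone.

```python
# Toy cost model for a storage device. All prices are hypothetical,
# chosen only to illustrate the shape of the tradeoff.

FIXED_COST = 50.0          # heads, motors, electronics ($, assumed)
MEDIUM_COST_PER_GB = 0.50  # recording medium ($/GB, assumed)

def cost_per_gb(capacity_gb: float) -> float:
    """Total device cost spread over its capacity."""
    return (FIXED_COST + MEDIUM_COST_PER_GB * capacity_gb) / capacity_gb

for capacity in (10, 100, 1000):
    print(f"{capacity:5d} GB -> ${cost_per_gb(capacity):5.2f}/GB")
```

At 10 GB the expensive mechanism dominates the price per gigabyte; at 1000 GB the device costs little more per gigabyte than its recording medium.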

Components that store data briefly within a computer are called memory devices. While the computer is powered up, all the information is kept in memory by the live circuitry. If power is interrupted, most of the information within the computer's memory is lost: the memory is volatile. Some of the memory (DRAM) has to be refreshed even when the power is on. In everyday use we rarely notice that some of this volatile memory is being rewritten constantly; the computer does it for us in the background. The speed of typical memory comes from its random-access architecture, which provides an address for every bit whether or not the processor needs it at any given instant. Random-access designs are achieved with semiconductor technology based on photolithography, so typical computer memory is very similar in cost to other semiconductor devices, such as microprocessors. Memory is the most expensive form of information storage, but it can be as fast as the fastest processors (nanosecond delay times to reach the first desired bit).

In many applications, losing information to a power failure is acceptable, since the information can often be recalculated or recreated when the computer is turned back on. In a growing number of applications, however, the information has taken so long to create, or is so unique to the situation, that it should be kept in nonvolatile memory. Thus there is a need to fill the gap between slow, inexpensive storage and fast, expensive memory. The gap is very large: about six orders of magnitude in "speed" (access time) and roughly three orders of magnitude in cost per bit.
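
A back-of-envelope calculation shows where figures of this size come from. The sketch below uses rough, assumed values of the era (about 10 ns to the first bit of DRAM versus about 10 ms for a mechanical storage device, and roughly $0.15 per megabyte for DRAM versus $0.0001 per megabyte for inexpensive storage such as tape); these are illustrative numbers, not quoted specifications.

```python
import math

# Rough, assumed circa-2003 numbers, for illustration only.
dram_access_s = 10e-9      # ~10 ns to the first bit of DRAM (assumed)
disk_access_s = 10e-3      # ~10 ms seek plus rotational latency (assumed)

dram_cost_per_mb = 0.15    # $/MB for DRAM (assumed)
tape_cost_per_mb = 0.0001  # $/MB for tape (assumed)

speed_gap = disk_access_s / dram_access_s        # 1e6
cost_gap = dram_cost_per_mb / tape_cost_per_mb   # 1.5e3

print(f"access-time gap:  ~10^{math.log10(speed_gap):.0f}")  # ~10^6
print(f"cost-per-bit gap: ~10^{math.log10(cost_gap):.0f}")   # ~10^3
```

With numbers this rough the exponents shift by a factor of a few either way, but the six-orders and three-orders scale of the gap is robust.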

Development directions for nonvolatile memory/storage
The fundamental choices behind memory and storage have to do with how data are accessed. Storage is based on a movable read-write mechanism that can be positioned to reach any desired bit of information. The only way to speed up such a mechanical system is to use smaller components (or to employ steerable beams to access the data). The cross-point architecture of memory, by contrast, is achieved by a lithographic process on silicon wafers. The most effective ways to reduce the cost of memory devices (relative to the cost of other semiconductor devices) are to put more bits on the same area of silicon (through multi-bit recording or multi-layering) or to change the fundamental materials of the cross-point memory cells.
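
To see why the cross-point architecture is fast, consider the minimal sketch below: every bit sits at the intersection of a row line and a column line, so any address decodes into a (row, column) pair with no mechanical motion at all. The 1-Mbit array size is an assumption made for the example.

```python
# Minimal model of cross-point (random access) addressing.
# The array dimensions are hypothetical.

ROWS, COLS = 1024, 1024                  # a 1-Mbit array (assumed)
array = [[0] * COLS for _ in range(ROWS)]

def write_bit(address: int, value: int) -> None:
    row, col = divmod(address, COLS)     # pure address decode: no seek
    array[row][col] = value

def read_bit(address: int) -> int:
    row, col = divmod(address, COLS)
    return array[row][col]

write_bit(123456, 1)
print(read_bit(123456))                  # -> 1
```

Every access costs the same regardless of which bit is requested, a property that no movable read-write mechanism can offer.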

Physically smaller storage devices were imagined after the invention of the Scanning Tunneling Microscope by Gerd Binnig and Heinrich Rohrer in 1981. The STM made everyone aware that atomic-scale motion could be achieved at affordable prices. A long period of investigation has yielded a number of active efforts to produce so-called "probe storage" devices, all of which use Micro-Electro-Mechanical Systems (MEMS) for positioning but different schemes for actually writing and reading the data bits. The MEMS part of the device positions a small movable element with respect to the read-write components. The very dense packing of the data is accomplished with an x-y mover (a Cartesian coordinate system) rather than the rotating disk and radial arm motion of a disk drive (a cylindrical coordinate system). For the read-write mechanism, HP is perfecting a phase-change scheme in its Atomic Resolution Storage program, Carnegie Mellon University is pursuing a magnetic scheme in its CHIPS program, and IBM is working on physical deformation of a polymer in its Millipede (now NanoDrive) program.
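
The two positioning geometries can be made concrete with a short sketch. Everything numeric below (bit pitch, track pitch, radius, field sizes) is an assumed value for illustration; the point is only that a probe-storage device decodes a bit index into a Cartesian (x, y) offset, while a disk drive decodes it into a cylindrical (r, theta) position.

```python
import math

BIT_PITCH = 50e-9  # assumed bit spacing of 50 nm

def probe_position(bit_index: int, bits_per_row: int) -> tuple:
    """Cartesian (x, y) offset of a bit in a probe-storage field."""
    row, col = divmod(bit_index, bits_per_row)
    return (col * BIT_PITCH, row * BIT_PITCH)

def disk_position(bit_index: int, bits_per_track: int,
                  inner_radius: float, track_pitch: float) -> tuple:
    """Cylindrical (r, theta) position of a bit on a rotating disk."""
    track, sector = divmod(bit_index, bits_per_track)
    r = inner_radius + track * track_pitch
    theta = 2 * math.pi * sector / bits_per_track
    return (r, theta)

print(probe_position(12345, bits_per_row=1000))
print(disk_position(12345, bits_per_track=1000,
                    inner_radius=0.01, track_pitch=1e-6))
```

In the Cartesian case both axes need only travel distances set by the size of the data field itself, which is what makes MEMS-scale actuators sufficient.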

Other approaches are also being tried, using charge trapping (Canon), ferroelectric polarization (HP), and probe phase change (LETI), to name a few. These and other "probe storage" approaches are in various stages of development, and a race is on to bring them into the hands of customers. The schedules for product introductions are closely held secrets of the participating players.

Nonvolatile memory is also being pursued actively by a large number of parties, and a few of these technologies are already on the market. Flash memory exists today as a nonvolatile memory technology, but it has the disadvantages that it cannot be rewritten an unlimited number of times and that its writes are too slow to keep pace with processors. Researchers are therefore looking at nonvolatile memory technologies that differ from flash's charge-storage scheme. Leading the way are MRAM (magnetic RAM; a host of companies have announced or suggested product-introduction dates in 2003 and beyond), FeRAM (ferroelectric RAM; modest-capacity products are available today from Ramtron, with many others working on it), phase change (Ovonyx has announced partnerships with Intel and STMicroelectronics), NROM (charge trapping in nitrides, pursued most actively by Saifun Semiconductors and AMD), and polymer memory (pursued by a number of companies). Each of these technologies has its advantages and disadvantages, and the winners will be determined by how well they address customers' problems and succeed in the marketplace.

These new technologies will address a wide range of requirements: traditional, emerging, and future. Not all markets need the full range of capabilities imagined for a "perfect" memory component. A digital-camera user does not expect digital film to survive an infinite number of read-write cycles; an archival data storage system does not have to run at nanosecond access times. There will be a large number of products, with a large variety of capabilities, squeezing into, between, and beyond the spaces occupied by today's magnetic and optical phase-change storage and high-speed silicon-based computer memory. We technologists have the wonderful opportunity to be both the developers of future products based on many different physical principles and the first users of them.

Charles Morehouse is director of the Information Access Laboratory at Hewlett-Packard.
