February 4, 1997, The New York Times

Incredible Shrinking Transistor Nears Its Ultimate Limit: The Laws of Physics

By WILLIAM J. BROAD

It was just before Christmas in 1947 that a team of scientists at Bell Laboratories in Murray Hill, N.J., created the first transistor. Neither they nor anyone else knew where it would go. When the invention was unveiled publicly in 1948, it received scant attention. The New York Times ran a four-paragraph article saying the half-inch device had "several applications in radio where a vacuum tube ordinarily is employed."

Today, almost a half-century later -- with transistors shrunk dramatically in size, amassed by the millions on computer chips and humming away as the electronic brains of toys, cameras, wristwatches, faxes, cellular telephones, radios, musical instruments, cars, jets, computers, televisions, rockets, satellites, space probes and countless other devices -- scientists around the globe are making a furious assault on the last frontier of electronics, perhaps foreshadowing an end to at least one aspect of the revolution.

They are striving to create transistors that work by virtue of the movement of a single electron, the subatomic particle that is a building block of matter and the fundamental unit of electricity. The parts of these most Lilliputian of all transistors are about one five-millionth the size of those of the first rudimentary transistor shown to the public, in theory allowing a phenomenal one trillion of them to crowd a computer chip the size of a fingernail. Scores of research teams are now racing for such tininess.

One dream is to combine such chips into tiny personal supercomputers that would ride inconspicuously on a person's body to digitally record and recall everything read, heard and seen.

The feat of creating a transistor operated by a single electron has recently been achieved in the laboratory. But translating it into commercial products is daunting and could take decades, if it happens at all.

Still, experts are confident that one way or another, transistors will continue to shrink in the near future.

"The equations start to break down as things get really small," Dr. William J. Brinkman, head of material sciences research at Bell Labs, said in an interview. "But every time we run into a problem, we've been able to figure a way around it."

Dr. Brinkman estimated that the transistor revolution had so far gone "a little past halfway" toward the point where the shrinking might be forced to an end by the laws of physics.

Then again, the whirl might continue as scientists find clever ways to circumvent the natural limits of particular materials and methods.

"If you know where the wall is, you can make profound changes," said Dr. Bernard S. Meyerson, a senior manager in chip design at the Watson Research Center of the International Business Machines Corp. in Yorktown Heights, N.Y. "You simply do an end run," he said. "You change the rules, or tweak them."

The first transistor, built by William Shockley, John Bardeen and Walter H. Brattain in 1947, worked like a vacuum tube to amplify small electric currents in a way that was remarkably simple.

The scientists first made a sandwich of semiconductive materials and then applied a small current, or flow of electrons, to its middle region (called the base). That current changed the conductive properties of the material and let a relatively large number of electrons flow between the semiconductor's outer layers (the emitter and collector).

Importantly, the large flow varied in proportion to the small base current, allowing all kinds of amplification. For instance, a series of transistors could boost a microphone's tiny currents into ones powerful enough to drive a loudspeaker.
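That proportionality can be pictured as a fixed current gain; a toy sketch in Python (the gain of 100 is illustrative, not a figure from the article):

```python
def collector_current(base_current_amps, gain=100):
    """Idealized transistor: the large collector flow is a fixed
    multiple (the gain) of the small base current."""
    return gain * base_current_amps

# A microphone-scale signal of 10 microamps becomes 1 milliamp,
# enough (after further stages) to drive a loudspeaker.
print(collector_current(10e-6))
```

Doubling the input doubles the output, which is what makes faithful amplification of a sound wave possible.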

Relatively cheap, transistors did their amplification without the heat, bulk and heavy power drain of vacuum tubes, whose evacuated glass cylinders glowed like dim light bulbs.

The three Bell physicists won a Nobel Prize for their discovery in 1956 as the invention became increasingly dependable and gave birth to an age of miniaturized electronics.

One advance let a relatively small base voltage excite a full flow of electrons that had no fluctuations, allowing transistors to act as on-off switches, or, in mathematical terms, "0" and "1." This digital mode of operation gave rise to logic circuits and central processing units of computers that were increasingly compact and fast.
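The on-off behavior is what makes logic circuits possible; a minimal sketch of how switch-like transistors yield a universal gate (the Python model is mine, not the article's):

```python
# Two transistor switches in series conduct only when both inputs are
# "on"; reading the inverted output gives a NAND gate, from which any
# logic circuit can be built.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# NOT, built from NAND alone:
def inv(a: int) -> int:
    return nand(a, a)
```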

The other breakthrough came as researchers began to carve many transistors into a single "chip" of semiconductive material, usually made of silicon. Jack Kilby of Texas Instruments did so in 1958, and Robert Noyce of Fairchild Semiconductor advanced the idea in 1959, his refinement paving the way for mass production.

By the 1960's, companies were racing to cram ever more transistors into devices known as integrated circuits, rapidly changing the face of electronics as the tiny chips became increasingly powerful.

In 1965, Gordon Moore of Fairchild, who three years later joined with Noyce to found the Intel Corp., predicted that the number of transistors that designers could pack on a chip would double every 18 months or so, an axiom later known as Moore's law. Outlandish though it seemed, the law has held up remarkably well over the decades.
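Such a doubling compounds quickly; a back-of-envelope sketch (the baseline of roughly 2,300 transistors is the Intel 4004 of 1971, my choice of starting point; real counts tracked a somewhat slower doubling, closer to two years):

```python
# Moore's law as stated here: transistor counts double every 18 months.
def projected_count(year, base_year=1971, base_count=2300, months=18):
    doublings = (year - base_year) * 12 / months
    return base_count * 2 ** doublings

for y in (1971, 1980, 1990, 1997):
    print(y, f"{projected_count(y):,.0f}")
```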

The law is based on the steady shrinkage of transistor size. A human hair is about 100,000 nanometers wide, a nanometer being a billionth of a meter (a meter is 39.37 inches). In 1970, the constituent parts of transistors were about 12,000 nanometers wide. By 1980, these parts had shrunk to 3,500 nanometers. By 1990, they were 800 nanometers. And today, they are nearing 300 nanometers.
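Those figures imply a shrink factor of roughly three to four per decade; a quick check (taking "today" as 1997, the article's date):

```python
# Feature widths in nanometers, by year, as quoted in the article.
sizes = {1970: 12_000, 1980: 3_500, 1990: 800, 1997: 300}

years = sorted(sizes)
for earlier, later in zip(years, years[1:]):
    factor = sizes[earlier] / sizes[later]
    print(f"{earlier} -> {later}: features shrank about {factor:.1f}x")
```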

Amid this shrinkage, chips went from having hundreds of transistors to millions. And on the horizon are billions, a goal widely seen as reachable early next century as line widths shrink to near 100 nanometers.

Today, for instance, a Pentium Pro chip made by Intel, the heart of some of the most advanced personal computers, has 5.5 million transistors. Intel expects to cram a billion transistors onto its central processors by the year 2011. And memory chips, which are much easier to make than number crunchers, are expected to have many billions of transistors in a decade or so, allowing computers the size of refrigerators to shrink to fit onto desks or laps.
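Intel's projection can be sanity-checked against Moore's law: going from 5.5 million transistors in 1997 to a billion by 2011 implies a doubling roughly every two years (the arithmetic below is mine, not from the article):

```python
import math

growth = 1_000_000_000 / 5_500_000            # ~182-fold growth, 1997 to 2011
doublings = math.log2(growth)                 # about 7.5 doublings
months_per_doubling = (2011 - 1997) * 12 / doublings
print(f"{months_per_doubling:.1f} months per doubling")
```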

"There's been an explosion in component density," Dr. Michael Riordan, the author of "Crystal Fire," a history of the transistor to be published this year by W.W. Norton, said in an interview.

"The only explosion I can compare it to is the big bang at the birth of the universe. It's that kind of exponential growth, and it's giving rise to a universe we can only imagine."

Despite decades of blazing success, the race to make transistors smaller is getting increasingly difficult and costly. So the question keeps arising of whether Moore's law will lapse in the next decade or so, ending the era of explosive growth.

To some extent, the outcome depends on the cleverness of chip designers in finding ways to go beyond the interplay of light and photographic reactions by which thin wafers of silicon and other semiconductive materials are today cut into diminutive mazes of transistors. A host of new cutting tools are now being developed, including ones based on X-rays, electrons themselves and ultrafine molds that work like cookie cutters.

In the most futuristic push of all, researchers are racing to cut microscopic structures in which a single electron does the transistor job. The work, while hard, has its advantages.

For instance, all electrons carry a negative charge and thus tend to repel one another -- an effect known as Coulomb repulsion that complicates handling when groupings of electrons get very small. But a solitary electron has no neighbors and no such interactions.

The ultimate goal is to stay in the forefront of one of the world's most competitive and profitable industries. As Science magazine noted in a Jan. 17 article on single-electron transistors, "small is where the cash is."

At the IBM Watson Research Center, scientists are making silicon structures as small as 30 nanometers in which the movement of a single electron in and out of a tiny storage area constitutes the heart of a transistor memory circuit. Storing a single electron is generally seen as a first step on the road to the more difficult job of directing a steady flow of single electrons in a logic circuit.

Dr. Sandip Tiwari, an IBM scientist, said his group had succeeded in storing anywhere from one to seven electrons to create a memory circuit, a notable feat considering that traditional devices use tens of thousands of electrons for the same job.

Three scientists at the University of Minnesota, Dr. Lingjie Guo, Effendi Leobandung and Dr. Stephen Y. Chou, writing in the Jan. 31 issue of Science, report the development of a transistor memory circuit in which a single electron is stored in a tiny dot of silicon that is just seven nanometers wide.

In an interview from Tokyo, where he was visiting scientists doing similar work, Dr. Chou, the Minnesota team leader, estimated that about 60 groups around the globe were vying to perfect single-electron transistors, some 20 each in Japan, Europe and the United States.

"Many superb devices have been invented in the lab," Dr. Chou said. "But they cannot be commercialized because of the lack of an appropriate method of manufacturing."

His own group, he added, is developing a press that, working like a cookie cutter, in theory might be able to churn out infinitesimal transistors by the trillions.

While big challenges loom, scientists say the likelihood is that the barriers to making small structures will keep falling, allowing the microelectronics revolution to continue to move at a steady clip for many decades to come.

"Innovation is accelerating," said Dr. Albert Yu, head of microprocessor products at Intel. "I'm very optimistic. I don't think things are going to slow down as long as the market keeps going."