Ed’s Threads 070921: Musings by Ed Korczynski on September 21, 2007
Flash and DRAM rule future of IC memory
A one-day technical symposium on “New Frontiers in Memory,” sponsored by the IEEE and Applied Materials, was held Sept. 20th at the Hotel Valencia in San Jose, CA. Amidst the ostentatious splendor of the flashy hotel, a standing-room-only crowd of technologists learned about the leading edge of manufacturing the densest, fastest, cheapest IC memories. The takeaway theme: the two trains of DRAM and flash memory technologies have long “left the station,” and unless and until they stop, other technologies such as phase-change RAM (PRAM) and magneto-resistive RAM (MRAM) will be relegated to niche applications.
Sung-Joo Hong, VP of R&D for Hynix, discussed the scaling limits of trench-DRAM technology, determined by the control of subtle topography variations inside storage-node trenches. Retention time of the recess cell transistor will be challenged again with the introduction of 1.2V devices. With inherently smaller storage area and higher fields at junctions, extending current device architectures would result in excessively low retention time. Even the lowest equivalent oxide thickness (EOT) of 3Å in a 50:1 aspect-ratio trench is not sufficient for 3Xnm-node technology. Selective epitaxy and/or finFETs (with p+ poly gates) are possible solutions, though drain-induced barrier lowering (DIBL) is an inherent challenge for finFETs.
George Samachisa, VP of technology at SanDisk, showed that as flash capacity has improved while cost has dropped, it has come to overlap with hard disk drives (HDD) and DRAM/SRAM. With another 10x reduction in price, SanDisk projects that flash cost/bit could actually be less than DRAM's. NAND flash costs ~$10/GB today, with ~$1/GB likely in 5-7 years, and today's capacity of 16Gb/chip is expected to increase to 128-256Gb chips over the same period. To continue scaling, the NAND and its controller must work together on defect management, wear leveling, cell-to-cell interference mitigation, file/bad-block management, standard I/O, and DSP error-correction control to enable >2 bits/cell.
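As a rough illustration of one of those controller duties, a toy wear-leveling scheme might steer each write to the least-erased free physical block so that erase cycles spread evenly across the device. This is purely hypothetical sketch code, not SanDisk's algorithm:

```python
# Toy wear-leveling sketch: on each write, place data in the least-erased
# free physical block so erase counts stay roughly even across the device.
# Purely illustrative -- real NAND controllers are far more sophisticated.

class ToyWearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # erases per physical block
        self.logical_map = {}                  # logical block -> physical block
        self.free_blocks = set(range(num_blocks))

    def write(self, logical_block):
        old = self.logical_map.get(logical_block)
        if old is not None:
            # Erase-before-write: recycle the old physical block.
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        # Greedy choice: the least-worn free block.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.discard(target)
        self.logical_map[logical_block] = target
        return target

wl = ToyWearLeveler(4)
for _ in range(100):
    wl.write(0)   # hammer a single logical block
print("erase counts per block:", wl.erase_counts)
```

Even though every write targets the same logical block, the erase wear ends up spread nearly evenly across all four physical blocks.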
SanDisk has pushed five generations of technology in just as many years of production. In 2004 most production was 130nm, while by the end of this year the majority will be 70nm, and 2008 will be mostly 56nm with some 43nm in volume. Alternative NAND technologies (SONOS and TONOS) have so far not lived up to expectations, so SanDisk believes that floating gate is still the best candidate for scaling down to the 20nm technology node. Adding SONOS would allow NAND to be scaled one more node to 1Xnm, with 3D technology the likely successor.
Prof. H.S. Wong of Stanford U., formerly with IBM's T.J. Watson Research Center, discussed the bleeding edge of “emerging memories,” including charge storage, phase change, nano-filament formation, ferroelectrics, magneto-resistance, stiction force, and mechanical deformation. Wong cautioned that any researcher observing hysteresis in a physical phenomenon is tempted to claim a "new memory technology" -- but density, scalability, and manufacturing-cost constraints tend to eliminate most from serious consideration.
Tom Andre, Freescale Semiconductor's head of toggle-MRAM technology, explained that 0.18µm MRAM technology provides data retention of >20 years and unlimited endurance at 125°C for a 4Mb toggle MRAM running on a 3.3V power supply, with a 26mm² chip size (based on a 1.26µm² cell size). The market space for fast, non-volatile memory allows for a price of US$4/Mb. Spin-torque MRAM, as opposed to the toggle variant, allows for more efficient writing based on current density rather than energy transferred through a magnetic field. Distributions of write currents can be a problem, particularly at the high end, where excessive currents can induce breakdowns.
PRAM seems promising, and the fact that ex-Intel flash leader and CTO Stefan Lai has joined Ovonyx is encouraging, but this technology has been pushed for nearly 40 years by Energy Conversion Devices, Ovonyx's parent company. (This time for sure…) Samsung’s 512Mb PRAM in 90nm technology uses PN diodes, complex top contacts, and other unique processes on top of standard CMOS. Intel plans for PRAM production, too.
Metal-oxide memories have been shown with NiO, TiO, Nb2O5, Al2O3, Ta2O5, and Cr-doped perovskites. The exact mechanism is not clear, but some manner of conductive-filament formation seems to be involved. Consequently, the on-current should be area-independent while the off-current should be area-dependent. Solid electrolytes such as Cu-WO3 and Cu-Cu2S could be used in the future, and theoretically scaled down to a single atom between electrodes. HP’s crossbar nano-array architecture might fit into this categorization, too.
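That area argument can be made concrete with a toy numeric model: the ON state conducts through a filament whose size is set by forming conditions rather than cell area, while the OFF state leaks uniformly across the whole cell. All values below are illustrative assumptions, not measured data:

```python
# Toy model of a filamentary metal-oxide memory cell (illustrative numbers,
# not measured data). ON-state conduction goes through a single nanoscale
# filament, independent of cell area; OFF-state leakage scales with area.

V_READ = 0.5          # read voltage, volts (assumed)
R_FILAMENT = 10e3     # filament resistance, ohms (assumed, area-independent)
J_LEAK = 1e-2         # off-state leakage current density, A/cm^2 (assumed)

def currents(cell_side_nm):
    area_cm2 = (cell_side_nm * 1e-7) ** 2   # 1 nm = 1e-7 cm
    i_on = V_READ / R_FILAMENT              # same for every cell size
    i_off = J_LEAK * area_cm2               # shrinks as the cell scales
    return i_on, i_off

for side in (90, 45, 22):
    i_on, i_off = currents(side)
    print(f"{side:3d}nm cell: Ion = {i_on:.1e} A, Ioff = {i_off:.1e} A")
```

The upshot of the model is that scaling the cell down leaves the on-current untouched while shrinking the off-current, so the on/off ratio actually improves with scaling.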
Any new memory technology must meet a market need, and must compete with DRAM and flash in terms of cost and functionality. “There’s a lot of room to scale DRAM before we need new memory technology,” said Applied Materials Fellow Reza Arghavani, in an exclusive interview with WaferNEWS. Arghavani points out that equipment companies can bring to memory manufacturing innovations that have been in use in logic fabs for generations, such as copper interconnects, epi-layers, HK+MG, and low-k dielectrics. “They have to be re-optimized and re-integrated, but fundamentally they are the same technologies,” he said. Charge-trap memories are just like HK+MG stacks in their need for work-function engineering of the materials interfaces, he pointed out. “The physics of it is identical.”
For at least the last few nodes, logic has driven thin-films and new materials development, while memory has driven lithography development. “Flash is driving litho resolution, while overlay is currently being driven by DRAM,” clarified Rudi Hendel, Applied Materials' managing director, technology programs, in an exclusive interview with WaferNEWS.
Humans like to sort and store information, and the ever-greater ability to store data in digital form continues to spur demand for IC memory. SanDisk presented recent data (May 2007) from Gartner Dataquest forecasting that NAND bit demand will increase 40x from 2006 to 2011, with major demand from PCs, mobile phones, USB drives, and media players. In the last ten years, flash has already replaced a host of older storage media (35mm film; floppy, Zip, Clik, and tape drives) and is well on the way to replacing CDs and ultra-small HDDs (<1.3”). The message is clear -- other promising memory technologies have a tough train to catch.
A comment (below) that this blog entry does not distinguish between stand-alone and embedded applications is certainly correct; stand-alone memory IC technology can be more easily compared in terms of cost/density/performance, while embedded applications must consider additional cost and performance trade-offs. Such analysis is a bit beyond the scope of a relatively short blog entry.
Labels: DRAM, flash, future, IC, memory, MRAM, PRAM
posted by [email protected]
Ed’s Threads 070914: Musings by Ed Korczynski on September 14, 2007
Missing micrograms and measurement accuracy
The “one true” kilogram cannot be trusted anymore. All standards must be based on a reference, and the master reference for mass on planet Earth is a platinum-iridium-alloy cylinder kept in a special vault in Sèvres, southwest of Paris. The 118-year-old master cylinder now appears to have lost 50µg compared with the average of dozens of copy-masters, and the reason is a mystery. "They were all made of the same material, and many were made at the same time and kept under the same conditions, and yet the masses among them are slowly drifting apart," said Richard Davis of the International Bureau of Weights and Measures in Sèvres, France. "We don't really have a good hypothesis for it."
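For scale, that reported drift works out to a minuscule relative change in the master's mass:

```python
# Sanity check on the scale of the drift: 50 micrograms out of one kilogram.
drift_kg = 50e-9           # 50 ug expressed in kilograms
relative = drift_kg / 1.0  # fraction of the 1 kg master's mass
print(relative)            # 5e-08, i.e. 50 parts per billion
```

Tiny as 50 parts per billion is, the fact that the drift is unexplained is what troubles the metrologists.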
Each copy-master, officially termed a “National Prototype,” is used as the main reference in different countries (the Figure shows the US National Prototype Kilogram, held by NIST) to calibrate measurement systems. Scientists tend to care that a kilogram is absolutely a kilogram. Engineers tend to care that they get about the same amount of something every time, relatively speaking. The difference is between “accuracy” and “precision” in measurements.
Accuracy is defined as how closely a measurement matches an actual or “true” value, while precision is the repeatability of multiple measurements. How we can ever really determine the true value is another question.
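The distinction can be made concrete with a few made-up readings (illustrative numbers only): one instrument clusters tightly around the wrong value, the other scatters around the right one.

```python
# Accuracy vs. precision, illustrated with made-up measurement data.
# Accuracy: how close the average reading is to the true value (low bias).
# Precision: how tightly repeated readings cluster (low spread).
import statistics

TRUE_VALUE = 1.000  # the "true" mass in kg (assumed for illustration)

precise_but_biased = [1.050, 1.051, 1.049, 1.050, 1.050]
accurate_but_noisy = [0.960, 1.040, 1.000, 0.950, 1.050]

for name, readings in [("precise/biased", precise_but_biased),
                       ("accurate/noisy", accurate_but_noisy)]:
    bias = abs(statistics.mean(readings) - TRUE_VALUE)  # inaccuracy
    spread = statistics.stdev(readings)                 # imprecision
    print(f"{name}: bias={bias:.3f} kg, spread={spread:.3f} kg")
```

The first instrument is precise but inaccurate; the second is accurate on average but imprecise. An engineer centering a process usually cares most about the first column staying constant run to run.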
The real world of our experience is never “ideal.” The surface of our planet is hot enough that random kinetic energy within atoms, expressed as lattice vibrations, induces a finite vapor pressure, so solids may alter and be altered by their environment. Thus the act of measurement may alter that which is being used to measure -- not as a macro-scale variation on Heisenberg’s Uncertainty Principle, but as an honest acceptance of the fact that macroscopic solid surfaces interact with their environments. Copies and redundancy may be used to detect any such drift of mass, and this is where we now find a problem—either the copy-masters accreted mass due to some as-yet-inconceivable phenomenon, or the master lost mass. Neither scenario is easily explained.
How might this possible loss of an absolute mass reference affect semiconductor manufacturing? Though chip fabs use technologies in common with other industries, such as specialty gases and vacuum pumps, relative references are sufficient. Based on the inputs, engineers always “center” processes, which then become relative standards. "Copy Exactly", as defined and developed by Intel, fully embraces this concept; once an input is proven in manufacturing, external references may be ignored. As long as a process is very reproducible—precisely—its accuracy can be relative.
Absolute standards just aren't essential for this industry to test chips before shipping them to customers, either. Digital chips are designed to function as circuits of binary units, so a slight shift in internal relative values wouldn't matter. Even analog chips and sensors are typically designed to allow for calibration of some sort, so, for example, the gain could be tweaked to compensate for drift in a basic parameter. Given the inherent variability of batch processing and the need for consistent IC functionality, the industry has learned to handle slight shifts in parameters.
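As a sketch of that kind of gain tweak, a two-point linear recalibration against known references can absorb both gain and offset drift. The sensor model and numbers here are hypothetical:

```python
# Two-point calibration sketch: a drifted linear sensor is corrected by
# re-fitting gain and offset against two known reference values.
# Hypothetical numbers; real calibration flows are device-specific.

def drifted_sensor(true_value):
    # Imagine process drift changed the gain from 1.00 to 0.93
    # and added a small offset.
    return 0.93 * true_value + 0.02

# Measure two known reference points.
ref_lo, ref_hi = 0.0, 10.0
raw_lo, raw_hi = drifted_sensor(ref_lo), drifted_sensor(ref_hi)

# Solve for a correction so that corrected = gain * raw + offset.
gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
offset = ref_lo - gain * raw_lo

def calibrated(true_value):
    return gain * drifted_sensor(true_value) + offset

print(round(calibrated(5.0), 6))  # recovers 5.0 despite the drift
```

As long as the drift stays linear, no absolute reference is needed beyond the two calibration points themselves -- which is exactly the relative-standards argument above.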
So, we can all relax and not worry about our industry losing its way if “The kilogram” has lost 50 parts-per-billion (ppb) of mass. Companies such as Process Specialties Inc. and VLSI Standards still provide “NIST-traceable reference standards” for the industry, which are more than adequate for our needs. What more can be done?
For over two years now, NIST and other standards groups have advocated for a kilogram standard based on something beyond a physical master, though more work is needed. One option would be to assume Avogadro’s constant (the number of atoms in a mole of matter) and then measure the spacing in a “perfect crystal” to determine the number of atoms in a reference mass. Another option would be to count the number of electrons flowing through superconducting coils needed to balance a mass accelerated by gravity. “Currently, both methods are 10-100x less precise than the measurement uncertainty produced when comparing the kilogram artifact to national standards,” according to consensus from the Royal Society of London.
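Numerically, the crystal approach amounts to fixing Avogadro's constant and then treating the kilogram as a definite count of atoms. A back-of-envelope version, with rounded constants:

```python
# Back-of-envelope for the "perfect crystal" kilogram: fix Avogadro's
# constant, and a kilogram becomes a definite count of silicon-28 atoms.
AVOGADRO = 6.02214e23      # atoms per mole (rounded)
MOLAR_MASS_SI28 = 27.977   # grams per mole of the silicon-28 isotope (rounded)

moles_per_kg = 1000.0 / MOLAR_MASS_SI28
atoms_per_kg = moles_per_kg * AVOGADRO
print(f"{atoms_per_kg:.3e} silicon-28 atoms per kilogram")  # ~2.15e25 atoms
```

Counting ~2x10^25 atoms to 50-ppb accuracy is exactly why the crystal must be so nearly perfect, and why defects and vacancies matter so much, as noted below.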
Supposedly, one of the leading alternatives for a 21st-century kilogram is a sphere made out of a silicon-28 isotope crystal, though to the best of my knowledge any macro-scale crystal made up of gazillions (a technical term) of atoms on the surface of planet Earth (with temperature ~298K resulting in “random” energy) will have defects. The lattice spacing may be uniform and measurable, but vacancies and defects will still exist. These are some of the issues associated with pushing the limits of physical standards.
Humans have imagined absolute standards for thousands of years. Just like the conceived Platonic Solids, however, absolutes don’t exist in the real world. So we can keep dreaming of perfect standards, but back in reality we’ll still be counting exceptions and measuring variations.
Labels: accuracy, kilogram, measurement, precision, standard
posted by [email protected]
Ed’s Threads 070907: Musings by Ed Korczynski on September 7, 2007
Lam & Novellus both strip wafer edges
This is a tale of two companies, two machines, and two different ways to solve one related problem: wafers have edges. Silicon wafer edges perturb plasma flows in process chambers, and so induce inherent non-uniformities in processing. Silicon wafer edges are also seemingly the main source of defects for immersion lithography. Advanced fabs today typically specify a 2mm edge exclusion for wafers, and Novellus and Lam have responded with new hardware to dry strip edges.
Novellus’ downstream dry edge strip. Depth-of-focus and etch-rate selectivity challenges have led to the need for hardmasks in advanced IC lithography. The hardmask material must be properly chosen for selectivity to the underlying layer to be etched. In many cases, it can be an amorphous-carbon PECVD thin-film that is “ashable” (a misnomer, since it can be dry stripped without any ash-like residue remaining). A wide variety of hydrocarbon precursors may be used, and deposition parameters must be properly controlled to ensure the final film structure is composed of sp2 carbon bonds for transparency and film stability. “We’re getting 20:1 selectivity, and an extinction coefficient value at 633nm of 0.11,” claimed Julian Hsieh, senior director of product management for the dielectrics business group at Novellus Systems.
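That 20:1 selectivity figure translates directly into a hardmask thickness budget. A rough sketch of the arithmetic, where the 1.5x safety margin is my own assumption rather than a Novellus number:

```python
# Rough hardmask-budget arithmetic: with 20:1 etch selectivity, how thick
# must the amorphous-carbon hardmask be to survive a given etch depth?
SELECTIVITY = 20.0   # underlying-film : hardmask removal ratio (from the text)

def min_hardmask_nm(etch_depth_nm, margin=1.5):
    # margin is an assumed factor covering faceting/erosion non-idealities
    return etch_depth_nm / SELECTIVITY * margin

print(min_hardmask_nm(2000))  # a 2um-deep etch -> 150.0 nm of hardmask
```

Thinner hardmasks in turn ease the depth-of-focus problem that motivated them, which is why high selectivity is worth chasing.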
To eliminate any edge particles that could kill dice, the Vector Express PECVD tool from Novellus now provides a new dry edge-bead removal (EBR) capability in the outgoing loadlock (which SST recently reviewed). Using an off-the-shelf downstream plasma generator to crack O2 into mono-atomic oxygen (Fig. 1), amorphous carbon (red in the figure) is stripped off the wafer edge while the top surface is masked by center-shield hardware.
Field-retrofittable to the Vector platform, the EBR has additional potential applications. Since mono-atomic oxygen is extremely reactive, it may be able to clean other PECVD films off of the edge/bevel of wafers. “If you have this capability you may be able to use it to solve other problems,” admitted Hsieh.
In addition to clean wafer edges, it’s essential that deposited film properties remain constant all the way to the 2mm edge exclusion. Ensuring a uniform deposition environment across the wafer—in terms of temperature, plasma energy parameters, and precursor flows—requires careful optimization of chamber hardware. Consequently, Novellus modified the Vector Express chamber hardware to include new plasma-confinement shields.
Lam’s plasma ring edge strip. Also using a physical shield, Lam Research Corp. now sells a plasma edge-clean module that can be part of a cluster on the company’s 2300 hardware platform. A capacitively coupled plasma is shielded from the wafer topside by a shield precision-engineered to float fractions of a millimeter above the wafer surface (a gap too small to be seen in Fig. 2). To minimize cost, no electrostatic chuck is used.
“If we as an industry had recognized the value of bevel clean, we would have done it earlier,” said Rick Gottscho, group VP and GM of Lam's etch business, noting that this market opening started with Korean memory customers. Yield improvements of 1%-4% are possible using rigorous dry edge strip, he said, adding that a 3-4 chamber cluster of these edge strippers may see production.
Lam quietly released this tool in 1Q07, and now claims to be engaged with 18 of the top 25 capital spenders. “Most of our customers today are in evaluation phases, looking at the yield benefits, and the applications first to use it, but the pull is very strong,” said Gottscho. He said that chamber throughputs are close to what you’d expect from a stripper dealing with low-k etch processes.
Both Novellus and Lam have released useful tools for high-volume production, and both use a hardware shield to protect wafer top-sides while stripping films from edges. However, the two differ fundamentally in their plasma hardware. Novellus’ remote-generator design is safe and simple, and fits into a loadlock without taking up chamber space. Lam’s capacitively coupled plasma ring provides an additional degree of processing freedom with ion bombardment, but requires the space of a process chamber to do so.
Application-specific hardware solutions such as these are just what the industry needs to maintain productivity while ramping production of nanometer-node ICs. While the core technologies are not new, they have been combined in new ways based on direct feedback from end-users. The natural evolution of sophisticated hardware continues within the industry ecosystem.
Labels: edge, Lam, Novellus, plasma, silicon, strip, wafer
posted by [email protected]