080331: MRS meeting covers nanostuff and microthings
Ed’s Threads 080331
Musings by Ed Korczynski on March 31, 2008
MRS meeting covers nanostuff and microthings
Over 4000 researchers were in San Francisco last week for the annual Materials Research Society (MRS) spring meeting, to discuss advances in materials for electronics, energy, health, and transportation. Over 40 technical sessions ran in parallel, with >10 sessions of interest to the semiconductor manufacturing industry at any given time. Theory and results for new IC memory cells, extensions of CMOS logic, and future quantum-dots and nano-rods were shown. Graphene still seems like a possible replacement for silicon in ICs.
In his Kavli plenary lecture in nanoscience, Prof. A. Paul Alivisatos of UC-Berkeley
described recent work by his group and others on transformations in nanocrystals. Chemical transformations can be used to obtain complex nanocrystalline structures through sequential chemical operations. In one example, CdSe reacted with Ag+ to form Ag2Se, which could then be combined with Cd2+ to revert completely back to CdSe, while the volume of the nanoparticle was completely preserved. Such cation exchanges can occur in semiconductor nanorods and hollow spheres with shape preservation, but when shapes do transform, their final forms are currently difficult to predict.
Much of the new materials work is targeted toward finding nanoscale structures which can switch between two measurable states to function as memory cells. Two of the newer random-access memory (RAM) cell types under development are phase-change RAM (PRAM) and resistive RAM (ReRAM). With Numonyx now officially launched to commercialize PRAM
along with Flash, there were many papers looking at manufacturing process flows to optimize the deposition and programming of the antimony-telluride (SbTe) family of “chalcogenide” materials, which undergo thermally-assisted transitions between crystalline and amorphous phases. Independent of the MRS meeting, materials supplier ATMI recently announced co-development plans with Ovonyx for chalcogenide CVD precursors.
ReRAM using metal-oxides as switching elements comes in two fundamentally different variations: one-time programmable through the growth of nano-metallic filaments, and reversible through ionic transport between electrodes. ReRAM materials may be used in PRAM-like cells, or as the switching element in cross-bar architecture arrays. HP Labs, US NIST, and Hokkaido University all showed advances in hybrid circuits built using cross-bar arrays.
For extensions of CMOS logic, with a somewhat clear path forward in new materials for high-k and metal gates, a lot of research now centers on doping technologies. G. Lansbergen et al. (B3.7) from TU Delft (The Netherlands), along with Purdue (USA), University of Melbourne (Australia), IMEC (Belgium), and Caltech (USA), showed the ability to work with a single arsenic dopant atom in a p-MOS finFET; their experiments represent the first evidence of the ability to engineer the quantum state of a single-donor electron by surface gate control. While single-ion doping is way beyond today’s fab specs, more precise control is needed for the placement of the often <100 atoms used for channels and contacts.
Wilfried Vandervorst of IMEC showed that Laser Spike Anneal (LSA), which is essentially “diffusion-less,” calls for re-integration relative to prior rapid-thermal annealing (RTA) schemes, where lateral diffusion is significant. Due to the very low thermal budgets needed to form ultra-shallow junctions (USJ), LSA is more subject to pocket dopant fluctuations than spike anneals. Random dopant fluctuations must be controlled, along with structural variations on gate cross-sections which appear as undercuts and footing. LSA helps equivalent oxide thickness (EOT) scaling for gate dielectrics by eliminating a 2-3Å thick re-growth layer. However, to ensure reliability in gate stacks, an RTA step can be added after LSA to improve the situation somewhat. Looking forward to embedded SiGe, LSA so far induces junction leakage and defects gliding along certain crystalline planes, which unfortunately relaxes the desired strain. LSA for embedded SiC, however, avoids SiC relaxation, improving strain retention in nMOS. Gate profile control is critical for diffusion-less USJ, which may mean gate-last integration schemes will be easier to implement.
Karuppanan Sheker, of SemEquip, presented on how to use cluster-carbon implants to improve Si:C layer formation. There is a ~2% limit to how much C can be substituted into the silicon lattice. At the VLSI Technology Symposium 2007, IBM showed [C]sub of 1.65% with mono-atomic C implants and pre-amorphizing implants (PAI). Using clustered carbon eliminates the need for the PAI and provides [C]sub >2%. The source is two benzene rings in the form of C14H14, which upon striking a silicon crystal in the 6-10keV implant energy range automatically induces amorphization to a depth of 20nm-40nm. The greater the amorphous layer thickness, the higher the percentage of C which can be substitutionally incorporated.
Newer finFET architectures, which may first be used for SRAM arrays, require unique integration flows. Mark van Dal of the NXP-TSMC Research Center showed that when implants into fins amorphize the silicon material, the re-crystallization in complex fin shapes results in scattering and other sources of variability. The exact reason for the device degradation is not known, but using either BF2 or B+Ge implants (both of which induce amorphization) results in more transistor variability. At fin widths of 1µm there is no difference, but for fins <0.1µm wide the effect is clearly seen. When non-amorphizing B implants are used, no device performance degradation is observed.
Labels: CMOS, finFET, graphene, materials research, nano, PRAM, ReRAM
posted by [email protected]
080324: Etching new IC materials at 32 and 22nm
Ed’s Threads 080324
Musings by Ed Korczynski on March 24, 2008
Etching new IC materials at 32 and 22nm
Silicon Valley was once the center of the silicon-based IC manufacturing world, and though IC fabs are now located globally, the valley maintains momentum as the center of IC R&D. The Northern California Chapter of the American Vacuum Society (NCCAVS) still runs regular users groups on important industry topics, and the plasma-etch users group (PEUG) meeting on March 13th featured presentations by IBM and Applied Materials on advanced etch processes for 32nm and 22nm node ICs.
Nicolas Gani, of the silicon etch division of Applied Materials, presented on work done in collaboration with IBM on plasma etching for gate-stacks for 45nm and 32nm node CMOS transistors. Since the stack is composed of multiple materials, different single-wafer etch chambers for different etch conditions are ideally clustered together into a single tool. One chamber is designed for poly-silicon etching at relatively low temperature, while another chamber is designed for high-k/MG material removal at relatively higher temperatures of 130-220°C.
High-k materials such as HfO2 demonstrate etch rates in Cl2 plasmas with zero bias power that increase linearly by ~4X over the 100-200°C range, though rate-studies indicate there is some ionization component to the etch even without bias. High source-power can actually induce polymerization, which shuts down the HfO2 etching. Using 20W bias allows for etch rates of hundreds of Å/min. One of the key issues in tuning etch processes is the elimination of any “foot” at the bottom cross-section of line-stacks. Applied Materials has shown that etching at >200°C leaves <1nm of a foot, while a 3-4nm foot is seen at <100°C. The temperature-control requirement is modest, since for etching above ~150°C the reaction is surface-limited, so uniformity across the wafer is ensured even with an ESC controlling to only ~5°C.
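The quoted temperature dependence is easy to sanity-check. Here is a minimal sketch of the reported linear ~4X rate increase from 100°C to 200°C; the normalized base rate of 1.0 is a placeholder for illustration, not a measured value:

```python
# Linear interpolation of relative HfO2 etch rate vs. wafer temperature,
# per the reported ~4X linear increase over the 100-200°C range
# (zero-bias Cl2 plasma). Rates are normalized to the 100°C rate.
def relative_etch_rate(temp_c: float, t_lo: float = 100.0,
                       t_hi: float = 200.0, ratio: float = 4.0) -> float:
    """Relative etch rate: 1.0 at t_lo, rising linearly to `ratio` at t_hi."""
    frac = (temp_c - t_lo) / (t_hi - t_lo)
    return 1.0 + frac * (ratio - 1.0)

print(relative_etch_rate(150.0))  # midpoint of the range -> 2.5x the 100°C rate
```

The surface-limited regime above ~150°C is what makes this curve useful: small across-wafer temperature errors translate into proportionally small rate errors.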
Nicolas Fuller of IBM Research talked about plasma etching challenges for 22nm node etching, with most of the work done at Yorktown Heights, though unit process work was also done at East Fishkill and Albany. Device options for 22nm include finFETs and SOI, and both structures create unique etch challenges. “The fin itself can charge,” explained Fuller. “It may have a hardmask, and charging during the etch can produce an ion steering effect that induces greater etch rate in the middle of structures.” Going to 3D represents a challenge, and—as per the classic wisdom—also an opportunity. “Here charging potentially represents an advantage. You might want to charge the metal gate to induce ion steering to minimize footing,” claimed Fuller.
As complex as today’s leading-edge 45nm production may be, halving the scale seems like it could be an order of magnitude more difficult. For sidewall image transfer (SIT) to produce the types of structures needed at 22nm-node fin pitches, we may need some manner of atomic-level etching (ALE) to conceptually match ALD. New line-edge roughness (LER) and line-width roughness (LWR) issues will be induced by the multiple exposures and multiple etches anticipated in 22nm integrated double-patterning process flows.
IBM showed that after lithography to form 24nm-wide lines at 80nm pitch, there was LER/LWR of 2.6/4.9 (3 sigma); best lab results of 1.4/2.3 have been achieved with multistage etches using organic/inorganic materials as masks and boutique combinations of e-beam and optical litho. Plasma etch work ongoing at Albany now suggests that high-frequency plasma parameters are the main factors which must be controlled to minimize LER/LWR. There’s barely any CD error budget left, and etch has to share the vanishingly few nanometers with lithography, metrology, and deposition. Hold tight.
Labels: 22nm, 32nm, etch, LER, materials research, NCCAVS, PEUG
posted by [email protected]
080317: There is no more noise...
Ed’s Threads 080317
Musings by Ed Korczynski on March 17, 2008
There is no more noise...
There is only signal. In controlling the manufacturing processes used for advanced nano-scale ICs, the aspects of metrology which we used to be able to ignore as “just noise” are now essential signal we must control. Where to draw the line, and how close is close enough, are just some of the challenges in ensuring that data streams become productive information for fabs. Metrology sessions at SPIE this year shone fractional wavelengths of light into the darkness of controlling accuracy.
When IC features were greater than the wavelength of light used in photolithography—and likewise much greater than a countable number of physical atoms—there were many aspects of manufacturing which we could simply ignore. With the smallest IC feature, typically defined by the minimum half-pitch spacing between lines, now reaching ~45nm (which is less than one-quarter of the 193nm wavelength used in litho) we now experience “second-order” and “third-order” effects which must be controlled.
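The sub-quarter-wavelength claim is simple arithmetic; a quick check using the 193nm ArF wavelength and ~45nm half-pitch quoted above:

```python
# Where the minimum half-pitch sits relative to the 193nm litho wavelength.
wavelength_nm = 193.0
half_pitch_nm = 45.0

fraction = half_pitch_nm / wavelength_nm
print(f"half-pitch is {fraction:.3f} of the wavelength")  # ~0.233
print(half_pitch_nm < wavelength_nm / 4)  # True: 193nm / 4 = 48.25nm > 45nm
```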
Vladimir Ukrainstev of Veeco Instruments co-led a panel discussion at SPIE 2008 on the need for CD-SEMs to be accurately calibrated with 3D-AFMs. Researchers have reportedly seen a mere 1° change in the sidewall angle of a device structure result in a 2nm change in the CD measured by a standard 2D SEM. With the allowable budget for CD variation shrunk down to 3nm-4nm, this sidewall-angle dependence must be controlled. The greatest risk is in process drift in an etch chamber, where sidewall angle can change spatially (e.g., from the center to the edge of wafers) or temporally (from wafer to wafer over time), which can induce substantial error in the CD-SEM measurement.
With tight feedback loops in advanced fabs, erroneous CD-SEM data can be mistakenly used to set the wrong etch parameters for following lots, which can degrade yield. “Instead of changing CD etch time by the week, we’re changing by the lot or the wafer as part of APC,” explained Kevin Heidrich, Nanometrics’ senior director of new business development, in an exclusive interview with WaferNEWS. Total CD control is ~4nm for all variability; a normal rule of thumb for precision over tolerance is 0.1, so the total budget for metrology is 0.4nm.
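That rule of thumb is plain arithmetic; a minimal sketch using the figures quoted above (a ~4nm total CD-control budget and a precision-to-tolerance target of 0.1):

```python
# Metrology error budget from a precision-to-tolerance (P/T) rule of thumb.
def metrology_budget(total_cd_tolerance_nm: float, pt_ratio: float = 0.1) -> float:
    """Allowable metrology precision (nm) for a given total CD tolerance."""
    return total_cd_tolerance_nm * pt_ratio

print(metrology_budget(4.0))  # -> 0.4 nm allowed for measurement precision
```

The point of keeping P/T at ~0.1 is that the measurement then consumes a negligible share of the process-control budget.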
All measurement techniques are subject to some error, and even the best 3D-AFM is still subject to tip wear and calibration. Veeco has been working with third-party specialists to optimize AFM tips for different applications, with great results reported for various shapes nano-machined from single-crystal silicon for strength and then coated with some manner of carbon coating for wear-resistance. NIST showed SPIE attendees this year that even with a slow, expensive, and destructive technique like TEM, there is still 0.33nm (standard deviation, 1σ) of sidewall-angle uncertainty. Everything else adds up to 0.63nm of total uncertainty. Calibration is vital to minimize the propagation of uncertainties.
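Independent error sources like these are conventionally combined in quadrature (root-sum-square). A hedged sketch using the NIST figures quoted above, reading the 0.63nm figure as the combined total and back-calculating the residual term purely for illustration:

```python
import math

def rss(*sigmas: float) -> float:
    """Combine independent 1-sigma uncertainties in quadrature (root-sum-square)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Figures from the text: 0.33 nm sidewall-angle term, 0.63 nm total (1 sigma).
total = 0.63
sidewall = 0.33
other = math.sqrt(total**2 - sidewall**2)  # implied residual, ~0.54 nm
print(round(rss(sidewall, other), 2))      # recombines to the quoted 0.63 nm
```

Note how little the 0.33nm term matters once combined in quadrature; this is why calibration effort goes to the largest individual contributors first.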
One of the issues in determining the sidewall angle is what portion of the sidewall to include in the analysis. For features with corner rounding, this could be challenging even with ideal 90° sidewalls. Considering just 2nm radii of curvature on the top corners of etched polysilicon lines of 32nm to 45nm widths, ~10% of the linewidth depends on where a CD-SEM draws the line for the edge.
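The ~10% figure follows from simple geometry; a quick check, assuming (as a simplification for illustration) that the ambiguous region on each edge is comparable to the corner radius:

```python
# Edge-placement ambiguity from top-corner rounding, per the text's example:
# 2 nm corner radii on 32-45 nm wide polysilicon lines, two corners per line.
def edge_ambiguity_fraction(linewidth_nm: float, corner_radius_nm: float = 2.0) -> float:
    """Fraction of linewidth that depends on where the edge is drawn."""
    return 2 * corner_radius_nm / linewidth_nm

for w in (32.0, 45.0):
    print(f"{w:.0f} nm line: {edge_ambiguity_fraction(w):.1%}")
# 32 nm -> 12.5%, 45 nm -> ~8.9%: roughly 10% of linewidth, as quoted
```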
To help control APC in all manner of deposition and removal processes, Nanometrics recently announced the delivery of the company’s 1000th integrated metrology sub-system; the milestone system was integrated into an advanced plasma etch system used to control gate CD in advanced logic devices.
At SPIE, IBM (Daniel Fischer et al.) showed OPC requirements for 32nm and the metrology tool calibrations needed to support this advanced node. The number of modeling calibration sites per mask level has increased dramatically: normalized to the 90nm node, 65nm had 10×, and 32nm is 100×. There are now multiple CDs per contour, which results in a reduced number of measurement sites per wafer. For tool calibration, fundamental parameters of magnification, rotation, etc. must each be properly considered in modeling. The researchers showed that scanning a line array in orthogonal directions in a CD-SEM induced up to 2% variation in measurement due to the beam’s oval shape. It’s not noise anymore. “The users must understand the measurement techniques and have them constant or have a consistent offset to be able to use the data,” said Fischer. He added that with real device structures, 144nm was seen by a 2D tool while 160nm was measured by a 3D tool, so some manner of rigorous automated edge-detection is essential.
OCD looks very extendable to finFETs, too. SEMATECH and KLA-Tencor presented a paper on metrology for high-k finFETs at SPIE. Using high-k HfSiO thicknesses of 1.5nm and 3nm over Si3N4, and using TiN as the metal gate, a thorough DOE of depositions over fins was done. Then, using KLA-Tencor's next-generation spectroscopic ellipsometer (measuring 225nm and up) for OCD, plus CD-SEM from AMAT and also HR-TEM, cross-checks between the OCD and standard thin-film measurements showed that the offset was ~1nm. For the metal-gate measurements, it was found that the TiN optical properties varied due to what is suspected to be some manner of slight oxide formation. Data from dense arrays showed serious offset from the pad areas, so correlations must be considered. Measuring in the fin area seems to provide sufficient resolution for process control for both the high-k and metal-gate depositions. OCD measurement precision was at the 1% level or better, and in good agreement with reference measurements. OCD looks very promising for finFET gate-stack characterization.

n&k Technologies has modified the optical path of their spectroscopic ellipsometer tool to add a pinhole lens which narrows the transmitted beam spot size from 400μm to 50μm. Since real-world ICs and photomasks tend to have designed areas with regular 50μm arrays, this opens up the ability to measure many more real structures. Collecting the reflectance and transmission in both s- and p-polarizations using 50μm spots provides four separate signals to be used in determining all the layer thicknesses on the mask, including quartz etch dimensions for phase-shift masks.
In pushing the limits of signals, IBM and Hitachi recently announced a unique, two-year joint semiconductor metrology research agreement for 32nm-and-beyond characterization and measurement of transistor variations. Engineers from the two companies and Hitachi's subsidiary, Hitachi High-Technologies, will conduct joint research at IBM's Thomas J. Watson Research Center in Yorktown Heights, NY, and at the College of Nanoscale Science and Engineering's Albany NanoTech Complex. Combining individual research strengths and IP will help "reduce the significant costs associated with research needed to advance the next generation of chip technology," said Bernie Meyerson, VP of strategic alliances and CTO for IBM's systems & technology group, in a statement.

Rudolph Technologies has become the first OEM to join SEMATECH's Metrology Program, headquartered at the College of Nanoscale Science and Engineering (CNSE) of the University at Albany. The initial program addresses a range of issues, including the metrology of thin films and metal gate stacks; wafer front, back, and edge macro defect inspection; and inspection and metrology for through-silicon vias (TSV) and three-dimensional integrated circuits (3DIC).

-- E.K.
Labels: accuracy, CD, fab, manufacturing, metrology, noise, SEM, semiconductor, SPIE
posted by [email protected]
080304: DFM matures along with industry
Ed’s Threads 080304
Musings by Ed Korczynski on March 4, 2008
DFM matures along with industry
Hundreds of technologists over-packed the room in the San Jose Convention Center at 8am on the fourth day of SPIE
to hear keynotes from IBM, Intel, and TSMC on the reality of design for manufacturability (DFM) in the IC fab industry. As the two leading integrated device manufacturers (IDMs) in DFM, IBM and Intel provided thorough overviews of technologies and methods used at the 65nm to 45nm nodes. In contrast, TSMC gave what seemed like a sales pitch, which was not well received by the audience of peer technologists. A raucous panel discussion that evening raised the need for a modeling environment to test new DFM approaches in virtual space and time.
“It’s always possible to increase yield by throwing money at the problem,” declared IBM’s Lars Liebmann. “We need to keep costs under control because we’re chasing incremental yield.” One of the most costly aspects of implementing DFM today is quantification experiments to prove the value of a considered new technology. “You have to convince management that yield will increase, and that value is unique to the product, the time in the current manufacturing node, and the business goals,” said Liebmann. “…there is no universal DFM.”
New DFM tools build upon the proven models used in the past. Critical-area analysis (CAA) is one of the oldest predictors of mature device yield, since the area of the wafer subject to failure due to random physical defects (such as particles and scratches) can still be accurately extracted from any new design. “While there’s a lot of hype about systematic problems taking over from random problems, CAA is still an excellent indicator of yield,” said Liebmann.
Intensive number-crunching will be needed for design-technology co-optimization, and one example that has already been demonstrated is electrically-driven optical proximity correction (OPC). Instead of tuning a mask to produce optimized shapes, the mask is tuned to produce shapes with optimal electrical performance.
IBM showed that using highly restricted design rules (RDR)—specifically pdBrix layouts created by software now owned by PDF Solutions—at 65nm created dice with the same area and performance but fewer hotspots and less variability in a PowerPC405 core.
“DFM has sort of just barely scratched the surface,” said Liebmann. “We’ve just reached our teenage years and the best years are still ahead.”
In his keynote address, Intel Fellow Clair Webb
explained that his group does a lot of simulation and modeling of design rules, such that the first test-chip is expected to confirm and calibrate rules which are then not supposed to change. Intel ramps processes to very high volumes very quickly, so the process must be very robust, and a very fast yield-learning rate is essential.
What is really meant by co-optimization? For Intel, the many factors to be considered include the characteristics of litho tools, resists and illumination sources, tape-out technology, mask processing, device performance targets and architectures (incl. variability requirements), and ultimately even the product targets (incl. power/density, cell layout, time to market, CAD tools available, etc.).
Design rules for pitch start with a 1D target set by first-order density goals. “All the fun comes with the 2D targets,” quipped Webb. Starting with learning from the previous process, Intel then extrapolates 2D models for OPC, illumination techniques, reticle enhancement techniques (RET), and photoresist for critical parameters (e.g., DOF, MEEF, etc.). All of this leads to an OPC/litho test-chip to quantify the models for things like new off-axis illumination techniques. “The test-chip is the outcome, it is not part of the modeling process,” explained Webb. By the time the first design hits the fab, 80% of the design-rules should already be set. “We may take learning and feed it forward into the next process, but we’re not going to change the rules at ramp,” explained Webb. “If I have to do design-rule changes at ramp then I’ve made a mistake.”
For example, modeling variations in line-length on MEEF and CD with different illumination sources showed that line-lengths between 0.2 and 0.3 µm created problems. The real-world DFM trade-off involves checking back with the designers to determine whether they really need lines of this length, and searching for another interdependent parameter which can be constrained with a rule to eliminate the MEEF problem.
Intel’s Webb explained that 90nm was the first node at which the design rules started to change dramatically: there was a 47% increase in the number of rules for poly. At 65nm there was a 65% increase in poly rules, primarily to enable PSM and to handle proximity effects, though there was still variable poly pitch and width, and two-directional routing. By 45nm, Intel had gone to the extreme constraint of gridded layout rules (GLR), and the total number of design rules went down 37% compared to 65nm. “It’s hard to measure the results of any one particular rule, since it would take thousands of wafers,” explained Webb.
In regards to the trade-offs between design and process, “As a foundry, we say the customer wins a lot…which means the design wins,” said Dr. Fu-Chieh Hsu, vice president of Design Technology Platform, TSMC. “We’re always challenged by designs pushing the limits of design rules.” Since the foundry supports legacy processes, as well as half- and quarter-nodes for customers, TSMC sees a continuum of process technologies instead of discrete jumps between nodes. TSMC therefore sees broad general trends in process-design trade-offs. TSMC DFM solutions start with a DFM design kit, and include certified DFM-compliant EDA tools and 3rd-party IP, all of which has been used on over 113 tape-outs based on ~1000 validated IP blocks.
In a lively evening panel discussion moderated by Mark Mason (TI) and Juan Antonio Carballo (Argon Venture Partners), the prevailing sentiment seemed to be that of hope over hype. Joe Sawicki (Mentor Graphics) and Srinivas Raghvendra (Synopsys) provided perspective on the business constraints of commercial EDA vendors, while the playfully soft-spoken Riko Radojcic (Qualcomm CDMA)
expressed the perspective of the designer.
Regarding the challenges faced in attempting to model manufacturing variability and then feed that information back to designers in some way, Radojcic opined, “The two communities speak entirely different languages. If a manufacturing guy says this is the variability that you have, what does the designer do with it?” Radojcic advocated for a simulation environment which could be used to explore DFM options in virtual space and time, instead of waiting for expensive “spins” in silicon.
Radojcic said that the main limitation to the use of new DFM tools is quantifying benefits, “Trade-offs in area, variability, yield, and cost at the whole chip level are nightmares, so we all just shrug our shoulders and keep doing what we did before.” Raghvendra replied that, “We’re making progress towards solutions that are holistic, where you can look at the whole picture, and it’s not like you look at timing and lose power.”
Extensive DFM will continue to be needed for ICs made using less-than-quarter-wavelength lithography: 45nm and below for 193nm litho tools. The trouble really started at half-wavelength (~90nm), and unless EUV (~13.5nm) becomes an option to get patterning back to super-wavelength litho, the world will need more and more DFM going forward. IBM’s Liebmann says that high-index 193nm immersion won’t be ready for the 22nm node, so litho will be addressed 100% computationally using ultra-regular layouts, extreme RET (incl. source mask optimization, ‘SMO’), and virtual fab-ing using predictive modeling.
To that end, the recently formed DFM Consortium (DFMC)—whose founding members include Cadence, Freescale, IBM, Samsung, ST, and TI—announced new members including Infineon, Intel, Mentor Graphics, and UMC. Let us hope that the DFMC now has sufficient leverage to develop new standard models and metrics to allow innovation to be quantified and rapidly implemented into design flows.
Labels: CAA, DFM, gridded design rules, modeling, restricted design rules, virtual fab
posted by [email protected]