
080505: When is a Memristor a ReRAM?
Ed’s Threads 080505
Musings by Ed Korczynski on May 5, 2008

When is a Memristor a ReRAM?
HP has published work claiming to be the first to fabricate a novel circuit element, first predicted in 1971, called the “memristor.” The HP authors state that, “until now no one has presented either a useful physical model or an example of a memristor.” HP certainly leads the field here, but as one of many companies working on this technology for resistance-change random-access memory (ReRAM) applications. This spring’s Materials Research Society (MRS) meeting featured an afternoon session on ReRAM with presentations by HP as well as Fujitsu, FZ Jülich, IMEC, Panasonic, and Samsung.
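
For readers who skipped the antique circuit theory: Chua defined the memristor by a charge/flux-linkage relation, and HP’s Nature paper models its TiO2 film with a linear ionic-drift approximation. A quick sketch of the relations as I read the published papers (not HP’s exact notation; μV is the dopant mobility and D the film thickness):

```latex
% Chua (1971): memristance M links charge q and flux-linkage \varphi
\mathrm{d}\varphi = M(q)\,\mathrm{d}q
\;\;\Rightarrow\;\;
v(t) = M\!\big(q(t)\big)\,i(t),
\qquad q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau

% HP's linear ionic-drift model of a thin oxide film:
M(q) \approx R_{\mathrm{OFF}}\!\left(1 - \frac{\mu_{V}\,R_{\mathrm{ON}}}{D^{2}}\,q(t)\right),
\qquad R_{\mathrm{ON}} \ll R_{\mathrm{OFF}}
```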

Antique circuit theories are rarely invoked at MRS meetings, so the focus of the ReRAM session was all about how you engineer complex atomic-layer oxide elements. Another sub-session covered organic switching elements for printable ultra-dense memories in the far future. In other memory technology, the usual suspects are still doing the same tap-dances about FeRAM and MRAM, but PRAM seems to have new momentum due to investments by Intel and ST in Numonyx and so may take over some of the mainstream.

Robert Muller of IMEC presented the fundamentals of ReRAM cells based on Cu+ and Ag+ charge-transfer complexes. In Ag/CuTCNQ/Al structures, Cu+TCNQ- is a solid ionic conductor, so an applied potential can reduce alumina to aluminum along with a corresponding oxidation of the “noble” metal on the other side. The main resistance change is expected to be an interfacial effect within a few-nm gap between the solid ionic conductor and the aluminum electrode, where Cu filaments form as conductors. IMEC has seen retention times of up to 60 hours so far, though theoretically this could be much longer. The integration problem is that TCNQ begins to degrade at 200°C, so another material may be needed for dense IC memories.

Z. Wei et al. of Panasonic talked about FeOx ReRAM, as first presented by S. Muraoka et al. at IEDM 2007. Fe3O4 oxidizes to higher-resistance Fe2O3. Both bipolar and unipolar transitions are possible; however, the bipolar high-resistance state (HRS) degrades in only ~100 hours at 85°C, while the unipolar transition retains high resistance to >1000 hours. Interestingly, the low-resistance state (LRS) of the unipolar mode shows metallic (instead of semiconducting) dependence of resistivity on temperature. Both fast switching and long retention may be achieved by combining the bipolar mode’s fast (<100ns) switching with the unipolar mode’s long (>1000 hours at 85°C) retention.

Herbert Schroeder et al. of Forschungszentrum Jülich (“FZ Jülich”) showed a simple stack geometry using 100nm-thick Pt top and bottom electrodes with a central TiO2 layer 27-250nm thick. As produced, Pt/TiO2/Pt is insulating (in the MΩ to GΩ range), so “electroforming” is needed. Up to 30mA of reset current is needed with simple unipolar stacks, though the HRS/LRS ratio of ~1000 is excellent and has been demonstrated with read-out voltages of 0.3V over up to 80 cycles. Bipolar switching has an HRS/LRS ratio of only ~5, but the reset current is merely 1mA and so applicable to real-world circuits. Room-temperature reactive sputtering of Ti results in polycrystalline TiO2 with columnar grains of 5-20nm dia. The possible mechanism of “forming” is the electro-reduction of TiO2 into TiO or Ti, which creates oxygen ions that drift to the anode and appear as voids.

H. Kawano et al. of Fujitsu Labs (along with the Nagoya Institute of Technology) explained some of the inherent trade-offs in device properties depending upon the top electrode used with Pr0.7Ca0.3MnO3 bipolar switching material. The mechanism for bipolar switching is more complex, and the switching speed strongly depends on the electrode material: using Ag or Au as the top electrode results in 100-150ns switching, while an easily oxidized metal such as Al or Ti results in ~1ms. Ta forms a thinner oxide which allows 100ns switching with an HRS:LRS ratio of 10 at 7V, and this ratio was maintained up to 10,000 cycles. With Pt as both electrodes they saw no ReRAM effects.

Julien Borghetti of HP’s Information and Quantum Systems Lab (IQSL) said that they use a TiO2 target to sputter ~30nm of TiOx, and after a forming step the HRS:LRS ratio is 1000-10,000 for bipolar switching. After forming, the HRS shows essentially no temperature dependence of the conduction, which implies that tunneling current must be responsible. From IV curves at different temperatures and biases, it seems that most of the TiOx is in parallel degenerate or metallic states, which account for ~200Ω of resistance present in both the HRS and LRS. The difference between the two states is then due to a tunneling gap, which appears to be <3nm thick and contains defects which assist in the tunneling. Cryogenic tests down to 3K show resonant tunneling through a degenerate gas of electrons.
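
One way to picture those numbers (my equivalent-circuit sketch, not a model HP presented in this form) is a fixed ~200Ω metallic resistance in series with a state-dependent tunneling gap, so that the gap alone carries the HRS:LRS contrast:

```latex
R_{\mathrm{LRS}} \approx R_{s} + R_{\mathrm{gap}}^{\mathrm{on}},
\qquad
R_{\mathrm{HRS}} \approx R_{s} + R_{\mathrm{gap}}^{\mathrm{off}},
\qquad
R_{s} \approx 200\,\Omega
```

With the on-state gap resistance small against Rs, the quoted HRS:LRS ratio of 1000-10,000 puts the HRS in the hundreds-of-kΩ to MΩ range, dominated by the gap.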

More details on the HP ReRAM manufacturing process can be found in my recent SST article, “Imprint litho forms arrays for new fault-tolerant nanoscale circuits” (Solid State Technology, April 2008), which summarizes the main information the company has presented at IEDM, SPIE, and MRS conferences in the last half-year. HP has shown how cross-bar circuits built with ReRAM switches can function both as interconnects and as logic elements. The titania/platinum materials set which can provide reversible ReRAM is not ready for production, but alumina/aluminum is ready to go and can provide irreversible effects. HP Corvallis in Oregon, with its old subtractive-Al metal fab, has all the processing capability needed to integrate alumina/aluminum ReRAM with traditional CMOS circuitry for FPGA applications.

Does calling the fundamental switching element in a ReRAM a “memristor” make it switch any faster or retain a state any longer? HP’s labs and fabs do great work and deserve recognition, but unless HP plans to use memristors as novel circuit elements, it’s confusing to use the term for ReRAM memory arrays. One blogging circuit designer has already imagined the possibility of building large-scale analog neural networks out of memristor arrays. Now that we’ve discovered that our ReRAMs could be memristors, the next question is: what do we do with them?

—E.K.


8 Comments:

Anonymous Anonymous said...

Maybe Stan (now a Bay Area radio star) should have read reference 4 in his own paper. It contains stuff like, oh, results and facts on a whole bunch of devices that walk and quack like "memristors". Other references in his paper and the Nature Materials review suggest that there are also previous models that allowed some fairly sophisticated circuit design. So, what did HP "invent", like, exactly? I think this big PR over-the-top-fest will come back to haunt the good Dr. Williams and his otherwise competent team.

Tue May 06, 11:01:00 PM PDT  
Anonymous Tarun Kansal said...

ReRAM seems to be a great technology for the future. But I’m wondering what its advantages are compared to our ongoing memory technologies?

Wed May 07, 01:15:00 AM PDT  
Blogger SST's Ed's Threads said...

In response to the comment by "anonymous," I can only say that I suspect that this PR is part of “building the new blueprint for corporate research” as directed by new HP Labs head Prith Banerjee, to convert “scientific discoveries into the marketplace.”

Wed May 07, 01:03:00 PM PDT  
Anonymous Anonymous said...

Memristor is NOT ReRAM. Typically a resistance-change memory device changes its state only above a certain threshold voltage. However, according to the memristor equations in HP's paper, the memristance M is a function of q, which is the integral of i dt. So applying a low voltage also gradually changes the state, and the resistance state should be continuous.

Wed May 07, 01:29:00 PM PDT  
Blogger SST's Ed's Threads said...

A memristor may technically never "be" a ReRAM, yet essentially identical engineered materials are used for both devices. Theoretically an analog memristor should indeed demonstrate continuous change in resistance, while a ReRAM is intended to store digital information as two or more discrete resistance levels. The control circuitry must be completely different between the two, yet the engineered oxide which changes resistance may be identical.

Wed May 07, 03:41:00 PM PDT  
Anonymous Anonymous said...

Hello anonymous - hey, same name as me (maybe we even work for the same Higher Power). The threshold thing may be a bit of a red herring as the key to memristor action is "history dependent resistance" and all of the other referenced devices possess this. But you do bring up a very good point - how can you build a memory cell, let alone an array of them, if you are using a device that does not have some kind of threshold voltage? A threshold-less device would not be a memory cell, more of a read-disturbistor. In any case, the "real" TiO2 device shown in the HP paper does have a threshold (somewhere between 0.5 and 1 volt). So I ask again, what exactly did HP invent here? A really bad memory cell or a device that has been in the literature for decades?

Wed May 07, 09:44:00 PM PDT  
Blogger SST's Ed's Threads said...

Think of it this way: how can a capacitor be the memory element of a DRAM cell? In the same manner a memristor can be the memory element in a ReRAM cell (only the capacitor leaks and so is volatile, while the memristor retains resistance and so is non-volatile). ReRAM uses the voltage-induced switching effect between high- and low-resistance states, which can be read as the 1s and 0s of digital memory.

Thu May 08, 01:11:00 PM PDT  
Blogger Marcelo said...

I found it quite unfortunate that S. Williams et al. in their recent Nature paper have simply ignored our recent work in theoretical modeling of the non-volatile resistive switching effect in MIM structures that use transition-metal-oxide dielectrics.

Our first work appeared in 2004 in Physical Review Letters, and subsequent work appeared in PRL and APL.

Contrary to their claim in the opening paragraph of the Nature paper, "... until now no one has presented either a useful physical model or an example of a memristor," our 2004 paper does introduce a model, which certainly seems to have been useful, as demonstrated by the over 100 citations it has received so far.

M. J. Rozenberg, I. H. Inoue and M. J. Sanchez, Phys. Rev. Lett. 92, 178302 (2004).

M. Rozenberg
[email protected]

Sat May 10, 04:14:00 AM PDT  


080317: There is no more noise...
Ed’s Threads 080317
Musings by Ed Korczynski on March 17, 2008

There is no more noise...
There is only signal. In controlling the manufacturing processes used for advanced nano-scale ICs, the aspects of metrology which we used to be able to ignore as “just noise” are now essential signal which we must control. Where to draw the line, and how close is close enough, are just some of the challenges in ensuring that data streams become productive information for fabs. Metrology sessions at SPIE this year shone fractional wavelengths of light into the darkness of controlling accuracy, too.

When IC features were greater than the wavelength of light used in photolithography—and likewise much greater than a countable number of physical atoms—there were many aspects of manufacturing which we could simply ignore. With the smallest IC feature, typically defined by the minimum half-pitch spacing between lines, now reaching ~45nm (less than one-quarter of the 193nm wavelength used in litho), we experience “second-order” and “third-order” effects which must be controlled.

Vladimir Ukraintsev of Veeco Instruments co-led a panel discussion at SPIE 2008 on the need for CD-SEMs to be accurately calibrated with 3D-AFMs. Researchers have reportedly seen a mere 1° change in the sidewall angle of a device structure result in a 2nm change in the CD measured by a standard 2D SEM. With the allowable budget for CD variation shrunk down to 3nm-4nm, this sidewall-angle dependence must be controlled. The greatest risk is process drift in an etch chamber, where sidewall angle can change spatially (e.g., from the center to the edge of wafers) or temporally (from wafer to wafer over time), which can induce substantial error in the CD-SEM measurement.
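
The geometry behind that sensitivity is easy to check (a rough sketch; the ~60nm feature height is my assumption for illustration, not a number from the panel). Tilting a near-vertical sidewall of height h by Δθ moves each measured edge by about h·tan(Δθ), so across both edges:

```latex
\Delta\mathrm{CD} \approx 2\,h\tan(\Delta\theta)
\approx 2 \times 60\,\mathrm{nm} \times \tan(1^{\circ})
\approx 2.1\,\mathrm{nm}
```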

With tight feedback loops in advanced fabs, erroneous CD-SEM data can be mistakenly used to set the wrong etch parameters for following lots, which can degrade yield. “Instead of changing CD etch time by the week, we’re changing by the lot or the wafer as part of APC,” explained Kevin Heidrich, Nanometrics’ senior director of new business development, in an exclusive interview with WaferNEWS. Total CD control is ~4nm for all variability; a normal rule of thumb for precision over tolerance is 0.1, so the total budget for metrology is 0.4nm.

All measurement techniques are subject to some error, and even the best 3D-AFM is still subject to tip-wear and calibration drift. Veeco has been working with third-party specialists to optimize AFM tips for different applications, with great results reported for various shapes nano-machined from single-crystal silicon for strength and then coated with some manner of carbon coating for wear-resistance. NIST showed SPIE attendees this year that even with a slow, expensive, and destructive technique like TEM, there is still 0.33nm (1σ standard deviation) of uncertainty from the sidewall angle alone; everything else brings the total uncertainty to 0.63nm. Calibration is vital to minimize the propagation of uncertainties.
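
Assuming the standard rule that independent error sources combine in quadrature, and reading 0.63nm as the grand total (my reading of the numbers, not NIST’s own breakdown), the non-sidewall contribution would be:

```latex
u_{\mathrm{total}}^{2} = u_{\mathrm{SWA}}^{2} + u_{\mathrm{other}}^{2}
\;\;\Rightarrow\;\;
u_{\mathrm{other}} = \sqrt{0.63^{2} - 0.33^{2}}\,\mathrm{nm} \approx 0.54\,\mathrm{nm}
```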

One of the issues in determining the sidewall angle is what portion of the sidewall to include in the analysis. For features with corner rounding, this could be challenging even with ideal 90° sidewalls. Consider just 2nm radii of curvature on the top corners of etched polysilicon lines of 32nm to 45nm widths: ~10% of the linewidth (2nm on each of the two edges of a ~40nm line) depends on where a CD-SEM draws the line for the edge.

To help control APC in all manner of deposition and removal processes, Nanometrics recently announced the delivery of the company’s 1000th integrated metrology sub-system; the milestone system was integrated into an advanced plasma etch system used to control gate CD in advanced logic devices.

At SPIE, IBM (Daniel Fischer et al.) showed OPC requirements for 32nm and the metrology-tool calibrations needed to support this advanced node. The number of model-calibration sites per mask level has increased dramatically: normalized to the 90nm node, 65nm needed 10×, and 32nm needs 100×. There are now multiple CDs per contour, which results in a reduced number of measurement sites per wafer. For tool calibration, fundamental parameters of magnification, rotation, etc. must each be properly considered in modeling. The researchers showed that scanning a line array in orthogonal directions in a CD-SEM induced up to 2% variation in measurement due to the beam’s oval shape. It’s not noise anymore. “The users must understand the measurement techniques and have them constant or have a consistent offset to be able to use the data,” said Fischer. He added that with real device structures, 144nm was seen by a 2D tool while 160nm was measured by a 3D tool, so some manner of rigorous automated edge-detection is essential.

OCD looks very extendable to finFETs, too. SEMATECH and KLA-Tencor presented a paper on metrology for high-k finFETs at SPIE. Using high-k HfSiO thicknesses of 1.5nm and 3nm over Si3N4, and TiN as the metal gate, a thorough DOE of depositions over fins was done. Then, using KLA-Tencor's next-generation spectroscopic ellipsometer (measuring 225nm and up) for OCD, CD-SEM from AMAT, and also HR-TEM, cross-checks between the OCD and standard thin-film measurements showed an offset of ~1nm. For the metal-gate measurements, it was found that the TiN optical properties varied due to what is suspected to be some manner of slight oxide formation. Data from dense arrays showed serious offset from the pad areas, so correlations must be considered. Measuring in the fin area seems to provide sufficient resolution for process control of both the high-k and metal-gate depositions. OCD measurement precision was at the 1% level or better, and in good agreement with reference measurements, so OCD looks very promising for finFET gate-stack characterization.

n&k Technologies has modified the optical path of its spectroscopic ellipsometer tool to add a pinhole lens which narrows the transmitted beam spot size from 400μm to 50μm. Since real-world ICs and photomasks tend to have designed areas with regular 50μm arrays, this opens up the ability to measure many more real structures. Collecting the reflectance and transmission in both s- and p-polarizations using 50μm spots provides four separate signals to be used in determining all the layer thicknesses on the mask, including quartz etch dimensions for phase-shift masks.
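
The value of four simultaneous signals is that the thickness-fitting problem becomes over-determined. A minimal sketch of the inversion idea in Python (the linear two-layer "model" and every number in it are hypothetical placeholders, not n&k's actual optical model):

```python
import numpy as np
from scipy.optimize import least_squares

def model(thicknesses):
    """Hypothetical forward model: maps two layer thicknesses (nm) to four
    measured signals (Rs, Rp, Ts, Tp). A real tool computes these from
    thin-film optics; here a made-up smooth function stands in."""
    t1, t2 = thicknesses
    return np.array([
        0.30 + 0.002 * t1 - 0.001 * t2,   # Rs
        0.25 + 0.001 * t1 + 0.002 * t2,   # Rp
        0.40 - 0.001 * t1 - 0.002 * t2,   # Ts
        0.35 - 0.002 * t1 + 0.001 * t2,   # Tp
    ])

measured = model(np.array([70.0, 100.0]))  # pretend these came off the tool

# Four equations, two unknowns: least-squares inversion recovers thicknesses.
fit = least_squares(lambda t: model(t) - measured, x0=[50.0, 50.0])
print(fit.x)  # -> approximately [70., 100.]
```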

In pushing the limits of signals, IBM and Hitachi recently announced a unique, two-year joint semiconductor metrology research agreement for 32-nm and beyond characterization and measurement of transistor variations. Engineers from the two companies and Hitachi's subsidiary, Hitachi High-Technologies, will conduct joint research at IBM's Thomas J. Watson Research Center in Yorktown Heights, NY and at the College of Nanoscale Science and Engineering's Albany NanoTech Complex. Combining individual research strengths and IP will help "reduce the significant costs associated with research needed to advance the next generation of chip technology," said Bernie Meyerson, VP of strategic alliances and CTO for IBM's systems & technology group, in a statement.

Rudolph Technologies has become the first OEM to join SEMATECH's Metrology Program headquartered at the College of Nanoscale Science and Engineering (CNSE) of the University at Albany. The initial program addresses a range of issues, including the metrology of thin films and metal gate stacks; wafer front, back, and edge macro defect inspection; and inspection and metrology for through silicon vias (TSV) and three-dimensional integrated circuits (3DIC).

—E.K.


080111: Flood of used 200mm tools
Ed’s Threads 080111
Musings by Ed Korczynski on January 11, 2008

Flood of used 200mm tools
Semico Research, working with affiliated Semiconductor Partners, has released a new study forecasting the near-term flood of used 200mm wafer-processing tools onto the market. In addition to identifying companies that are likely to either purchase or sell a fab, and their expansion or divestiture plans, the study includes five-year device forecasts by technology node and detailed wafer demand. The market for used equipment is expected to grow from $300 million in 2007 to more than $8 billion in 2009.

"As leading edge digital memory and logic manufacturers build 300mm fabs for process technologies of 65nm or less, this will obsolete their 200mm fabs at 130nm or 90nm and some of their 300mm fans at 90nm. Analog and mixed signal manufacturers will have a need for these fabs to meet for expansion to satisfy the growing analog, mixed signal, and RF markets," explained Morry Marshall, Partner, Strategic Technologies at Semiconductor Partners.

The number of used tools forecasted in this study may be estimated from the average selling prices (ASPs), which vary widely by tool category. Tom Cheyney’s well-written recent ChipShots blog mentions the standard 10%-20% of new-tool cost, which certainly has been the historic average. Unfortunately, we’re entering a new era where the lessons from history may not hold.

The upper limit of used-tool sales prices comes from unique specialty process tools, needed to expand capacity on existing lines, which are no longer sold new. Like a legendary musical instrument (e.g., a pre-CBS Fender Stratocaster electric guitar, or a Selmer Mark VI saxophone) with only so many made, any still working are highly valued, and if you’ve built your business using them you’re willing to pay a premium price to keep using them. In the last year, I have heard of rebuilt 150mm tools with warranties selling for >$1.5M. In some cases this could be >200% of what had been the new sales price.

The lower limit of used-tool sales prices comes from mainstream memory and logic fabs lacking uniqueness in the toolset. Since used-tool ASPs are primarily determined by the supply/demand balance, a supply glut can lead to what-the-market-will-bear prices below 10% of new. If a seller tries to hold out for a more "reasonable" price only to find no takers, the line has to be shut down and sold "as is" for even less money.

A working fab is a proven thing. There is risk in shutting down, decontaminating, shipping, and re-setting up a line, but at least if you start with a working line you have some baseline reference. A shuttered fab is full of extra risk. Every process chamber must be re-checked and proven; every gas line feeding every tool is now suspect. How much is a shuttered line worth? About two years ago I spoke with the general manager of a Chinese fab about used 200mm toolsets and supply and demand. He told me that he’s routinely approached by people wanting to sell lines for ~US$50M, and he tells them to not bother him until the price drops to $25M.

So who might be buying used 200mm lines? The Semico report mentions the general truism that, "Production of some device types, such as discretes or MCUs, will not move forward appreciably to more advanced technology nodes." MEMS and discrete chips have been produced in recent years primarily on 150mm silicon wafers, but STMicroelectronics and Freescale now favor 200mm silicon wafers for dedicated MEMS production. MCUs for appliances, automotive, and general industrial applications may be industry entry points for new IC fab companies based in China (and eventually India, after infrastructure issues are resolved). Philips likes 200mm for integrated passives and MEMS for advanced packaging, primarily through "PASSI"-branded passives integration. So there is certainly demand, but the lingering impression is that it won’t keep up with the supply glut, and it will be a classic "buyer’s market."

A recent example of this dynamic is Atmel's sale of the 200mm tools in its North Tyneside, UK fab last year. Atmel originally tried to sell the entire facility to a company that would keep the line running in the UK. Leading fab broker Colliers ATREG was retained to try to make a deal happen, with the constraint that there was "no opportunity to acquire the tools separately." In the end, TSMC bought the tools alone for $82M, with the expectation that they will add capacity in Shanghai, China.

The Semico report forecasts the value of available used equipment for the next four years (2008-2011) to be $5.4B, $8.2B, $6.5B, and $3.9B, for a total of $24B. At ASPs of ~15% of new prices, the corresponding equivalent in new-tool sales value would be $160B. A rough guesstimate from these numbers would seem to imply that >100 fabs of ~20k wspm at 0.13-0.25μm minimum linewidth capacity will flood the market over the next four years. For relative scale, with the SEMI Silicon Manufacturers’ Group forecasting ~10B square inches of silicon being processed for semiconductors each year, this translates into >800 fully loaded fabs globally running wafers in 200mm equivalents.
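
The guesstimate arithmetic hangs together; here is a quick sketch (my arithmetic on the figures quoted above, taking a 200mm wafer as ~48.7 square inches):

```python
import math

used_value_B = 5.4 + 8.2 + 6.5 + 3.9            # $24B of used tools, 2008-2011
new_equiv_B = used_value_B / 0.15               # at ~15% of new price -> $160B

wafer_sq_in = math.pi * (100 / 25.4) ** 2       # one 200mm wafer, ~48.7 sq.in.
fab_sq_in_per_year = 20_000 * 12 * wafer_sq_in  # one ~20k-wspm fab
fabs_running = 10e9 / fab_sq_in_per_year        # ~10B sq.in. of silicon per year

print(round(new_equiv_B), round(fabs_running))  # -> 160 ($B) and ~856 fabs
```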

With 20-40 extra fabs for sale each year, it seems certain that used tool ASPs will have to drop and the revenues to sellers and brokers may not be as high as forecast. Regardless of ultimate pricing, all of these tools under consideration are highly productive (most are still currently cranking out production) and most will eventually find a home.

The industry may be able to use most of these tools to manufacture MEMS, discrete devices, integrated passives, and silicon interposers. If a used 200mm tool glut floods over to mainstream CMOS, however, then it could permanently disrupt global pricing for MCUs and other logic ICs.

—E.K.


1 Comment:

Anonymous ajfoyt in ATex said...

Ed,

Would be interesting to see if the cost of tool install normally @ 5-10% of new tool price becomes the dominant cost to fab capital expenditures. Using that math and your 15% of new capital cost being the average used tool price what will the component cost be for refurb/reconfiguration? Who's looking at this total market opportunity? Interesting, based on the layoffs announced by AMAT today? Who's going to configure all of these refurbs and who will start them up?

Tue Jan 15, 08:45:00 PM PST  


071026: Soitec catalyzes SOI consortium
Ed’s Threads 071026
Musings by Ed Korczynski on 26 October 2007

Soitec catalyzes SOI consortium
Earlier this month, after the SEMICON Europa show, Soitec COO Pascal Mauberger led me on a tour of the company’s two manufacturing lines and one R&D line in Bernin, France, across the creek from ST in Crolles. Soitec has taken a bit of a gamble on expanding capacity with a new line in Singapore, just when volumes for SOI wafers have publicly stalled. However, strong technical advantages should result in new demand for engineered substrates, and CEO André-Jacques Auberton-Hervé is now leading an industry consortium to catalyze chip-makers’ adoption of SOI.

The “chateau” built to house Soitec has the classic design element of a bridge over a moat, while the mirrored sides of the building reflect the awesome beauty of the French Alps. Inside the complex is the Class 1 ballroom layout of Bernin1, the company’s first fab, now capable of producing 800K wafers/year at ≤200mm. Connected by a walkway, Bernin2 is the company’s Class 10-100 ballroom-layout dedicated 300mm line (also 800K wafers/year). An overhead transport was added two years ago to increase output to handle the demand from all the latest-generation game consoles and AMD’s microprocessor ramp in Dresden. Though PS3 sales have been weak, Xbox and Wii game platform sales have been strong, and all use SOI chips.

Both Bernin1 and Bernin2, as well as the new 300mm line announced for Singapore, use completely standard industry tools from established OEMs to do the specialty implants and thermal treatments needed for the company’s layer-transfer process. The toolset includes TEL furnaces, Applied Materials implanters, EVG bonders (a bit customized at 300mm, instead of the standard 200mm size used in MEMS fabs), Mattson and Applied Materials RTP, and KLA-Tencor metrology tools. Over 1000 Soitec employees run these lines 24/7 and essentially 365 days/year.

Bernin3, a stone’s throw from Bernin2, was built originally by MEMSCAP as its own fab. Essentially just a shell when it was acquired by Soitec in mid-2006, it now has three 500m² cleanrooms doing R&D on III-V materials, such as Nanosmart GaN development, and complex pattern transfers. Transferring already-patterned layers (not blanket layers) was work originally started at LETI, spun out as TraciT Technologies, and then acquired by Soitec; the first product was imagers using backside illumination. Bernin3 runs 100mm, 125mm, and 150mm wafers, so the R&D toolset is flexible enough to handle any of these wafer sizes. If any device captures serious demand, then pilot production could occur with dedicated tools in the (currently empty) fourth space in the fab shell. Including its PicoGiga division's work on MBE epitaxy for GaN, Soitec has a lot of IP and know-how to bring to the development of high-efficiency and high-brightness LED production.

Soitec keeps only a handful of finished-goods inventory on site, since the company is completely integrated into a just-in-time supply-chain. Soitec maintains at least one month’s inventory at each customer site, retaining ownership until each wafer enters the IC fab line. Likewise, three suppliers maintain starting-wafer inventory at Soitec, only “delivering” the wafers when they enter the SOI production line.

Auberton-Hervé, Soitec CEO and newly elected chairman of the SOI Industry Consortium, is modest about Soitec’s role in bringing the possibility of cost-effective SOI manufacturing to the semiconductor industry over the last decade. “We were a bit of the catalyst, but the demand was from the ecosystem,” he claims. The consortium in its current form did grow out of periodic SOI user workshops Soitec had sponsored, and Auberton-Hervé notes that interactions between device researchers during a September 2006 workshop led to the demand for the creation of an open ecosystem.

To be sure, the proprietary IBM-ecosystem has had SOI design-flows, design IP, and appropriately tuned manufacturing processes for lease for many years. Yet not every company has been willing or able to work with the folks in East Fishkill, NY, and so this new consortium may really open up a new avenue to add value for many companies.

“The value of the consortium is in the ability to accelerate innovation,” said Auberton-Hervé. “We have to be more efficient in how we bring value to the whole food-chain. Roadmaps for cost in each segment will help, but it’s more global than that.” Most people think that finFETs really call for SOI, and both represent huge power-savings for portable battery-powered applications. From first principles it seems that SOI has advantages for mixed-signal isolation. Embedded memory using ZRAM structures (licensed from Innovative Silicon) is also an attractive option.

With Auberton-Hervé committed to “doing well by doing good” in leading this consortium for the industry as well as for his company and shareholders, much more of the industry may end up using SOI. It may help with functional integration at 45nm and beyond, and that may help double battery life for next-generation iPods and e-Phones. SOI and other layer-transfer technologies will almost certainly become increasingly useful as simple x-y scaling inevitably slows, and Mauberger will be coordinating the operations of global Soitec fabs to keep the wafers flowing around the world.

—E.K.


071012: Managing mature fabs
Ed’s Threads 071012
Musings by Ed Korczynski on October 12, 2007

Managing mature fabs
Associated with SEMICON Europa 2007, the Fab Managers’ Forum gathered representatives of Europe’s semiconductor fabs to discuss the operation of primarily mature fabs. Michael Lehnert of Renesas Semiconductor presented examples of the benefits derived from fault detection beyond yield improvement in mature fabs. Renesas Semiconductor Europe Landshut (RSEL) has a 200mm line with 13-15k wspm running 0.5µm to 0.15µm processes for MCUs (the line was originally a DRAM line).

Fault Detection and Classification (FDC) is a challenge for a fab running several hundred products, with 10 to 100 parameters/tool resulting in up to 5 GB/day of data. With 300,000 SPC charts of 10 entries each, normally distributed data, and 3-sigma control limits giving a ~0.3% false-alarm rate per point, a fab must handle ~1000 false alarms every day. Manufacturing engineers need to change how they work, spending more time with abstract analyses looking at computer screens, and less time crawling through the fab poking at tools. Monitoring facilities parameters such as gas flows and pressures may provide additional relevant data streams.
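
The false-alarm arithmetic works out if each chart receives on the order of one new point per day (my assumption; the "10 entries per chart" figure presumably describes the chart window, not the daily update rate):

```python
n_charts = 300_000
p_false = 0.003               # ~3-sigma, two-sided limits on normal data
new_points_per_chart_day = 1  # assumed: one new entry per chart per day

false_alarms = n_charts * new_points_per_chart_day * p_false
print(false_alarms)           # -> 900.0, i.e. ~1000 false alarms every day
```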

Improvement in wafer scrap and yield from FDC was expected, but an additional benefit has been in engineering productivity, with data-gathering time reduced and data accuracy improved. Spare-parts and consumables evaluation is now easier, so there is greater confidence in being able to change to less expensive sources when possible. Greater confidence allows for a reduction in sampling frequency and reduces the need for dummy wafers. Better preventative maintenance (PM) planning—for example, monitoring the filament current in an implanter—results in reduced consumables costs, improved equipment uptime, and even better turn-around time (TAT) due to greater tool availability.

Dr. Detlef Nagel, senior director of product engineering at Qimonda Dresden, discussed how to manage APC in worldwide DRAM fabs. Future business requires an evolution from APC to predictive process control (PPC), which will in turn require a revolution in data-mining, multivariate control, and yield prediction. Technology complexity can be kept under control by generic run-to-run (R2R) controllers and virtual metrology.

Qimonda uses SMIC and Winbond as foundries to balance production, along with its own fabs in Europe, the US, and Malaysia. Fast distribution of knowledge is a problem due to regional cultural differences and the inherent difference between development and volume fabs. One innovative solution is the use of a network of senior equipment-engineering specialists, with individuals responsible for an assigned toolset within some area of expertise. This worldwide captive network improves equipment throughput and reliability at Qimonda fabs; there is traditional information exchange with the foundry partners, but not the expert knowledge.

Peter Schaffler, global yield enhancement manager for TI, talked about yield enhancement in the Freising fab. It was originally a 3” Bosch fab, and has been continually upgraded to the current level of 0.2µm processing on 200mm wafers; the line runs CMOS/BiCMOS with 20k active reticles used on 400k wafers/year. TI now does tool qualification with product wafers, challenging costs and tool availability. Sampling strategy directly affects your costs: too much wastes expense, while too little guarantees lost yield. Typical these days is 10-20% of lot starts, but sampling frequency should be determined by the number of lots at risk and the complexity of the mask level, which results in tool-specific dynamic sampling. Of course, an efficient data-analysis system is needed to provide macros for data drill-downs using tool, parametric metrology, final electrical test, and other data sets. Proper charting and visualization in an interactive GUI allows new analyses to be done.
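
To make “tool-specific dynamic sampling” concrete, here is one illustrative policy sketch in Python (entirely my invention, not TI’s algorithm): scale the sampling probability up with the number of unmeasured lots at risk on the tool and with mask-level complexity, starting from the 10-20%-of-lot-starts baseline.

```python
import random

def sample_this_lot(lots_at_risk: int, mask_complexity: float,
                    base_rate: float = 0.10) -> bool:
    """Hypothetical dynamic-sampling policy. lots_at_risk counts lots run on
    this tool since its last metrology check; mask_complexity is 0..1, with
    critical levels near 1. Probability rises with both, capped at 100%."""
    p = base_rate * (1 + 0.2 * lots_at_risk) * (0.5 + mask_complexity)
    return random.random() < min(p, 1.0)

# With these made-up coefficients: a simple level on a freshly checked tool
# samples ~7% of the time; a critical level with 20 unmeasured lots at risk
# samples ~75% of the time.
print(sample_this_lot(0, 0.2), sample_this_lot(20, 1.0))
```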

Single-wafer tracking allows for the extraction of yield-loss signatures such as the wafer number in the lot, first- or last-wafer effects, and different lots with single-wafer excursions. For example, electrical-test data that originally shows no signature can be sorted to obtain a clear clustering of parameters into groups of five wafers, which in turn could point directly at a TEL furnace that was the only toolset running batches of five.
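
A minimal sketch of that kind of signature hunt in Python (synthetic data; the offset on one five-wafer batch is planted to mimic the furnace example):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic electrical-test data for 25 wafers in processing order,
# with an offset planted on wafers 10-14 to mimic one marginal furnace batch.
vt_mv = 500 + rng.normal(0, 2, 25)
vt_mv[10:15] += 8

df = pd.DataFrame({"wafer_seq": np.arange(25), "vt_mv": vt_mv})
df["batch_of_5"] = df["wafer_seq"] // 5  # group consecutive wafers by fives

print(df.groupby("batch_of_5")["vt_mv"].mean().round(1))
# Batch 2 stands out by ~8mV, pointing at the toolset running batches of five.
```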

A breakout session on the dynamics of the used-equipment market provided a fantastic perspective on the status of the current market. In addition to third-party brokers, OEMs now provide refurbished tools with full one-year warranties for typically 40-80% of the original selling price. As always, the price is set by the market: the price to acquire the tool, the cost to properly refurbish it, and the customer demand for the tool. At the high end of pricing, the used tool is sold with all new-tool specs, and it may then be considered almost just another new tool for capacity. If market forces align in certain ways, even a used 150mm tool may be sold for US$2M.

If you buy through a broker, it is somewhat common to then have to purchase a use license (often for the software) from the OEM. These licenses can range from $10k to $700k for complex tools, and are the single greatest hurdle for customers of third-party brokers. The consensus was that licenses are not unreasonable in principle, but customers really expect to receive some value in terms of software upgrades and service support for their payment. Service contracts from OEMs certainly minimize the risk of working with used or refurbished tools, regardless of the seller.

Hallway discussions with equipment brokers revealed that they’re tracking a tremendous number of 200mm tools which are planned to be decommissioned over the next 1-2 years. How the industry will absorb these tools remains to be seen, but with SECS/GEM interfaces and modular sub-system designs, it’s likely that most of these tools will remain productive somewhere in the world.

—E.K.


1 Comment:

Anonymous Ray Bunkofske said...

FDC is a proven contributor to the efficiency of the semiconductor manufacturing process. Modern FDC applications available from multiple suppliers provide a solid means to address the concerns mentioned in your blog. Instead of making separate charts for every parameter, product, and recipe combination, it is much preferred to build multivariate models where the parameters are normalized in the background so as to greatly reduce the number of charts. There is no reason for more than two or three charts per tool-chamber combination to provide fault detection. Once a fault is detected, additional text or charts appropriate to the fault can be displayed to guide the resolution of the problem.

Data analyzed in this way not only provides superior fault detection with virtually 0% false alarms, it enables prediction of several downstream metrics such as metrology results (film thickness, etch rate) and end-of-line electrical test results such as threshold voltage or overlap capacitance. Combine the signals from the tool with data from auxiliary sensors such as chamber impedance or full-spectrum OES and you have a very powerful diagnostic and predictive tool.

None of these capabilities are difficult to implement, but they do require some care and preparation, something that could take six months to a year, and it is difficult to get over this activation barrier. These efforts do not require huge investments in software, infrastructure, or people, just a management with enough vision to stay the course and take advantage of the benefits as they materialize. Once through this phase of the program, the system should pay back in 6-12 months through improved process capability and reduced time to detect.

Tue Oct 16, 09:29:00 AM PDT  


071005: Fairchild at 50 still milking the cash cow
Ed’s Threads 071005
Musings by Ed Korczynski on October 5, 2007

Fairchild at 50 still milking the IC cash cow
The 50th anniversary of the founding of Fairchild Semiconductor was celebrated on October 5th and 6th at the Computer History Museum in Mountain View, California. With Jay Last also in the audience, and with many call-outs to other Fairchildren living and dead, E. Floyd Kvamme (Marketing) led a panel discussion of Gordon Moore (R&D), Wilf Corrigan (Manufacturing), and Jerry Sanders (Sales). Marketing has really never gotten any respect in the chip industry (unlike at Apple and some software companies), while the other three domains have combined to create the uniquely chaotic culture that is Silicon Valley.

Why do ICs seem to always get cheaper and do more each year? Why do we send manufacturing jobs to other countries? Why are huge egos rewarded in high-tech industries? It all comes from the trial and error experiences of the people who worked at Fairchild Semiconductor in the first ten years of the company’s existence. Driven by a vision, and fueled by caffeine and alcohol, scientists and engineers created new technologies, new companies, and new ways of doing business.

Fairchild Semiconductor was formed by the famous “traitorous eight” who quit en masse from Shockley Semiconductor in 1957. Gordon Moore explained, “Shockley was an unusual personality. Someone said that he could see electrons, but he couldn’t work with people.” After trying to get Shockley replaced, Moore confessed, “We discovered that a bunch of young PhDs didn’t have a good chance to displace a recent Nobel Prize laureate.” After first trying to all get hired by another company, they were eventually convinced that they should just start their own company, and found funding from Fairchild Camera and Instrument.

R&D was the foundation for everything at Fairchild Semiconductor. Inventing a new industry takes a lot of work, and new devices, processes, and equipment were designed and deployed regularly; at the peak it was “one new product per week.” With intense commercial competition, as long as something works and is reproducible, it just doesn’t matter if you know the theory of why it works. “We had a lot of technology that worked but we didn’t understand why,” admitted Moore. Another initial mystery was why technology transfer from R&D became more difficult as the manufacturing people became more technically competent. Eventually, it was discovered that the manufacturing people thought that they would add value by “improving things,” but generally only changed things for the worse. The logical conclusion of this problem is the “Copy Exactly!” manufacturing strategy of Intel.

Manufacturing semiconductors has always been technically risky and yet like any manufacturing line it must be controlled with a conservative mindset. Trying to conservatively manage risk results in a sort of unique schizophrenia, and has inadvertently accelerated global technology transfer. Wilf Corrigan explained that when he joined Fairchild in 1968 from Motorola, he was immediately struck by the difference in high-volume assembly strategies. “Motorola was very focused on manufacturing excellence, and had thousands of people making automated tools.” In contrast, Fairchild had thousands of well-trained women earning low-wages in Hong Kong. “Using a global approach to drive costs down was part of the legacy of Fairchild,” said Sanders. Moore commented that Intel’s first assembly line using Asian female manual laborers was faster than the then-state-of-the-art automated IBM assembly line, and could rapidly adjust to handle new wafer sizes and package designs. So “outsourcing” has been part of Silicon Valley almost from the beginning.

Sales has always been the vital third leg for the industry. New IC products can break open entirely new lucrative markets, but it still takes someone to go get the order despite problems with manufacturing volumes. Sanders told the story of selling planar transistors against grown-junction transistors from TI, and winning one aerospace contract by putting lit matches to both and showing that the leakage current went out of spec on the TI chip. Kvamme told of selling glob-top packages that failed so easily with a fingernail flick that they were derisively called “pop-tops.” Moore added, “We sold the rejects from our Hong Kong packaging line as eyes on teddy bears.”

Sanders confessed, “When I started with Fairchild, I was single and had no concept of home life. I’d show up at a sales-guy’s house at 7:30 in the morning on a Saturday to start work.” Sanders seemingly has selling in his genes; after 30 years he’s still trying to sell the original AMD mission statement, and during the panel he couldn’t stop himself from making gratuitous pitches for AMD chips. Still, he typifies the “shooting ahead of the target” mindset of a salesman who knows what his customer will need in advance of formal demand. Huge egos are just par for the course, and Sanders proudly recounts signing off on the claimed largest bar bill in Hilton Hawaii history for a global sales meeting. Getting everyone drunk and happy in a group setting was supposedly the only way to keep egomaniacal individuals working together as a team.

The history of the industry is really just the combined stories of individuals, and nearly every classic Silicon Valley success story starts out with a chapter on the gross incompetence of top executives at a soon-to-be-former employer. Fairchild drove out Charlie Sporck to create National Semiconductor in 1966. Sanders commented, “When Charlie Sporck resigned I was stunned. I said to him you can’t do this, and Charlie just went off on the incompetent corporate management. Charlie said he just couldn’t work here anymore.” When Robert Noyce was passed over to be the CEO of Fairchild in 1968, he decided to leave and took Moore with him to found Intel.

The Fairchildren were smart and worked hard, but timing and luck were also keys to success. “The fact that Fairchild started in the technology areas that were the ones that continued—manufacturing use of diffusion, batch processes—was lucky,” admitted Moore. “If you ask me about Intel, I’d say a lot of luck was involved.” Of three different technology and product directions embarked upon by Intel, only the silicon-gate MOS process was successful. “If it was much harder we might have run out of money before proving it, and if it was much easier then others would have copied it,” said Moore.

Fairchild ultimately infected the area to be known as Silicon Valley with the culture of the engineer/entrepreneur archetype, stock-options, and high R&D spending. When there was still innovation to be done, this resulted in tremendous creativity and technology growth. With the major innovations in silicon IC manufacturing essentially in place by the mid-1970s—with the exceptions of lithography and EDA—the last 25 years have been mostly about milking the technology cash cow.

At the reception, one of the Fairchildren pitched his new chip design to me and wanted to know if I could hook him up with some financing for a start-up. Old habits die hard, and I’ve been infected with the entrepreneurial meme so I can relate, but I can’t help feeling that the time is past for chip startups. Too many competitors have evolved to fill all market niches, and IC functionality has seemingly reached a point of saturation such that software now adds the incremental value. The future of bold innovation belongs to software startups like Netscape and Google, while IC folks can really only anticipate more milking of the herd of cows already bred by the Fairchildren. Pass the milking stool.

—E.K.


3 Comments:

Blogger David Binkerd said...

Nice history, thank you. I think you'll find that National Semiconductor was around for some time before Charlie Sporck left Fairchild. In the early 1960s, from two southern California subsidiaries of Alloys Unlimited we supplied silicon wafers (2-1/2" diameter) and hermetic packages (TO-5 and TO-18) to Peter Sprague, National's founder, who was then running the company from an ancient, blocks-long, former textile mill near Danbury, Connecticut. The structure was so big and empty, they used to joke: "When we want to expand we just sweep out another part of the building"; but when National decided to try for the big time it was the man from Fairchild who would get them there.

Wed Oct 17, 10:27:00 AM PDT  
Anonymous Mike Clayton said...

Nice story to add to my collection. National got the manufacturing guys, and Intel got the R/D guys...together they would have made a great company, as each suffered for several years. Luckily, National also got Widler and Talbert to create the Linear IC's. They got to National just before Charlie, and had stock options that made them rich, and Charley's options were a little higher priced. Someone should write a book on Widler and Talbert's work some day. LM101 et al.

And Intel eventually figured out manufacturing... although sometimes that was near disaster.

Moto stupidity gave Intel a big IBM design win that Motorola passed up by refusing to customize their micro-code on their actually superior microprocessor that IBM wanted first. Stupidity is not rare, it is widely distributed. Andy Grove's huge book is a fun read on those times.

Mike Clayton

Wed Oct 17, 11:29:00 AM PDT  
Anonymous Mike Clayton said...

Blackstone milking IC cash cow at Freescale would be an interesting story!

Wed Oct 17, 11:44:00 AM PDT  


070727: Working together to reach nirvana
Ed’s Threads 070727
Musings by Ed Korczynski on July 27, 2007

Working together to reach nirvana
SEMICON West hasn’t been a “selling show” (i.e., a tradeshow where you actually sell stuff) for well over a decade, so why do people still bother to attend? There are still endless meetings, seminars, and panel discussions that provide vital connections and information to keep the industry going. Manufacturing ICs with minimum dimensions below 45nm creates technical challenges that combine with consumer-market challenges to create extreme rewards for success and extremely expensive penalties for failure. For any IC fab company to succeed in the future, partners will be needed, and new ways of working together will have to become new habits, as detailed in two separate panel discussions held on successive days by Praxair Electronics and DuPont Electronic Materials.

The first few decades of the semiconductor industry were based on vertical business integration like that championed by Henry Ford at the carmaker's Rouge Plant, where controlling the stream of raw materials and custom-built equipment resulted in massive economies of scale. Vertical organization under a strong top customer leads to a clear hierarchy of power, and corresponding norms of one-way information flow, dual-source strategies for all suppliers, and limited motivation for fixed relationships.

By the 1990s, however, the global semiconductor industry had become vertically dis-integrated, with separate levels for original equipment manufacturers (OEMs) and specialized subsystem manufacturers — yet the mindset of vertical integration typically remained.

Today, we’re in an era where the complexity of manufacturing has increased to the point that even the biggest integrated device manufacturers (IDMs), like Intel, IBM, and TI, have to partner to develop technology. With consortia and joint-development projects (JDPs) now driving the creation of most new intellectual property (IP) in the industry, and with the increased costs and risks of nanometer-era IC fabrication, we must develop new habits of working together and sharing information.

Carrying the theme that “In sharing knowledge we can achieve true enlightenment,” Praxair’s July 17th event at SEMICON West featured keynotes by SEMATECH’s Raj Jammy and processing expert John Borland, discussing the technical challenges of 32nm node transistor fabrication. In the panel discussion that followed (which I had the pleasure of moderating), I attempted to express some “Zen-like” ideas about working together in a harmonious ecosystem. More details from the Praxair panel can be found in SST On the Scene video interviews available online.

Meanwhile, DuPont’s July 18th seminar entitled "Technology Partnerships and Tools for the Future" featured presentations by executives from IDMs, OEMs, academia, and a consortium (SEMATECH's Raj Jammy again) on how cooperation is needed to meet the increasingly demanding requirements of advanced ICs.

Mansour Moinpour, materials technology and engineering manager for Intel’s global fab materials organization, showed that even the largest company in the industry, with potentially the greatest internal resources, has used an ever-increasing number of partners over the last decade. Large companies today have typically systematized interactions with universities and other research organizations. “I think the challenge is going to be how to make sure that we facilitate the interaction of the small companies with the universities,” explained Farhang Shadman, Regents Professor of Chemical and Environmental Engineering at the U. of Arizona and director of its Center for Environmentally Benign Semiconductor Manufacturing. “I think this is very important, because they are in greatest need of research facilities.”

Basic human trust is essential to making deals that can quickly bear fruit, along with prior aggregate experience and some manner of mutual benefit on a strategic level. Jammy said that templates and standards have allowed SEMATECH to reduce the time needed to get a signed contract from half a year down to weeks. John Behnke, VP of process development and transfer for Spansion, commented, “There are some pretty good templates that the legal community and the different corporations begin with, which helps the process. I think it has matured in the last maybe two to three years. So that helps.”

Behnke reminds us that trust is still vital to efficient business, and trust that your ideas will not be stolen is perhaps the most vital. “Let's say that the room is dark and the solution to that is to invent the lightbulb,” he explained. Once the hard work of creating a working lightbulb is complete, “the person who said the room was dark thinks that it's all theirs.” This sort of mindset was not uncommon in the past. Fortunately, it seems that most of us now realize that an attitude of “enlightened altruism”—in which we all work for mutual benefit—really does result in the greatest individual benefit too.

—E.K.


070129: Intel wins race to be Intel
Ed’s Threads 070129
Musings by Ed Korczynski on January 29, 2007

Intel wins race to be Intel
How did it happen? How could Intel present 45nm transistor results with high-k dielectrics and dual metal gates (HK+MG) years ahead of everyone else? Mark Bohr, Intel senior fellow in logic technology development, stated, “I don’t believe any other company will have high-k and metal gates until the 32nm node or later.” If this is true, it is only because IBM and other companies felt that they wouldn’t need HK+MG for 45nm, so they did not start manufacturing work two years ago. Thus, Intel has won a very difficult race as the single contestant.

It seems that the company even surprised itself with these results. On Thursday, Jan. 25th, the day before the official announcement, Intel invited journalists to a last-minute show-and-tell at its Robert Noyce HQ building in Santa Clara, CA. PCs running on 45nm “Penryn” chips were shown—all of which came from the “first-silicon” wafer with these new materials, processed using the first mask-set. Packaged first-silicon chips received at Intel’s Folsom test lab at 1:00 am had functioned, and the team immediately rushed one into a motherboard, which promptly booted an OS two hours later. Intel showed a photo of the team toasting their success with Martinelli’s sparkling cider at 3:00 am—give Intel credit for maintaining entrepreneurial zeal with nearly 100,000 people.

Two core competencies were at work to get to these results: extreme discipline in manufacturing execution, and proprietary design and yield-learning methodologies. Since Intel has always had to live in the brutal merchant market, it has always aimed for the sweet spot in the middle of manufacturing-cost and chip-performance, and then relentlessly driven to meet its goals. Instead of silicon-on-insulator (SOI), Intel pushed traditional planar transistors on bulk silicon wafers to the limits of traditional materials for its current 65nm node manufacturing.

Looking at 45nm options about two years ago, Intel decided to stick with bulk silicon wafers and add HK+MG. In January 2006 it announced yielding SRAM TEG chips with >1B transistors, but kept secret that these chips used HK+MG. Still secret are the hafnium-based dielectric composition, both of the metal-gate materials, and whether the process flow is “gate-first” or “gate-last.” The new transistors still maintain strain in the channel regions for maximum carrier mobility. Innovative design rules and advanced mask techniques will be used to extend the use of 193nm dry lithography, which we may assume includes orientation limitations in harmony with illumination sources. All these changes result in new process-integration challenges and new yield-loss mechanisms, so we might expect it to take a while longer to ramp yield. Amazingly, Intel shows a 45nm yield-learning curve that tracks the last three nodes (see figure, above).

CEO Paul Otellini—dressed all in black like an international jewel thief, perhaps due to having spent excessive time around Steve Jobs—stated, “The plan is to have microprocessors in end-users hands by the end of 2007.”

Meanwhile, with timing that just could not be coincidence, on January 26th SEMATECH announced R&D of a gate-first HK + dual-MG process. “Be aware of the difference between a real manufacturing commitment, and research papers that continue to fall short of these results,” stated Intel's Bohr. The very next day, IBM/AMD/Sony/Toshiba said that they will use HK+MG with their 45nm transistors sometime in 2008. We may assume that this announcement was rushed out in response to the Intel press release, since it erroneously refers to HK+MG as a single material—either the IBM alliance plans to use only one of the two, or IBM needs a technologist to review its press releases.

Technology development continues in the industry. Intel’s use of HK+MG materials in mainstream 45nm commercial manufacturing is certainly a significant milestone. Certainly other companies will follow, though in their own ways and in their own times. Due to the extreme complexities involved in any nanometer-era IC manufacturing, it’s getting more and more difficult to compare results from different companies. Fortunately, you can trust SST and WaferNEWS to sort the reality from the hype.

—E.K.


1 Comment:

Blogger Cyrus said...

I love the title of this thread, but it is clear that they are technology leaders. Nice web site.
thanks!

Sun Feb 25, 11:13:00 PM PST  




Ed's Threads is the weekly web-log of SST Sr. Technical Editor Ed Korczynski's musings on the topics of semiconductor manufacturing technology and business. Ed received a degree in materials science and engineering from MIT in 1984, and after process development and integration work in fabs, he held applications, marketing, and business development roles at OEMs. Ed has won editorial awards from ASBPE for work including interviews with Gordon Moore and Jim Morgan, and is not lacking for opinions.