Ed’s Threads 080304
Musings by Ed Korczynski on March 4, 2008

DFM matures along with industry
Hundreds of technologists over-packed the room at the San Jose Convention Center at 8am on the fourth day of SPIE to hear keynotes from IBM, Intel, and TSMC on the reality of design for manufacturability (DFM) in the IC fab industry. As the two leading integrated device manufacturers (IDMs) in DFM, IBM and Intel provided thorough overviews of technologies and methods used at the 65nm and 45nm nodes. In contrast, TSMC gave what seemed like a sales pitch, which was not well received by the audience of peer technologists. A raucous panel discussion that evening raised the need for a modeling environment to test new DFM approaches in virtual space and time.

“It’s always possible to increase yield by throwing money at the problem,” declared IBM’s Lars Liebmann. “We need to keep costs under control because we’re chasing incremental yield.” One of the most costly aspects of implementing DFM today is the quantification experiments needed to prove the value of a new technology under consideration. “You have to convince management that yield will increase, and that value is unique to the product, the time in the current manufacturing node, and the business goals,” said Liebmann. “…there is no universal DFM.”

New DFM tools build upon the proven models used in the past. Critical-area analysis (CAA) is one of the oldest predictors of mature device yield, since the area of the wafer subject to failure from random physical defects (such as particles and scratches) can still be accurately extracted from any new design. “While there’s a lot of hype about systematic problems taking over from random problems, CAA is still an excellent indicator of yield,” said Liebmann.
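
For readers who want to see the shape of such a calculation, here is a minimal sketch of CAA-based yield estimation, assuming the classic Poisson yield model; the per-layer critical areas and defect densities are hypothetical illustrations, not data from any fab.

```python
# A minimal sketch of critical-area analysis (CAA) yield prediction,
# assuming a simple Poisson model; all numbers are hypothetical.
import math

def caa_yield(critical_areas_cm2, defect_densities_per_cm2):
    """Estimate random-defect-limited yield as exp(-sum(A_c * D0)),
    where A_c is the critical area extracted from the layout for a
    given failure mode and D0 is the matching defect density."""
    lam = sum(a * d for a, d in zip(critical_areas_cm2,
                                    defect_densities_per_cm2))
    return math.exp(-lam)

# Hypothetical critical areas (cm^2) and defect densities (defects/cm^2)
# for, say, metal shorts, metal opens, and via failures:
areas = [0.12, 0.08, 0.05]
densities = [0.25, 0.10, 0.40]
print(f"Predicted random-defect yield: {caa_yield(areas, densities):.1%}")
```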

Intensive number-crunching will be needed for design-technology co-optimization, and one example that has already been demonstrated is electrically-driven optical proximity correction (OPC). Instead of tuning a mask to best reproduce the drawn geometric shapes, the mask is tuned to produce shapes with optimal electrical performance.
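
The shift in objective is easiest to see as a change of cost function. Below is a toy contrast, assuming a hypothetical linear model that maps gate-CD error to drive-current error; the sensitivity number is invented purely for illustration.

```python
# Toy contrast between geometric and electrically-driven OPC costs;
# the linear "electrical model" is a hypothetical stand-in for a
# real current/delay model.

def geometric_cost(printed_cd_nm, target_cd_nm):
    """Classic OPC: penalize the geometric CD error itself."""
    return (printed_cd_nm - target_cd_nm) ** 2

def electrical_cost(printed_cd_nm, target_cd_nm,
                    sensitivity_pct_per_nm=1.5):
    """Electrically-driven OPC: penalize the predicted electrical
    deviation (here, a hypothetical drive-current error assumed to
    scale linearly with gate-CD error)."""
    current_error_pct = sensitivity_pct_per_nm * (printed_cd_nm - target_cd_nm)
    return current_error_pct ** 2

# The same 2nm CD error carries a different weight once it is
# translated into an electrical penalty:
print(geometric_cost(47.0, 45.0))    # 4.0 (nm^2)
print(electrical_cost(47.0, 45.0))   # 9.0 (%^2 of drive current)
```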

IBM showed that using highly restricted design rules (RDRs)—specifically pdBrix layouts created by software now owned by PDF Solutions—at 65nm produced a PowerPC405 core with the same area and performance but fewer hotspots and less variability.

“DFM has sort of just barely scratched the surface,” said Liebmann. “We’ve just reached our teenage years and the best years are still ahead.”

In his keynote address, Intel Fellow Clair Webb explained that his group does a great deal of simulation and modeling of design rules, so that the first test-chip confirms and calibrates rules that are then not supposed to change. Intel ramps processes to very high volumes very quickly, so the process must be very robust, and a very fast yield learning rate is essential.

What is really meant by co-optimization? For Intel, the many factors to be considered include the characteristics of litho tools, resists and illumination sources, tape-out technology, mask processing, device performance targets and architectures (incl. variability requirements), and ultimately even the product targets (incl. power/density, cell layout, time to market, CAD tools available, etc.).

Design rules for pitch start with a 1D target set by first-order density goals. “All the fun comes with the 2D targets,” quipped Webb. Starting with learning from the previous process, Intel then extrapolates 2D models for OPC, illumination techniques, reticle enhancement techniques (RET), and photoresist for critical parameters (e.g., depth of focus [DOF], mask-error enhancement factor [MEEF], etc.). All of this leads to an OPC/litho test-chip to quantify the models for things like new off-axis illumination techniques. “The test-chip is the outcome; it is not part of the modeling process,” explained Webb. By the time the first design hits the fab, 80% of the design-rules should already be set. “We may take learning and feed it forward into the next process, but we’re not going to change the rules at ramp,” explained Webb. “If I have to do design-rule changes at ramp then I’ve made a mistake.”

For example, modeling the impact of line-length variations on MEEF and CD with different illumination sources showed that line-lengths between 0.2 and 0.3 µm created problems. The real-world DFM trade-off involves checking back with the designers to determine whether they really need lines of this length, and searching for another interdependent parameter that can be constrained with a rule to eliminate the MEEF problem.
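
For reference, MEEF itself is a simple ratio; the sketch below shows the standard definition, assuming a 4x reduction scanner, with hypothetical CD numbers.

```python
# Minimal sketch of the mask-error enhancement factor (MEEF):
# the wafer CD change divided by the mask CD change scaled to
# wafer dimensions. The CD numbers below are hypothetical.

def meef(delta_cd_wafer_nm, delta_cd_mask_nm, reduction=4.0):
    """MEEF = 1 means mask errors transfer linearly to the wafer;
    MEEF >> 1 means small mask errors are strongly amplified."""
    return delta_cd_wafer_nm / (delta_cd_mask_nm / reduction)

# Hypothetical: a 4nm mask CD error (1nm at wafer scale) prints as a
# 2.5nm wafer CD error, i.e., MEEF = 2.5
print(meef(delta_cd_wafer_nm=2.5, delta_cd_mask_nm=4.0))
```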

Intel’s Webb explained that 90nm was the first node at which the design rules started to change dramatically, with a 47% increase in the number of rules for poly. At 65nm there was a 65% increase in poly rules, primarily to enable phase-shift masks (PSM) and to handle proximity effects, though poly pitch and width were still variable and routing was still bidirectional. By 45nm, Intel had gone to the extreme constraint of gridded layout rules (GLR), and the total number of design rules went down 37% compared to 65nm. “It’s hard to measure the results of any one particular rule, since it would take thousands of wafers,” explained Webb.
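
To make “gridded layout rules” concrete, here is a toy sketch of the kind of check such a constraint implies: every drawn edge must land on a fixed grid. The grid pitch and shapes are invented for illustration and are not Intel’s actual rules.

```python
# Toy gridded-layout-rule (GLR) check: flag any rectangle whose
# edge coordinates do not snap to the drawing grid. All values
# are hypothetical.

def off_grid(rectangles_nm, grid_nm):
    """Return the rectangles with any edge off the grid."""
    return [rect for rect in rectangles_nm          # rect = (x1, y1, x2, y2)
            if any(coord % grid_nm != 0 for coord in rect)]

shapes = [(0, 0, 160, 40), (200, 0, 360, 40), (410, 0, 570, 40)]
print(off_grid(shapes, grid_nm=40))  # only (410, 0, 570, 40) is off-grid
```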

In regards to the trade-offs between design and process, “As a foundry, we say the customer wins a lot…which means the design wins,” said Dr. Fu-Chieh Hsu, vice president of Design Technology Platform, TSMC. “We’re always challenged by designs pushing the limits of design rules.” Since the foundry supports legacy processes, as well as half- and quarter-nodes for customers, TSMC sees a continuum of process technologies instead of discrete jumps between nodes. TSMC therefore sees broad general trends in process-design trade-offs. TSMC DFM solutions start with a DFM design-kit, and include certified DFM-compliant EDA tools and 3rd-party IP, all of which have been used on over 113 tape-outs based on ~1000 validated IP blocks.

In a lively evening panel discussion moderated by Mark Mason (TI) and Juan Antonio Carballo (Argon Venture Partners), the prevailing sentiment seemed to be that of hope over hype. Joe Sawicki (Mentor Graphics) and Srinivas Raghvendra (Synopsys) provided perspective on the business constraints of commercial EDA vendors, while the playfully soft-spoken Riko Radojcic (Qualcomm CDMA) expressed the perspective of the designer.

Regarding the challenges faced in attempting to model manufacturing variability and then feed that information back to designers in some way, Radojcic opined, “The two communities speak entirely different languages. If a manufacturing guy says this is the variability that you have, what does the designer do with it?” Radojcic advocated for a simulation environment which could be used to explore DFM options in virtual space and time, instead of waiting for expensive “spins” in silicon.

Radojcic said that the main limitation to the use of new DFM tools is quantifying benefits: “Trade-offs in area, variability, yield, and cost at the whole-chip level are nightmares, so we all just shrug our shoulders and keep doing what we did before.” Raghvendra replied, “We’re making progress towards solutions that are holistic, where you can look at the whole picture, and it’s not like you look at timing and lose power.”

Extensive DFM will continue to be needed for ICs made using less than quarter-wavelength lithography: 45nm and below for 193nm litho tools. The trouble really started at half-wavelength (~90nm), and unless EUV (~13.5nm) becomes an option to get patterning back to super-wavelength litho, the world will need more and more DFM going forward. IBM’s Liebmann said that high-index 193nm immersion won’t be ready for the 22nm node, and so litho will be addressed 100% computationally using ultra-regular layouts, extreme RET (incl. source-mask optimization, ‘SMO’), and virtual fab-ing using predictive modeling.
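
The “half-” and “quarter-wavelength” labels come straight from the arithmetic for a 193nm ArF scanner:

```python
# Feature sizes as fractions of the 193nm exposure wavelength:
wavelength_nm = 193.0
print(wavelength_nm / 2)  # 96.5nm  -> roughly the 90nm node
print(wavelength_nm / 4)  # 48.25nm -> roughly the 45nm node
```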

To that end, the recently formed DFM Consortium (DFMC), whose founding members include Cadence, Freescale, IBM, Samsung, ST, and TI, announced new members including Infineon, Intel, Mentor Graphics, and UMC. Let us hope that the DFMC now has sufficient leverage to develop new standard models and metrics that allow innovation to be quantified and rapidly implemented into design flows.

—E.K.




Ed's Threads is the weekly web-log of SST Sr. Technical Editor Ed Korczynski's musings on the topics of semiconductor manufacturing technology and business. Ed received a degree in materials science and engineering from MIT in 1984, and after process development and integration work in fabs, he held applications, marketing, and business development roles at OEMs. Ed won editorial awards from ASBPE, including interviews with Gordon Moore and Jim Morgan, and is not lacking for opinions.