The RV870 Story: AMD Showing up to the Fight
by Anand Lal Shimpi on February 14, 2010 12:00 AM EST - Posted in GPUs
The Payoff: How RV740 Saved Cypress
For its first 40nm GPU, ATI chose the biggest die that made sense in its roadmap. That was the RV740 (Radeon HD 4770):
The first to 40nm - The ATI Radeon HD 4770, April 2009
NVIDIA, however, picked smaller dies. While the RV740 was a 137mm² GPU, NVIDIA's first 40nm parts were the G210 and GT220, which measured 57mm² and 100mm² respectively. The G210 and GT220 were OEM-only for the first months of their lives, and I'm guessing the G210 made up a good percentage of those orders. It wasn't until the release of the GeForce GT 240 that NVIDIA shipped a 40nm die comparable in size to the RV740. The GT 240 came out in November 2009, while the Radeon HD 4770 (RV740) debuted in April 2009 - seven months earlier.
NVIDIA's first 40nm GPUs shipped in July 2009
When it came time for both ATI and NVIDIA to move their high performance GPUs to 40nm, ATI had more experience with, and exposure to, the problems of building big dies on TSMC's process.
David Wang, ATI's VP of Graphics Engineering at the time, had concerns about TSMC's 40nm process that he voiced to Carrell early on in the RV740 design process. David was worried that the metal handling in the fabrication process might lead to via quality issues. Vias are tiny connections between the different metal layers on a chip, and the thinking was that the via failure rate at 40nm was high enough to hurt yields. Even a via that didn't fail outright could degrade the signal passing through it.
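To get a feel for why via quality matters so much, here's a back-of-the-envelope sketch in Python. The via count and per-via failure probabilities are made-up illustrative numbers, not figures from ATI or TSMC; the point is simply that a vanishingly small per-via failure rate compounds across the hundreds of millions of vias on a big GPU.

```python
# Illustrative via-limited yield model: a die is only good if every via works.
# All numbers are assumptions for illustration, not real process data.

def die_yield(per_via_failure_prob: float, via_count: int) -> float:
    """Probability that every via on a die is functional."""
    return (1.0 - per_via_failure_prob) ** via_count

VIAS_ON_DIE = 500_000_000  # assumed via count for a large GPU

for p_fail in (1e-10, 1e-9, 1e-8):  # assumed per-via failure probabilities
    print(f"p_fail = {p_fail:.0e} -> via-limited yield = {die_yield(p_fail, VIAS_ON_DIE):.1%}")
```

A couple of orders of magnitude in the per-via failure rate is the difference between roughly 95% and under 1% via-limited yield in this toy model, which is why a foundry's via defect rate is worth arguing about.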
The second cause for concern with TSMC's 40nm process was variation in transistor dimensions. There are thousands of dimensions in semiconductor design that you have to worry about, and as with any sort of manufacturing there's variance in many if not all of them from chip to chip. David was particularly worried about manufacturing variation in transistor channel length; he feared the tolerances ATI was given might not be met.
A standard CMOS transistor. Its dimensions are usually known to fairly tight tolerances.
TSMC led ATI to believe that the variation in channel length would be relatively small. Carrell and crew were nervous, but there was nothing that could be done.
The problem with vias was easy (but costly) to get around. David Wang decided to double up on vias in the RV740: at any point in the design where a single via connected two metal layers, the RV740 called for two; the sketch below shows why that redundancy is so effective. It made the chip bigger, but that was better than having chips that wouldn't work. The issue of channel length variation, however, had no immediate solution; it remained a worry, though perhaps an irrational fear.
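Continuing the same toy model (again with assumed, illustrative numbers), doubling vias pays off because a doubled connection only fails if both vias fail; assuming independent failures, the per-connection failure probability drops from p to roughly p².

```python
# Toy comparison of single vs. doubled vias; all numbers are illustrative assumptions.

def via_limited_yield(per_connection_failure_prob: float, connection_count: int) -> float:
    return (1.0 - per_connection_failure_prob) ** connection_count

CONNECTIONS = 500_000_000   # assumed number of inter-layer connections
p_single = 1e-9             # assumed failure probability of a single via
p_doubled = p_single ** 2   # both redundant vias must fail (assuming independence)

print(f"single vias : {via_limited_yield(p_single, CONNECTIONS):.1%}")
print(f"doubled vias: {via_limited_yield(p_doubled, CONNECTIONS):.4%}")
```

The cost, as noted above, is the extra area every second via consumes; the benefit is that via failures essentially disappear as a yield limiter.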
TSMC went off to fab the initial RV740s. When the chips came back, they ran hotter and leaked more current than ATI expected.
Engineering went to work, tearing the chips apart and examining them one by one. It didn't take long to figure out that transistor channel length varied much more than the initial tolerance specs allowed. With that much channel length variance, some parts ran slower than expected while others leaked far more current.
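A deliberately simplified Monte Carlo sketch shows how this plays out; the distribution, the leakage slope, and the delay model below are illustrative assumptions, not real 40nm device data or anything ATI measured.

```python
# Toy Monte Carlo of channel-length variation (illustrative model only).
import math
import random

random.seed(0)
L_NOMINAL = 40.0   # nm, nominal channel length (assumed)
SIGMA = 2.0        # nm, assumed standard deviation of channel length

def relative_leakage(length_nm: float) -> float:
    # Leakage rises roughly exponentially as the channel gets shorter
    # (short-channel effects); the 0.3/nm slope is an arbitrary illustration.
    return math.exp(0.3 * (L_NOMINAL - length_nm))

def relative_delay(length_nm: float) -> float:
    # Drive current falls, and gate delay grows, roughly with channel length.
    return length_nm / L_NOMINAL

samples = [random.gauss(L_NOMINAL, SIGMA) for _ in range(10_000)]
leakage = sorted(relative_leakage(length) for length in samples)
delay = sorted(relative_delay(length) for length in samples)

print(f"leakage: median {leakage[5000]:.2f}x, 99th percentile {leakage[9900]:.2f}x")
print(f"delay:   median {delay[5000]:.2f}x, 99th percentile {delay[9900]:.2f}x")
```

Tighten the assumed sigma and both tails shrink; widen it and you get exactly the split described above, with the short-channel dice leaking heavily while the long-channel dice miss their clock targets.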
Engineering eventually figured out a way to fix most of the leakage problem through changes to the RV740 design. Performance was still an issue, and the RV740 was largely lost as a product because of how long it took to fix everything. But it served a much larger role within ATI: it was the pipe-cleaner product that paved the way for Cypress and the rest of the Evergreen line.
As for how all of this applies to NVIDIA, it's impossible to say for sure. But the rumors all seem to suggest that NVIDIA simply didn't have the 40nm experience that ATI did. Last December NVIDIA spoke out against TSMC and called for nearly zero via defects.
The rumors surrounding Fermi also point to the same problems ATI encountered with the RV740: low yields, chips running hotter than expected, and clock speeds below their original targets. Granted, we haven't seen any GF100s ship yet, so we don't know any of this for sure.
When I asked NVIDIA why Fermi/GF100 was so late, it pointed to parts of the architecture, not manufacturing. Of course, I was talking to an architect at the time. If Fermi/GF100 was indeed NVIDIA's learning experience on TSMC's 40nm process, I'd expect its successor to go much more smoothly.
It’s not that TSMC doesn’t know how to run a foundry, but perhaps the company made a bigger jump than it should have with the move to 40nm:
Process        | 150nm | 130nm | 110nm | 90nm  | 80nm  | 65nm  | 55nm  | 40nm
Linear Scaling |   -   | 0.866 | 0.846 | 0.818 | 0.888 | 0.812 | 0.846 | 0.727
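Squaring the linear scaling factors gives the approximate area shrink each step delivered, and it makes the size of the 40nm jump obvious: roughly 47% smaller in one step, versus the 20-35% steps that came before. A quick sketch of that arithmetic:

```python
# Convert TSMC's quoted linear scaling factors into approximate area shrinks.
steps = {
    "150nm -> 130nm": 0.866,
    "130nm -> 110nm": 0.846,
    "110nm -> 90nm":  0.818,
    "90nm  -> 80nm":  0.888,
    "80nm  -> 65nm":  0.812,
    "65nm  -> 55nm":  0.846,
    "55nm  -> 40nm":  0.727,
}

for step, linear in steps.items():
    area = linear ** 2
    print(f"{step}: linear {linear:.3f} -> area {area:.3f} ({1 - area:.0%} smaller)")
```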
You'll remember that during the Cypress discussion, Carrell was convinced that TSMC's 40nm process wouldn't be as cheap as TSMC was positioning it to be. Yet very few others, whether at ATI or NVIDIA, seemed to believe the same. I asked Carrell why that was, why he was able to see what so many others didn't.
Carrell chalked it up to experience and recounted a bunch of stuff that I can’t publish here. Needless to say, he was more skeptical of TSMC’s ability to deliver what it was promising at 40nm. And it never hurts to have a pragmatic skeptic on board.
132 Comments
Spoelie - Thursday, February 18, 2010 - link
phoronix.com for all things ATi + Linux
SeanHollister - Monday, February 15, 2010 - link
Fantastic work, Anand. It's so difficult to make pieces like this work without coming across as puffery, but everything here feels genuine and evenhanded. Here's hoping for similar articles featuring individuals at NVIDIA, Intel and beyond in the not-too-distant future.
boslink - Monday, February 15, 2010 - link
Just like many others I've been reading/visiting anandtech for years, but this article made me register just to say damn good job. Also, it's been a long time since I read an article cover to cover. Usually I read the first page and maybe the second (enough to guess what's in the other pages) and then skip to the conclusions.
But this article reminds us that a graphics card/chip is not only silicon. The real people story is what makes this article great.
Thanks Anand
AmdInside - Monday, February 15, 2010 - link
Great article as usual. Sunspot seems like the biggest non-factor in the 5x00 series. Except for hardware review sites, which have lots of monitors lying around, I just don't see a need for it. It's like NVIDIA's 3D Vision: the concept sounds good, but in practice it's not very realistic that a user will use it. Just another checkbox a company can point to and tell an OEM "we have it and they don't." NVIDIA has had Eyefinity-like functionality for a while (SLI Mosaic); it's just very expensive, since it's targeted at businesses rather than consumers, and it offers some features Eyefinity doesn't.
I think NVIDIA just didn't believe consumers really wanted it, but added it afterwards so that ATI wouldn't have a checkbox to brag about. NVIDIA probably still believes this is mainly a business feature.
It is always interesting to learn how businesses make product decisions internally. I always hate reading interviews of PR people - I learn zero. Talk to engineers if you really want to learn something.
BelardA - Tuesday, February 16, 2010 - link
I think the point of Eyefinity is that it's more hardware-based and natural, not requiring so much work from the game publisher: a way of getting higher screen detail across a span of monitors. A few games will actually span 2 or 3 monitors, or use the 2nd display as a control panel. With Eyefinity, the card tells the game "I have #### x #### pixels" and automatically divides the output across 3 or 6 screens while staying playable. That is quite cool.
But as you say, it's a bit of a non-factor. Most users will still only have one display to work with. Hmmm, there was a monitor that was almost seamless, three monitors built together... where is that?
Also, I think the TOP-SECRET aspect of Sun-Spots was a way of testing security. Eyefinity isn't a major thing... but the hiding of it was.
While employees do move about in the business, the sharing of trade-secrets could still get them in trouble - if caught. It does happen, but how much?
gomakeit - Monday, February 15, 2010 - link
I love these insightful articles! This is why Anandtech is one of my favorite tech sites ever!
Smell This - Monday, February 15, 2010 - link
Probably could have done without the snide reference to the CPU division at the end of the article - it added nothing and detracted from the overall piece. It also implies a symbiotic relationship between AMD's 40+ year battle with Chipzilla and the GPU wars with nV. Not really an accurate correlation; the CPU division has its own headaches.
It is appropriate to note, however, that both divisions must bring their 'A' game to the table with the upcoming on-die convergence of CPU and GPU.
mrwilton - Monday, February 15, 2010 - link
Thank you, Anand, for this great and fun-to-read article. It really has been some time since I have read an article cover to cover. Keep up the excellent work.
Best wishes, wt
Ananke - Monday, February 15, 2010 - link
I have a 5850, and it is a great card. However, what people say about PC gaming is true: gaming on the PC is slowly fading towards consoles. You cannot justify a several-thousand-dollar PC versus a $200-300 multimedia console. So powerful a GPU is a supercomputer by itself. Please ATI, make a better Avivo transcoder and push open software development using Stream further. We need many applications, not just Photoshop and Cyberlink. We need hundreds, and many free, to utilize this computational power. Then it will make sense to use these cards.
erple2 - Tuesday, February 16, 2010 - link
Perhaps. However, this "PC gaming is being killed off by the $200-300 multimedia console" war has been going on since the PlayStation 1 came out, and PC gaming is still doing very well. I think there will always be some sort of market for PC gaming (even if only 10%, that's significant enough to make companies take notice). While I still have to use the PC for something, I'll continue to use it for gaming as well.
Reading the article, I find it poignant that the focus is on //execution// rather than //ideas//. It reminds me of a blog post written by Jeff Atwood (http://www.codinghorror.com/blog/2010/01/cultivate... if you're interested) about the exact same thing. Focus on what you //do//. Execution (i.e. "what do we have an 80%+ chance of getting done on time") is more important than the idea (i.e. features you can claim on a spec sheet).
As a hardware developer (the same goes for any software developer), your job is to release the product. That means following a schedule. That means focusing on what you can do, not on what you want to do. It sounds to me like ATI has been following that paradigm, which is why they seem to be doing so well these days.
What's particularly encouraging about the story written was that Management had the foresight to actually listen to the technical side when coming up with the schedules and requirements. That, in and of itself, is something that a significant number of companies just don't do well.
It's nice to hear from the internal wing of the company from time to time, and not just the glossy presentation of hardware releases.
I for one thoroughly enjoyed the read. I liked the perspective that the RV5-- err Evergreen gave on the process of developing hardware. What works, and what doesn't.
Great article. Goes down in my book with the SSD and RV770 articles as some of the best IT reads I've done.