38 Comments
brucethemoose - Friday, August 25, 2023 - link
I really like that MCD-GCD split.
AMD can fab those small memory controller/cache chips cheaply, on a density-optimized process. The cache chips don't care about being split up, as memory access is interleaved anyway, hence there is no communication between the memory dies.
And the compute die can focus solely on compute, without wasting tons of die area on pins for the GDDR interface or the last-level cache.
The scheme saves tape-out costs, as the memory die is reused across the product stack and the compute die can be relatively small.
But I wonder why AMD didn't make a 512-bit GPU with a 600mm^2+ compute die. They could have killed the 4090 with this architecture, and they could sell a 64GB version as a cheap server inference card... But they chose not to. Reply
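To illustrate the interleaving point above: if consecutive stripes of the physical address space are assigned round-robin to the memory controller dies, each MCD only ever serves its own stripes and never needs data held by a neighbor. A minimal sketch, assuming a 256-byte stripe and six MCDs (illustrative values, not AMD-published parameters):

```python
# Toy model of address interleaving across memory controller dies (MCDs).
# The stripe size (256 bytes) and MCD count (6) are assumptions for
# illustration, not published AMD parameters.

NUM_MCDS = 6
STRIPE_BYTES = 256

def mcd_for_address(phys_addr: int) -> int:
    """Return which MCD services a given physical address."""
    return (phys_addr // STRIPE_BYTES) % NUM_MCDS

# A linear walk through memory touches every MCD in turn, so bandwidth
# aggregates across dies.
if __name__ == "__main__":
    for addr in range(0, 8 * STRIPE_BYTES, STRIPE_BYTES):
        print(f"address 0x{addr:06x} -> MCD {mcd_for_address(addr)}")
```

In a scheme like this, a given cache line only ever lives behind one MCD, which is why the memory dies would have no need to talk to each other.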
meacupla - Friday, August 25, 2023 - link
They are probably focusing their efforts on RDNA 4 and Navi 4C. Reply
brucethemoose - Friday, August 25, 2023 - link
It probably wouldn't stress AMD's engineers too much? They would tape out a new compute die and test the package, but the memory dies and the basic architecture are already worked out.
This is all hypothetical though, as it's indeed too late to start a new RDNA3 product. Reply
Zoolook - Saturday, August 26, 2023 - link
There is a relatively long lag between when the configuration decisions are made and when the product is released, with most parameters unknown to us: process availability, etc.
Most likely, if they had the knowledge available today, they would have made different choices back then. Sure, there are minor configuration choices made after the chips are ready, like frequencies, memory speeds, etc., but those are minor adjustments within the possibilities given by the hardware.
It's very complex guesswork. Reply
Bruzzone - Friday, August 25, 2023 - link
Bruce, what do you think the memory latency is, going on and off die, back and forth between the GCD and MCD? The SiP interconnect has to go through SerDes, is that not correct? mb Reply
Bruzzone - Friday, August 25, 2023 - link
I also wonder if there are third-party memory suppliers, standard products, qualified for the memory components? Price-competitive supply. mb Reply
mode_13h - Saturday, August 26, 2023 - link
> what do you think the memory latency is, going on and off die, back and forth between the GCD and MCD?
Maybe AMD mentioned this in the original RDNA3 architecture slide deck, or at Hot Chips? I know they provided bandwidth numbers.
For me, the big question is how the cache on the MCDs functions. I wonder if it's tied to the address range of the DRAM attached to the MCD, or if it's in a global pool? It matters a lot, especially with RDNA3 having less L3 cache than RDNA2 had. If it's being subdivided by address range, then the effect is that you have even less. Reply
brucethemoose - Saturday, August 26, 2023 - link
Subdividing them by address range seems very inefficient. The memory chips and controllers are already interleaved, so the cache might as well be interleaved too, right? Reply
mode_13h - Saturday, August 26, 2023 - link
Do you know, for a fact, that these GPUs interleave memory accesses? I wouldn't assume so.
And, what I meant by "subdividing by address range" was essentially having the cache front whatever DRAM is attached to the MCD. I wasn't suggesting there'd be a partitioning scheme different than the memory. Either it's partitioned according to how the attached memory is partitioned, or it's global. Those are the only two realistic possibilities. I just wonder which. Reply
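A minimal sketch of the two possibilities described above, assuming six cache slices of 16 MB each and the same hypothetical stripe interleaving as before (none of these values are confirmed RDNA3 details): if each slice only fronts the DRAM behind its own MCD, a skewed access pattern can reach only a fraction of the total capacity, whereas a global pool always exposes all of it.

```python
# Sketch of the two cache organizations discussed above:
# (a) each MCD's L3 slice fronts only the DRAM attached to that MCD, or
# (b) the slices behave as one global pool. Sizes and mapping are assumed
# for illustration (16 MB per slice, 6 slices), not confirmed RDNA3 details.

NUM_MCDS = 6
SLICE_MB = 16
STRIPE_BYTES = 256

def usable_cache_mb(addresses, partitioned: bool) -> int:
    """Upper bound on the cache capacity reachable by a set of accesses."""
    if not partitioned:
        return NUM_MCDS * SLICE_MB          # global pool: all slices reachable
    touched = {(a // STRIPE_BYTES) % NUM_MCDS for a in addresses}
    return len(touched) * SLICE_MB          # only slices fronting the DRAM we touch

# A working set that happens to land entirely in one MCD's address stripe:
hot_set = [i * NUM_MCDS * STRIPE_BYTES for i in range(1000)]
print(usable_cache_mb(hot_set, partitioned=True))   # 16 MB
print(usable_cache_mb(hot_set, partitioned=False))  # 96 MB
```

Either organization is plausible; a targeted cache-sweep benchmark like the one suggested further down the thread would be the way to tell them apart.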
brucethemoose - Saturday, August 26, 2023 - link
I have no clue, and I haven't seen it mentioned anywhere. It would be interesting to know... Maybe a chipsandcheese cache benchmark will coax it out? Reply
nandnandnand - Saturday, August 26, 2023 - link
What they are working towards is using multiple graphics chiplets (for consumer). That's when it would make more sense to make a gigantic GPU, since it would have lower costs than a monolithic design. Reply
mode_13h - Saturday, August 26, 2023 - link
Rumor is that they just cancelled their RDNA4 GPUs that had multiple shader dies. I didn't read too many details, but it seemed like they took the approach of stacking the shader dies on an MCD. Reply
nandnandnand - Saturday, August 26, 2023 - link
I'm aware of the rumor. If true, they wouldn't make "ultra high end" $1000+ GPUs that generation, waiting until a generation when they can use multiple GCDs. The cancellation might mean that RDNA5 production gets brought forward by a few months. Reply
meacupla - Friday, August 25, 2023 - link
The graph at the bottom ought to include the 4060 Ti 16GB at $499. Reply
akramargmail - Friday, August 25, 2023 - link
The 4060 and Ti have already fallen in price a lot. Reply
Kurosaki - Friday, August 25, 2023 - link
Weeee!
A $250 card costing $450, and a $300 card costing $500!
Feels like we're closing in on something. A couple of years ago, these mid-end, mediocre cards would have cost $699 and $849 respectively.
Maybe I'll just wait for this to settle, whenever the 8000-series shows up. Who knows, maybe a mid-end $250 card will actually sell for $250 by then! Reply
nandnandnand - Friday, August 25, 2023 - link
That $50 gap is going to widen. The 7800 XT will not move down much, but the 7700 XT will. This must be AMD saying "buy more 6700 XT plz". Reply
Kurosaki - Friday, August 25, 2023 - link
Just let the pile of 6700 XTs rot and get the 8080 for $300 on release day. If not, rinse and repeat. Reply
haukionkannel - Friday, August 25, 2023 - link
No they don't. It seems that GPU prices are going up again!
The new middle range will most likely go to $1000 quite soon. Just remember the Nvidia 5080 12GB version... They will do it again, by pushing the real 8080 to $2000, so an 8070 at $1000 will be a bargain!
;) Reply
Dante Verizon - Friday, August 25, 2023 - link
Your logic sucks.
6800 XT -> US$649 MSRP × inflation rate = US$700+
7800 XT -> US$499 MSRP ÷ inflation rate ≈ US$449
It's the cheapest GPU so far. Reply
scineram - Tuesday, September 5, 2023 - link
Nonsense! $250 is the low end of the low end. Reply
Threska - Friday, August 25, 2023 - link
RX 7800 sounds about right for a middle-of-the-road offering. Shame about the USB-C; some display tablets use that instead of HDMI. Reply
nandnandnand - Friday, August 25, 2023 - link
It's surprising that USB-C on GPUs hasn't caught on, but you can use an adapter, right? Reply
meacupla - Friday, August 25, 2023 - link
Club3D is the only maker that sells a bidirectional DP to USB-C cable. It's very expensive compared to USB-C to DP or HDMI, and it doesn't even pass through power. For a USB-C output that has power and DP, you would need a Wacom Link Plus dongle that is even more pricey. Reply
Kurosaki - Friday, August 25, 2023 - link
It's not middle of the road if it's $500. Minimum wage hasn't moved an inch in 20 years or so. Reply
nandnandnand - Saturday, August 26, 2023 - link
"Real wages" have moved slightly in 20 years, maybe by +10% at most, which is bad but not nothing. That adjusts for inflation. After you remove the impact of inflation on today's GPU prices, then you can determine how badly gamers are getting screwed by the "mid-range".For example, a $500 GPU in 2023 is the equivalent of $380 in 2013. The GTX 770 launched at $399 in 2013. You could go back further but the GPUs of 20 years ago were vastly inferior, and not being used by consumers for things like machine learning.
https://www.pewresearch.org/short-reads/2018/08/07...
https://www.bls.gov/news.release/realer.htm Reply
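As a rough check of the arithmetic above, the adjustment is just a ratio of CPI index levels; the values used below (roughly 233 for 2013 and 305 for mid-2023) are approximate CPI-U figures, so treat the outputs as ballpark only.

```python
# Rough check of the inflation arithmetic above. CPI-U index levels are
# approximate (2013 annual average ~233, mid-2023 ~305); results are
# ballpark figures, not exact deflated prices.

CPI = {2013: 233.0, 2023: 305.0}  # assumed approximate CPI-U index levels

def adjust(price: float, from_year: int, to_year: int) -> float:
    """Express a price from `from_year` in `to_year` dollars."""
    return price * CPI[to_year] / CPI[from_year]

print(round(adjust(500, 2023, 2013)))  # ~382: a $500 card today, in 2013 dollars
print(round(adjust(399, 2013, 2023)))  # ~522: the GTX 770's $399 MSRP, today
```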
Fujikoma - Saturday, September 2, 2023 - link
With wages falling far short of inflation, normal bills become a bigger percentage of expenses and discretionary income decreases. This is especially true for those on the bottom half of the income distribution, who didn't have much extra to begin with. I own a 7900XTX and I don't game. I build computers for family, and $500 is not middle of the road for them. Even my gaming friends wouldn't spend $500 on a video card because it's too much money. Most of them stick with consoles instead. Reply
scineram - Tuesday, September 5, 2023 - link
If not gaming, why so high end? Reply
scineram - Tuesday, September 5, 2023 - link
There are 2.5 Ada cards under $500, and 4 above. Just delusional. Reply
ravyne - Friday, August 25, 2023 - link
Don't we know Navi32 is a fully-enabled 3-shader-engine design, just with 20 CUs/SE instead of the 16 in Navi31? That explains why the ROPs are halved vs. N31. The 4th MCD is a bit odd, but the 7900 XT shows that MCDs are decoupled from SEs, and the extra bandwidth would be needed to balance the higher proportion of CUs.
Those config changes are sensible if you're building a 1440p card. You don't really need the ROPs, but you get enough CUs for uncompromised graphics settings. Reply
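A quick back-of-the-envelope on that balance argument, using the commonly reported configurations as assumptions (Navi 31: 96 CUs, 6 shader engines, 384-bit bus; Navi 32: 60 CUs, 3 shader engines, 256-bit bus): bus width per CU actually rises slightly on Navi 32, which fits the idea that the fourth MCD is there to feed the denser shader engines.

```python
# Back-of-the-envelope on the Navi 31 vs Navi 32 balance discussed above.
# Configuration numbers are the commonly reported ones, assumed here rather
# than taken from this article.

configs = {
    "Navi 31": {"cus": 96, "shader_engines": 6, "bus_bits": 384},
    "Navi 32": {"cus": 60, "shader_engines": 3, "bus_bits": 256},
}

for name, c in configs.items():
    cus_per_se = c["cus"] / c["shader_engines"]
    bus_per_cu = c["bus_bits"] / c["cus"]
    print(f"{name}: {cus_per_se:.0f} CUs per shader engine, "
          f"{bus_per_cu:.2f} bus bits per CU")
# Navi 31: 16 CUs per SE, 4.00 bus bits per CU
# Navi 32: 20 CUs per SE, 4.27 bus bits per CU
```

Treat this purely as a sanity check on the ratios; real bandwidth per CU also depends on memory speed, which differs between the SKUs.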
mode_13h - Saturday, August 26, 2023 - link
Too few shaders, relative to the RX 6000 counterparts. Sure, the TFLOPS look good, but that didn't translate well into real-world performance for the RX 7900 cards, so I don't see why it'd be any different for these.
The reduction in L3 cache is also a bit worrisome, especially if it's partitioned as I mentioned above.
I was interested in the RX 7800 XT, but the launch pricing feels a bit high for the expected performance. Unless they've fixed some significant bugs or bottlenecks in the first round of RDNA3 cards, I'm expecting to see it struggle to beat the RX 6800 XT. Reply
meacupla - Saturday, August 26, 2023 - link
meacupla - Saturday, August 26, 2023 - link
RDNA3 is vastly superior to the RDNA2 in the 6800XT.
The 7800XT should have been called the 7800.
From a simulated 7800XT, it looks like it will be some 10~20% faster across the board. Reply
mode_13h - Saturday, August 26, 2023 - link
According to whose simulation? I'll wait until I see benchmarks, but I'm not hopeful. I do think it'd have to beat the 6800 XT, on average, for them to dare call it the 7800 XT. Reply
haukionkannel - Sunday, August 27, 2023 - link
The 6800XT will be faster in rasterization... The 7800XT will be faster in ray tracing, and most likely, when using FSR 3.0 on both GPUs, the 7800XT's bigger computational power will give it an edge.
But that is not the point. If you have a 6800XT... Don't upgrade!
These are not positioned against AMD's old GPUs; they are made to compete against Nvidia's GPUs.
The 7700XT costs $450, aka less than the 4060 Ti, and is faster (easy, because the 4060 Ti is so bad... but it is what it is). The 7800XT competes against the 4070 and is $100 cheaper, while being a tiny bit faster in rasterization and weaker in ray tracing. Both work well in the competitive situation.
I also think that AMD has many more fully working chips for the 7800XT than it has defective chips for the 7700XT. When AMD has more defective chips, they can reduce the price of the 7700XT... if Nvidia reduces the price of the 4060 Ti... Reply
nandnandnand - Monday, August 28, 2023 - link
It's obvious that nobody with a 6800 XT should be upgrading to the 7800 XT. It's more for people who have been hanging onto old cards and want to finally get a 16 GB card.
However, I doubt rasterization will be worse on average. I think the 7800 XT will be something like 5-15% faster than the 6800 XT. The faster memory helps. Reply
TheinsanegamerN - Monday, August 28, 2023 - link
"vastly superior" LMFAORX 7600 showed us that nope, rDNA3 is rDNA2 with double the shaders (that do nothing). Core for core rDNA3 is barely moving the needle. The improvements are coming, at most, from TSMC 5nm allowing larger dies and the decoupling of memory controllers. Reply
mrdalesen - Thursday, August 31, 2023 - link
I bought a 3070 Ti during the mining craze. Do I regret it now, with my ultrawide and 8 gigs of VRAM? YES. Hoping I can sell it and get one of these instead. Reply
Kevinlangford - Friday, September 8, 2023 - link
A mere purpose for the ones who don't know the ins and outs and just want to have what's new. Reply