Samsung Announces 'Shinebolt' HBM3E Memory: HBM Hits 36GB Stacks at 9.8 Gbps Ryan Smith

Samsung’s annual Memory Tech Day is taking place in San Jose this morning, and as part of the event, the company is making a couple of notable memory technology announcements/disclosures. The highlight of Samsung’s event is the introduction of Shinebolt, Samsung’s HBM3E memory that will set new marks for both memory bandwidth and memory capacity for high-end processors. The company is also disclosing a bit more on their GDDR7 memory, which will mark a significant technological update to the GDDR family of memory standards.

https://www.anandtech.com/show/21104/samsung-announces-shinebolt-hbm3e-memory-hbm-hits-36gb-stacks-at-98-gbps Fri, 20 Oct 2023 14:00:00 EDT
Zotac's Zbox Pico PI430AJ Uses Frore's AirJet Solid-State Active Cooling Anton Shilov

Zotac has introduced the industry's first compact PC featuring Frore's AirJet solid-state cooling system. Zotac's ultra-compact Zbox Pico PI430AJ is powered by an Intel Core i3 processor and is designed primarily for everyday home and office computing, as well as applications like digital signage.

As far as specifications are concerned, Zotac's Zbox Pico PI430AJ is a fairly sophisticated machine featuring Intel's eight-core Core i3-N300 CPU, 8 GB of LPDDR5 memory, and an M.2 SSD. For connectivity the PC offers a Wi-Fi 6 + Bluetooth 5.2 adapter, a single GbE port, a USB Type-C port, two USB 3.2 Type-A connectors, and two display outputs (DisplayPort and HDMI).

The Zbox Pico PI430AJ has two major selling points: it is as small as modern smartphones, and it is passively cooled using Frore's AirJet solid-state cooling module, or chip. AirJet's module dissipates heat from electronic components by propelling ultrasonic waves of air across fin-like structures on each chip. This mechanism effectively directs a cool flow of air across the chip's surface area, moving heat away from the silicon components while not collecting dust. Compared to conventional fan cooling methods, AirJet stands out by offering equivalent heat dissipation with enhanced power efficiency and quieter operation. Specifically, each chip can remove 5W of heat, with the capacity to scale up; for instance, two chips can expel 10W.

Zotac's Zbox Pico PI430AJ seems to use two AirJet modules, so they can dissipate up to 10W of power, which should be more or less enough for Intel's Core i3-N300.
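As a back-of-the-envelope check of that claim, the arithmetic looks like this (a minimal sketch; the ~7 W figure assumed for the Core i3-N300's base power is our own estimate, not something Zotac quotes here):

```python
# Rough thermal-budget check for the Zbox Pico PI430AJ, using the article's
# figure of ~5 W of heat removed per AirJet module. The ~7 W base TDP assumed
# for the Core i3-N300 is our own estimate, not a Zotac figure.
airjet_module_watts = 5.0
module_count = 2

cooling_capacity_w = airjet_module_watts * module_count   # 10 W total
assumed_cpu_base_tdp_w = 7.0                               # hypothetical nominal package power

headroom_w = cooling_capacity_w - assumed_cpu_base_tdp_w
print(f"Cooling capacity: {cooling_capacity_w:.0f} W")
print(f"Headroom over assumed CPU base power: {headroom_w:.0f} W")
```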

Zotac claims that its Zbox Pico PI430AJ Mini PC is now available for purchase in both Windows and barebones versions in select regions (primarily APAC and EMEA, from the looks of things), but it is not disclosing its recommended pricing.

While Frore's AirJet solid-state active cooling makes a lot of sense for the Zbox Pico PI430AJ, as it enables the system to run faster for longer periods, it should be noted that for Zotac this mini PC is a way to try out the technology in a mass-produced product. That said, if AirJet meets Zotac's expectations for performance, reliability, manufacturability, and costs, expect the company to use it for other PCs as well.

https://www.anandtech.com/show/21103/zotacs-zbox-pico-pi430aj-uses-frores-airjet-solidstate-active-cooling Fri, 20 Oct 2023 11:00:00 EDT
TSMC Q3 Earnings: 3nm Production Node Accounts for 6% of Revenue Anton Shilov

Although Taiwan Semiconductor Manufacturing Co. formally started production of chips using its N3 (3nm-class) process technology back in late 2022, the company did not recognize any meaningful N3 revenue in Q1 and Q2. This week as part of the company's Q3 earnings announcement, the foundry finally recognized its first N3-related revenue, with N3 accounting for 6% of TSMC's Q3 revenue. Meanwhile, advanced nodes now account for 59% of TSMC's revenue.

For their first quarter of significant 3nm revenue, TSMC booked roughly $1.03 billion in revenue for the new node. To put the number into generational context, TSMC recognized its first N5 revenue in Q3 2020, when the then-new node brought in $0.97 billion from 5nm-class chips – or about 8% of TSMC's revenue at the time.

The strong start to N3 revenue was not unexpected, if only due to the ever-rising prices that TSMC is thought to charge for their cutting-edge wafers. Still, even with a billion dollar quarter, TSMC is just getting started; the company previously warned that its 3nm ramp would take some time.

Going forward, TSMC's long-term plans for the 3nm node call for the company to eventually offer several variations on the process. TSMC's baseline N3 (aka N3B) node uses up to 25 EUV layers, some with expensive EUV double-patterning, allowing higher transistor density but at higher costs – and few customers. More clients have opted for the more cost-efficient N3E process technology, which uses up to 19 EUV layers and no EUV double-patterning, offering lower logic density but better yields and a wider process window. TSMC is set to begin ramping N3E in Q4 2023, and it is with this version that its 3nm-class process technology is expected to shine.

"Our business in the third quarter was supported by the strong ramp of our industry-leading 3nm technology and higher demand for 5nm technologies, partially offset by customers' ongoing inventory adjustment," said C.C. Wei, chief executive of TSMC, at the conference call with analysts and investors. "N3 is already involving production with good yield, and we are seeing a strong ramp in the second half of this year, supported by both HPC and smartphone applications. […] N3E has passed qualification and achieved performance and yield targets and will start volume production in fourth quarter of this year. "

TSMC's total revenue for Q3 2023 hit $17.28 billion, a 14.6% decrease year-over-year, but a 10.2% increase from the previous quarter. Meanwhile, the company's net income increased 16.1% quarter-over-quarter to $6.521 billion, whereas gross margin for the quarter was 54.3%.
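For readers who want to tie the percentages to dollar figures, a quick sanity check using only the numbers quoted in this article:

```python
# Cross-checking the N3 revenue figures quoted above.
total_q3_2023_revenue_b = 17.28   # TSMC Q3 2023 revenue, in $ billions
n3_share = 0.06                   # N3's reported share of Q3 revenue

n3_revenue_b = total_q3_2023_revenue_b * n3_share
print(f"Implied N3 revenue: ~${n3_revenue_b:.2f}B")         # ~$1.04B, in line with ~$1.03B

# Generational context: N5's first meaningful quarter (Q3 2020)
n5_first_quarter_revenue_b = 0.97  # ~8% of revenue at the time
n5_share = 0.08
implied_q3_2020_total_b = n5_first_quarter_revenue_b / n5_share
print(f"Implied Q3 2020 total revenue: ~${implied_q3_2020_total_b:.1f}B")
```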

https://www.anandtech.com/show/21102/tsmc-q3-earnings-3nm-production-node-accounts-for-6-of-revenue Thu, 19 Oct 2023 15:30:00 EDT
Seagate Unveils Exos X24 24 TB and 28 TB: Setting the Stage for HAMR HDDs Anton Shilov

Seagate on Wednesday introduced its Exos X24 family of hard drives, its highest-capacity series of drives to date. The new family comprises both conventional magnetic recording (CMR) and shingled magnetic recording (SMR) models, with the CMR drive topping out at 24 TB, while SMR brings the peak capacity up to 28 TB. Both are, as of now, the highest-capacity HDDs in their respective segments. But perhaps the most important technological development with the Exos X24 lineup is that it uses a platform that will be largely re-used for the upcoming HAMR drives.

The Seagate Exos X24 3.5-inch helium-filled hard drive family includes 12 TB, 16 TB, 20 TB, and 24 TB models, which are built using up to ten 2.4 TB platters. Seagate's platters feature perpendicular magnetic recording (PMR) and heads utilizing two-dimensional magnetic recording (TDMR) technology (to minimize adjacent track interference and ensure reliable reading at high track pitch densities). In addition, Seagate offers a sole 28 TB SMR version of the Exos X24 to select cloud customers who can self-manage shingled recording in their datacenters.

Seagate's Exos X24 HDDs operate at a spindle speed of 7200 RPM and feature a segmented 256 MB cache (check out all of its specifications in the table below). The family's 2.4 TB platters have an areal density of 1260 Gbit/inch², allowing for a peak sustained transfer rate of 285 MB/s, which is in line with previous-generation HDDs (which is a bit surprising, as typically higher areal density enables higher transfer rates). The entire Exos X24 range offers up to 168/550 random read/write IOPS (4K, QD16), which is again in line with previous-generation drives.
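To put the sustained transfer rate in perspective at these capacities, here is an illustrative back-of-the-envelope calculation (our own figure, not a Seagate spec; real-world scrub or rebuild times depend on workload and zone layout):

```python
# Time for one full sequential pass over the flagship CMR drive at the
# rated peak sustained transfer rate (illustrative only).
capacity_tb = 24            # decimal terabytes
peak_rate_mb_s = 285        # MB/s, peak sustained transfer rate

capacity_mb = capacity_tb * 1_000_000    # TB -> MB (decimal units)
hours = capacity_mb / peak_rate_mb_s / 3600
print(f"Full sequential pass over {capacity_tb} TB: ~{hours:.1f} hours")   # ~23.4 hours
```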

Seagate plans to make the Exos X24 HDDs available with either a SATA 6 Gbps or a dual-port SAS 12 Gbps interface to cater to varying customer needs.

As for power consumption of Exos X24, it ranges from 6.3W (idle) to 8.9W (max) for SATA versions and from 6.5W (idle) to 9.8W (max) for the SAS SKUs. Exos X24 HDDs are also adaptable to meet the diverse needs of major clients, supporting PowerBalance technology, which offers data centers the ability to harmonize power usage with IOPS performance. Additionally, they feature PowerChoice technology, optimizing power consumption during periods of inactivity.

Seagate Exos X24 - Metrics of Interest
Rated Workload: 550 TB/year
Max. Sustained Transfer Rate: 285 MB/s
Random Read/Write (4K QD16, WCD): 168/550 IOPS
Areal Density: 1260 Gbit/inch²
Rated Load/Unload Cycles: 600K
Unrecoverable Read Errors: 1 in 10^15
MTBF: 2.5M hours
Power (Idle/Max): SATA 6.3 W / 8.9 W; SAS 6.5 W / 9.8 W
Warranty: 5 years

The enterprise-focused drives otherwise check all of the usual boxes for this market segment, including vibration resistance and RV sensors. The drives are rated to handle workloads up to 550 TB/year, with a 5-year warranty.

Seagate said it had commenced shipping of qualification Exos X24 drives to select customers, and the production drives are slated to be widely available for channel distribution in December.

Finally, and most interesting of all, the company has also revealed that the 10-platter helium-sealed platform will be significantly re-used for the company's upcoming heat-assisted magnetic recording (HAMR) hard drives. Those drives will be available in capacities up to 32 TB, with Seagate set to ramp up production in early 2024.

https://www.anandtech.com/show/21100/seagate-unveils-exos-x24-24tb-and-28-tb-setting-the-stage-for-hamr-hdds Thu, 19 Oct 2023 11:00:00 EDT
AMD Unveils Ryzen Threadripper 7000 Family: 96 Core Zen 4 for Workstations and HEDT Gavin Bonshor

Having just recently crossed the one-year anniversary of AMD’s first Zen 4 architecture CPUs – the Ryzen 7000 series – we’re now at the point where the final Zen 4 products are landing in place. Thus far AMD has launched consumer desktop CPUs, multiple classes of mobile CPUs, and a bevy of server CPUs big (Genoa) and small (Siena). The one remaining gap in AMD’s product roadmap has been the workstation and high-end desktop market, which AMD will be filling next month with the launch of a pair of Threadripper product lineups.

This morning AMD is taking the wraps off of their Ryzen 7000 Threadripper CPUs, which are set for a November 21st launch. These high-end chips are being split up into two product lines, with AMD assembling the workstation-focused Ryzen Threadripper 7000 Pro series, as well as the non-pro Ryzen Threadripper 7000 series for the more consumer-ish high-end desktop (HEDT) market. Both chip lines are based on AMD’s tried and true Zen 4 architecture – derivatives of AMD’s EPYC server processors – incorporating AMD’s Zen 4 chiplets and a discrete I/O die. As with previous generations of Threadripper parts, we’re essentially looking at the desktop version of AMD’s EPYC hardware.

With both product lines, AMD is targeting customer bases that need CPUs more powerful than a desktop Ryzen processor, but not as exotic (or expensive) as AMD’s server wares. This means chips with lots and lots of CPU cores – up to 96 in the case of the Threadripper 7000 Pro series – as well as support for a good deal more I/O and memory. The amount varies with the specific chip lineup, but both leave Ryzen 7000 and its 16 cores and 24 PCIe lanes in the dust.

Most notably for this generation of Threadripper parts, AMD is once again offering the HEDT-focused non-pro lineup. With the Zen 3-based Threadripper 5000 series, AMD only ever released the workstation-focused Pro parts, leaving HEDT hopefuls without an upgrade path. But this time AMD has decided to bring HEDT back, creating a pair of Threadripper lines in a very similar fashion to the Threadripper 3000 family in 2019.

https://www.anandtech.com/show/21092/amd-unveils-ryzen-threadripper-7000-family-zen-4-for-workstations-and-hedt Thu, 19 Oct 2023 09:00:00 EDT
Arm Total Design to Facilitate Development of Custom Datacenter SoCs Anton Shilov

Arm this week introduced its Arm Total Design initiative, which is aimed at accelerating development of custom datacenter-oriented system-on-chip (SoC) designs using Neoverse Compute Subsystems (CSS). The collaborative ecosystem unites various developers in a bid to speed up time-to-market and reduce development costs of custom SoCs for AI, cloud, and high-performance computing markets. ATD promises to enable development of datacenter processors that will offer formidable competition for x86 CPUs.

The Arm Total Design ecosystem is a conglomerate of ASIC design houses, IP vendors, EDA tool providers, foundries, and firmware developers that aims to facilitate rapid and cost-efficient delivery of custom silicon for datacenters based on Arm Neoverse cores for AI, HPC, cloud, and networking workloads. The ecosystem provides its partners with preferential access to Neoverse CSS, fostering innovation and facilitating faster time-to-market strategies, while also lowering development costs.

The initiative aims to harness collective industry expertise at every stage of custom SoC development, thereby promoting the broad availability of specialized, Arm Neoverse-based solutions. 

Central to this initiative is the delivery of pre-integrated, validated IP and EDA tools, courtesy of collaborative efforts from partners such as Cadence, Rambus, and Synopsys. Such strategic collaborations are instrumental in speeding up the silicon design process, as they simplify the incorporation of essential components like memory, security, and various peripherals.

In addition to the abovementioned companies, Arm Total Design leverages the design services prowess of companies such as ADTechnology, Alphawave Semi, Broadcom, Capgemini, Faraday, Socionext, and Sondrel. These companies bring their expertise to the table, providing robust support to the ecosystem given their experience with Neoverse CSS as well as other Arm IPs and technologies.

Since the expected future of datacenter processors is multi-chiplet, the Arm Total Design ecosystem not only makes custom chips more accessible, but is also poised to support the AMBA CHI C2C and UCIe standards. Intel Foundry Services and TSMC also participate in the Arm Total Design ecosystem, bringing in leading-edge process technologies and advanced packaging techniques.

Complementing the hardware-focused aspects, the initiative also places a strong emphasis on commercial software and firmware support for Neoverse CSS, drawing upon the specialized contributions from AMI.

On paper, Arm Total Design emerges as a major alliance that could significantly alter the landscape of custom datacenter silicon development. By combining a diverse spectrum of industry leaders under one roof, it promises a seamless and efficient pathway towards realizing the potential of Neoverse CSS. This collaborative venture aspires to unlock new levels of performance and features for AI, edge, and HPC SoCs based on Arm technology, offering a decidedly more group-oriented approach to chip design than the vertically integrated strategies employed by industry heavyweights such as Intel and AMD.

https://www.anandtech.com/show/21099/arm-total-design-to-facilitate-development-of-custom-datacenter-socs Wed, 18 Oct 2023 20:00:00 EDT
Qualcomm Swaps Out Arm for RISC-V for Next-Gen Google Wear OS Devices Anton Shilov

As part of a broad collaborative agreement with Google, Qualcomm this week said that it will be adopting the RISC-V instruction set architecture (ISA) for a future Snapdragon Wear platform. Working together, the two companies will be bootstrapping a RISC-V ecosystem for Wear OS devices, with Qualcomm providing the hardware while Google expands its wearables OS and associated ecosystem of tools to support the new processor architecture.

Qualcomm's Wear processors have been the de facto chip of choice for Wear OS devices since the launch of Google's wearables platform almost a decade ago, with Qualcomm employing multiple generations of Arm CPU designs. This makes Qualcomm's decision to develop a RISC-V wearables SoC especially significant, as it not only represents one of the highest profile adoptions of RISC-V in a consumer platform to date, but it means that, depending on Qualcomm's specific product plans, this could see the overall Wear OS market make a hard turn from Arm to RISC-V in relatively short order.

As laid out in the relatively brief announcement from Qualcomm, the company will focus on development of RISC-V-based hardware suitable for wearable devices. While the company isn't disclosing detailed technical specifications of their in-development products, given the company's significant chip-design background, this likely includes customized RISC-V general purpose cores as well as sensors.

Notably here, the announcement is for "a RISC-V based wearables solution," rather than a complete pivot to RISC-V with multiple solutions. Wearables as a whole are a much smaller market than smartphones, so Qualcomm has historically not offered a particularly deep lineup of hardware – meaning that even one chip is significant. Still, this also means that Qualcomm is not formally dropping Arm from its Snapdragon Wear platform at this time.

Qualcomm's decision to embrace RISC-V for a future wearables SoC is significant news for the up-and-coming ISA, as this marks one of the highest profile adoptions of RISC-V in consumer gear to date. The open standard ISA has seen success over the last several years in the microcontroller market, with chip vendors adopting RISC-V CPU cores – often in place of Arm Cortex-M designs – as a means of having more control over their CPU core designs, and avoid paying ISA royalties in the process. Conversely, RISC-V has seen very limited adoption in the application processor space thus far, owing to the more complex chip designs and the overall smaller market. So Qualcomm's plans to use RISC-V in their Snapdragon Wear platform, which has traditionally been based on Arm Cortex-A designs, marks a significant milestone for the adoption of RISC-V into higher-performing mobile devices.

Similarly, Google's backing of the ISA by porting Wear OS to RISC-V is a major milestone on the software front. Bootstrapping a platform based on a new ISA is not just about the hardware, but the software as well, as there needs to be well-developed operating systems and applications to make the hardware useful. All of which requires significant tooling to enable that development. Google, for its part, is no stranger to embracing multiple ISAs – Android has long supported Arm, x86, and even MIPS – and the company already announced earlier this year that they're working to make RISC-V a "tier-1" platform for Android, so the company's efforts with Wear OS will go hand-in-hand with that.

Between the two companies, Google and Qualcomm essentially make up the software and hardware backend of the Wear OS ecosystem. Google's Wear OS, in turn, is used by a range of popular smart watches, including those from Samsung, Fossil Group, Motorola, and Casio.

"Qualcomm Technologies have been a pillar of the Wear OS ecosystem, providing high performance, low power systems for many of our OEM partners," said Bjorn Kilburn, GM of Wear OS by Google. "We are excited to extend our work with Qualcomm Technologies and bring a RISC-V wearable solution to market."

Meanwhile, the decision to use RISC-V for wearables also has the potential to be a big change for the business side of Qualcomm. The company is currently butting heads with Arm over licensing and royalty rates, particularly in regards to their acquired Nuvia IP. That relationship has already devolved to lawsuits, including Arm looking to block Qualcomm's use of Nuvia-designed Arm CPU cores.

In short, swapping out Arm for RISC-V would allow Qualcomm to do away with paying royalties to Arm for Snapdragon Wear chips. The current royalties aren't thought to be extravagant – Qualcomm is using Cortex-A53 here – but a penny saved is a penny booked for Qualcomm's quarterly earnings. If nothing else, the very public announcement about the development of a RISC-V Snapdragon Wear SoC can be considered a shot across Arm's bow, as a reminder that Qualcomm could eventually do the same thing with bigger and higher royalty bearing chips.

"We are excited to leverage RISC-V and expand our Snapdragon Wear platform as a leading silicon provider for Wear OS," said Dino Bekis, vice president and general manager, Wearables and Mixed Signal Solutions, Qualcomm Technologies. " Our Snapdragon Wear platform innovations will help the Wear OS ecosystem rapidly evolve and streamline new device launches globally."

https://www.anandtech.com/show/21098/qualcomm-swaps-out-arm-for-risc-v-for-next-wear-soc Wed, 18 Oct 2023 19:00:00 EDT
Canon Prepares Nanoimprint Lithography Tool To Challenge EUV Scanners Anton Shilov

Canon has recently revealed its FPA-1200NZ2C, a nanoimprint semiconductor manufacturing tool that can be used to make advanced chips. The device uses nanoimprint lithography (NIL) technology as an alternative to photolithography, and can theoretically challenge extreme ultraviolet (EUV) and deep ultraviolet (DUV) lithography tools when it comes to resolution.

Unlike traditional DUV and EUV photolithography equipment that transfers a circuit pattern onto a resist-coated wafer through projection, the nanoimprint tool employs a different technique. It uses a mask, embossed with the circuit pattern, which directly presses against the resist on the wafer. This method eliminates the need for an optical mechanism in the pattern transfer process, which promises a more accurate reproduction of intricate circuit patterns from the mask to the wafer. In theory, NIL enables formation of complex two- or three-dimensional circuit patterns in a single step, which promises to lower costs. NIL itself is not a new technology, but it has remained in parallel development over the years, while the challenges involved in further improving photolithography have Canon believing that now is a good time for a second look.

Canon says that its FPA-1200NZ2C enables patterning with a minimum linewidth (critical dimensions, CD) of 14 nm, which is good enough to 'stamp' a circa 26-nm minimum metal pitch, and therefore suitable for 5 nm-class process technologies. That would be in line with capabilities of ASML's Twinscan NXE:3400C (and similar) EUV lithography scanners with a 0.33 numerical aperture (NA) optics.

Meanwhile, Canon says that with further refinements of the technology, its tools can achieve finer resolutions, enabling 3 nm and even 2 nm-class production nodes.

Nanoimprint lithography offers several compelling advantages over photolithography. Primarily, NIL excels in resolution, enabling the creation of structures at the nanometer scale with remarkable precision without using photomasks. This technology bypasses the diffraction limits encountered in conventional photolithography, allowing for more intricate and smaller features. Additionally, NIL operates without the necessity of complex optics or high-energy radiation sources, leading to potentially lower operational costs and simpler equipment.

Another advantage of NIL is its direct patterning capability, enabling the reproduction of three-dimensional nanostructures effectively. Such functionality makes NIL a potent tool in the production of photonics and other applications where three-dimensional nano-patterns are essential. The technology also facilitates better pattern fidelity and uniformity.

However, NIL also presents certain challenges and limitations. One notable issue is its susceptibility to defects due to the direct contact involved in the imprinting process. Particles or contaminants present on the substrate or the mold can lead to defects, which may affect the overall yield and reliability of the manufacturing process. This necessitates impeccable process control and cleanliness to maintain consistent output quality.

Additionally, NIL, in its traditional form, is a serial process, which limits its throughput and production capacity. Unlike photolithography, which can process entire wafers or large areas in a parallel fashion, NIL often involves processing smaller areas sequentially. This poses great challenges in scaling the technology for high-volume chip manufacturing. Meanwhile, NIL can be used to create photomasks for EUV and DUV lithography, and it can theoretically be used to create patterned media for hard disk drives.

https://www.anandtech.com/show/21097/canons-nanoimprint-lithography-tool-can-challenge-euv-scanners Tue, 17 Oct 2023 12:00:00 EDT
Intel Core i9-14900K, Core i7-14700K and Core i5-14600K Review: Raptor Lake Refreshed Gavin Bonshor

In what is to be the last of Intel's processor families to use the well-established Core i9, i7, i5, and i3 naming scheme, Intel has released its 14th Generation Core series of desktop processors, aptly codenamed Raptor Lake Refresh (RPL-R). Building upon the already laid foundations of the 13th Gen Core series family, Intel has announced a variety of overclockable K and KF (no iGPU) SKUs, including the flagship Core i9-14900K, the reconfigured Core i7-14700K, and the cheaper, yet capable Core i5-14600K.

The new flagship model for Intel's client desktop products is the Core i9-14900K, which looks to build upon the Core i9-13900K, although it's more comparable to the special edition Core i9-13900KS. Not only are the Core i9-14900K and Core i9-13900KS similar in specifications, but the Core i9-14900K/KF are also the second and third Intel chips to hit 6.0 GHz core frequencies out of the box.

Perhaps the most interesting chip within Intel's new 14th Gen Core series family is the Core i7-14700K, which is the only chip to receive an uplift in cores over the previous SKU, the Core i7-13700K. Intel has added four more E-cores, giving the Core i7-14700K a total of 8 P-cores and 12 E-cores (28 threads), with up to a 5.6 GHz P-core turbo and a 125 W base TDP – the same TDP across all of Intel's 14th Gen K and KF series chips.

Also being released and reviewed today is the Intel Core i5-14600K, which has a more modest 6P+8E/20T configuration and carries the same 5.3 GHz P-core boost and 3.5 GHz P-core base frequencies as the Core i5-13600K. Intel has only raised the E-core boost frequency by 100 MHz, which means it should perform similarly to its predecessor.

With the 14th Gen being a refreshed selection of Intel's 13th Gen Raptor Lake platform, the biggest questions, aside from raw performance, are what the differences actually are, whether there are any nuances to speak of, and how performance compares to 13th Gen. We aim to answer these questions and more in our review of the Intel 14th Gen Core i9-14900K, Core i7-14700K and Core i5-14600K processors.

https://www.anandtech.com/show/21084/intel-core-i9-14900k-core-i7-14700k-and-core-i5-14600k-review-raptor-lake-refreshed Tue, 17 Oct 2023 09:00:00 EDT
Intel Announces 14th Gen Core Series For Desktop: Core i9-14900K, Core i7-14700K and Core i5-14600K Gavin Bonshor

Ahead of tomorrow's full-scale launch, Intel this afternoon is pre-announcing their 14th Generation Core desktop processors. Aptly codenamed Raptor Lake Refresh, these new chips are based on Intel's existing Raptor Lake silicon – which was used in their 13th generation chips – with Intel tapping further refinements in manufacturing and binning in order to squeeze out a little more performance from the silicon. For their second iteration of Raptor Lake, Intel is also preserving their pricing for the Core i9, i7, and i5 processors, which aligns with the pricing during the launch of Intel's 13th Gen Core series last year.

Headlining the new lineup is Intel's latest flagship desktop processor, the Core i9-14900K, which can boost up to 6 GHz out of the box. This is the second Intel Raptor Lake chip to hit that clockspeed – behind their special edition Core i9-13900KS – but while that was a limited edition chip, the Core i9-14900K is Intel's first mass-produced processor that's rated to hit 6 GHz. Under the hood, the i9-14900K uses the same CPU core configuration as the previous Core i9-13900K chips, with 8 Raptor Cove performance (P) cores and 16 Gracemont-based efficiency (E) cores, for a total of 24 CPU cores capable of executing on 32 threads.

Intel 14th Gen Core, Raptor Lake-R (K/KF Series)
Pricing as of 10/16

AnandTech   | Cores (P+E/T) | P-Core Base (MHz) | P-Core Turbo (MHz) | E-Core Base (MHz) | E-Core Turbo (MHz) | L3 Cache (MB) | iGPU | Base W | Turbo W | Price ($)
i9-14900K   | 8+16/32       | 3200              | 6000               | 2400              | 4400               | 36            | 770  | 125    | 253     | $589
i9-14900KF  | 8+16/32       | 3200              | 6000               | 2400              | 4400               | 36            | -    | 125    | 253     | $564
i9-13900K   | 8+16/32       | 3000              | 5800               | 2200              | 4300               | 36            | 770  | 125    | 253     | $537
i7-14700K   | 8+12/28       | 3400              | 5600               | 2500              | 4300               | 30            | 770  | 125    | 253     | $409
i7-14700KF  | 8+12/28       | 3400              | 5600               | 2500              | 4300               | 30            | -    | 125    | 253     | $384
i7-13700K   | 8+8/24        | 3400              | 5400               | 2500              | 4200               | 30            | 770  | 125    | 253     | $365
i5-14600K   | 6+8/20        | 3500              | 5300               | 2600              | 4000               | 24            | 770  | 125    | 181     | $319
i5-14600KF  | 6+8/20        | 3500              | 5300               | 2600              | 4000               | 24            | -    | 125    | 181     | $294
i5-13600K   | 6+8/20        | 3500              | 5300               | 2600              | 3900               | 24            | 770  | 125    | 181     | $285
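To quantify how incremental the refresh is, the table's own turbo clocks and prices reduce to the following deltas (a simple sketch; note that the 13th Gen figures above are current street prices rather than launch MSRPs):

```python
# Generation-over-generation deltas computed from the table above
# (P-core turbo in MHz, prices in USD as listed; 13th Gen prices are
# street prices as of 10/16, not launch MSRPs).
comparisons = {
    "i9-14900K vs i9-13900K": {"turbo": (6000, 5800), "price": (589, 537)},
    "i7-14700K vs i7-13700K": {"turbo": (5600, 5400), "price": (409, 365)},
    "i5-14600K vs i5-13600K": {"turbo": (5300, 5300), "price": (319, 285)},
}

for label, d in comparisons.items():
    new_clk, old_clk = d["turbo"]
    new_price, old_price = d["price"]
    clk_gain_pct = (new_clk - old_clk) / old_clk * 100
    print(f"{label}: +{clk_gain_pct:.1f}% P-core turbo, +${new_price - old_price} over the older part")
```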

Moving down the stack, arguably the most interesting of the chips being announced today is the new i7-tier chip, the Core i7-14700K. Intel's decision to bolster the core count of its Core i7 is noteworthy: the i7-14700K now boasts 12 E-cores and 8 P-cores, 4 more E-cores than its 13th Gen counterpart – and only 4 behind the flagship i9. With base clock rates mirroring the previous generation's Core i7-13700K, the additional efficiency cores aim to add extra range in multitasking capabilities, designed to benefit creators and gamers.

Rounding out the 14th Gen Core collection is the i5 series. Not much has changed between the latest Core i5-14600K and the Core i5-13600K, with the only differences coming in E-core turbo frequencies; just a 100 MHz uptick here. Both families share the same 6P+8E (20T) configuration, 5.3 GHz P-core turbo, and 3.5 GHz P-core base frequencies. Price-wise (at the time of writing), the Core i5-13600K is currently available at Amazon for $285, which is a $34 saving over the MSRP of the Core i5-14600K, and that money could potentially be spent elsewhere, such as storage or memory.

Since the Intel 14th and 13th Gen Core series are essentially the same chips with slightly faster frequencies, Intel has made no changes to the underlying core architecture. Intel does, however, include a new overclocking feature for users looking to overclock their 14th Gen Core i9 processors. Dubbed 'AI Assist,' it is delivered through Intel's Extreme Tuning Utility (XTU) overclocking software and uses AI to suggest overclocking settings, rather than relying on the traditional look-up tables based on set parameters. Intel says the model is trained on systems with a variety of memory, motherboard, and cooling configurations, and claims it is continually retrained to offer users the most comprehensive automatic overclocking settings thus far.

Of course, it should be noted that overclocking does, in fact, void Intel's warranty, so users should use this feature at their own risk.

In its in-house testing, Intel boasts up to 23% better gaming performance than its 12th Gen Core series (Alder Lake), the first platform to bring the hybrid core architecture to Intel's desktop lineup. Notably, Intel hasn't compared performance directly to 13th Gen (Raptor Lake), likely due to the close similarities the two families share: same cores, same architecture, just slightly faster frequencies out of the box.

The Intel 14th Gen chips are designed for the preexisting 600 and 700-series motherboards, which use the LGA 1700 socket. Motherboard vendors have already begun refreshing their Z790 offerings, adding more modern features such as Wi-Fi 7 and Bluetooth 5.4 where they opt to integrate them. Official memory compatibility remains the same as 13th Gen, supporting DDR5-5600 and DDR4-3200 memory. Overclockers may find the highest-binned chips more capable than before, though, with Intel teasing speeds beyond DDR5-8000 for its best chips.

The Intel 14th Gen Core family of desktop processors (K and KF) is launching on October 17th at retailers and system integrators. Pricing-wise, the flagship Core i9-14900K costs $589, the Core i7-14700K will be available for $409, and the more affordable Core i5-14600K for $319.

https://www.anandtech.com/show/21096/intel-announces-14th-gen-core-series-for-desktop-core-i9-14900k-core-i7-14700k-and-core-i5-14600k Mon, 16 Oct 2023 13:30:00 EDT
TSMC: We Want OSATs to Expand Their Advanced Packaging Capability Anton Shilov

Almost since the inception of the foundry business model in the late 1980s, the division of labor was clear: TSMC would produce the silicon, and an outsourced semiconductor assembly and test (OSAT) service provider would then package it into a ceramic or organic encasing. Things have changed in recent years with the emergence of advanced packaging methods that require sophisticated tools and cleanrooms akin to those used for silicon production. Because TSMC was at the forefront of these innovative packaging methods, which the company aggregates under the 3DFabric brand, and because it built the appropriate capacity, it quickly emerged as a significant advanced packaging provider in its own right.

Many companies, such as Nvidia, want to send in blueprints and get back a product that is ready to ship, which is why they choose to use TSMC's services to package their advanced system-in-packages, such as the H100, using technologies like integrated fan-out (InFO, chip first) and chip-on-wafer-on-substrate (CoWoS, chip last) developed by the foundry. As a result, TSMC had to admit earlier this year that it could not keep up with CoWoS demand and would expand appropriate production capacity.

Although TSMC makes tons of money on advanced chip packaging these days, the company does not plan to steal business away from its traditional OSAT partners, which is why it wants these companies to expand their advanced packaging capacity and use tools similar to those of TSMC and its partners, so they can offer packaging compatible with TSMC-made chiplets.

But it is not that simple. All leading assembly and test specialists like ASE Group, Amkor Technology, and JCET have advanced chip packaging technologies, many resembling those of TSMC. These OSATs own advanced packaging fabs already and can serve fabless chip designers. For example, just this week, Amkor opened up its $1.6 billion advanced packaging facility in Vietnam. It is set to have cleanroom space comparable to what GlobalFoundries operates across multiple fabs.

But while packaging technologies offered by OSATs may be similar to those of TSMC in terms of pitch dimensions and bump I/O pitch dimensions, they are not the same in terms of flow and may even have slightly different electrical specifications. Meanwhile, OSATs use the same tools as TSMC, so they can package chips that use a CoWoS interposer. So far, TSMC has certified two OSATs to perform the final CoWoS assembly. However, there is still a shortage of CoWoS capacity on the market because TSMC's capacity is the bottleneck, at least based on TSMC's comments from earlier this year.

"So, we have ASE and SPIL, we have qualified their substrates," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam. "The next step is also doing the same thing to bring them into using the automated routing of the substrate as well. So, we can have the whole [CoWoS service] stack."

TSMC's advanced packaging technologies like CoWoS and InFO are supported by electronic design automation (EDA) tools from companies like Ansys, Cadence, Siemens EDA, and Synopsys. So, TSMC needs OSATs to use the same programs and align their technical capabilities with what these tools design and TSMC produces. 

"We want them to use the same EDA tools," said Kochpatcharin. "So, let's say TSMC interposer on OSAT's substrate. So, they use 3Dblox and [appropriate] EDA tools to do analysis, then it is easier for the customer. Right? Like we qualified the two partners to [produce] substrate. So, we do CoWoS, and OSATs do substrate. So, it would be good to use the same flow, because it is easier for customer. "If you have customers who use [different EDA tools] then the multi physics analysis [of the package] will be more difficult. It can be done just more difficult."

To meet the demand for CoWoS and other advanced packaging methods, OSATs need to invest in appropriate capacities and tools, which are expensive. The problem is that assembly and test specialists cannot keep up with Intel, TSMC, and Samsung regarding investments in advanced packaging facilities. Last year, Intel spent $4 billion on advanced packaging plants, and TSMC's capital expenditures on advanced packaging totaled $3.6 billion. In contrast, Samsung spent around $2 billion, according to Yole Group's estimates published by EE Times. By comparison, the capital expenditures of ASE Group (with SPIL and USI) totaled $1.7 billion in 2022, whereas the spending of Amkor reached $908 million.

There are several reasons why advanced packaging technologies like TSMC's CoWoS and InFO, as well as Intel's EMIB and Foveros, are gaining importance. First up, disaggregated chip designs are getting more popular because chip manufacturing is getting more expensive, smaller chips are easier to yield, and many chips are reaching the reticle limit. At the same time, their designers want them to be bigger and more powerful. Secondly, disaggregated designs using chiplets made on different nodes are cheaper than one monolithic chip on a leading-edge node.

OSATs are poised to expand their advanced production capacities as their clients demand appropriate services. Meanwhile, they are less inclined to offer such services than foundries simply because if something fails during packaging steps, they have to throw away all the expensive silicon they package, and they do not earn as much as chipmakers do. Their margins are also significantly lower. Finally, it may be unclear in many cases why a multi-chiplet package does not work and whether the problem is with the package itself or with one of the chips. Today, all TSMC can do is to optically check the wafers before dicing them, but this is not a particularly efficient way of testing.

To gain the capability to test chiplets individually, TSMC is working with makers of chip test equipment and expects to validate these tools next year.

"On the 3DFabric on the testing, we work with Advantest, Teradyne, and Synopsys to leverage the high-speed die-to-die testing," said Kochpatcharin. "When you have all these things stacked together, it is getting very difficult to test them. So, we have worked with Teradyne and Advantest to work […] [die-to-die] testing, and we will have silicon validation in 2024."

https://www.anandtech.com/show/21095/tsmc-we-want-osats-to-expand-their-advanced-packaging-capability Mon, 16 Oct 2023 10:30:00 EDT
GEEKOM Mini IT13 Review: Core i9-13900H in a 4x4 Package Ganesh T S

The performance of ultra-compact form-factor (UCFF) desktops has improved significantly over the years, thanks to advancements in semiconductor fabrication and processor architecture. Thermal solutions suitable for these 4in. x 4in. machines have also been evolving simultaneously. As a result, vendors have been able to configure higher sustained power limits for the processors in these systems. With Intel and AMD allowing configurable TDPs for their notebook segment offerings, UCFF systems with regular 45W TDP processors (albeit, in cTDP-down mode) are now being introduced into the market.

GEEKOM became one of the first vendors to release a Core i9-based UCFF machine with the launch of the Mini IT13. Based on paper specifications, this high-end Raptor Lake-H (RPL-H) UCFF desktop is meant to give the mainstream RPL-P NUCs stiff competition in both performance and price. Read on for a detailed look into the performance profile and value proposition of the Mini IT13's flagship configuration, along with analysis of the tradeoffs involved in cramming a 45W TDP processor into a 4x4 machine.

https://www.anandtech.com/show/21075/geekom-mini-it13-review-core-i913900h-in-a-4x4-package Mon, 16 Oct 2023 08:00:00 EDT
TSMC: Ecosystem for 2nm Chip Development Is Nearing Completion Anton Shilov

Speaking to partners last week as part of their annual Open Innovation Platform forum in Europe, a big portion of TSMC's roadshow was dedicated to the next generation of the company's foundry technology. TSMC's 2 nm-class N2, N2P, and N2X process technologies are set to introduce multiple innovations over the next few years, including nanosheet gate-all-around (GAA) transistors, backside power delivery, and super-high-performance metal-insulator-metal (SHPMIM) capacitors. But in order to take advantage of these innovations, TSMC warns, chip designers will need to use all-new electronic design automation (EDA), simulation, and verification tools as well as IP. And while making such a big shift is never an easy task, TSMC is bringing some good news to chip designers early-on: even with N2 still a couple of years out, many of the major EDA tools, verification tools, foundation IP, and even analog IP for N2 are already available for use.

"For N2 we could be working with them two years in advance already because nanosheet is different," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam. "[EDA] tools have to be ready, so what the OIP did is to work with them early. We have a huge engineering team to work with the EDA partners, IP partners, [and other] partners."

Advertised PPA Improvements of New Process Technologies
Data announced during conference calls, events, press briefings and press releases

TSMC                 | N5 vs N7 | N3 vs N5 | N3E vs N5  | N2 vs N3E
Power                | -30%     | -25-30%  | -34%       | -25-30%
Performance          | +15%     | +10-15%  | +18%       | +10-15%
Chip Density*        | ?        | ?        | ~1.3X      | >1.15X
Volume Manufacturing | Q2 2022  | H2 2022  | Q2/Q3 2023 | H2 2025

*Chip density published by TSMC reflects 'mixed' chip density consisting of 50% logic, 30% SRAM, and 20% analog.
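Taking TSMC's advertised figures at face value, naive compounding gives a rough sense of cumulative N5-to-N2 scaling (our own arithmetic on the vendor's marketing numbers; ranges are taken at their midpoints, and real designs rarely realize the full advertised gains):

```python
# Naive compounding of TSMC's advertised per-node gains, N5 -> N3E -> N2.
# Midpoints are used where the table quotes a range.
power_reductions = [0.34, 0.275]   # N3E vs N5: -34%; N2 vs N3E: -25..30%
perf_gains       = [0.18, 0.125]   # N3E vs N5: +18%; N2 vs N3E: +10..15%

power_remaining = 1.0
for r in power_reductions:
    power_remaining *= (1.0 - r)

perf_factor = 1.0
for g in perf_gains:
    perf_factor *= (1.0 + g)

print(f"Power at iso-performance, N5 -> N2: ~{power_remaining * 100:.0f}% of N5")   # ~48%
print(f"Performance at iso-power, N5 -> N2: ~+{(perf_factor - 1) * 100:.0f}%")      # ~+33%
```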

Preparations for the start of N2 chip production, scheduled for sometime in the second half of 2025, began long ago. Nanosheet GAA transistors behave differently than familiar FinFETs, so EDA and other tool and IP makers had to build their products from scratch. This is where TSMC's Open Innovation Platform (OIP) demonstrated its prowess and enabled TSMC's partners to start working on their products well in advance.

By now, major EDA tools from Cadence and Synopsys as well as many tools from Ansys and Siemens EDA have been certified by TSMC, so chip developers can already use them to design chips. Also, EDA software programs from Cadence and Synopsys are ready for analog design migration. Furthermore, Cadence's EDA tools already support N2P's backside power delivery network.

With pre-built IP designs, things are taking a bit longer. TSMC's foundation libraries and IP, including standard cells, GPIO/ESD, PLL, SRAM, and ROM, are ready both for mobile and high-performance computing applications. Meanwhile, some PLLs exist in pre-silicon development kits, whereas others are silicon proven. Finally, blocks such as non-volatile memory, interface IP, and even chiplet IP are not yet available – bottlenecking some chip designs – but these blocks are in active development or planned for development by companies like Alphawave, Cadence, Credo, eMemory, GUC, and Synopsys, according to a TSMC slide. Ultimately, the ecosystem of tools and libraries for designing 2 nm chips is coming together, but it's not all there quite yet.

"[Developing IP featuring nanosheet transistors] is not harder, but it does take more cycle time, cycle time is a bit longer," said Kochpatcharin. "Some of these IP vendors also need to be trained [because] it is just different. To go from planar [transistor] to FinFET, is not harder, you just need to know how to do the FinFET. [It is] same thing, you just need to know how to do [this]. So, it does take some to be trained, but [when you are trained], it is easy. So that is why we started early."

Although many of the major building blocks for chips are N2-ready, a lot of work still has to be done by many companies before TSMC's 2 nm-class process technologies go into mass production. Large companies, which tend to design (or co-design) IP and development tools themselves are already working on their 2 nm chips, and should be ready with their products by the time mass production starts in 2H 2025. Other players can also fire up their design engines because 2 nm preps are well underway at TSMC and its partners.

https://www.anandtech.com/show/21091/tsmc-ecosystem-for-2nm-chip-development-is-nearing-completion Thu, 12 Oct 2023 17:00:00 EDT
Samsung Lines Up First Server Customer For 3nm Fabs Anton Shilov

Although Samsung Foundry was the first contract fab to formally start mass production of chips on a 3 nm-class process, so far, the company's latest process has largely been relegated to producing tiny cryptocurrency mining chips. But it looks like things will start picking up for Samsung's foundry business soon, as this week it was announced that the company has landed a more substantial order, which will see Samsung make a server-grade system-in-package (SiP) with HBM memory for an unknown client.

Per this week's press releases, Samsung Foundry is set to produce a server-grade processor with HBM memory that will be designed by ADTechnology, a contract chip developer from South Korea, for an American company. For now, details on the chip are light, so all we know about the 3 nm-based datacenter product is that it will use 2.5D packaging in conjunction with HBM memory. All of which points to a high-end system-on-chip (SoC) – or rather a system-in-package (SiP).

"This 3nm project will be one of the largest semiconductor products in the industry," said Park Joon-Gyu, chief executive of AD Technology. "This 3nm and 2.5D design experience will be a significant differentiation factor between other companies and AD Technology. We will do our utmost to deliver the best design results to our customers."

Meanwhile, it is unclear which of Samsung Foundry's 3 nm-class process technologies the company is set to use for the project. Currently the company is producing cryptocurrency mining ASICs using its SF3E process technology, which is the initial version of Samsung's gate-all-around (GAA) manufacturing tech.

The company is set to roll-out an enhanced SF3 process technology next year. This version of the node provides additional design flexibility, which is enabled by varying nanosheet channel widths of the GAA device within the same cell type. All of this will, in turn, improve the performance, power, and area characteristics of SF3 compared to SF3E, making it more suitable for server designs. Yet, the company is also prepping its SF3P technology with performance enhancements for 2025, which is likely to be even better for server-grade silicon.

"We are pleased to announce our 3nm design collaboration with AD Technology," said Jung Ki-Bong, Vice President of Samsung Electronics Foundry Business Development team. "This project will set a good precedent in the collaboration program between Samsung Electronics Foundry Division and our ecosystem partners, and Samsung Electronics Foundry Division will strengthen our cooperation with partners to provide the best quality to our customers."

Sources: ADTechnology, Pulse News

https://www.anandtech.com/show/21094/samsung-lines-up-first-server-customer-for-3nm-fabs Thu, 12 Oct 2023 13:00:00 EDT
HBM4 in Development, Organizers Eyeing Even Wider 2048-Bit Interface Anton Shilov

High-bandwidth memory has been around for about a decade, and throughout its continued development it has steadily increased in speed, starting at a data transfer rate of 1 GT/s (the original HBM) and reaching upwards of 9 GT/s with the forthcoming HBM3E. This has made for an impressive jump in bandwidth in less than 10 years, making HBM an important cornerstone for whole new classes of HPC accelerators that have since hit the market. But it's also a pace that's getting harder to sustain as memory transfer rates increase, especially as the underlying physics of DRAM cells have not changed. As a result, for HBM4 the major memory manufacturers behind the spec are planning on making a more substantial change to the high-bandwidth memory technology, starting with an even wider 2048-bit memory interface.

Designed as a wide-but-slow memory technology that utilizes an ultra-wide interface running at a relatively modest clockspeed, HBM's current 1024-bit memory interface has been a defining characteristic of the technology. Meanwhile, its modest clockspeeds have become increasingly less modest in order to keep improving memory bandwidth. This has worked thus far, but as clockspeeds increase, the highly parallel memory risks running into the same signal integrity and energy efficiency issues that challenge GDDR and other highly serial memory technologies.

Consequently, for the next generation of the technology, organizers are looking at going wider once more, expanding the width of the HBM memory interface even further to 2048-bits. And, equally as important for multiple technical reasons, they intend to do this without increasing the footprint of HBM memory stacks, essentially doubling the interconnection density for the next-generation HBM memory. The net result would be a memory technology with an even wider memory bus than HBM today, giving memory and device vendors room to further improve bandwidth without further increasing clock speeds.

As planned, this would make HBM4 a major technical leap forward on multiple levels. On the DRAM stacking side of matters, a 2048-bit memory interface is going to require a significant increase in the number of through-silicon vias routed through a memory stack. Meanwhile the external chip interface will require shrinking the bump pitch to well below 55 um, all the while increasing the total number of micro bumps significantly from the current count of (around) 3982 bumps for HBM3.

Adding some additional complexity to the technology, memory makers have indicated that they are also going to stack up to 16 memory dies in one module; so-called 16-Hi stacking. (HBM3 technically supports 16-Hi stacks as well, but so far no manufacturer is actually using it.) This will allow memory vendors to significantly increase the capacity of their HBM stacks, but it brings new complexity in wiring up an even larger number of DRAM dies without defects, and then keeping the resulting HBM stack suitably and consistently short.

All of this, in turn, will require even closer collaboration between chip makers, memory makers, and chip packaging firms in order to make everything come together smoothly.

Speaking at TSMC's OIP 2023 conference in Amsterdam, Dan Kochpatcharin, TSMC's Head of Design Infrastructure Management had this to say: "Because instead of doubling the speed, they doubled the [interface] pins [with HBM4]. That is why we are pushing to make sure that we work with all three partners to qualify their HBM4 [with our advanced packaging methods] and also make sure that either RDL or interposer or whatever in between can support the layout and the speed [of HBM4]. So, [we work with] Samsung, SK Hynix, and Micron."

Since system-in-package (SiP) designs are getting larger, and the number of HBM stacks supported by advanced chip packages is increasing (e.g. 6x reticle size interposers and chips with 12 HBM stacks on-package), chip packages are getting more complex. To ensure that everything continues to work together, TSMC is pushing chip and memory designers to embrace Design Technology Co-Optimization (DTCO). This is a big part of the reason why the world's largest foundry recently organized the 3DFabric Memory Alliance, a program designed to enable close collaboration between DRAM makers and TSMC in a bid to enable next-generation solutions that will pack huge amounts of logic transistors and advanced memory.

Among other things, TSMC's 3DFabric Memory Alliance is currently working on ensuring that HBM3E/HBM3 Gen2 memory works with CoWoS packaging, 12-Hi HBM3/HBM3E packages are compatible with advanced packages, UCIe for HBM PHY, and buffer-less HBM (a technology spearheaded by Samsung).

Overall, TSMC's comments last week give us our best look yet at the next generation of high-bandwidth memory. Still, additional technical details about HBM4 remain rather scarce for the moment. Micron said earlier this year that 'HBMNext' memory set to arrive around 2026 will offer capacities between 36 GB and 64 GB per stack and peak bandwidth of 2 TB/s per stack or higher. All of which indicates that memory makers won't be backing off on memory interface clockspeeds for HBM4, even with the move to a wider memory bus.
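The trade-off between interface width and data rate is easy to see with a back-of-the-envelope bandwidth calculation (illustrative numbers only; per-stack data rates for HBM4 have not been announced):

```python
# Peak per-stack bandwidth = (interface width in bits / 8) * data rate in GT/s.
def stack_bandwidth_gb_s(width_bits: int, data_rate_gt_s: float) -> float:
    """Peak per-stack bandwidth in GB/s."""
    return width_bits / 8 * data_rate_gt_s

# HBM3E-class stack: 1024-bit interface at ~9 GT/s
print(stack_bandwidth_gb_s(1024, 9.0))   # 1152 GB/s, i.e. ~1.15 TB/s

# Hypothetical HBM4 stack: a 2048-bit interface clears 2 TB/s even at ~8 GT/s,
# and reaches ~2.3 TB/s if data rates hold at HBM3E levels.
print(stack_bandwidth_gb_s(2048, 8.0))   # 2048 GB/s
print(stack_bandwidth_gb_s(2048, 9.0))   # 2304 GB/s
```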

https://www.anandtech.com/show/21088/hbm4-in-development-2048bit-interface-will-require-more-collaboration Thu, 12 Oct 2023 10:00:00 EDT
TSMC: Importance of Open Innovation Platform Is Growing, Collaboration Needed for Next-Gen Chips Anton Shilov

This year TSMC is commemorating 15 years of its Open Innovation Platform, a multi-faceted program that brings together the foundry's suppliers, partners, and customers to help TSMC's customers build innovative chips in an efficient and timely manner. The OIP program has grown over the years and now involves tens of companies and over 70,000 IP solutions for a variety of applications. It continues to grow, and its importance will be greater than ever as next-generation technologies, such as 2 nm, and advanced packaging methods become mainstream in the coming years.

"This is not a marketing program, it is actually an engineering program to enable the industry," said Dan Kochpatcharin, Head of Design Infrastructure Management at TSMC, at the OIP 2023 conference in Amsterdam, the Netherlands. "We have a huge engineering team behind to work with the EDA partners, IP partners, and design partners."

Shrinking Time-to-Market

Speeding up time-to-market is one of the cornerstones of TSMC's OIP program. Before the emergence of the OIP program in 2008, TSMC would develop a process technology and process development kits (PDKs) in about 18 months, then hand over the PDKs and design rules to its partners among electronic design automation (EDA) software and IP developers. The latter would spend another 12 months creating EDA tools and building IP blocks before supplying programs and IP solutions to actual chip designers. Then it would take chip developers another 12 months to build actual chips.

With OIP, TSMC's EDA tool and IP design partners can start development of their products a few months after TSMC begins development of its new production node. And, by the time TSMC finalizes its process technology, EDA tools and IP are ready for chip designers, the foundry claims. This speeds up time-to-market by about 15 months, TSMC says. Meanwhile, as development times for both new nodes and new chips continue to stretch, the value of early collaboration between TSMC and its EDA and IP providers only increases.
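The claimed saving follows from the timelines described above; a simplified schedule model (our own approximation of the flow TSMC describes) makes the arithmetic explicit:

```python
# Simplified model of the pre-OIP vs. OIP enablement flows described above.
node_dev_months = 18    # process technology + PDK development at TSMC
eda_ip_months   = 12    # EDA tool and IP enablement by partners
chip_dev_months = 12    # chip design by TSMC's customers

# Pre-2008 flow: each stage starts only after the previous one finishes.
sequential_total = node_dev_months + eda_ip_months + chip_dev_months    # 42 months

# OIP flow: EDA/IP work overlaps node development, so chip design can start
# roughly when the process is finalized.
overlapped_total = node_dev_months + chip_dev_months                    # ~30 months

print(f"Sequential flow: {sequential_total} months")
print(f"Overlapped (OIP) flow: ~{overlapped_total} months")
# TSMC quotes ~15 months saved; the extra few months beyond this simple
# model come from design work that can begin before the node is fully frozen.
print(f"Modelled saving: ~{sequential_total - overlapped_total} months")
```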

For example, TSMC has been working with its partners on N2 (2nm-class) EDA and IP readiness for two years now, with TSMC aiming to have tools and common IP ready for chip designers in H2 2025. 

Quality Matters

An avid reader might wonder why, even with the success of the program, OIP has only grown to 39 IP members in 15 years. As it turns out, TSMC is extremely picky about the companies that join the program, according to Dan Kochpatcharin. TSMC needs members of the OIP program to genuinely contribute to it and make the joint effort bigger than the sum of its parts. Because TSMC clients rely on the IP, software, and services offered by participants of the OIP program, the latter have to be very good in their fields to be a part of OIP.

In fact, TSMC even has its TSMC9000 program (the name echoes the ISO 9000 quality standard), which sets quality requirements for IP designs. IP collaborators undergo TSMC9000 evaluations, with results available on TSMC-Online, guiding customers on IP reliability and risks.

"We do a lot of qualifications for IP, before the test shuttles they do tape outs, and then they have TSMC 9000 checklists, […] customer can see [all] the results on TSMC-Online," explained Kochpatcharin. "So, they can see okay, this IP got silicon introductions, so, they have more confidence in the IP. [They also see] how many customers adopted [this IP], how many tape outs, and how many productions. For the lack of a better term, Consumer Report for IP."

Alliance members list their IP in TSMC's catalog, which features thousands of IP options from 39 contributors. Customers can search for IP using the 'IP Center' on the TSMC-Online Design Portal. Each IP block in the catalog is developed, sold, and supported by its originating partner. Meanwhile, chip developers can even check how widely a given IP block has been adopted, which gives them additional confidence in their choice. That confidence is important today and will be even more important for 3 nm, 2 nm, and future nodes as tape-outs get more expensive.

Six Alliances

But speeding up time-to-market and ensuring quality are not the only purposes of the OIP program. It is meant to simplify development, production, testing, and packaging of chips. TSMC's OIP involves a variety of members and is organized into six programs or alliances, each responsible for a separate line of work: 

  • IP Alliance, which is focused on providing silicon-verified, production-proven, and foundry-specific intellectual property (IP) that TSMC customers can choose from.
  • EDA Alliance, which includes companies offering electronic design automation (EDA) software that is compliant with TSMC technology requirements and supports the foundry's production nodes.
  • Design Center Alliance, which comprises contract chip designers as well as companies offering system-level design enablement.
  • Cloud Alliance, which combines EDA toolmakers and cloud service providers, enabling TSMC's customers to develop and simulate their chips in the cloud to reduce in-house compute needs.
  • 3DFabric Alliance, which unites all companies involved in advanced packaging and the development of multi-chiplet processors; this essentially includes all of the abovementioned companies as well as makers of memory (including Micron, Samsung, and SK Hynix), substrate suppliers, OSATs, and makers of test equipment.
  • Value Chain Alliance, which resembles the Design Center Alliance but is meant to offer a broader range of contract chip design services and IP to cater to customers spanning from startups and OEMs to ASIC designers.

The 3DFabric Alliance was introduced late last year, making it the newest addition to OIP. Meanwhile, the alliance looks to be expanding quickly with new members, and for good reason.

Multi-Chiplet Designs Become New Standard

Process technologies are getting more complex, and this is not going to change. Chip design workflows might get a little easier going forward as EDA makers like Ansys, Cadence, Siemens EDA, and Synopsys incorporate artificial intelligence capabilities into their tools. But because High-NA EUV lithography scanners halve the reticle size from 858 mm² to 429 mm², the majority of AI and high-performance computing (HPC) processors look set to adopt multi-tile designs in the coming years. That will drive the need for software that assists in the creation of multi-tile solutions, advanced packaging, HBM-type memory, and all-new methods of testing, which will again increase the importance of industry-wide collaboration and of TSMC's OIP.
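To put the reticle numbers in perspective, here is a minimal sketch of the arithmetic; the 700 mm² die size is a hypothetical large HPC processor chosen purely for illustration, not any announced product.

```python
# Simple illustration of why High-NA EUV pushes large designs toward chiplets.
# The reticle figures come from the article; the example die size is hypothetical.

import math

standard_reticle = 858   # mm^2, today's EUV exposure field
high_na_reticle  = 429   # mm^2, halved field size with High-NA EUV

hypothetical_die = 700   # mm^2, a plausible monolithic HPC/AI processor

tiles_needed = math.ceil(hypothetical_die / high_na_reticle)
print(f"Fits a standard reticle: {hypothetical_die <= standard_reticle}")  # True
print(f"Fits a High-NA reticle:  {hypothetical_die <= high_na_reticle}")   # False
print(f"Minimum tiles under High-NA: {tiles_needed}")                      # 2
```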

"[We have offered InFO_PoP] and InFO_oS 3D IC since 2016, [3D ICs have been] in production for years already, [but] back then it was still a niche [market]," said Kochpatcharin. "The customer had to know what they were doing […] and only a few people could do a 3D IC [back then]. [In] 2021 we launched the 3DFabric activity, we wanted to make it generic for everybody because with AI and HPC coming [from multiple companies], [these] cannot be niche things anymore. So, everybody has to be able to use 3D IC. [For example], automotive is a wonderful [application for] 3D IC, there is a [huge] market out there."

Meanwhile, to enable next-generation connectivity between chips and between chiplets, TSMC envisions that silicon photonics will be needed, so the company is actively working in this direction within its OIP program. 

"If you go to N2 and the next one coming up is silicon photonics," said Kochpatcharin. "This is where we launched a process needed to have [design service partners] to be able to support the customer."

]]>
https://www.anandtech.com/show/21086/importance-of-tsmcs-oip-is-growing-collaboration-needed-for-nextgen-chips Thu, 12 Oct 2023 08:00:00 EDT tag:www.anandtech.com,21086:news
Intel Launches Arc A580: A $179 Graphics Card for 1080p Gaming Anton Shilov

When Intel unveiled its range of Arc A-series desktop graphics cards last year, it introduced four models: the Arc A770, Arc A750, Arc A580, and Arc A380. However, the Arc A580, which uses a cut-down ACM-G10 GPU, never reached the market for reasons that remain unclear. On Tuesday Intel finally fleshed out the Arc desktop lineup with a 500 series card, formally and immediately launching the Arc A580 graphics card.

Intel's Arc A580 is based on the Alchemist ACM-G10 graphics processor with 3072 stream processors, paired with 8 GB of memory on a 256-bit interface. While the cut-down GPU has fewer SPs than its higher-performing counterparts, it retains all of the features that the Alchemist architecture has to offer, including world-class media playback capabilities such as hardware-accelerated decoding and encoding in the AV1, H.264, and H.265 formats.

The card sits below the Arc A770 and Arc A750 in terms of performance, but above the Arc A380, thus targeting gamers on a budget. Intel itself positions the Arc A580 for 1080p gaming against AMD's Radeon RX 6600 and NVIDIA's GeForce RTX 3050, graphics cards that have been available on the market for about two-and-a-half years.

When compared to its rivals, the Arc A580 has higher compute performance (10.445 FP32 TFLOPS vs. Radeon RX 6600's 9 FP32 TFLOPS and GeForce RTX 3050's 8 FP32 TFLOPS) as well as dramatically higher memory bandwidth (512 GB/s vs. 224 GB/s). Though as FLOPS are not everything, we'll have to see how benchmarks play out. The biggest advantage for Intel right now is going to be memory bandwidth, as Intel is shipping a card with a far wider memory bus than anything else in this class – something that AMD and NVIDIA shied away from after multiple cryptocurrency rushes and crashes.
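As a sanity check on those headline figures, here is a short back-of-the-envelope calculation. The ~1.7 GHz graphics clock and 16 Gbps GDDR6 data rate are assumptions on our part; the SP count, bus width, and headline TFLOPS and GB/s figures come from the specifications quoted above.

```python
# How the Arc A580's headline numbers decompose, as a quick sanity check.
# The clock and memory data rate are assumed; the rest is from the article.

stream_processors = 3072
graphics_clock_ghz = 1.7          # assumed boost clock
flops_per_sp_per_clock = 2        # one FMA counts as 2 FLOPs

fp32_tflops = stream_processors * flops_per_sp_per_clock * graphics_clock_ghz / 1000
print(f"FP32 compute: {fp32_tflops:.3f} TFLOPS")      # ~10.445 TFLOPS

bus_width_bits = 256
gddr6_data_rate_gbps = 16         # assumed per-pin data rate

bandwidth_gbs = bus_width_bits / 8 * gddr6_data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512 GB/s
```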

But Intel's Arc A580 is more power hungry than its rivals: as this part is based on Intel's top-tier ACM-G10 GPU, it has the power consumption to match, with a total graphics power rating of 185W. Conversely, AMD's Radeon RX 6600 and NVIDIA's GeForce RTX 3050 are rated for 132W and 130W, respectively.

Graphics cards based on the Intel Arc A580 GPU are set to be offered by ASRock, Gunnir, and Sparkle, starting at $179. At $179, the boards are cheaper than AMD's Radeon RX 6600 ($199) and Nvidia's GeForce RTX 3050 ($199), which makes it quite a competitive offering. Meanwhile, Intel's higher-performing Arc A750 can now be obtained for $189 - $199, which somewhat reduces appeal of the new board – though it remains to be seen if those A750 prices will last.

]]>
https://www.anandtech.com/show/21090/intel-launches-arc-a580-a-179-graphics-card-for-1080p-gaming Tue, 10 Oct 2023 16:45:00 EDT tag:www.anandtech.com,21090:news
ASRock Industrial NUC BOX-N97 and GMKtec NucBox G2 Review: Contrasting Compact ADL-N Options Ganesh T S Intel has been maintaining a low-power / low-cost x86 microarchitecture since the introduction of the Silverthorne Atom processors in 2008. Its latest iteration, Gracemont, made its debut in the Alder Lake lineup. The hybrid processors in that family teamed up the Gracemont efficiency cores with the Golden Cove performance cores. Eventually, Intel released a new line of processors under the 'Alder Lake-N' (ADL-N) tag comprising only the Gracemont cores. As a replacement for the Tremont-based Jasper Lake SoCs, ADL-N has found its way into a variety of entry-level computing systems including notebooks and compact desktops.

ASRock Industrial's lineup of ultra-compact form-factor (UCFF) systems - the Intel-based NUC BOX series and AMD-based 4X4 BOX series - has enjoyed significant market success. At the same time, the expanding market for compact computing systems has also brought many Asian manufacturers such as ACEMAGIC, Beelink, GMKtec, and MinisForum into play. As ADL-N ramps up, we are seeing a flood of systems based on it from these vendors. We took advantage of this opportunity to source two contrasting ADL-N mini-PCs - the ASRock Industrial NUC BOX-N97 and the GMKtec NucBox G2. Though both systems utilize a quad-core ADL-N SoC, the feature set and target markets are very different. Read on for a detailed analysis of the system features, build, performance profile, and value proposition of the NUC BOX-N97 and the NucBox G2.

]]>
https://www.anandtech.com/show/21085/asrock-industrial-nuc-boxn97-and-gmktec-nucbox-g2-review-contrasting-compact-adln-options Fri, 06 Oct 2023 09:45:00 EDT tag:www.anandtech.com,21085:news
Intel to Spin-off Programmable Solutions Group as Standalone Business, Eyeing IPO in 2-3 Years Ryan Smith

Intel this afternoon has announced that the company will be spinning off its programmable solutions group (PSG), to operate as a standalone business. The business unit, responsible for developing Intel’s Agilex, Stratix, and other FPGA products, will become a standalone entity under Intel’s corporate umbrella starting in Q1 of 2024, with the long-term goal of eventually selling off part of the group in an IPO in two to three years’ time.

The reorganization announced today will see Intel’s PSG transition to operating as a standalone business unit at the start of 2024, with Intel EVP Sandra Rivera heading up PSG as its new CEO. Rivera is currently the general manager of Intel’s Data Center and AI Group (DCAI), which is where PSG is currently housed, so she has significant familiarity with the group. In the interim, Rivera will also continue serving in her role in DCAI until Intel can find a replacement, with the company looking for candidates both externally and internally.

The separating of PSG is the latest move from Intel to reorganize the company’s multi-faceted business in an effort to focus on its core competencies of silicon photolithography and chip design. Since bringing on current CEO Pat Gelsinger two years ago, Intel has sold or spun off several business units, including its SSD business, NUC mini-PC business, Mobileye ADAS unit, and others, all the while making significant new investments in Intel’s Foundry Services (IFS) fab division. Though, unlike some of Intel's other divestments, it's notable that the company isn't separating from PSG because the business unit is underperforming or is in a commoditized, low-margin market – rather, Intel thinks PSG could perform even better without the immense business and bureaucratic weight of Intel hanging over it.

For the standalone PSG business unit, Intel is eyeing a very similar track to how they’ve handled Mobileye, which will see Intel maintaining majority ownership while still freeing up the business unit to operate more independently. This strategy has played out very well for Mobileye, with the company enjoying continued commercial growth while successfully IPOing last year, and which Intel is hoping they can achieve again with a standalone PSG.

This business unit separation comes as Intel, by its own admission, has mismanaged PSG. While PSG has enjoyed a string of record quarters financially, Intel believes that PSG has been underserving the true high growth, high profitability markets for FPGAs, such as industrial, automotive, defense, and aerospace. Since being acquired by Intel in 2015 – and especially in the last few years as a formal part of DCAI – Intel’s PSG has been focused on datacenter solutions, to the detriment of other business segments.

Reforming PSG as a standalone business unit, in turn, is intended to improve the agility of the business unit. While PSG will remain under the ownership of Intel both now and in the future, Intel’s control over the group will be largely reduced to that of an investor. This will leave Sandra Rivera and her leadership team free to adjust the company’s product portfolio and positioning as to best serve the wider FPGA market, and not just Intel’s datacenter-centric ambitions. Meanwhile, if all goes well, over the long-haul Intel gets to pocket the profits of a successful IPO while having one less business unit to manage, allowing Intel to funnel its money and time into its own higher priority ventures such as fabs.

Keeping in mind that the PSG was an acquisition for Intel in the first place, in some respects this is an unwinding of that acquisition. In 2015 Intel paid $16.7 billion for what was then Altera, which under Intel became the PSG as we know it today. And while Intel’s eventual IPO plans for PSG have them retaining a stake in the business unit – and a majority stake, at that – this very much re-separates PSG/Altera in terms of operations.

Still, PSG/Altera has a very long history with Intel, going all the way back to 1984, and even as a standalone business unit, PSG will still be tied closely to Intel. Altera will be free to use whatever contract fab it would like, but as the company has been under Intel’s umbrella all this time, it is no surprise that many of the company’s upcoming products are slated to be built at Intel’s fabs, where PSG is expecting to leverage Intel’s advanced packaging techniques. And over the longer term, as Intel lays the groundwork to become the top contract fab in the world, it’s Intel’s hope that they’ll be able to keep PSG’s business.

At the same time, however, PSG will need to win back the business it has lost in the last several years due to its datacenter focus under Intel. The FPGA space is highly competitive, with arch rival AMD having completed its acquisition of Xilinx in 2022, and now starting to reap some of the first benefits of that acquisition and integration. Meanwhile, in the low-power FPGA space, fellow Oregon firm Lattice Semiconductor is not to be underestimated. Intel believes the FPGA market is primed for significant growth – on the order of a “high single digit” compound annual growth rate – so it’s not just a matter of winning back existing dollars from PSG’s rivals. They’ll have to win back mindshare as well, a task that may take a significant amount of time, as the FPGA market moves much more slowly and offers much longer-lived products than the CPU market.

But first, PSG must get ready to stand on its own two feet. PSG will transition to operating as a standalone business unit at the start of 2024, and it will be reported as such on Intel’s financial statements. Meanwhile, Intel is looking to bring on an initial external investor in 2024, to act as an outside resource to help prepare the group for an eventual IPO. According to Intel, PSG will need two to three years to develop the financial history and leadership stability for a successful IPO, which is why Intel is focusing on making the business unit standalone now, while eyeing an IPO a few years down the line.

Finally, for now it remains to be seen what the standalone PSG will be calling itself. As “programmable solutions group” is arguably unsuitable as a business name, expect to see PSG renamed. Whether that means resurrecting the Altera name or coming up with a new name entirely, as part of standing up on its own two feet, Intel’s FPGA business will need an identity of its own to become a business of its own.

]]>
https://www.anandtech.com/show/21083/intel-to-spinoff-programmable-solutions-group-as-standalone-business-eyeing-ipo-in-23-years Tue, 03 Oct 2023 18:45:00 EDT tag:www.anandtech.com,21083:news
Seagate Releases Game Drive PCIe 4.0 SSDs for PlayStation 5 Zhiye Liu

Western Digital's WD_Black SN850P was the first officially PlayStation 5-licensed SSD to hit the market. Seagate wants a piece of that and has hopped on the PlayStation 5 train with the new Game Drive PCIe 4.0 NVMe SSD series, officially licensed for Sony's current-generation gaming console.

Unlike Microsoft, which uses a proprietary SSD expansion card for the Xbox Series X and Xbox Series S, Sony opted to employ a standard M.2 slot for storage expansion on the PlayStation 5. The Japanese console maker's decision provides more storage options for gamers since they have many M.2 SSD offerings on the market. The M.2 slot has also paved the way for SSD manufacturers to partner with Sony to release licensed drives, which have been tested and approved for the PlayStation 5. As a result, you don't have to worry about whether the SSD's heatsink keeps the drive cool or whether the Game Drive will fit inside the PlayStation 5.

Seagate's Game Drive SSDs, like the WD_Black SN850P, stick to the PCIe 4.0 x4 interface. That's the same interface the PlayStation 5 uses, so it makes little sense for vendors to tailor faster drives toward the console. The Game Drive SSDs utilize Phison's PS5018-E18 PCIe 4.0 SSD controller, which is capable of read and write speeds of over 7 GB/s. Built on TSMC's 12nm process node, the E18 is a popular, high-end SSD controller for mainstream PCIe 4.0 SSDs. The E18 comes equipped with three 32-bit Arm Cortex-R5 CPU cores and an eight-channel design that supports NAND flash speeds of up to 1,600 MT/s and capacities of up to 8 TB. Seagate pairs the E18 controller with unspecified 3D TLC NAND in the company's Game Drive SSDs.

Seagate Game Drive Specifications
Capacity 1 TB 2 TB 4 TB
Part Number ZP1000GP304001 ZP2000GP304001 ZP4000GP304001
Seq Reads (MB/s) 7,300 7,300 7,250
Seq Writes (MB/s) 6,000 6,900 6,900
Random Reads (K IOPS) 800 1,000 1,000
Random Writes (K IOPS) 1,000 1,000 1,000
Endurance (TBW) 1,275 2,550 5,100
Active Power, Average (W) 6.3 7.8 8.6
Idle Power PS3, Average (mW) 20 25 30
Low Power L1.2 mode (mW) <5 <5 <5

Seagate offers the Game Drive SSDs in 1 TB, 2 TB, and 4 TB variants. Sony recently deployed a software update for the PlayStation 5 to support 8 TB SSDs. It's a shame that Seagate isn't offering an 8 TB variant of the Game Drive, as competing brands, including Corsair, Sabrent, PNY, Addlink, and Inland, all have 8 TB drives in their arsenals.

Seagate's Game Drive series delivers sequential read and write speeds up to 7,300 MB/s and 6,900 MB/s, respectively. Random performance scales up to 1,000,000 IOPS for both reads and writes. However, the sequential and random performance figures vary by capacity, and the 2 TB model is the only SKU to hit the maximum quoted numbers. The Game Drive series' sequential read performance is on par with the WD_Black SN850P, and its sequential write performance is somewhat faster. However, the WD_Black SN850P flaunts better random performance than the Game Drive.

Endurance doubles with capacity. The 1 TB model is rated for 1,275 TBW (terabytes written), while the 2 TB and 4 TB drives are rated for 2,550 TBW and 5,100 TBW, respectively. That's one aspect where the Game Drive is substantially better than the WD_Black SN850P. For comparison, the WD_Black SN850P's endurance ratings for the 1 TB, 2 TB, and 4 TB drives are 600 TBW, 1,200 TBW, and 2,400 TBW, respectively. Seagate's SSDs are rated as over 2X more durable than the Western Digital drives.
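For those keeping score, the endurance gap works out as follows; the TBW figures are simply the ones quoted above and in the table.

```python
# The endurance comparison, worked through. TBW ratings are taken from the
# article's table and the quoted WD_Black SN850P specifications.

game_drive_tbw = {"1 TB": 1275, "2 TB": 2550, "4 TB": 5100}
sn850p_tbw     = {"1 TB": 600,  "2 TB": 1200, "4 TB": 2400}

for capacity in game_drive_tbw:
    ratio = game_drive_tbw[capacity] / sn850p_tbw[capacity]
    print(f"{capacity}: {game_drive_tbw[capacity]} TBW vs "
          f"{sn850p_tbw[capacity]} TBW -> {ratio:.3g}x")   # ~2.12x at every capacity
```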

The Game Drive 1 TB and 2 TB models are shipping now and have $99.99 and $159.99 MSRPs, respectively. However, they're selling for $104 and $174 on Amazon. Meanwhile, the Game Drive 4 TB will set you back $449. The 1 TB and 2 TB drives are $10 cheaper than the WD_Black SN850P. The WD_Black SN850P 4 TB is up to $70 less expensive. Due to the licensing fees, Seagate's Game Drive series is significantly more costly than the non-licensed SSDs. For comparison, the WD_Black SN850X, which is commonly regarded as one of the best SSDs for the PlayStation 5, is available at $69 for the 1 TB model, $129 for the 2 TB model, and $282 for the 4 TB model.

]]>
https://www.anandtech.com/show/21082/seagate-releases-game-drive-pcie-4-0-ssds-for-playstation-5 Tue, 03 Oct 2023 16:00:00 EDT tag:www.anandtech.com,21082:news
Tenstorrent to Use Samsung’s SF4X for Quasar Low-Cost AI Chiplet Anton Shilov

Tenstorrent this week announced that it has chosen Samsung's SF4X (4 nm-class) process technology for its upcoming low-cost, low-power machine learning chiplet, codenamed Quasar. The chiplets will be made at Samsung's new fab near Taylor, Texas, when it becomes operational in 2024.

Tenstorrent's Quasar chiplet is a new addition to the company's roadmap. Based on an image provided by the company, the chiplet is set to pack at least 80 Tensix cores based on the RISC-V instruction set architecture and tailored to run artificial intelligence workloads in a variety of formats, such as BF4, BF8, INT8, FP16, and BF16. Tenstorrent's Quasar chiplets are designed to operate in groups and to be paired with the company's CPU chiplets, so they are equipped with non-blocking die-to-die interfaces.

Samsung's SF4X is a process technology designed for high-performance computing applications. It is tailored for high clocks and high voltages to ensure maximum performance.

Tenstorrent does not disclose the estimated performance of its Quasar chiplet. But assuming that it has 80 Tensix cores – the same number as the company's Wormhole processor, which is rated for 328 FP8 TOPS – we can probably expect performance in the same ballpark, if not a bit higher, considering that Quasar is made using a performance-enhanced process technology.
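Spelling that estimate out, under the assumption that Quasar's core count and per-core throughput roughly match Wormhole's:

```python
# The performance guess above, made explicit. Everything here is an estimate:
# the Quasar core count is read off a company image, and clock-for-clock
# equivalence with Wormhole is simply assumed.

wormhole_cores = 80
wormhole_fp8_tops = 328

tops_per_core = wormhole_fp8_tops / wormhole_cores
quasar_cores = 80                       # assumed, per the company's image
quasar_fp8_tops_estimate = quasar_cores * tops_per_core

print(f"Per-core throughput: {tops_per_core:.1f} FP8 TOPS")   # ~4.1
print(f"Quasar ballpark: {quasar_fp8_tops_estimate:.0f} FP8 TOPS, likely a bit more on SF4X")
```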

Tenstorrent officially positions its Quasar chiplets as a low-power, low-cost solution for machine learning, so we can only wonder whether the company will try to squeeze every last bit of performance out of them or choose a different power strategy.

"Samsung Foundry is expanding in the U.S., and we are committed to serving our customers with the best available semiconductor technology," said Marco Chisari, head of Samsung's U.S. Foundry business. "Samsung's advanced silicon manufacturing nodes will accelerate Tenstorrent's innovations in RISC-V and AI for data center and automotive solutions. We look forward to working together and serving as Tenstorrent's foundry partner."

One interesting wrinkle about Tenstorrent's relationship with Samsung is that it recently secured $100 million in financing from various companies in a round co-led by Hyundai Motor Group and the Samsung Catalyst Fund. Hyundai and Samsung need AI processors in one form or another, so it is not surprising that their funds are invested in Tenstorrent. Meanwhile, and I am speculating here, the same applies to chipmaking, and Samsung may be interested in producing chips for Tenstorrent for strategic reasons.

]]>
https://www.anandtech.com/show/21081/tenstorrent-to-use-samsungs-sf4x-for-quasar-lowcost-ai-chiplet Tue, 03 Oct 2023 10:30:00 EDT tag:www.anandtech.com,21081:news
Samsung T9 Portable SSD Review: A 20 Gbps PSSD for Prosumer Workloads Ganesh T S Samsung's portable SSD lineup has enjoyed significant market success since the launch of the T1 back in 2015. Despite the release of the Thunderbolt-capable X5 PSSD in 2018, the company has been focusing on the mainstream market. Multiple T series drives have made it to the market over the last 8 years. The product line made the transition to NVMe and USB 3.2 Gen 2 only in 2020 with the launch of the T7 Touch. Today, the company unveiled its first USB 3.2 Gen 2x2 (20 Gbps) PSSD - the Samsung T9. Read on for an in-depth investigation into the design and performance profile of the T9's 4 TB version.

]]>
https://www.anandtech.com/show/21079/samsung-t9-portable-ssd-review-a-20-gbps-pssd-for-prosumer-workloads Tue, 03 Oct 2023 10:00:00 EDT tag:www.anandtech.com,21079:news
Asus Formally Completes Acquisition of Intel's NUC Business Anton Shilov

ASUS has formally acquired Intel's Next Unit of Computing (NUC) products based on Intel's 10th to 13th Generation Core processors. Asus is set to continue building and supporting Intel's existing NUCs and will, over time, roll out its own compact NUC systems for office, entertainment, gaming, and many other applications.

"I am confident that this collaboration will enhance and accelerate our vision for the mini PC," said Jackie Hsu, Asus senior vice president and co-head of OP & AIoT business groups, at the signing ceremony. "Adding the Intel NUC product line to our portfolio will extend ASUS's AI and IoT R&D capabilities and technology solutions, especially in three key markets – industrial, commercial, and prosumer."

Asus held a formal handover ceremony in Taipei and took control of the NUC product lines that span from business applications to gaming. With the acquisition, Asus instantly commenced business processes for the NUC range, ensuring a hassle-free transition for existing customers. Under the terms of the agreement, Asus obtained licenses for both Intel's hardware designs and software. This move widens Asus's operational scope in R&D and extends its reach in logistics, tech support, and numerous application areas. 

Asus envisions broadening its NUC product line and distribution channels. The focus will remain on offering high-quality compact PCs with robust security and advanced technologies, which NUC is known for. ASUS also aims to produce eco-friendly NUC products while emphasizing impeccable service for its customer base.

"This is an exciting time for both Intel and Asus as we move forward with the next chapter in NUC's story," said Michelle Johnston Holthaus, Executive Vice President and General Manager of the Client Computing Group at Intel, who also attended the event. "Today's signing ceremony signifies more than just a business deal. It signifies ASUS' dedication to enhancing the lives of NUC customers and partners around the world. I look forward to seeing NUC thrive as part of the ASUS family."

It should be noted that Asus's Intel NUC license is not exclusive, so Intel may eventually enable other PC makers to build its NUCs. Though at this point, Asus remains the only licensee.

]]>
https://www.anandtech.com/show/21080/asus-formally-completes-acquisition-of-intels-nuc-business Mon, 02 Oct 2023 14:15:00 EDT tag:www.anandtech.com,21080:news
Micron to Ship HBM3E Memory to NVIDIA in Early 2024 Anton Shilov

Micron has reaffirmed plans to start shipments of its HBM3E memory in high volume in early 2024, while also revealing that NVIDIA is one of its primary customers for the new RAM. Meanwhile, the company stressed that its new product has been received with great interest by the industry at large, hinting that NVIDIA will likely not be the only customer to end up using Micron's HBM3E.

"The introduction of our HBM3E product offering has been met with strong customer interest and enthusiasm," said Sanjay Mehrotra, president and chief executive of Micron, at the company's earnings call.

Introducing HBM3E, which the company also calls HBM3 Gen2, ahead of rivals Samsung and SK Hynix is a big deal for Micron, which is an underdog in the HBM market with a 10% market share. The company obviously pins a lot of hope on its HBM3E, since being first will likely enable it to offer a premium product (to drive up revenue and margins) ahead of its rivals (to win market share).

Typically, memory makers tend not to reveal names of their customers, but this time around Micron emphasized that its HBM3E is a part of its customer's roadmap, and specifically mentioned NVIDIA as its ally. Meanwhile, the only HBM3E-supporting product that NVIDIA has announced so far is its Grace Hopper GH200 compute platform, which features an H100 compute GPU and a Grace CPU.

"We have been working closely with our customers throughout the development process and are becoming a closely integrated partner in their AI roadmaps," said Mehrotra. "Micron HBM3E is currently in qualification for NVIDIA compute products, which will drive HBM3E-powered AI solutions."

Micron's 24 GB HBM3E modules are based on eight stacked 24 Gbit memory dies made using the company's 1β (1-beta) fabrication process. These modules can hit data rates as high as 9.2 GT/second, enabling a peak bandwidth of 1.2 TB/s per stack, which is a 44% increase over the fastest HBM3 modules available. Meanwhile, the company is not going to stop with its 8-Hi, 24 Gbit-based HBM3E assemblies: it has announced plans to launch higher-capacity 36 GB 12-Hi HBM3E stacks in 2024, after it initiates mass production of the 8-Hi 24 GB stacks.
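Those headline figures can be cross-checked with some quick arithmetic. The 1024-bit stack interface and the 6.4 GT/s rate of today's fastest HBM3 are standard values for current HBM generations, but treat this as an illustrative reconstruction rather than Micron's own math.

```python
# Capacity and bandwidth of an 8-Hi HBM3E stack, reconstructed from first principles.

dies_per_stack = 8
die_capacity_gbit = 24
stack_capacity_gb = dies_per_stack * die_capacity_gbit / 8
print(f"Stack capacity: {stack_capacity_gb:.0f} GB")          # 24 GB

interface_width_bits = 1024      # standard HBM stack interface width
hbm3e_data_rate_gts = 9.2
hbm3_data_rate_gts = 6.4         # today's fastest HBM3

hbm3e_bw = interface_width_bits * hbm3e_data_rate_gts / 8     # GB/s per stack
hbm3_bw  = interface_width_bits * hbm3_data_rate_gts / 8
print(f"HBM3E bandwidth: {hbm3e_bw:.1f} GB/s (~1.2 TB/s)")    # 1177.6 GB/s
print(f"Uplift over 6.4 GT/s HBM3: {hbm3e_bw / hbm3_bw - 1:.0%}")  # ~44%
```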

"We expect to begin the production ramp of HBM3E in early calendar 2024 and to achieve meaningful revenues in fiscal 2024," added chief executive of Micron.

]]>
https://www.anandtech.com/show/21078/micron-to-ship-hbm3e-to-nvidia-in-early-2024 Thu, 28 Sep 2023 20:00:00 EDT tag:www.anandtech.com,21078:news
Micron Samples 128 GB Modules Based on 32 Gb DDR5 ICs Anton Shilov

Micron is sampling 128 GB DDR5 memory modules, the company said at its earnings call this week. The modules are based on the company's latest single die, non-stacked 32 Gb DDR5 memory devices, which the company announced earlier this summer and which will eventually open doors for 1 TB memory modules for servers.

"We expanded our high-capacity D5 DRAM module portfolio with a monolithic die-based 128 GB module, and we have started shipping samples to customers to help support their AI application needs," said Sanjay Mehrotra, president and chief executive of Micron. "We expect revenue from this product in Q2 of calendar 2024."

Micron's 32 Gb DDR5 dies are made on the company's 1β (1-beta) manufacturing process, which is the last production node that relies solely on multi-patterning with deep ultraviolet (DUV) lithography and does not use extreme ultraviolet (EUV) tools. That is about all we know about Micron's 32 Gb DDR5 ICs at this point: the company does not disclose their maximum speed bin, though we can expect a drop in power consumption compared to two 16 Gb DDR5 ICs operating at the same voltage and data transfer rate.

Micron's new 32 Gb memory chips pave the way for standard 32 GB modules for personal computers built with just eight individual memory chips, as well as server-oriented 128 GB modules based on 32 such ICs. Moreover, these chips make memory modules with a 1 TB capacity feasible, something deemed unattainable today. Such 1 TB modules might seem excessive for now, but they benefit fields like artificial intelligence, Big Data, and server databases. They can enable servers to support up to 12 TB of DDR5 memory per socket (in the case of a 12-channel memory subsystem).
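The capacity math is straightforward; the package counts below are the common module configurations implied by the article rather than an official Micron bill of materials.

```python
# Module-capacity math for 32 Gbit DDR5 dies, made explicit.

die_capacity_gb = 32 / 8        # a 32 Gbit die holds 4 GB

client_module = 8 * die_capacity_gb          # 8 single-die packages
server_module = 32 * die_capacity_gb         # 32 single-die packages
print(f"Client module: {client_module:.0f} GB")   # 32 GB
print(f"Server module: {server_module:.0f} GB")   # 128 GB

# A 1 TB module would need 256 of these dies (e.g. in stacked packages), and a
# 12-channel server CPU populated with one such module per channel reaches:
dies_for_1tb = 1024 / die_capacity_gb
per_socket_tb = 12 * 1          # 12 channels x 1 TB module
print(f"Dies per 1 TB module: {dies_for_1tb:.0f}")   # 256
print(f"Max DDR5 per socket:  {per_socket_tb} TB")   # 12 TB
```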

Speaking of DDR5 memory in general, it is noteworthy that the company expects that its bit production of DDR5 will exceed that of DDR4 in early 2024, placing it a bit ahead of the industry.

"Micron also has a strong position in the industry transition to D5," said Mehrotra. "We expect Micron D5 volume to cross over D4 in early calendar 2024, ahead of the industry."

]]>
https://www.anandtech.com/show/21077/micron-samples-128-gb-modules-based-on-32-gb-ddr5-ics Thu, 28 Sep 2023 10:00:00 EDT tag:www.anandtech.com,21077:news
Intel Meteor Lake SoC is NOT Coming to Desktops: Well, Not Technically Gavin Bonshor

Over the last couple of days, numerous reports have revealed that Intel's recently announced Meteor Lake SoC, primarily a mobile platform, would be coming to desktop PCs. Intel has further clarified that while their Meteor Lake processors will be featured in desktop systems next year, they won't power traditional socketed desktop PCs. Instead, these CPUs, primarily crafted for laptops, will be packaged in ball grid array (BGA) formats, making them suitable for compact desktops and all-in-one (AIO) devices.

Intel's statement, as reported by ComputerBase, emphasizes: "Meteor Lake is a power efficient architecture that will power innovative mobile and desktop designs, including desktop form factors such as All-in-One (AIO). We will have more product details to share in the future."

A senior Intel official recently mentioned that Meteor Lake processors are slated for desktop release in 2024. However, they won't be available in Intel's LGA1851 form factor, which caters to gaming rigs, client workstations, and conventional desktop systems. The practice of integrating laptop CPUs into compact PCs, such as NUCs and all-in-one PCs, isn't a novel one. Manufacturers have been doing this for years, and the intriguing aspect will be observing the performance and efficiency metrics of these high-end Meteor Lake laptop CPUs, especially when juxtaposed against the existing Raptor Lake processors designed for both desktops and laptops.

The rationale behind Intel's decision to exclude Meteor Lake processors from socketed desktops remains ambiguous. The CPU employs a multi-tile structure, with its compute tile being developed on the Intel 4 process technology. This technology marks Intel's inaugural use of extreme ultraviolet lithography (EUV), while the graphics tile and SoC leverage TSMC's fabrication methods. Both production techniques are poised to deliver commendable performance and efficiency, but Meteor Lake is not designed as a pure desktop product.

Current indications suggest that the Arrow Lake-S series will be aimed at LGA1851 motherboards, but that is anticipated for the latter half of 2024. While Q3/Q4 of 2024 is still a while away, Intel's motherboard partners, such as GIGABYTE and MSI, have been readying refreshed Z790 motherboards with features such as Wi-Fi 7, set to arrive alongside Intel's impending Raptor Lake Refresh platform, which is due sometime before the end of the year.

Source: ComputerBase

]]>
https://www.anandtech.com/show/21076/intel-meteor-lake-soc-is-not-coming-to-desktops-well-not-technically Thu, 28 Sep 2023 09:00:00 EDT tag:www.anandtech.com,21076:news
eMMC Destined to Live a Bit Longer: KIOXIA Releases New Generation of eMMC Modules Anton Shilov

While the tech industry as a whole is well in the middle of transitioning to UFS and NVMe storage for portable devices, it would seem that the sun won't be setting on eMMC quite yet. This week Kioxia has introduced a new generation of eMMC 5.1 modules, based around a newer generation of their 3D NAND and aimed at those final, low-budget devices that are still using the older storage technology.

Kioxia's new storage modules are compliant with the eMMC 5.1 standard, offering sequential read performance that tops out at 250 MB/s – the best that this technology can provide. But the internal upgrade to a newer generation of Kioxia's 3D NAND can still provide some benefits over older modules, including 2.5x higher sequential and random write performance as well as 2.7x higher random read performance. Also, the new eMMC modules are spec'd to be more durable, with up to 3.3x the TBW rating of its predecessors.

"e-MMC remains a popular embedded memory solution for a wide range of applications," said said Maitry Dholakia, vice president, Memory Business Unit, for Kioxia America. “Kioxia remains steadfast in its commitment to delivering the latest in flash technology for these applications. Our new generation brings new performance features which address end user demands – and create a better user experience."

Given that the remaining devices using eMMC storage fall into the simplistic and inexpensive category, the new lineup of Kioxia's eMMC modules only includes packages offering 64GB and 128GB of storage. Which, in the big picture, is a small amount of storage – but it's suitable for budget devices, as well as for electronics with limited storage needs, such as drones, digital signage, smart speakers, and TVs.

But the main idea behind Kioxia's new eMMC modules is perhaps not to improve performance and user experience, but rather to pair the interface with newer, cheaper 3D NAND. This lets Kioxia address inexpensive applications more cost-efficiently, which ensures that the company can keep serving this market going forward.

Kioxia expects to start mass production of its new 64 GB and 128 GB eMMC 5.1 storage modules in 2024. The company is sampling the new devices with its partners at present.

]]>
https://www.anandtech.com/show/21074/emmc-to-live-a-bit-longer-kioxia-releases-new-emmc-products Wed, 27 Sep 2023 20:00:00 EDT tag:www.anandtech.com,21074:news
Crucial Unveils X9 Portable SSD: QLC for the Cost-Conscious Consumer Ganesh T S

Crucial entered the portable SSD market relatively late, with their X6 and X8 PSSDs being the mainstay for many years. Based on QLC NAND, they were marketed for read-intensive use-cases, though the generous amount of SLC cache ended up delivering good write performance too for mainstream consumers - particularly in the X8. Recently, the company also started focusing on the prosumer / power users market with the launch of the X9 Pro and X10 Pro. Based on Micron's 176L 3D TLC NAND, these drives came with guaranteed write speeds.

Earlier this week, the company launched a successor to the Crucial X8 in the same form-factor as that of the recently launched X9 Pro and X10 Pro. The new USB 3.2 Gen 2 Crucial X9 PSSD takes on the same 65 x 50mm dimensions, but opts for an ABS plastic enclosure instead of the metal one used in the Pro units. Similar to the X8 that is being replaced, the X9 also doesn't advertise write speeds and there is no hardware encryption available. The lanyard hole is retained from the Pro design, but the LED indicator has been dropped. While the X9 is drop-proof up to 2m, the water- and dust-resistance features are not included.

The Crucial X9 PSSD utilizes Micron's 176L 3D QLC NAND and retains the Phison U17 native flash controller. The 1TB, 2TB, and 4TB capacity points are being introduced at $80, $120, and $250, though Amazon currently lists them at $90, $140, and $280 respectively. It is no secret that there is a glut in the flash market currently, resulting in very attractive (P)SSD price points for consumers. However, it is also well-known that it is a cyclic trend. Industry observers expect prices to go up sometime next year, and based on inventory levels of various models with different retailers, we might see strange pricing swings.

The Crucial X9 PSSD is a much-needed upgrade to the aging X8, and we are glad that Crucial has decided to release a new model instead of silently updating the NAND in the older version. The new form-factor and design for this product class is also a welcome change. Crucial's expanded product lineup ensures that it is competitive against established players like Samsung and Western Digital across all high-volume PSSD market segments. The only missing part is a Thunderbolt / USB4 model, and we hope Crucial will address that in the near future.

]]>
https://www.anandtech.com/show/21072/crucial-unveils-x9-portable-ssd-qlc-for-the-costconscious-consumer Wed, 27 Sep 2023 08:00:00 EDT tag:www.anandtech.com,21072:news
Corsair's Dominator Titanium Memory Now Available, Unveils Plans for Beyond 8000 MT/s Anton Shilov

Corsair has started sales of its Dominator Titanium memory modules, which were formally introduced this May. The new modules bring together a luxurious look, a customizable design, and extreme data transfer rates of up to 8000 MT/s. Speaking of performance, the company implied that it intends to introduce Dominator Titanium kits with speed bins beyond DDR5-8000 when the right platform arrives.

Corsair's Dominator Titanium family is based around 16 GB, 24 GB, 32 GB, and 48 GB memory modules that come in kits ranging from 32 GB (2 x 16 GB) up to 192 GB (4 x 48 GB). As for performance, the lineup listed on the company's website includes DDR5-6000 CL30, DDR5-6400 CL32, DDR5-6600 CL32, DDR5-7000 CL34, DDR5-7000 CL36, DDR5-7200 CL34, and DDR5-7200 CL36 kits with voltages of 1.40 V – 1.45 V.

Although Corsair claims that Dominator Titanium modules with data transfer rates beyond 8000 MT/s are coming, it is necessary to note that they will require support from next-generation platforms from AMD and Intel. For now, the company only offers 500 First Edition Dominator Titanium kits rated for DDR5-8266 operation for its loyal fans.

To address demand from different types of users, Corsair offers the Dominator Titanium with XMP 3.0 SPD settings for Intel's 12th and 13th Generation Core CPUs, in black and white heat spreader versions, as well as with AMD EXPO SPD profiles for AMD's Ryzen processors, with a grey finish on the heat spreaders.

In terms of heat spreader design, Corsair has remained true to its signature aesthetics. The modules are equipped with 11 customizable Capellix RGB LEDs, offering users a personalized touch that can be easily adjusted using Corsair's proprietary software. For enthusiasts who lean towards a more traditional look, Corsair provides an alternative design with fins, reminiscent of its classic memory modules.

Speaking of heat spreaders, it is worth noting that despite the name of the modules, they do not come with titanium heat spreaders and continue to use aluminum. That is a good thing, since titanium has a rather low thermal conductivity of 11.4 W/mK and would therefore trap heat near the memory chips rather than spread it away from them. Traditionally, Corsair's Dominator memory modules use cherry-picked DRAM chips and the company's proprietary printed circuit boards, enhanced with internal cooling planes and external thermal pads to improve cooling.
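A toy one-dimensional conduction model shows why the material choice matters. The strip geometry and the 5 W heat flow are made-up illustration numbers; the titanium conductivity is the figure cited above and the aluminum value is the usual textbook one.

```python
# Why material choice matters for a heat spreader: a toy 1D conduction comparison.
# Geometry and heat flow are illustrative assumptions, not Corsair specifications.

k_aluminum = 237.0   # W/(m*K), textbook value for pure aluminum
k_titanium = 11.4    # W/(m*K), as cited above

length = 0.05                  # m, distance the heat travels along the spreader
cross_section = 0.003 * 0.04   # m^2, a 3 mm x 40 mm strip
heat_flow = 5.0                # W, rough per-module dissipation

for name, k in (("aluminum", k_aluminum), ("titanium", k_titanium)):
    resistance = length / (k * cross_section)   # thermal resistance, K/W
    delta_t = heat_flow * resistance            # temperature rise along the strip, K
    print(f"{name:9s}: {resistance:6.1f} K/W -> {delta_t:6.1f} K rise")

# Aluminum spreads the heat with under ten kelvin of gradient; titanium would
# bottle it up near the DRAM packages, which is why the "Titanium" name is cosmetic.
```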

Corsair's Dominator Titanium memory products are now available both directly from the company and from its resellers. The cheapest Dominator Titanium DDR5-6000 CL30 32 GB kit (2 x 16 GB) costs $175, whereas faster and higher-capacity kits are priced higher.

]]>
https://www.anandtech.com/show/21071/corsairs-dominator-titanium-memory-now-available-unveils-plans-for-beyond-8000-mts Tue, 26 Sep 2023 19:00:00 EDT tag:www.anandtech.com,21071:news
GlobalFoundries Applies for CHIPS Money to Expand U.S. Fabs Anton Shilov

Update 9/30: Correcting the number of companies interested in receiving support from the CHIPS fund.

GlobalFoundries has applied for financial support from the U.S. CHIPS and Science Act to expand its American manufacturing sites, the company said this week. The company intends to get federal grants and investment tax credits to upgrade facilities used to build chips for various applications, including automotive, aerospace, defense, and many other industries.

GlobalFoundries's initiative is in line with the provisions of the U.S. CHIPS and Science Act, which aims to strengthen the nation's semiconductor production capabilities. The act has set aside a substantial amount, $52.7 billion, to support semiconductor research, production, and workforce development. Additionally, it offers a 25% investment tax credit for the construction of chip plants, estimated to be worth around $24 billion, as Reuters notes.

This expansion is beneficial for the company and essential for enhancing the U.S.'s economic stability, supply chain robustness, and defense mechanisms, the company said.

"As the leading manufacturer of essential semiconductors for the U.S. government, and a vital supplier to the automotive, aerospace and defense, IoT and other markets, GF has submitted our applications to the CHIPS Program Office to participate in the federal grants and investment tax credits enabled by the U.S. CHIPS and Science Act," said Steven Grasso, senior director of global government affairs at GF. "This federal support is critical for GF to continue growing its U.S. manufacturing footprint, strengthening U.S economic security, supply chain resiliency, and national defense."

GlobalFoundries is not alone in seeking money from the CHIPS fund. The U.S. Department of Commerce recently said that, as of August, over 500 companies from 42 states had expressed interest in obtaining these semiconductor subsidies. The subsidies aim to foster innovation and ensure the U.S. remains at the forefront of semiconductor technology.

Sources: GlobalFoundries, Reuters

]]>
https://www.anandtech.com/show/21070/globalfoundries-applies-for-chips-money-to-expand-us-fabs Tue, 26 Sep 2023 12:00:00 EDT tag:www.anandtech.com,21070:news