Intel AI Assist: A Better Guess At Auto Overclocking

Below, we put Intel's latest AI Assist feature, accessed through the Extreme Tuning Utility (XTU) software, to the test to see what it believes is the best overclock for our system and how it compares to the default settings. After applying Intel's AI Assist to our Core i9-14900K, it concluded that the following settings were suitable for our test setup:

Intel's AI Assist believes our system and Core i9-14900K are capable of 6.1 GHz on two of the P-cores and 6.0 GHz on the remaining six P-cores, which, based on some preliminary testing with XTU, is very ambitious, to say the least. When running Cinebench R23 MT, the system was as stable as a kite in a hurricane, which is to say not very stable at all. We did manage to get a couple of Cinebench R23 MT runs in, but with thermal throttling kicking in almost instantaneously, we saw a regression in performance with a score of 39445; temperatures went straight into the red, and the system dialed back the core frequencies and CPU VCore.
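
To put those numbers in perspective, below is a minimal sketch of the performance-per-watt arithmetic that the results above (and the comments below) hinge on. Only the 39445 Cinebench R23 MT score comes from our AI Assist run; the stock score and the power figures are hypothetical placeholders, not measured values.

```python
# Minimal perf-per-watt sketch. Only ai_assist_score (39445) comes from
# the AI Assist run described above; every other figure is a hypothetical
# placeholder to be replaced with measured values.

def perf_per_watt(score: float, package_watts: float) -> float:
    """Cinebench points per watt of package power."""
    return score / package_watts

ai_assist_score = 39445      # measured: AI Assist run, thermally throttling
stock_score = 40500          # placeholder: stock out-of-the-box score
ai_assist_watts = 420        # placeholder: package power under load
stock_watts = 330            # placeholder: package power under load

delta_pct = (ai_assist_score / stock_score - 1) * 100
print(f"Score vs. stock: {delta_pct:+.1f}%")
print(f"Stock:     {perf_per_watt(stock_score, stock_watts):.1f} pts/W")
print(f"AI Assist: {perf_per_watt(ai_assist_score, ai_assist_watts):.1f} pts/W")
```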

The feature is a good idea in principle, but once enabled, it voids the CPU's warranty, even though it's an Intel-marketed feature. The other problem is that the additional heat and power draw make the applied settings unstable under intense workloads. While this is still an early feature, we would have expected more stability from the applied settings than we saw in our testing.
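
For readers who would rather rein in the heat and power than chase clocks, one commenter below notes that Raptor Lake can be power-constrained to around 125W with only single-digit performance loss. As a hedged illustration (not something we tested here), on Linux the package power limit can be lowered through the intel_rapl powercap interface; the sysfs path below is the usual PL1 constraint, though domain numbering can vary by system.

```python
# Hedged sketch: cap the long-duration package power limit (PL1) to 125 W
# on Linux via the intel_rapl powercap interface. Requires root. The domain
# path can vary; list /sys/class/powercap/ to find the intel-rapl:* domains.

RAPL_PL1 = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_pl1(watts: int) -> None:
    """Write the long-duration package power limit, in microwatts."""
    with open(RAPL_PL1, "w") as f:
        f.write(str(watts * 1_000_000))

if __name__ == "__main__":
    set_pl1(125)  # the ~125 W cap discussed in the comments below
```

On Windows, the equivalent adjustment is typically made through the motherboard firmware's PL1/PL2 settings or through XTU itself.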


54 Comments


  • DabuXian - Tuesday, October 17, 2023 - link

    so basically a mere 6% better Cinebench MT score at the cost of almost 100 extra watts. I dunno in what universe anyone would want this instead of a 7950X.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    At the platform level it is over a 200W difference. Impressive.
    And I agree, nobody in their right mind should get Intel over AMD, unless they have a very specific workload in which that 6% makes a difference worth hundreds or thousands of dollars in electricity per year.
  • schujj07 - Tuesday, October 17, 2023 - link

    If you have a workload like that, then you run Epyc or Threadripper, as the task is probably VERY threaded.
  • lemurbutton - Tuesday, October 17, 2023 - link

    Who cares about Cinebench MT? It's a benchmark for niche software in a niche market.
  • powerarmour - Wednesday, October 18, 2023 - link

    Wouldn't buy the 7950X either; not interested in any CPU that draws >200W unless I'm building an HEDT workstation.
  • shabby - Tuesday, October 17, 2023 - link

    Lol @ the power usage, this will make a nice heater this winter.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    I find it amazing. It takes more than 200W MORE to beat the 7950.
    The difference in efficiency is unbelievable.
    Buying Intel today still makes no sense unless that extra 5-10% in some specific benchmark really makes a huge difference. Otherwise it'll cost you dearly in electricity.
  • bug77 - Thursday, October 19, 2023 - link

    While Anand has a policy of testing things out of the box, which is fine, it is well known that ADL and RPL can be power-constrained to something like 125W max while losing performance only in the single-digit range.
    It would be really useful if we had a follow-up article looking into that.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    So, 6% faster than the previous gen, and a bit (10%?) faster than AMD's 7950.
    Consuming over 200W *more* than the Ryzen 7950.
    I'd say Intel's power efficiency is still almost half that of the Ryzen. It's amazing how far behind they are.
  • colinstu - Tuesday, October 17, 2023 - link

    This power consumption / heat output is insane… this is putting their 90nm NetBurst Prescott / Pentium D Smithfield days to shame. Remember when Apple left the IBM/Motorola alliance? Power architecture power consumption was going through the roof, and Intel had JUST pivoted back to the PIII/Pentium M-based Core arch. No wonder Apple dumped Intel; they called what they were seeing really early on. Arm for Windows/Linux desktops needs to get more serious; Apple's desktop Arm is proving nearly as powerful while using a fraction of the power draw. Windows is ready, and can even run non-Arm code too.
