Intel Thread Director

One of the biggest criticisms that I’ve levelled at Intel since it started talking about its hybrid processor architecture designs has been its ability to manage threads in an intelligent way. When you have two types of core with different performance and efficiency points, either the processor or the operating system has to be cognizant of what goes where to get the best result for the end-user. This requires additional analysis of what is going on with each thread, especially new work that has never been seen before.

To date, most desktop operating systems have operated on the assumption that all cores, and the performance of everything in the system, are equal. This changed slightly with simultaneous multithreading (SMT, or in Intel speak, HyperThreading), because now the system had double the threads, and these extra threads offered anywhere from zero to an extra 100% performance depending on the workload. Schedulers were adjusted to identify primary and secondary threads on a core and to schedule new work on separate cores first. In mobile, the concept of an Energy Aware Scheduler (EAS) would look at the workload characteristics of a thread and, based on battery life and settings, try to schedule it where it made sense, particularly if it was a latency-sensitive workload.

Mobile processors with Arm architecture designs have been tackling this topic for over a decade. Modern mobile processors now have three types of core inside – a super high performance core, regular high performance cores, and efficiency cores, normally in a 1+3+4 or 2+4+4 configuration. Each set of cores has its own optimal window for performance and power, and so it relies on the scheduler to absorb as much information as possible to determine the best way to do things.

Such an arrangement is rare in the desktop space - but now with Alder Lake, Intel has an SoC with SMT-capable performance cores and non-SMT efficiency cores. Scheduling across these gets a bit more complex, and so the company has built a technology called Thread Director.

That’s Intel Thread Director. Not Intel Threat Detector, which is what I keep calling it all day, or Intel Threadripper, which I have also heard. Intel will use the acronym ITD or ITDT (Intel Thread Director Technology) in its marketing. Not to be confused with TDT, Intel’s Threat Detection Technology, of course.

Intel Thread Director Technology

This new technology is a combined hardware/software solution that Intel has engineered with Microsoft focused on Windows 11. It all boils down to having the right functionality to help the operating system make decisions about where to put threads that require low latency vs threads that require high efficiency but are not time critical.

First you need a software scheduler that knows what it is doing. Intel stated that it has worked extensively with Microsoft to get what it wants into Windows 11, and that Microsoft has gone above and beyond what Intel needed. This fundamental change is one reason why Windows 11 exists.

So it’s easy enough (now) to tell an operating system that different types of cores exist. Each one can have a respective performance and efficiency rating, and the operating system can migrate threads around as required. However, the difference between Windows 10 and Windows 11 is how much information is available to the scheduler about what is running.

In previous versions of Windows, the scheduler had to analyse programs on its own, inferring the performance requirements of a thread with no real underlying understanding of what was happening. Windows 11 leverages new technology to understand different performance modes and instruction sets, and it also gets hints about which threads rate higher and which ones are worth demoting if a higher priority thread needs the performance.

Intel classifies the performance levels on Alder Lake in the following order:

  1. One thread per core on P-cores
  2. One thread per E-core
  3. SMT threads on P-cores

That means the system will load up one thread per P-core and all the E-cores before moving to the hyperthreads on the P-cores.
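That fill order can be expressed as a short sketch. This is a toy model under my own assumptions - the thread/core names and the function shapes are invented for illustration, and the real scheduler weighs far more state than a simple priority list:

```python
# Hypothetical model of the Alder Lake fill order described above:
# one thread per P-core first, then the E-cores, then the P-cores'
# SMT sibling threads. Names are illustrative, not Intel's.

def placement_order(p_cores: int, e_cores: int) -> list:
    """Order in which hardware threads are filled."""
    order = ["P%d/T0" % i for i in range(p_cores)]   # 1. one thread per P-core
    order += ["E%d" % i for i in range(e_cores)]     # 2. the E-cores
    order += ["P%d/T1" % i for i in range(p_cores)]  # 3. SMT siblings last
    return order

def schedule(n_threads: int, p_cores: int = 8, e_cores: int = 8) -> list:
    """Assign n_threads ready threads to hardware threads in priority order."""
    return placement_order(p_cores, e_cores)[:n_threads]
```

With the default 8P+8E configuration, the ninth thread lands on an E-core, and only the seventeenth touches a P-core's second SMT thread.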

Intel’s Thread Director embeds a microcontroller inside the processor to monitor what each thread is doing and what it needs out of its performance metrics. It looks at the ratio of loads, stores, and branches, average memory access times, access patterns, and the types of instructions in use. It then provides hints back to the Windows 11 scheduler about what the thread is doing and how important it is, and it is up to the OS scheduler to combine that with other information about the system to decide where that thread should go. Ultimately the OS is both topology aware and now workload aware to a much higher degree.
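The hint flow can be sketched in two halves: hardware classifies a thread from its metrics, and the OS combines the class with its own view of the system. The class numbers, thresholds, and metric names below are invented for illustration - they are not Intel's actual EHFI encoding:

```python
# Toy model of the Thread Director hint flow: hardware reports a coarse
# per-thread performance class; the OS treats it as advisory input.
# All thresholds and class values here are assumptions, not Intel's.

from dataclasses import dataclass

@dataclass
class ThreadMetrics:
    vector_ratio: float     # fraction of AVX/VNNI-style instructions
    mem_stall_ratio: float  # fraction of cycles stalled on memory

def classify(m: ThreadMetrics) -> int:
    """Hardware side: higher class means the thread gains more from a P-core."""
    if m.vector_ratio > 0.2:
        return 3   # heavy vector code strongly prefers a P-core
    if m.mem_stall_ratio > 0.6:
        return 0   # memory-bound code gains little from a P-core
    return 1       # default scalar code

def pick_core(perf_class: int, p_core_free: bool) -> str:
    """OS side: hints are advisory; topology and availability still rule."""
    if p_core_free and perf_class >= 1:
        return "P-core"
    return "E-core"
```

The key point the sketch captures is the division of labour: the hardware only suggests, and the scheduler decides.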

The microcontroller inside Thread Director monitors which instructions are power hungry, such as AVX-VNNI (for machine learning) or other AVX2 instructions that often draw high power, and flags those threads to the OS for prioritization. It also looks at the other threads in the system, and if a thread needs to be demoted, either because there are not enough free P-cores or for power/thermal reasons, it gives hints to the OS as to which thread is best to move. Intel states that it can profile a thread in as little as 30 microseconds, whereas a traditional OS scheduler may take hundreds of milliseconds to reach the same conclusion (or the wrong one).
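The demotion hint amounts to choosing the running thread that loses the least from being moved to an E-core. Assuming per-thread performance classes like those above, a minimal sketch of that choice (the names are hypothetical) is:

```python
# Illustrative demotion pick: when a P-core must be freed, suggest the
# thread with the lowest performance class, i.e. the one expected to
# suffer least on an E-core. A sketch, not Intel's actual policy.

def demotion_candidate(running: dict) -> str:
    """running maps thread name -> performance class (higher = prefers P-core)."""
    return min(running, key=running.get)
```

A memory-bound background task (class 0) would be moved before a vector-heavy worker (class 3).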

On top of this, Intel says that Thread Director can also optimize for frequency. If a thread is limited in a way other than frequency, it can detect this and reduce frequency, voltage, and power. This will help the mobile processors, and when asked Intel stated that it can change frequency now in microseconds rather than milliseconds.

We asked Intel where an initial thread will go before this profiling kicks in. I was told that a thread will initially be scheduled on a P-core unless the P-cores are full, in which case it goes to an E-core until the scheduler determines what the thread needs, at which point the OS can be guided to upgrade it. In power limited scenarios, such as running on battery, a thread may start on an E-core anyway even if the P-cores are free.
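The initial-placement policy as described can be condensed into a few lines. This is a toy rendering of what Intel told us, with invented names and a deliberately simplified battery rule:

```python
# Sketch of the initial-placement policy described above: new threads start
# on a P-core if one is free, spill to E-cores otherwise, and on battery
# start on an E-core regardless. Illustrative only.

def initial_core(p_free: int, e_free: int, on_battery: bool) -> str:
    if on_battery and e_free > 0:
        return "E-core"   # power-limited: prefer efficiency up front
    if p_free > 0:
        return "P-core"   # default: start fast, demote later if warranted
    if e_free > 0:
        return "E-core"   # P-cores full: spill to E-cores
    return "queued"       # nothing free: wait for the scheduler
```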

For users looking for more information about Thread Director on a technical level, I suggest reading this document and going to page 185, which covers EHFI – the Enhanced Hardware Feedback Interface. It outlines the different classes of performance as part of the hardware side of Thread Director.

It’s important to understand that for the desktop processor with 8 P-cores and 8 E-cores, a 16-thread workload will be scheduled with 8 threads across the P-cores and 8 threads across the E-cores. This affords more performance than engaging the hyperthreads on the P-cores, and so software that compares thread-to-thread loading (such as the latest 3DMark CPU Profile test) may be testing something different compared to processors without E-cores.
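A quick back-of-the-envelope calculation shows why the P+E fill order wins. The per-thread throughput figures below are invented placeholders, not measurements - the point is only that a typical E-core contributes more than a typical SMT sibling thread:

```python
# Illustrative throughput comparison for a 16-thread workload on 8P+8E,
# using assumed relative numbers: a P-core thread = 1.0, an E-core = 0.6,
# and a P-core's second SMT thread adds 0.3. None of these are measured.

def total_throughput(p_threads: int, e_threads: int, smt_threads: int,
                     p: float = 1.0, e: float = 0.6, smt: float = 0.3) -> float:
    return p_threads * p + e_threads * e + smt_threads * smt

alder_lake_policy = total_throughput(8, 8, 0)  # 8 P-cores + 8 E-cores
smt_first_policy = total_throughput(8, 0, 8)   # 8 P-cores + 8 SMT siblings
```

Under these assumptions the P+E split yields 12.8 units against 10.4 for SMT-first, which is the intuition behind the fill order.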

On the question of Linux, Intel only went as far as to say that Windows 11 was the priority, and that it is working on upstreaming a variety of features into the Linux kernel, but that will take time. An Intel spokesperson said more details would come closer to product launch; however, these things could take a while, perhaps months or years, to reach feature parity with Windows 11.

One of the biggest questions users will ask is about the difference in performance or battery life between Windows 10 and Windows 11. Windows 10 does not get Thread Director, and instead relies on a more basic version of Intel’s Hardware Guided Scheduling (HGS). In our conversations with Intel, the company was cagey about putting exact performance numbers on the difference; however, based on our understanding of the technology, we should expect to see better frequency efficiency in Windows 11. Intel stated that even though the new technology in Windows 11 means threads will move between cores more often than in Windows 10, potentially adding latency, in its testing the effect was not humanly perceivable. Ultimately, because the Windows 11 configuration can also optimize for power and efficiency, especially on mobile, Intel puts the win on Windows 11.

The only question is if Windows 11 will launch in time for Alder Lake.

  • mode_13h - Thursday, August 19, 2021 - link

    Indeed. But, remember that it's a Skylake from 2015, fabbed on Intel's original 14 nm node, and it's an integer workload they measured. If they measured vector or FPU workloads, the results would probably be rather different.
  • Spunjji - Monday, August 23, 2021 - link

    Indeed. Based on how Intel usually do their marketing, I'm not expecting anything revolutionary from those cores. Maybe I'll be surprised, but I'm expecting mild disappointment.
  • mode_13h - Tuesday, August 24, 2021 - link

    Having already bought into the "Atom" series at Apollo Lake, for a little always-on media streaming server, I'm already thrilled! Tremont was already a bigger step up than I expected.
  • Spunjji - Tuesday, August 24, 2021 - link

    Fair - I've just been a bit burned! Last time I used an Atom device was Bay Trail, and at the time there was a big noise about its performance being much better than previous Atom processors. The actual experience was not persuasive!
  • Silver5urfer - Thursday, August 19, 2021 - link

    Too many changes in the CPU x86 topology. They are making this CPU a heavily dependent one of the OS side with such insane changes to the Scheduler system, like P, E and then the Hyperthreading of P cores. On top of all this the DRAM system must be good else all that 19% big IPC boost will be wasted just like on Rocket Lake. Finally Windows 11 only ? dafaq.

    I have my doubts on this Intel IDT and the whole ST performance along with gaming SMT/HT performance. Until the CPU is out it's hard to predict the things. Also funny they are simply adding the older Skylake cores to the processor in a small format without HT, while claiming this ultra hybrid nonsense, seems like mostly tuned for a mobile processor than a Desktop system which is why there's no trash cores on the HEDT Sapphire Rapids Xeon. And which Enterprise wants to shift to this new nonsense of x86 landscape. On top we have Zen 4 peaking at 96Core 192T hyperbeast Genoa which also packs AVX512.

    I'm waiting Intel, also AMD for their 3D V Cache Zen 3 refresh. Plus whoever owns any latest processors from Intel or AMD should avoid this Hardware like plague, it's too much of a beta product and OS requirements, DRAM pricing problems will be there for Mobos and RAM kits and PCIe5.0 is just damn new and has no usage at all right now It all feels just like Zen when AMD came with NUMA system and refined it well by the Zen 3. I doubt AMD will have any issue with this design. But one small good news is some competition ?
  • Silver5urfer - Thursday, August 19, 2021 - link

    Also scalable lol. This design is peaked out at 8C16T and 8 small cores while Sapphire Rapids is at 56Cores 112T. AMD's Zen 4 ? 96C/192T lmao that battle is going to be good. Intel is really done with x86 is what I get from this, copying everything from AMD and ARM. Memory Interconnects, Big Little nonsense. Just release the CPU and let it rip Intel, we want to see how it works against 10900Ks and 5900Xs.
  • mode_13h - Friday, August 20, 2021 - link

    > Also funny they are simply adding the older Skylake cores
    > to the processor in a small format without HT

    They're not Skylake cores, of course. They're smaller & more power-efficient, but also a different uArch. 3+3-wide decode, instead of 4-wide, and no proper uop cache. Plus, the whole thing about 17 dispatch ports.

    If you back-ported these to 14 nm, they would lose their advantages over Skylake. If they forward-ported Skylake to "Intel 7", it would probably still be bigger and more power-hungry. So, these are different, for good reasons.
  • vyor - Friday, August 20, 2021 - link

    I believe they have a uOP cache though?
  • mode_13h - Saturday, August 21, 2021 - link

    No, Tremont and Gracemont don't have a uop cache. And if Goldmont didn't, then it's probably safe to say that none of the "Atom" cores did.

    The article does mention that some aspects of the instruction cache make it sound a bit like a uop cache.
  • Silver5urfer - Saturday, August 21, 2021 - link

    I see the only reason - Intel was forced to shrink the SKL and shove them in this designs because their node Fabs are busted. Their Rocket Lake is a giant power hog. Insane power draw. Intel really shined until 10900K, super fast cores, ultra strong IMC that can handle even 5000MHz and any DRAM. Solid SMT. High power but it's a worth trade off.

    With RKL, backport Intel lost - IMC leadership, SMT performance, ST performance (due to Memory latency) AND efficiency PLUS Overclockability. That was the time I saw Intel's armor cracking. HEDT was dead so was Xeon but the only reason Mainstream LGA1200 stood was super strong ring bus even on RKL.

    Now FF to 10SF or Intel 7 whatever they call it. No more high speed IMC now even double whammy due to the dual ring system and the ring is shared by all the cores connected, I doubt these SKL cores can manage the highspeed over 3800MHz DDR4 RAM, which is why they are mentioning Dynamic Clocking for Memory, this will have Gearing memory system for sure. High amount of efficiency focus due to the Laptop market from Apple and AMD pressure. No more big core SMT/HT performance. Copying ARMs technology onto x86 is pathetic. ARM processors never did SMT x86 had this advantage. But Intel is losing it because their 10nm is a dud. Look at the leaked PL1,2,4 numbers. It doesn't change at all, they crammed 8 phone cores and still it's higher and higher.

    Look at HEDT, Sapphire Rapids, tile approach, literally copied everything they could from AMD and tacked on HBM for HPC money. And I bet the power consumption would be insanely high due to no more phone cores cheating only big x86 real cores. Still they are coming back. At this point Intel would have released "Highest Gaming Performance" marketing for ADL, so far none and release is just 2 months. RKL had that campaign before 2 months and CFL, CML all of them had. This one doesn't and they are betting on I/O this time.

    Intel has to show the performance. And it's not like AMD doesn't know this, which is why Lisa Su showed off a strong 15% gaming boost. And remember when AMD showcases the CPUs ? Direct benchmarks against Intel's top - 9900K, 10900Ks all over the place. No sign of 5900X or 5950X comparisons from Intel.
