Intel Thread Director

One of the biggest criticisms I’ve levelled at Intel since it started talking about its hybrid processor architecture designs has been the challenge of managing threads intelligently. When you have two types of core at different performance and efficiency points, either the processor or the operating system has to be cognizant of what goes where to get the best result for the end user. This requires additional analysis of what is going on with each thread, especially new work that has never been seen before.

To date, most desktop operating systems have operated on the assumption that every core in the system offers the same performance. This changed slightly with simultaneous multithreading (SMT, or in Intel speak, Hyper-Threading), because the system now had double the threads, and the extra threads offered anywhere from zero to 100% additional performance depending on the workload. Schedulers were adjusted to identify the primary and secondary threads on a core and to place new work on separate physical cores first. In mobile, the concept of an Energy Aware Scheduler (EAS) looks at the workload characteristics of a thread and, based on battery life and settings, tries to schedule it where it makes the most sense, particularly if it is a latency sensitive workload.

Mobile processors built on Arm architecture designs have been tackling this topic for over a decade. Modern mobile processors now have three types of core inside: a super high performance core, regular high performance cores, and efficiency cores, normally in a 1+3+4 or 2+4+4 configuration. Each set of cores has its own optimal window for performance and power, and so the design relies on the scheduler to absorb as much information as possible to determine the best placement for each thread.

Such an arrangement is rare in the desktop space, but with Alder Lake, Intel now has an SoC that combines SMT-enabled performance cores with non-SMT efficiency cores. That makes scheduling a bit more complex, and so the company has built a technology called Thread Director to handle it.

That’s Intel Thread Director. Not Intel Threat Detector, which is what I keep calling it all day, or Intel Threadripper, which I have also heard. Intel will use the acronym ITD or ITDT (Intel Thread Director Technology) in its marketing. Not to be confused with TDT, Intel’s Threat Detection Technology, of course.

Intel Thread Director Technology

This new technology is a combined hardware/software solution that Intel has engineered with Microsoft focused on Windows 11. It all boils down to having the right functionality to help the operating system make decisions about where to put threads that require low latency vs threads that require high efficiency but are not time critical.

First you need a software scheduler that knows what it is doing. Intel stated that it has worked extensively with Microsoft to get what it wants into Windows 11, and that Microsoft has gone above and beyond what Intel needed. This fundamental change is one of the reasons why Windows 11 exists.

So it’s easy enough (now) to tell an operating system that different types of cores exist. Each type can have a respective performance and efficiency rating, and the operating system can migrate threads around as required. However, the real difference between Windows 10 and Windows 11 is how much information is available to the scheduler about what is running.

In previous versions of Windows, the scheduler had to analyse programs on its own, inferring the performance requirements of a thread with no real underlying understanding of what was happening. Windows 11 leverages new technology that understands different performance modes and instruction sets, and the scheduler also gets hints about which threads rate higher and which ones are worth demoting if a higher priority thread needs the performance.

Intel classifies the performance levels on Alder Lake in the following order:

  1. One thread per core on P-cores
  2. One thread per core on E-cores (the E-cores have no SMT)
  3. SMT sibling threads on P-cores

That means the system will load up one thread on each P-core and one thread on each E-core before moving to the second hyperthreads on the P-cores, as sketched below.
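
To make that ordering concrete, here is a minimal Python sketch of a first-fit placement that follows Intel's stated priority, assuming the desktop configuration of 8 P-cores (two threads each) and 8 E-cores (one thread each). This illustrates the ordering only; it is not Intel's or Microsoft's actual scheduler code.

```python
from typing import Optional

# Assumed topology: 8 P-cores with SMT, 8 E-cores without (desktop Alder Lake).
P_CORES = [f"P{i}" for i in range(8)]
E_CORES = [f"E{i}" for i in range(8)]

def pick_slot(busy: set) -> Optional[str]:
    """Return the next logical CPU for a new thread, per Intel's stated order."""
    # 1. One thread per P-core first.
    for core in P_CORES:
        if f"{core}-t0" not in busy:
            return f"{core}-t0"
    # 2. Then one thread per E-core.
    for core in E_CORES:
        if f"{core}-t0" not in busy:
            return f"{core}-t0"
    # 3. Only then the P-core SMT siblings.
    for core in P_CORES:
        if f"{core}-t1" not in busy:
            return f"{core}-t1"
    return None  # all 24 logical CPUs are busy

busy = set()
for n in range(18):
    slot = pick_slot(busy)
    if slot is not None:
        busy.add(slot)
# Threads 1-8 land on P-cores, 9-16 on E-cores, 17-18 on P-core SMT siblings.
```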

Intel’s Thread Director puts an embedded microcontroller inside the processor so that it can monitor what each thread is doing and what it needs in terms of performance. It looks at the ratio of loads, stores, and branches, average memory access times, access patterns, and the types of instructions in flight. It then provides suggested hints back to the Windows 11 scheduler about what the thread is doing and how important it is, and it is up to the OS scheduler to combine those hints with other information about the system when deciding where the thread should go. Ultimately the OS is both topology aware and, now, workload aware to a much higher degree.
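
As a rough illustration of the kind of telemetry involved, here is a toy classifier in Python. The field names, thresholds, and class labels are invented for this sketch; they are not Intel's actual counters or classes.

```python
from dataclasses import dataclass

@dataclass
class ThreadSample:
    loads: int                 # retired load instructions
    stores: int                # retired store instructions
    branches: int              # retired branches
    avg_mem_latency_ns: float  # average memory access time
    vector_ops: int            # e.g. AVX2 / AVX-VNNI instructions retired
    total_insts: int

def classify(s: ThreadSample) -> str:
    """Map an instruction-mix sample to a coarse scheduling hint (toy logic)."""
    if s.vector_ops / s.total_insts > 0.10:
        return "vector-heavy"   # benefits most from a P-core
    if s.avg_mem_latency_ns > 80.0:
        return "memory-bound"   # gains little from a faster core
    return "scalar"             # default class
```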

The microcontroller also monitors which instructions are power hungry, such as AVX-VNNI (for machine learning) or other AVX2 instructions that often draw high power, and puts a big flag on those threads for the OS to prioritize. It also looks at the other threads in the system, and if a thread needs to be demoted, either because there are not enough free P-cores or for power/thermal reasons, it gives hints to the OS as to which thread is best to move. Intel states that it can profile a thread in as little as 30 microseconds, whereas a traditional OS scheduler may take hundreds of milliseconds to reach the same conclusion (or the wrong one).
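
Using the same invented classes as above, a toy demotion pick might look like the following; the benefit ranking is an assumption for illustration, not Intel's heuristic.

```python
# Assumed ranking: how much each class benefits from staying on a P-core.
BENEFIT = {"vector-heavy": 3, "scalar": 2, "memory-bound": 1}

def pick_demotion(threads: dict) -> str:
    """threads maps a thread name to its class hint; return the thread
    that loses the least by moving to an E-core."""
    return min(threads, key=lambda name: BENEFIT[threads[name]])

# Example: the memory-bound thread is moved before the AVX-VNNI one.
pick_demotion({"render": "vector-heavy", "zip": "scalar", "scan": "memory-bound"})
# -> "scan"
```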

On top of this, Intel says that Thread Director can also optimize for frequency. If a thread is limited by something other than frequency, such as waiting on memory, it can detect this and reduce frequency, voltage, and power. This will particularly help the mobile processors, and when asked, Intel stated that it can now change frequency in microseconds rather than milliseconds.
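
As a sketch of that idea, assuming a per-thread "fraction of stalled cycles" metric is available: when a thread is mostly waiting on memory, stepping the clock down saves power at little performance cost.

```python
def next_freq_mhz(cur_mhz: int, stall_ratio: float) -> int:
    """Toy DVFS decision: stall_ratio is the assumed fraction of cycles
    spent waiting on memory rather than executing."""
    if stall_ratio > 0.5:               # memory-bound: frequency is not the limiter
        return max(800, cur_mhz - 400)  # step down, floor at an assumed 800 MHz
    return cur_mhz                      # compute-bound: hold frequency
```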

We asked Intel where an initial thread goes before the scheduling kicks in. I was told that a new thread is initially scheduled on a P-core unless the P-cores are full, in which case it goes to an E-core until the scheduler determines what the thread needs, at which point the OS can be guided to upgrade it. In power limited scenarios, such as running on battery, a thread may start on an E-core even if the P-cores are free.

For users looking for more technical information about Thread Director, I suggest reading this document and going to page 185, which covers EHFI, the Enhanced Hardware Feedback Interface. It outlines the different classes of performance that make up the hardware half of Thread Director.
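
Conceptually, the EHFI is a table in memory that gives each logical CPU a performance capability and an energy efficiency capability for each thread class. A rough Python sketch of reading such a table, with entirely invented values and only two classes, might be:

```python
# Invented EHFI-style table: per logical CPU, a (perf, eff) capability per class.
# Class 0 = scalar, class 1 = vector-heavy in this sketch.
EHFI_TABLE = {
    "P0-t0": {"perf": [80, 95], "eff": [40, 35]},
    "E0-t0": {"perf": [45, 20], "eff": [90, 70]},
}

def best_cpu_for(thread_class: int, free_cpus: list) -> str:
    """Pick the free logical CPU with the highest perf capability for this class."""
    return max(free_cpus, key=lambda cpu: EHFI_TABLE[cpu]["perf"][thread_class])

best_cpu_for(1, ["P0-t0", "E0-t0"])  # -> "P0-t0": vector work favors the P-core
```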

It’s important to understand that for the desktop processor with 8 P-cores and 8 E-cores, a 16-thread workload will be scheduled as 8 threads across the P-cores and the other 8 threads across the E-cores. This affords more performance than enabling the hyperthreads on the P-cores, and so software that compares thread-to-thread loading (such as the latest 3DMark CPU Profile test) may be testing something different compared to processors without E-cores.
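
A back-of-envelope comparison shows why the E-cores win here. Assume an E-core delivers roughly 0.55x the throughput of a full P-core thread, and an SMT sibling adds roughly 0.3x on top of an already-loaded P-core; both factors are invented for illustration, not measured numbers.

```python
P_CORES = 8
E_SCALE, SMT_SCALE = 0.55, 0.30  # assumed relative throughputs

via_e_cores = P_CORES * 1.0 + 8 * E_SCALE    # threads 9-16 on E-cores
via_smt     = P_CORES * 1.0 + 8 * SMT_SCALE  # threads 9-16 on P-core siblings

print(via_e_cores, via_smt)  # 12.4 vs 10.4 "P-thread units": E-cores come out ahead
```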

On the question of Linux, Intel would only go as far as to say that Windows 11 is the priority, and that it is working on upstreaming a variety of features to the Linux kernel, but that this will take time. An Intel spokesperson promised more details closer to product launch; however, it may take months or years for Linux to reach feature parity with Windows 11.

One of the biggest questions users will ask is about the difference in performance or battery life between Windows 10 and Windows 11. Windows 10 does not get Thread Director, relying instead on a more basic version of Intel’s Hardware Guided Scheduling (HGS). In our conversations with Intel, they were cagey about putting any exact performance differential on the two; however, based on our understanding of the technology, we should expect to see better frequency efficiency in Windows 11. Intel stated that even though the new technology in Windows 11 means threads will move around more often than in Windows 10, potentially adding latency, in its testing the effect was not humanly perceivable. Ultimately, because the Windows 11 configuration can also optimize for power and efficiency, especially on mobile, Intel puts the win on Windows 11.

The only question is if Windows 11 will launch in time for Alder Lake.

Comments

  • zamroni - Friday, August 20, 2021

    Those low-power cores on a desktop are a waste of transistors.
    The transistors would be better used for more cache or more performance cores.
  • mode_13h - Friday, August 20, 2021

    This is what I thought, until I realized that they have better perf/area than the big cores. Not to mention perf/W.

    So, in highly-threaded workloads, their 8+8 core configuration should out-perform 10 cores of Golden Cove. And, when thermally-limited, the little cores will also more than pull their weight.

    It's an interesting experiment they're trying. I'm interested in seeing how it plays out, in the real world.
  • nevcairiel - Friday, August 20, 2021

    > Designed as its third generation of vector instructions (AVX is 128-bit, AVX2 is 256-bit, AVX512 is 512-bit)

    SSE is 128-bit. AVX is 256-bit FP, AVX2 is 256-bit INT.
    And MMX was 64-bit before that. So doesn't this make it the 4th generation, assuming you don't count all the SSE versions separately? (The big ones were SSE1 with 128-bit FP, and SSE2 with 128-bit INT, SSE3/SSSE3/SSE4.1 are only minor extensions)
  • mode_13h - Saturday, August 21, 2021

    Yeah, I came to the same conclusion. It's the 4th major family of vector instructions. Or, another way to clearly demarcate it would be the 4th vector width.
  • abufrejoval - Friday, August 20, 2021

    I wonder how many side channel attacks the power director will enable.

    Also wonder if the lack of details is due to Intel stepping awfully close to some of Apple's patents.

    The battles between the Big little and AVX-512 teams inside Intel must have been epic: I imagine frothing red faces all around...
  • mode_13h - Saturday, August 21, 2021

    > The battles between the Big little and AVX-512 teams inside Intel must have been epic

    : )

    Although, the AVX-512 folks have some egg on their faces from a problematic implementation in Skylake-SP and its derivatives.
  • abufrejoval - Friday, August 20, 2021

    Does Big-little make any sense on a "desktop"?

    And then: Are there actually still any desktops around?

    All around my corporate workplaces, notebooks have become the de-facto desktop for many depreciation cycles, mostly because personal offices got replaced by open space and home-office days became a regular thing far before the pandemic. Since then even 'workstations' just became bigger notebooks.

    Anywhere else I look it's becoming hard to detect desktops, even for big-screen & multi-monitor setups, it's mostly NUCs or in-screen devices these days.

    Those latter machines rarely seem to get turned off any more and I guess many corporate laptops will remain 'turned on' (= stay in some sort of slumber) most of the time, too, so there Big-little overall power consumption might drop vs. Big-only, when both no longer sleep deeply.

    Supposedly that makes all these voice commands possible, but try as I might, I can see no IT admin turning that on in an office, nor would I want that in my living room.

    The only place I still see 'desktops' are really gamer machines and for those it's hard to see how those small cores might have any significant energy budget impact, even while they are used for ordinary 2D stuff.

    For micro-servers Big-little seems much more useful, but Intel typically has gone a long way to ensure that 'desktop' CPUs were not used for that.

    Intel's desire for market differentiation seems the major factor behind this and many other features since MMX, but given an equal price choice, I cannot imagine preferring the use of AVX-512 for dark silicon and two P-core tiles for eight E-cores over a fully enabled ten P-core chip.

    And I'd believe that most 'desktop' users would prefer the same.
  • mode_13h - Saturday, August 21, 2021

    > The only place I still see 'desktops' are really gamer machines

    We still use traditional desktops for software development and VMs for testing. Our software takes long enough to build and the test environment needs to boot a full image. So, a proper desktop isn't hard to justify.
  • abufrejoval - Saturday, August 21, 2021

    Our developers are encouraged to use build servers and the automatic testing pipelines behind them. Those run on machines with hundreds of GB of RAM and dozens of CPU cores, where loads get distributed via the framework. The QA tests will use containers or VMs as required, which are built and torn down to match by the pipeline. With thousands of developers in the company, that tends to give both better performance to any developer and much better economy to the company, while (home-)offices stay cool and quiet. We still give them laptops with what used to be "desktop" specs (32GB RAM, i7 quads), because, well, they're cheap enough, and it allows them to play with VMs locally, even offline, should they want to e.g. for education/self-study.

    These days when you're running a build farm on your "desktop", that may really be more of a workstation. It may be the "economy" model, which means from a price point it's what used to be a desktop, in my home-lab case a Ryzen 7 5800X 8-core with an RTX 2080ti and 128GB ECC RAM that runs whisper quiet even at full load. It would have been a 16-Core 5950X today, but when I built it, those were impossible to get. It's still an easy upgrade and would get you 16 "P-cores" on the cheap. It's also pretty much a gamer's rig, which is why I also use it after hours.

    My other home-lab workstation is what used to be a "real workstation" some years ago, an 18-core Haswell E5-2696 v3, which has exactly the same performance as the Ryzen 7 5800X on threaded jobs, even uses the same 110 Watts of power, but much lower clocks (2.7 vs. 4.4 GHz all-cores). Also 128GB of ECC RAM and thankfully just as quiet. It's not so great at gaming, because it only clocks to 4 GHz for single/dual core loads with Haswell IPC and I've yet to find a game that's capable of using 18-cores for profit to balance that out.

    Today you would use a Threadripper in that ballpark, with an easy 64 "P-Cores" and matching RAM, pretty much the same computing capacity as a mid-range server, but much quieter and typically tolerable in a desktop/office setup.

    If threaded software builds were all you do, you'd want to use 64 E-Cores on the "economy" variant and 256 E-Cores on the "premium", much like Ian hinted, because as long as you can fully load those 256 cores for your builds, they would be faster overall. But the chances for that happening are vastly bigger on a shared server than on a dedicated desktop, which is why we see all these ARM servers going for extra cores at the price of max single threaded performance.

    As a thought experiment imagine a machine where tiles can be switched dynamically between being a single P-core or four E-cores. For embarrassingly parallel workloads, the E-Cores would give you both better Watt economy (if you can maintain full load or your idle power consumption is perfect) and faster finish times. But as soon as your workload doesn't go beyond the number of P-cores you can configure, finishing times will be better on P-cores, while power efficiency very much gets lost in idle power demands.

    The only way to get that re-configurability is to use shared servers, cloud or DC, while a fixed allocation of P vs E cores on a desktop has a much harder time to match your workload.

    I can tell you that I much prefer working on the 5800X workstation these days, even if it's no faster for the builds. Because it's twice as fast on all those scalar workloads. And no matter how much most stuff tries to go wide and thready, Amdahl's law still holds true and that's where P-Cores help.
  • mode_13h - Sunday, August 22, 2021

    > Our developers are encouraged to use build servers

    We use VM servers, but they're old and the VMs are spec'd worse than desktops. So, there's no real incentive to use them for building. And if you're building on a desktop in your home, then testing on a server-based VM means copying the image over the VPN. So, almost nobody does that, either.

    VM servers are a nice idea, but companies often balk at the price tag. New desktops every 4-5 years is an easier pill to swallow, especially because upgrades are staggered.

    > I much prefer working on the 5800X workstation these days,
    > even if it's no faster for the builds. Because it's twice as fast on all those scalar workloads.

    Exactly. Most incremental compilation involves relatively few files. I do plenty of other sequential tasks, as well.
