Details on AMD Bulldozer: Opterons to Feature Configurable TDP
by Johan De Gelas & Kristian Vättö on July 15, 2011 12:00 AM EST
TDP Power Cap
What makes these new Opterons truly intriguing is the fact that they will offer user-configurable TDP, which AMD calls TDP Power Cap. This means you can buy pretty much any CPU and then downscale the TDP to fit within your server’s power requirements. In the server market, the performance isn’t necessarily the number one concern like it is when building a gaming rig. As all the readers of our data center section are aware, what really counts is the performance per watt ratio. Servers need to be as energy efficient as possible while still providing excellent performance.
John Fruehe (AMD) states, "With the new TDP Power Cap for AMD Opteron processors based on the upcoming 'Bulldozer' core, customers will be able to set TDP power limits in 1 watt increments." It gets even better: "Best of all, if your workload does not exceed the new modulated power limit, you can still get top speed because you aren’t locking out the top P-state just to reach a power level."
That sounds too good to be true: we can still get the best performance from our server while we limit the TDP of the CPU. Let's delve a little deeper.
Power capping is nothing new. The idea is not to save energy (kWh) but to limit the amount of power (watts) that a server or a cluster of servers can draw. That may sound contradictory, but it is not. If your CPU processes a task at maximum speed, it can return to idle very quickly and save power there. If you cap your CPU, the task takes longer, so the CPU spends less time in idle, where it could save power in a lower P-state or even go to sleep (C-states); in the end, the server uses about the same amount of energy. That is why power capping makes no sense in a gaming rig: it would reduce your fps without saving any energy. Buying CPUs with a lower maximum TDP is similar: our own measurements have shown that low-power CPUs do not necessarily save energy compared to their siblings with higher TDP specs.
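To make the race-to-idle argument concrete, here is a small back-of-the-envelope sketch. All the numbers (power draws, task times, the 30-second window) are hypothetical and chosen only to illustrate the point:

```python
def energy_joules(active_power_w, active_time_s, idle_power_w, window_s):
    """Energy used over a fixed window: active phase plus the idle remainder."""
    return active_power_w * active_time_s + idle_power_w * (window_s - active_time_s)

# Uncapped: the CPU races at 100 W for 10 s, then idles at 10 W
# for the remaining 20 s of a 30 s window.
uncapped = energy_joules(100, 10, 10, 30)  # 1000 + 200 = 1200 J

# Capped at 60 W: the same task now takes ~17 s, leaving only 13 s of idle.
capped = energy_joules(60, 17, 10, 30)     # 1020 + 130 = 1150 J
```

The capped server draws less peak power, but because it idles for less of the window, the total energy consumed is nearly identical: capping bounds watts, not kilowatt-hours.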
In a data center, you have lots of servers connected to the same power lines, which can only deliver a certain amount of current at a certain voltage (48, 115, 230 V...), i.e. a limited number of amps. You are also limited by the heat density of your servers. So the administrator wants to make sure that the cluster of servers never exceeds the cooling capacity and the amperage limits of the power lines. Power capping makes the power usage and the cooling requirements of your servers predictable.
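The arithmetic behind that budgeting is simple: P = V × I. A quick sketch, with hypothetical voltage, fuse, and per-server cap values:

```python
def max_servers(line_voltage_v, line_current_a, cap_per_server_w):
    """How many capped servers fit on one power line, using P = V * I."""
    return int(line_voltage_v * line_current_a // cap_per_server_w)

# A 230 V line fused at 16 A delivers 3680 W; with each server
# capped at 350 W, ten servers fit safely on that line.
print(max_servers(230, 16, 350))  # 10
```

Without a hard cap, the administrator would have to budget for each server's worst-case draw instead, and could rack far fewer machines per line.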
The current power capping techniques limit the processor P-states: even under heavy utilization, the CPU never reaches its top frequency. From a performance point of view, this is a rather crude and poor way of keeping maximum power under control. The thing to remember here is that higher frequencies improve performance on virtually any workload, while extra cores only improve performance in ideal circumstances (no lock contention, enough threads, etc.). Limiting frequency to reduce power thus often leaves a server running far below the performance and power levels it could sustain, just to be "safe".
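The crude approach can be sketched as follows. The P-state table below (frequency in GHz, rated power in watts) is entirely hypothetical, but the logic mirrors the technique described above: any P-state rated above the limit is locked out permanently, even when the actual workload would never push the CPU to that power level:

```python
# Hypothetical P-state table: (frequency_ghz, rated_power_w), fastest first.
P_STATES = [(2.6, 115), (2.3, 95), (2.0, 80), (1.7, 65)]

def cap_by_pstate(power_limit_w):
    """Crude capping: lock out every P-state rated above the limit for good,
    regardless of what the workload actually draws."""
    allowed = [(f, p) for f, p in P_STATES if p <= power_limit_w]
    return max(allowed) if allowed else min(P_STATES)

# With a 100 W limit, the 2.6 GHz state is off the table permanently,
# even for workloads that would only draw, say, 90 W at 2.6 GHz.
print(cap_by_pstate(100))  # (2.3, 95)
```

AMD's claim for TDP Power Cap, quoted earlier, is precisely that the top P-state need not be sacrificed: as long as the measured draw stays under the configured limit, the CPU may still run at full speed.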