For long-time AnandTech readers, Jim Keller is a name many are familiar with. The prolific microarchitectural engineer has been involved in a number of high-profile CPU & SoC projects over the years, including AMD’s K8 and Zen CPUs and Apple’s early A-series SoCs. Now after a stint at Tesla for the past couple of years, Intel has announced that they have hired Keller to lead their silicon engineering efforts.

After rumors on the matter overnight, in a press release that has gone out this morning, Intel confirmed that they have hired Jim Keller as a Senior Vice President. There, Keller will be heading up the 800lb gorilla’s silicon engineering group, with an emphasis on SoC development and integration. Beyond this, Intel’s press release is somewhat cryptic – especially as they tend not to be very forward about future processor developments. But it’s interesting to note that in a prepared statement included with the press release, Dr. Murthy Renduchintala – Intel’s Chief Engineering Officer – said that the company has “embarked on exciting initiatives to fundamentally change the way we build the silicon as we enter the world of heterogeneous process and architectures,” which may be seen as a hint of Intel’s future direction.

What is known for sure is that for most of the last decade, Keller’s engineering focus has been on low-power hardware. This includes not only his most recent stint at Tesla working on low-voltage hardware, but also his time at Apple and PA Semi developing Apple’s mobile SoCs; even AMD’s Zen architecture is arguably a case of creating an efficient, low-power architecture that can also scale up to server CPU needs. So Keller’s experience would mesh well with any future plans Intel has for developing low-power/high-efficiency hardware, especially as even if Intel gets its fab development program fully back on track, there’s little reason to believe they’ll be able to duplicate the manufacturing-derived performance gains they’ve reaped over the past decade.

As for any specific impact Keller might have on Intel’s efforts, that remains to be seen. Keller’s credentials are second to none – he’s overseen a number of pivotal products – but it bears mentioning that modern processor engineering teams are massive groups working on development cycles that span nearly half a decade. A single rock star engineer may or may not be able to greatly influence an architecture, and I have to imagine that Intel has tapped Keller more for his leadership experience at this point, especially as a company the size of Intel already has a number of good engineers at its disposal. And unlike Keller’s second run at AMD, the company isn’t recovering from a period of underfunding or trying to catch up to a market leader. In other words, I don’t expect that Intel is planning on a moment of Zen for Keller and his team.


One of Jim Keller's Many Children: AMD's Raven Ridge APU

Though with his shift to Intel, it’s interesting to note that Jim Keller has completed a de facto grand tour of the high-performance consumer CPU world. In the last decade he’s worked for Apple, AMD, and now Intel, the three firms making the kind of modern ultra-wide, high-IPC CPU cores that we see topping our performance charts. Suffice it to say, there are very few high-profile engineers of this caliber that these kinds of companies will so openly court and/or attempt to pull away from the competition.

For those keeping count, this also marks the second high-profile architect from AMD to end up at Intel in the last 6 months. Towards the end of last year Intel picked up Raja Koduri to serve as their chief architect, leading their discrete GPU development efforts, and now Jim Keller is joining in a similar capacity (and with an identical SVP title) for Intel’s silicon engineering. Coincidentally, both Koduri and Keller also worked at Apple for a time before moving to AMD, so while they haven’t been on identical paths – or even working on the same products – Keller’s move to Intel isn’t wholly surprising considering the two never seem to be apart for too long. So it will be exciting to see what Intel does with their engineering acquisitions over the coming years.

Source: Intel


  • RedGreenBlue - Thursday, April 26, 2018 - link

    I never could understand the strategy AMD could have had in letting him go. Now, the worst case of all, he goes to Intel. I expect he’s working on that rumored brand new architecture that rewrites or abandons x86. Maybe that’s his dream job, to usher in a new era beyond x86. Seriously though, if that’s what he wanted, maybe AMD, specifically Lisa Su, should have asked him what he wanted to do and paid him to do it. This is the freelance Da Vinci of architecture design.... and they let him go?
  • mukiex - Thursday, April 26, 2018 - link

    Intel's abandoned x86 before. Didn't work out for them. Internally they've dumped x86 for well over a decade; pretty much every modern x86 processor is some type of RISC hybrid internally.
  • peevee - Thursday, April 26, 2018 - link

    Both x86 (more correctly, x64) and any RISC are outdated. Both are concepts based on the understanding of the 80s (and many ideas from the 70s or earlier). Technologies have fundamentally shifted the requirements for processor architectures, and these are the main roadblocks to further performance increases now (starting with the main idea from the 40s of separating CPU and memory, which has been hitting physical speed-of-light limitations for a couple of decades now and has resulted in a proliferation of inefficiency in the form of caches, prefetch, speculative execution, etc).
  • FunBunny2 - Thursday, April 26, 2018 - link

    "memory which hit physical speed of light limitations "

    leetle electrons don't push through wires at anywhere near speed of light. not even light in fiber does.
  • peevee - Friday, April 27, 2018 - link

    Physical electron speed has nothing to do with signal speed, which is the speed of the electromagnetic wave in the media (slower than c which is speed of light in vacuum).
  • Dragonstongue - Thursday, April 26, 2018 - link

    x64 is AMD property, x86 is INTEL property, please make sure you know what you reference...physical speed of light only matters so much when the transmitting parts are at nm scale and mm apart from each other, they are still electronic chips being used beyond test bed for optical interconnect.

    as far as x86 and especially x64 being "outdated" that is laughable at best, foolish to say at minimum, seeing as many chips these days are still only being based on 32-bit uarch and 64-bit in most places is still very much in its infancy, not to mention Intel, AMD and many others as well use CISC and RISC to which x86 and x64 are "bolted" if applicable.

    point is 64 bit computing is anything but "outdated" when was x64 "completed" in year 2000 by AMD the possibilities of it are absolutely not even close to being "ancient" even though you did not use these words.

    not everyone builds on x86 (no license) not everyone builds RISC or any other handful of design types for processors and OS but I suppose even though they were based on ideas thought of in 70s/80s funny is it not, how far do you think they have pushed boundaries....tada all based on earlier "ideas" which they have found ways of making happen or have yet to happen.

    anyways AHAHAH is all I can really say peevee
  • close - Friday, April 27, 2018 - link

    The only thing that makes x86 (including x86-64) feel outdated is the insistence on carrying so much legacy going forward. This is great of course because it means you can run 20 year old software on your modern CPU. Imagine running any of today's software on a phone 10 years from now... You'd think it's ridiculous. But running old software on PCs is a different matter.

    This is also not so great because it prevents it from being lean and optimized. This is coincidentally also the biggest burden on Windows OS. Getting rid of legacy would cause an uproar on one side but on the other side it would really improve performance and security.
  • Kevin G - Friday, April 27, 2018 - link

    Legacy support always has a cost associated with it. This has to be compared to the costs associated with porting software to the new architecture and transitioning to the new platform. Generally speaking, keeping legacy x86 support has always won versus developing a new architecture from scratch.

    The last time a new architecture won was recently in ultra mobile: there was always going to be a cost to port software to mobile or develop new applications from scratch. ARM has clearly won this market. The only foothold x86 has here on the tablet side mainly leverages legacy support as a selling feature (MS Surface etc.). I would not discount the idea of running some ultra mobile software on platforms 10 years from now. Even now I've come across cases where using a slightly older version of Android is preferable on newer hardware to better match application support.
  • FunBunny2 - Friday, April 27, 2018 - link

    "This has to be compared to the costs associated with porting software to the new architecture and transitioning to the new platform. "

    the problem isn't, and hasn't been since *nix/C, the hardware. it's the OS calls. strictly speaking, all you need is a compiler. the tough part is the balance of sys calls to language code. at one time it was estimated that Windoze application code was 80% sys calls. that's going to be a bear to port.
  • peevee - Friday, April 27, 2018 - link

    "at one time it was estimated that Windoze application code was 80% sys calls"

    Estimated by whom?
