Intel Leadership Shuffle: Stuart Pann in for IFS, Raja Koduri out for GPUs & off to AI Startup
by Ryan Smith on March 21, 2023 2:45 PM EST
In a combination of a press release and a series of tweets from CEO Pat Gelsinger, Intel this afternoon has announced a pair of significant corporate leadership changes at the company, which will impact both their Intel Foundry Services (IFS) and graphics/accelerator business segments. In brief, IFS is getting new leadership, while Intel GPU guru and chief architect Raja Koduri is leaving the company for new pastures.
First and foremost, Intel today is announcing in a press release that they have promoted Stuart Pann to be the new senior VP and general manager of Intel Foundry Services. Pann replaces Dr. Randhir Thakur, who was IFS’s inaugural president. Thakur announced late last year that he was stepping down from the position and leaving Intel at the end of March, so we have been expecting Intel to appoint a new IFS head before the month was out.
Pann, in turn, is a long-serving Intel employee with a history inside and outside of the company, most recently returning to Intel in 2021 to serve as senior VP, chief business transformation officer and general manager of Intel’s Corporate Planning Group. Intel credits him with being one of the chief organizers behind Intel’s IDM 2.0 strategy – as well as Intel’s internal foundry model – which in turn are among the primary reasons for establishing IFS.
Pann’s background is, broadly speaking, on the business side of matters. While he holds an EE degree, his time at Intel has been spent in business management and corporate planning, rather than working within Intel’s fab group itself. This is a notable change from Dr. Thakur, who had an extensive background in fab engineering before moving into his leadership role. With that said, given that IFS’s success will hinge, in part, on being able to attract outside customers (and not just developing advanced fab technologies within the company), it’s not wholly surprising to see Intel appoint a more business-experienced leader for the growing fab business.
Intel Graphics Guru Raja Koduri Leaves for AI Software Startup
Besides Dr. Thakur’s previously arranged departure from Intel, it turns out the company will see one other major leadership change. As first revealed in a tweet from CEO Pat Gelsinger, Raja Koduri will be leaving Intel at the end of the month. A well-known name in the graphics business for decades, Koduri has most recently been serving as a chief architect for Intel’s GPU/accelerator businesses, and prior to that was the GM of Intel’s Accelerated Computing Systems and Graphics Group (AXG).
Thank you @RajaXg for your many contributions to Intel tech & architecture-especially w/high-performance graphics that helped bring 3 new product lines to market in ‘22. Wishing you success as you create a new software co. around generative AI for gaming, media & entertainment.— Pat Gelsinger (@PGelsinger) March 21, 2023
Koduri joined Intel in 2017 (coming from AMD), and has been the cornerstone of Intel’s modern efforts to grow its GPU and accelerator businesses. Besides the various flavors of Intel’s Xe graphics architecture and resulting products like the Data Center GPU Max series (Ponte Vecchio) and Arc A-series video cards, Koduri has also overseen the simultaneous development of Intel’s oneAPI software stack, which is designed to provide a well-crafted software development platform for Intel’s GPUs while also unifying Intel’s overall software development efforts behind a single, unified API and toolset (literally, one API).
While Intel is still working to better establish its footing within the GPU space, the AXG business unit itself has undergone some changes, which in turn have impacted Koduri’s position within the company. Koduri was the head of AXG up until December of 2022, when Intel announced that it would be splitting up AXG into separate consumer and datacenter/AI groups, which in turn were placed under Intel’s Client Computing Group and Datacenter and AI Group respectively. Following that split, Koduri returned to serving as Intel’s chief architect for GPUs, accelerators, and their convergence with Intel’s traditional CPU products.
In a tweet published in response to Gelsinger’s initial announcement, Koduri confirmed that he will be moving on to a software startup focusing on generative AI for gaming, media, and entertainment. Koduri says that he “will have more to share in coming weeks,” but at a high level, this certainly sounds like a good fit for someone so steeped in computer graphics and AI.
Thank you Pat and @intel for many cherished memories and incredible learning over the past 5 years. Will be embarking on a new chapter in my life, doing a software startup as noted below. Will have more to share in coming weeks. https://t.co/8DcnNdso3r— Raja Koduri (Bali Makaradhwaja) (@RajaXg) March 21, 2023
Still, it will be interesting to see what kind of impact this has on Intel’s GPU and accelerator efforts. Raja Koduri has been a driving force for Intel’s GPU efforts for the last 6 years, leaving a sizable impression on the company’s work in both the consumer and datacenter spaces. Intel is about to have a chart-topping, exascale-class GPU-based supercomputer to their credit with the nearly finished Aurora system, and Intel’s discrete GPU shipments for consumers are already within striking distance of AMD’s. All of these come from projects overseen by Koduri.
At the same time, however, Koduri’s departure comes at a turbulent time for Intel’s GPU efforts. Besides last year’s reorganization, Intel cancelled Rialto Bridge, their Ponte Vecchio successor, earlier this month. That cancellation set back Intel’s data center GPU follow-up plans by about 2 years, as their next product will now be Falcon Shores in 2025. So 2023 is proving to be a time of great transition for Intel, both with regard to their product stack and their GPU leadership.
brucethemoose - Tuesday, March 21, 2023 - link
Of course we don't know what's going on underneath, but on the surface this does not look great.
The fab division is swapping out an engineering lead for a business one, which historically is a bad precedent for Intel.
They have gone all in on the GPU division, a business that takes forever to spin up. The last thing it needs is a sudden direction change, especially in light of all the recent cancellations.
WaltC - Tuesday, March 21, 2023 - link
Koduri gave AMD all he had--he didn't really have anything left to give them when he jumped to Intel. I always felt he was way overrated at AMD. Ironically, in his parting remarks for AMD, Raja advised AMD to do better at execution, IIRC... Then he moved to Intel where he learned all about what failing to execute looks like....;) (AMD's GPU tech and production went straight up after Koduri left, I noticed.)... What "software startup" I wonder? Vintage Koduri. "In a few weeks" he might actually have a job, and he will tell us all about it, he says. Changes in the FABs indicate a decision to more heavily concentrate on marketing as opposed to technology--Intel has brutal competition these days, so I hope there's more to it than that.
brucethemoose - Tuesday, March 21, 2023 - link
"AMD's GPU tech and production went straight up after Koduri left, I noticed."
Was RDNA1 a Raja architecture?
One of AMD's most (IMO) mixed decisions was bifurcating the GPU line into CDNA/RDNA and cutting compute performance for gaming performance with the 5000/6000 series.
This may have saved their competitiveness, much like cutting everything for Zen did. But I loathe Nvidia's stranglehold on desktop compute (which is also a huge advantage for server compute). And it's about to bite even more with the generative AI craze.
mode_13h - Wednesday, March 22, 2023 - link
I think RDNA isn't exactly bad at compute. It just didn't get the same Matrix Cores as CDNA, nor will there ever be a fp64-heavy version.
andrewaggb - Sunday, March 26, 2023 - link
Biggest issue imo, more than speed, is the lack of software support for AMD compute. Everything just works on Nvidia, and lots isn't supported or is hackish on AMD. I really wish that wasn't the case, as AMD cards have more RAM, so even if they were slower at compute they could run larger models and whatnot and would still be good choices. But so much is CUDA-only or CUDA-first that I don't personally consider AMD to even be competing in this space.
mode_13h - Monday, March 27, 2023 - link
The point I was responding to complained about the CDNA/RDNA split. That's only tangentially related to compute on RDNA, in that it causes AMD to focus on software support for CDNA. Otherwise, I don't really see how it's relevant to the suitability of RDNA for (non-fp64-intensive) compute.
Otritus - Monday, March 27, 2023 - link
Per unit of gaming performance delivered, Vega, Turing, Ampere, and Lovelace are much better architectures for compute than RDNA1-3. Compute performance isn’t just measured by TFLOPS, as there are internal bottlenecks that limit performance. RDNA’s compute performance is probably like Maxwell or Pascal. It’s certainly not bad, but the architecture is clearly not geared towards compute.
Otritus - Monday, March 27, 2023 - link
Raja left AMD in part because he was mad at AMD deprioritizing Vega for Navi. RDNA is Navi; it came out in 2019, while he left in 2017. RDNA likely wasn’t his project the way GCN was, hence his frustration.
grrrgrrr - Tuesday, March 21, 2023 - link
I'm afraid that following the AXG split, AI will be moved to the data center side and future consumer Arc GPUs will get none of it. I think this is hugely problematic, especially for Intel, who's trying to catch up with Nvidia -- if the PhD students who develop AI models from the ground up are not hacking on Intel cards, the data centers that run those models will not be buying Intel data center GPUs.
mode_13h - Wednesday, March 22, 2023 - link> If the PhD students who develop the AI models from ground up are not hacking with
> Intel cards, data centers that run those models will not be buying Intel data center GPUs.
Pretty much. Although Intel does PyTorch integration, it seems researchers often need to write custom layers. If they do that using CUDA, then their network won't run on Intel GPUs. This is the secret to Nvidia retaining dominance, even though most people are using standard frameworks.
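[Editor's note: the portability point in this thread can be sketched in PyTorch. In the minimal sketch below, ScaledResidual and its scale parameter are invented for illustration. A custom layer built only from standard torch ops runs unchanged on "cuda", on Intel's "xpu" backend (exposed by the intel_extension_for_pytorch package), or on plain "cpu"; by contrast, a layer whose forward pass calls a handwritten CUDA kernel is tied to Nvidia hardware.]

```python
import torch

class ScaledResidual(torch.nn.Module):
    """Toy custom layer (hypothetical) built only from portable torch ops.

    Because it uses no handwritten CUDA code, the same module runs on any
    backend PyTorch knows about ("cuda", "xpu", "cpu", ...).
    """
    def __init__(self, scale: float = 0.5):
        super().__init__()
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x + scale * tanh(x): every op here dispatches per-device inside torch
        return x + self.scale * torch.tanh(x)

# Pick whichever accelerator is present; the layer itself is unchanged.
device = "cuda" if torch.cuda.is_available() else "cpu"
layer = ScaledResidual().to(device)
out = layer(torch.ones(2, 2, device=device))
```

A layer written this way stays portable precisely because the device-specific kernels live inside PyTorch's dispatcher rather than in the researcher's code, which is the gap mode_13h describes: a custom CUDA extension bypasses that dispatcher and locks the model to one vendor.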