Some analysts consider Intel to be a processor company with manufacturing facilities – others consider it to be a manufacturing company that just happens to make processors. In the grand scheme of things, Intel is a hybrid of product, manufacturing, expertise, investment, and perhaps most importantly, research. Intel has a lot of research and development on its books, most of it aimed at current product cycles in the 12-36 month time span, but beyond that, as with most big engineering companies, there’s a team of people dedicated to finding the next big thing over the next 10-20+ years. At most companies this would be called the moonshot division; at Intel it is called Intel Labs, and leading this team of path-finding gurus is Dr. Richard Uhlig.

I’ve had a number of divisions of Intel in my periphery for a while, such as Intel’s Interconnect teams, Intel Capital (the investment fund), and Intel Labs. Labs is where Intel talks about its Quantum Computing efforts, its foray into neuromorphic computing, and its silicon photonics work, some of which become interesting side announcements at events like CES and Computex. Beyond that, Intel Labs also has segments for federated learning, academic outreach and collaboration, government/DARPA project assistance, security, audio/visual, and many others that are likely kept out of the eye of even the most persistent journalists.

Recently Intel Labs was placed under Raja Koduri’s division, and part of the momentum behind Intel’s messaging of late has been a focus on the future of the company, which means more outreach from departments like Intel Labs, and the opportunity to talk to some of the key people inside. Leading up to this interview, I had briefings from Intel’s neuromorphic and integrated photonics teams as part of a new annual ‘Intel Labs Day’ event for the community around Intel’s research offerings.

Dr. Richard Uhlig’s official title is ‘Senior Fellow and Vice President in the Technology, Systems Architecture and Client Group and the Director of Intel Labs’. Dr. Uhlig joined Intel in 1996 and led the definition of multiple generations of virtualization architecture for Intel processors and platforms, known collectively as Intel Virtualization Technology (Intel VT), and has worked on projects in this line leading up to Software Guard Extensions (SGX) and beyond. Uhlig was the director of Systems and Software Research in Intel Labs from 2013 to 2018, where he led research efforts in virtualization, cloud-computing systems, software-defined networking, big-data analytics, machine learning and artificial intelligence.

This interview was conducted in late 2020 - references are made to the Intel Labs Day event, which took place in December 2020.

What is Intel Labs, and How Does It Operate?

Ian Cutress: When someone hears the words ‘Intel Labs’, one might conjure up something akin to a chemistry laboratory with scientists in white coats and safety glasses. However, as far as I can tell, Intel Labs is more similar to Alphabet's X division (previously Google's X), an entity that exists purely to find the next big innovations. How close is that to the truth, or how would you put Intel Labs in your own words?

Rich Uhlig: You got it right. We are meant to explore the future for Intel, and to look at the sort of the disruptive technologies that could change the way that the business works in the future. So we’re not so much about incremental advancements, we try to look for the moonshot ideas - the things that move the needle for the company by exploring new areas. Our scope is everything from the circuits up - we look at circuit innovation and microarchitecture. Within the architecture we look at system software, OS and virtual machine monitors, we look at programming systems, we look at emerging workloads and applications, and we look at how people use systems. We take that full stack view, and we have people that can contribute in all of those areas as we seek out possible disruptive innovation for the company.

 

IC: Because the goal of Intel Labs is to focus on those future generations of compute to solve the world's problems, such as new computational and networking models and paradigms, users might think that you’re also involved in next-generation manufacturing methods, but you are not. Is that a holistic split at the vision level, or just the dynamics of Intel?

RU: That’s right – Intel has its own process development department, which we internally call ‘TD’ or ‘Technology Development’, and there is a [branch of Intel] that supports it called Components Research.

Organizationally it's separate from what we look at, although of course we collaborate closely with them, because there are often opportunities at the intersection between process and circuits, as well as at the microarchitecture/architecture level and in what circuits we build.

IC: For the sort of stuff Intel Labs does, building on the leading edge process node isn’t necessarily always a requirement in that long term vision?

RU: We do silicon prototypes and we use Intel’s fabrication facilities for that. But much of what we do doesn't involve silicon prototyping at all - it may involve some new software innovation, or we may be putting together systems with other methods or ingredients.

IC: Intel Labs almost sounds as if it’s a separate entity to Intel, albeit with access to manufacturing and such. I understand that Intel Labs now fits under Raja Koduri’s side of the organization – how much autonomy does Intel Labs get (and is that the right amount)?

RU: In our history we've always had a good deal of autonomy, and that's by design. [This is] because our purpose is to explore disruptive technologies. We are funded at the corporate level, and in a manner that allows us to select the research bets that we think could pay off, or to take risks that other parts of the company wouldn't take. The recent change where we became part of Raja’s organization helps us in that it creates new linkages into the product teams and the engineering organizations that Raja runs. We still have our autonomy to explore innovation, but we also have new pathways to transfer expertise and knowledge within the organization. I would say that Intel Labs has always had to fit somewhere inside the company, and this most recent move I think has been a really positive one.

 

IC: Can you give us a sense of scale of how big Intel Labs is – budgets, employees, offices? It’s my understanding it's more than just Silicon Valley.

RU: We are round about 700 researchers, largely PhDs in those domains that I talked about at the beginning, and we cover everything up and down the stack. We're a worldwide organization as you noted, and we have labs on the West Coast in Oregon and California. But we're also present in India, in China, in Germany, in Israel, in Mexico. That worldwide footprint is important to how we do our work, because we don't just do research inside the company - we engage academia, and being spread out allows us to work closely and directly with researchers at leading universities across the planet. This also allows us to engage different government agencies, and to understand the market specifics of each of those geographies. It's important to our whole methodology.

 

The Focus of Intel Labs Research

IC:  As part of the Intel Labs event (Dec 2020), the company is [going to give/has given] us some insight into five key areas: Integrated Photonics, Neuromorphic Computing, Quantum Computing, Security for Federated Learning, and Machine Programming. That’s quite a mouthful! I’m guessing that each of these areas isn’t simply 20% of Intel Labs! Is there a topic that you want to talk to the world about that isn’t on this list (and can you give us a teaser)?

RU: In the time that we had for the Intel Labs event we had to be selective, so we're picking a few highlight areas, but it's certainly not the full scope of what we do. A big chunk of our investment is in new compute models: neuromorphic and quantum computing would be examples of that. But we also do core research in accelerators for different kinds of specialization - as you know there's been a lot of focus in the industry on improving the energy efficiency of AI algorithms, like deep neural networks and things like that. So we do research in ways to improve those kinds of workloads. We [also] do work in storage and memory technologies, we do work in novel sensing technologies, and we look at connectivity technologies. In addition to silicon photonics, which is a connectivity technology, we have substantial investment in wireless, in mmWave communications, and in supporting 5G and beyond. We also have a thrust in ways to more efficiently program or design systems - we have a strategic CAD lab that does work in those areas - as well as a general focus on trust, security, privacy research and such.

IC: When you're talking about 5G, such as mmWave, we know that Intel sold its modem business to Apple over the last 12-18 months. So when you say Intel is working on mmWave, how does that fit into the context of the industry?

RU: Modems are the endpoint, the thing that goes into devices, but Intel still has a huge bet on building out 5G infrastructure. [For that] you need research and advanced technologies to succeed with that kind of a strategy, and that's really where our thrust is. Also, not only mmWave, but we're also looking at all the things that go into building radio access networks, the network infrastructure core, [and so on]. The big transition that's happening in the industry is that we're going from purpose-built networking gear to things that are based on more general purpose hardware, much like the transition that happened in cloud data centers, but it's a different kind of workload and it has to be optimized for in a different way. We do a lot of work in that area, applying technologies that we've developed in the Labs, including things like virtualization - network function virtualization would be an example of how we're contributing to that opportunity for Intel.
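For readers unfamiliar with network function virtualization, the toy Python sketch below shows the basic idea: network functions that once lived in purpose-built boxes become ordinary software stages that can be chained together on general purpose servers. The packet fields, functions, and policy here are invented purely for illustration and are not Intel's NFV software.

```python
# Toy illustration of network function virtualization (NFV): packet-processing
# functions become ordinary software stages chained on general-purpose servers.
# Everything here is an invented example, not Intel's NFV stack.

from typing import Callable, Optional

Packet = dict
NetworkFunction = Callable[[Packet], Optional[Packet]]

def firewall(pkt: Packet) -> Optional[Packet]:
    # Drop traffic to ports we consider unsafe (illustrative policy).
    return None if pkt["dst_port"] in {23, 2323} else pkt

def nat(pkt: Packet) -> Optional[Packet]:
    # Rewrite the private source address to a public one (illustrative).
    pkt["src_ip"] = "203.0.113.1"
    return pkt

service_chain = [firewall, nat]   # the "virtualized" service chain

def process(pkt: Packet) -> Optional[Packet]:
    for vnf in service_chain:
        pkt = vnf(pkt)
        if pkt is None:           # a stage dropped the packet
            return None
    return pkt

print(process({"src_ip": "10.0.0.5", "dst_port": 443}))  # passes the chain
print(process({"src_ip": "10.0.0.5", "dst_port": 23}))   # dropped by the firewall
```

The hard part, and where the research described above comes in, is making chains like this run at network line rates on general purpose hardware.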

IC: Intel has announced Snow Ridge, its 5G base station platform – did Intel Labs have a hand in that?

RU: We did the research in the Labs in a bunch of different areas, including network function virtualization and optimizing for baseband processing and things like that, which are key technology ingredients for the Snow Ridge platform. Labs did contribute to that through the product team, so that's an example of how we interface with the product teams to bring things to market, although Labs didn’t directly deliver that platform [as a product].

 

IC: What proportion of Intel Labs is Software versus Hardware?

RU: Almost everything we do involves some kind of software, even the hardware parts! So I would venture to guess something like two thirds software to one third hardware. It depends on how you want to define it - we often don't think of ourselves as specifically software people or hardware people, but rather as systems people. We really believe in that multi-disciplinary approach.

 

IC: How many of the projects at Intel Labs are necessarily non-public at any given time?

RU: Much of what we do is public, and in fact we publish a lot of our work. When we're in a pre-competitive phase of the research, we view that as important because we need to be at the top of our game. We need to make sure that the research is relevant and competitive, and stands up to the best research in the world. So we test ourselves [in] that way, by publishing in the top conferences and venues, and we'll typically do that quite freely at early stages of a research project. What can happen is, once we decide that something is an idea that we want to productize, it can go through a phase where we have to go silent for a while until we're ready at the other end, prior to a product or capability launch, when we can be public again. Often it's just the phase of the project that determines when we can speak about it externally.

 

New Ideas at Intel Labs

IC: I've heard the term ‘Technology Strategic Long Range Planning’ in relation to Intel Labs as an event where new ideas are discussed. Is this how new seed projects inside Intel Labs are planted, and perhaps where budgets and developments are discussed?

RU: It is one of the ways [in which that happens]. We call it, affectionately, ‘TSLRP’ (tea slurp): the Technology Strategic Long Range Planning. You can think of it as a place to crowdsource the best technology ideas in the company. It's something that Intel Labs administers and organizes, but it's actually open to all the technologists within the company. We run it as a yearly process, and it is also something that runs throughout the year, and we invite our technologists to make proposals about something they think is important that the senior executives in the company should be paying attention to. Then we run it through a development process where we really start to kick/bounce the ideas around, test them, and challenge the proposers in a way that gets it into a form that can be presented to the leadership. Often what comes out of these sorts of presentations is a new investment or new direction that the company may take. That's been true for a lot of the things that have come out of the Labs or that Intel has decided to pursue.

IC: Would you say Intel Labs gets enough funding from Intel? With the right project, the right people, and the right goal, would you think there would be billions poured into it? Or at that point would it perhaps graduate out into a separate division?

RU: We have the right level of funding to do our mission, which is to explore possible investments that the larger Intel could make once you want to scale an idea. We have sufficient funding for that exploration, and we have the financials to take things to a certain degree of maturity, so we can have confidence that a given technology makes sense. At that point we have mechanisms to transfer it to the larger execution machine of the company, and then additional resources and funding go into it. That is basically how we scale things - through partnership with the rest of the company when we get to that stage.

IC: It’s worth mentioning that the Neuromorphic Computing division inside Intel Labs came through an acquisition. Can you talk us through how that process came about before the M&A team stepped in?

RU: The talent behind our neuromorphic efforts are experts in asynchronous design methodology, and that came from our Fulcrum acquisition. They were a networking switch company, and asynchronous design was used in that switch. But the leaders behind our neuromorphic team are taking that same design methodology and applying it to our neuromorphic computing programme, which is a completely different application of that computational design. It [is] a very interesting body of work, and we're looking at [various] different objectives with that work.

 

Successes From Intel Labs

IC: I’ve heard that one vector of Intel Labs you like to promote is the ‘program graduates’. Are there any that we’ve heard of, and are there any particular highlights?

RU: Something that would probably be familiar to many would be USB and Thunderbolt! We've done a lot of work in IO technologies. As you may know with Thunderbolt, we started it as a vision inside the Labs towards converging all the different IO connector types that we had on the PC platform years ago, getting them all onto a single connector and tunneling the protocols over the same link. That was a combination of architectural innovation as well as circuit and signaling technology that we brought together to make that happen.

Something that I personally worked on, before my current mission of leading Intel Labs, is virtualization technology. I spent a good 15 years on that, and it goes all the way back to the late 90s, when I started working on the very earliest proposals around virtualization and what we might do to our processors and platforms to make them more easily virtualized. We’ve delivered multiple generations of Intel Virtualization Technology, VT.

Silicon Photonics as well - that started in the Labs more than a decade ago, just doing the basic physics behind the different ingredients of building a Silicon Photonics solution: the hybrid laser, the silicon modulators, the waveguides, all of these things, and packaging them together. That work went on for many years in the Labs, and it created a brand new business unit through which Intel is now delivering those Silicon Photonics solutions to market.

We've done a lot of work in Trusted Execution environments, building a place in the platform where you can run code in a secure way and in a testable way so that you know what the surrounding environment for that code is. Those were extensions to VT in the first instantiations of Trusted Execution environments, but we also did the work around Software Guard Extensions (SGX), which was an architecture that came out of the Lab. Those would be some of the highlights off the top of my head!

IC: Are there any projects that had amazing potential to start but led to a dead-end?

RU: We had a big thrust in wearables and really energy efficient endpoint devices. We were working on things like zero-net energy computing devices that we thought would have a lot of promise, the idea being that you could harvest energy from the environment and then just be able to run that device continuously without charging. Those technologies actually were quite interesting from a prototyping point of view, and I think it was demonstrated that they were a success, but it was harder to figure out what the business behind that was. As a result we sort of moved away from that, and in part the company itself also moved away from that direction. But that would be an example of where it didn't pan out.

IC: How much of your role is involved with locking down IP under the Intel heading, or ensuring that collaboration with academia and industry runs smoothly?

RU: That's a great question. One of our important missions is to engage academia, and we have to do so on terms that are agreeable to them. Often in that sort of pre-competitive phase of research we have a funding model where we say that this is an open collaborative [approach], which just means that we're not expecting any IP rights or patents from the research that we're funding. In fact, [we tell them that] we want [them] to publish and to do open source releases, in order to get the technology out there on the academic side. The benefit that we get is that we're close to the work as it's happening, and we have to pay attention so that we can [identify] those key technologies. [Then] at some point we begin the process of productization, and when we make that transition, that's when we’ll start looking at different IP models - we may be filing patents or even just keeping trade secrets on the further development that we do once we take it more into an internal research and development process. But that's kind of how we manage that tension. We realized we have to take different approaches based on the collaborations and collaborators that are happening at any point in time.

IC: One of Intel Labs’ biggest wide-appeal public wins was enabling the late Stephen Hawking to communicate and even access the internet. How has that technology evolved and where does it sit today?

RU: That was work led by Lama Nachman and her team, and it's a great example of the multidisciplinary approach that we took. Many others had tried to work on technologies for Stephen Hawking, and he rejected those because they just didn't fit with the way that he worked and his expectations around that. It was trying to enable a brain-computer interface for him. What this team did is they really spent time with him to understand how he worked and what he needed. Based on that feedback they developed [and iterated on] the solution. So that's just a note on the methodology. But to answer your question, that's an example of technology that we contributed into open source as an assistive computing technology that can be used by other disabled individuals and [it can be] adapted for them.

 

Intel Labs Research Today: Silicon Photonics

IC: Silicon Photonics has been a slowly growing success story, being able to generate and process light in silicon, particularly when it comes to networking. We’ve now seen Intel discuss millions of units sold, speeds up to 400 Gbps, and Silicon Photonics networking engines. As a successful product with a roadmap, why is it still under the Intel Labs umbrella, rather than say, Networking Solutions? Is there untapped potential?

RU: We have a whole silicon photonics product division that delivers those products that you mentioned, and those are not under the Intel Labs umbrella. But because there are future generations of improvement possible, we have a parallel track where we continue to do research in the area. To explain that further, the products we offer today are still discrete devices - they sit outside of a CPU or GPU or an FPGA, and they're not links integrated into the compute engines. That's important because when you think about the end to end path, as the data flows from one compute engine to another, it still has to follow an electrical link, even if it's a short one, even if just a few inches, before you get to the optical device and the transceiver. Just those few inches is where a lot of power still goes. In fact, if you study the power budget for a high end compute engine, increasingly, just to feed the beast with data, more and more of the power is going to IO.

So what we are exploring with our research is ways to truly integrate the photonics, the silicon photonics, into the package.  There's a bunch of innovation that's required to make that possible. The first is that you have to find a solution for modulation. The modulators that go into these discrete devices are [typically] large devices, and we've figured out how to build micro ring modulators that are much smaller in dimensions. Because they're smaller, we can array them around the shoreline of the package, and we can run them at different wavelengths so now we can get much more bandwidth over the optical links - that's what we call integrated photonics. It's something that we think will overcome that IO power wall and something that we're really excited about.
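To make the scaling argument concrete, here is a back-of-envelope sketch in Python. Every figure in it (wavelength counts, lane rates, energy per bit) is an illustrative assumption chosen to make the arithmetic easy, not an Intel specification.

```python
# Back-of-envelope arithmetic for integrated photonics (all figures are
# illustrative assumptions, not Intel specifications).

# Wavelength-division multiplexing: many small micro-ring modulators, each on
# its own wavelength, share one waveguide/fiber.
wavelengths_per_fiber = 8       # assumed micro-ring modulators per fiber
lane_rate_gbps = 32             # assumed per-wavelength data rate
fibers_on_shoreline = 16        # assumed fiber attach points around the package edge

aggregate_tbps = wavelengths_per_fiber * lane_rate_gbps * fibers_on_shoreline / 1000
print(f"Aggregate off-package bandwidth: {aggregate_tbps:.1f} Tb/s")

# The IO power wall: even a few inches of electrical link costs energy per bit.
electrical_pj_per_bit = 5.0     # assumed cost of the short electrical hop to a discrete optic
optical_pj_per_bit = 1.0        # assumed target for an in-package optical link
io_demand_tbps = 1.0            # assumed off-chip bandwidth demand of a big compute engine

# pJ/bit multiplied by Tb/s conveniently gives watts.
print(f"Electrical IO power: {electrical_pj_per_bit * io_demand_tbps:.1f} W")
print(f"In-package optical:  {optical_pj_per_bit * io_demand_tbps:.1f} W")
```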

IC: So there’s the Silicon Photonics product team, and then you've got Integrated Photonics which is the Intel Labs side.

RU: Yeah, we're exploring the ingredient technologies to do the integrated photonics, and then once we prove it out with research prototypes we will partner with our silicon photonics product division to bring it to market at some point in the future. To be clear, there are no plans for that [productization] today, but our methodology is that we closely collaborate with the product teams.

 

Intel Labs Research Today: Neuromorphic

IC: On the Neuromorphic computing side, we are also starting to see product come to market. The Loihi chip built on 14nm, with 128,000 neurons, scales up to Pohoiki Springs with 768 chips and 100 million neurons, all for 300 watts – Intel’s slide deck says this is the equivalent of a hamster! Intel recently promoted an agreement with Sandia National Laboratories, starting with a 50 million neuron machine and scaling up to a billion neurons, or 11 hamsters' worth of neurons, as research progresses.

IC: Can neuromorphic computing simply be scaled in this way? Much like interconnect performance is the ultimate scale-out limiter for traditional compute, where is Neuromorphic computing headed?

RU: We're exploring two broad application areas. The first is small configurations, maybe just a single Loihi chip in an energy constrained environment where you may want to do some on-the-fly learning close to the sensor with the data. You might want to, for example, build a neuromorphic vision sensor. The other fork is to look at these bigger configurations where we're clustering together lots of Loihi chips. There you might be trying to solve a different problem, like a constraint satisfaction problem or a similarity search across a large dataset. Those are the kinds of things that we would like to solve with [large amounts of Loihi]. Incidentally, we have a neuromorphic research community, or INRC, that we collaborate with - an example of where we work with academic researchers to enable them with these platforms to look at different areas.

But to answer your question: what are the limiters to building the larger configurations? It's not so much the interconnect - it’s a matter of fabric design, and we can figure that out. Probably the biggest issue right now is that if you look inside a Loihi chip, there's the logic that helps you build the neuron model and run it efficiently as an event processing engine, but [there’s also] a lot of SRAM, and while the SRAM can be low power, it's also expensive. So as you get [to] really large clusters of networked-together SRAM, it's an expensive system. We have to figure out that memory cost problem in order to really be able to justify these larger Loihi configurations.
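To give a feel for what "the neuron model run as an event processing engine" actually computes, and why every neuron carries its own stored state (the SRAM cost mentioned above), here is a minimal leaky integrate-and-fire neuron in Python. It is a generic textbook model with made-up parameters, not the Loihi neuron itself.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of model
# a neuromorphic core evaluates. Parameters are illustrative, not Loihi's.

from dataclasses import dataclass

@dataclass
class LIFNeuron:
    potential: float = 0.0   # membrane potential, stored per neuron (hence the SRAM cost)
    decay: float = 0.9       # leak applied each timestep
    threshold: float = 1.0   # firing threshold

    def step(self, weighted_input_spikes: float) -> bool:
        """Advance one timestep; return True if the neuron fires."""
        self.potential = self.potential * self.decay + weighted_input_spikes
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True
        return False

# Usage: only neurons that actually receive spikes need updating, which is why
# sparse, event-driven workloads map well to this style of hardware.
neuron = LIFNeuron()
for t, spike_in in enumerate([0.0, 0.6, 0.7, 0.0, 0.9]):
    if neuron.step(spike_in):
        print(f"spike at t={t}")
```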

IC: So the cost is more dollars than die area, so stacking technologies are too expensive?

RU: It is expensive, and however you slice it it’s going to be costly from a cost-per-bit perspective. We have to overcome that in some way.

IC: You mentioned that for smaller applications, a vision engine processing model is applicable. So to put that into perspective, does that mean Loihi could be used for, say, autonomous driving?

RU: It might be a complement to the other kinds of sensors that we have in autonomous vehicles - we've got regular RGB cameras that are capturing visual input, but LIDAR [is also] useful in autonomous vehicles, [which] could be another sensor type. The basic argument for having more than one is redundancy and resiliency against possible failure, or [avoiding] misperception of things that just makes the system safer overall. So the short answer is yes.

IC: One of the things with neuromorphic computing, because it's likened so much to brain processing, is the ability to detect smells. But what I want to ask you is: what is the weirdest thing you've seen partners and developers do with neuromorphic hardware that couldn't easily be done with conventional computing?

RU: Well, that's one of them! I think that's a favorite, teaching a computer how to smell - so you already took my favorite example!

But I think the results that we're getting around problems like similarity search are quite interesting. Imagine you've got a massive database of visual information and you want to find similar images - things that look like a couch, or [have] certain dimensions, or whatever - being able to do that in a very energy efficient way is kind of interesting. [It] can also be done with classical methods, but that's a good one [for neuromorphic]. Using it in control systems, for something like a robotic arm controller - those are interesting applications. We really are still at that exploratory stage to understand what the best ways are that you could do stuff - sometimes for control systems you can solve them with classical methods, but it's just really energy consuming, and the methods for training the system make it less applicable in dynamically changing environments. We're trying to explore ways that neuromorphic might be able to tackle those problems.

IC: One of the examples you’ve mentioned is kind of like an image tag search - something that typical machine learning might do. If we take YouTube, when it's looking for copyrighted audio and clips, is neuromorphic still applicable at that scale?

RU: One straightforward application for neuromorphic is that we take artificial neural networks, like a DNN or CNN, that have been trained with a large dataset. Once a network has been trained, we transfer it over into a spiking neural network (or SNN), which is what Loihi runs, and then see whether we can run the inference part of the task more efficiently.

That's a straightforward application, but one of the things that we're trying to explore from a research point of view with Loihi is how it can learn with less data. How can it adapt more quickly, without having to go back to the extensive training process where you run a large labelled dataset against the network?
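The ANN-to-SNN conversion described above can be sketched in a few lines. The layer, weights, and rate-coding scheme below are generic illustrations rather than the Loihi toolchain: a trained ReLU activation is approximated by the firing rate of an integrate-and-fire neuron over a time window. On real neuromorphic hardware the inputs would themselves arrive as sparse spikes; the dense matrix multiply here is only to keep the sketch short.

```python
# Rough sketch of ANN-to-SNN conversion via rate coding.
# Weights and sizes are illustrative assumptions, not the Loihi toolchain.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these are weights from a conventionally trained single-layer ReLU network.
W = rng.normal(scale=0.1, size=(10, 100))   # 100 inputs -> 10 outputs

def ann_forward(x):
    return np.maximum(W @ x, 0.0)           # ReLU activations

def snn_forward(x, timesteps=200):
    """Approximate the same layer with integrate-and-fire neurons:
    the ReLU activation is recovered as a firing rate over a time window."""
    potential = np.zeros(10)
    spike_counts = np.zeros(10)
    for _ in range(timesteps):
        potential += W @ x                  # constant input current each step
        fired = potential >= 1.0
        spike_counts += fired
        potential[fired] -= 1.0             # "soft reset" preserves residual charge
    return spike_counts / timesteps         # firing rate approximates the ReLU output

x = rng.random(100)
print(np.round(ann_forward(x)[:5], 2))
print(np.round(snn_forward(x)[:5], 2))      # close to the ANN output for
                                            # activations below the firing threshold
```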

IC: Brains take years to train using a spiking neural net method - can Intel afford years to train a neuromorphic spiking neural network?

RU: That's one of the big unanswered questions in AI. Biological brains experience the environment and they learn continuously, and it can take years. But even then, in the early stages they can do remarkable things - a child can see a real cat, and then see a cartoon of a cat, and generalize from those two examples with very little training. So there's something that happens in natural biological brains that we aren't able to quite replicate. That's one of the things that we're trying to explore - I should be really clear, we've not solved this yet, but it's one of the interesting questions we're trying to understand.

IC: The Loihi chip is still a 2016 design – are there future hardware development plans here, or is the work today primarily focused on software?

RU: We are doing another design, and you'll be hearing more about that in the future. We haven't stopped on the hardware side - we've learned a lot from the current design, and [we’re] trying to incorporate [what we’ve learned] into another one. At the same time, I would say that we really are trying to focus on what it is good for and what applications make the most sense, and that's why we have this methodology of getting working Loihi systems out into the hands of researchers in the field. I think that's a really important aspect of the work - it is more of that workload exploration and software development.

 

Intel Labs Research Today: Quantum Computing

IC: On the quantum computing side of Intel Labs, the focus has primarily been on developing Spin Qubit technology, rather than other sorts of Qubits. Is this simply a function of Intel’s manufacturing expertise, or does it appear that spin qubits are where the future of Quantum Computing is going?

RU: When we started our quantum programme, we decided to look at both quantum dot spin qubits as well as transmon superconducting qubits - we had a bet on both. We decided to focus on spin qubits after the first few years of investigation because we were trying to look forward to what has to happen in order to build a truly practical quantum system. You have to be able to scale to really large numbers of qubits - not just hundreds or thousands, but probably millions of qubits. These also [have to be] fault tolerant – we have quantum error correcting codes that require more physical qubits than you have logical qubits. So if you're going to get to millions, you can't have qubits that are big - it's almost like vacuum tube computing systems, where you're going to be limited in how much you can do. So that was one thing that we figured out, but it's not just the selection of the qubit – [spin qubits] align to the core competence that the company has, in that we're able to build these devices in small dimensions and at scale.

But getting qubits to scale is going to require other solutions. We [also] have to figure out how to control the qubits, and that's where Horse Ridge comes in - these qubits [require] running at very low temperature, [which means] you have to have the control electronics run at very low temperature as well. If you can't do that, then you've got lots of bulky coax cables coming from a room temperature [environment] into the dilution fridge - that's not going to scale. You can't have millions of qubits controlled in that way. These are the kinds of things that drive our decisions - what do we have to do, what problems do we have to solve, so that we can get to a million qubits at some point in the future.
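To put rough numbers on the "millions of qubits" point: fault-tolerant schemes encode each logical qubit in many physical qubits, and a commonly cited ballpark overhead is on the order of a thousand physical qubits per logical qubit. Using those illustrative figures (not Intel projections):

```latex
% Illustrative overhead arithmetic; the ~1,000x factor is a commonly cited
% ballpark for surface-code style error correction, not an Intel figure.
\[
  N_{\text{physical}} \approx N_{\text{logical}} \times n_{\text{physical per logical}}
\]
\[
  1{,}000~\text{logical qubits} \times 1{,}000~\tfrac{\text{physical}}{\text{logical}}
  \approx 10^{6}~\text{physical qubits}
\]
```

Which is why qubit size and control scalability, rather than raw qubit count alone, dominate the discussion above.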

IC: When you look at those dilution chambers, they all look impressive when you take the covers off, with all the thin tubes - it looks like a chandelier. So to that extent, would you say that quantum computing has the biggest R&D budget in Intel Labs?

RU: Actually, it doesn't! We're able to do a lot with a relatively modest investment I would say. It's definitely not the bulk of our investment at all - those fridges do cost a lot, but it's not where the bulk of the money goes!

IC: Intel's current press materials state that the commercial phase for quantum computing sits at 2025 and ‘beyond’. Sitting where we are today, moving into 2021, is that still the goal for the commercialization process? Is Intel still looking at 2025, or should we be thinking another five years out beyond that?

RU: We always talk about this as a 10 year journey. I think it's a bit early to be talking about productization at this stage - even picking a date. There are still some fundamental science and engineering problems that have to be solved before we would pick a date. We ask ourselves questions around when to start engaging the ecosystem of application developers. That's important in the same way that it has been with neuromorphic - we go outside with working hardware to understand what this might be good for - and we have to get to that point with quantum as well. We, at some level, already do that - we already have collaborations with TU Delft (a Dutch university) and QuTech. That's the way that we're collaborating with universities and partners - I think we're still a ways away from productization.

IC: You say that to get to that point, Intel has to start talking about having millions of qubits. Intel's third generation Tangle Lake quantum processor is currently at 49 qubits. So does that mean we should wait and expect two, three, four, or five more generations before we hit that inflection point where commercialization is more of a reality?

RU: So Tangle Lake was an example of the transmon superconducting qubit type, which as I explained earlier we began to deprioritize. We're looking to scale our spin qubit designs, and we're on a path to increase the numbers there. But really you've got to get quality qubits before you think about scaling to larger numbers. [We have to] solve these other problems around controlling [the qubits], and I think we've made really great progress on that with Horse Ridge.

IC: In the past couple of years, Google claimed that it had achieved Quantum Supremacy. Do you believe it was achieved?

RU: By the definition of Quantum Supremacy, which is to pick a problem that you can’t solve with classical methods but that’s computationally complex, and build a system that can solve it, they’ve achieved that. Notice that the problem doesn’t have to be something that’s useful! It doesn’t have to be a problem that people need to solve, but it was a milestone in the field, and certainly it was a good thing to have achieved. The way we think about it is: when do we get to the point of a practical solution - something that people would care about, and something that you can’t solve in other ways with classical methods more economically? We’re still far from that - I don’t think we’ve reached that era of quantum practicality.

 

Intel Labs Growth and Outreach

IC: Should any engineers be reading this and get excited by any of the topics, how would they go about either aligning their research with something Intel Labs could help accelerate, or getting involved with Intel Labs directly?

RU: For those who are still in grad school we have internship programs in a broad range of areas – you can check out our website and reach out to the researchers in different areas. We engage academia and universities in the various areas that we’re interested in, and it could very well be that your university has a program with Intel. We often set up research centers in different areas, and we have a number of those kinds of programmes. That’s the most natural way to get plugged into the work that happens in the Labs. We work directly with our research collaborators, we publish papers together with them, and so even during your studies you can be working with Intel Labs folks. Based on that experience, it can develop into joining the Labs in the future. I think that through that internship programme, and through our academic funding, that’s the most natural path for [people in their] early career. For those outside of school already, it’s a matter of reaching out to the researchers in your area, as well as finding out information from our website.

One of the things that we do, as an outgrowth of our academic investment - since a lot of startups come out of academia - is that we have programs with Intel Capital where we look at startups that are at the seed stage. We often know them because we funded their research, and we will help them through those early stages of the company. Often we look not just at funding opportunities but also at the technologies that might help the startup to succeed. Programs like that present some opportunities.

 

IC: On that note, what is the future of Intel Labs’ public outreach? Is Intel Labs Day going to become an annual occurrence?

RU: I expect it will. We want to put a lot more energy into talking about our work outside the company. We did go through a period where we did less of that - earlier in our history we did more of it - and now we expect to do more again! We were going to have a physical Labs event at the beginning of the year, but we will certainly be talking a lot more about our work going into the future.

 

Many thanks to Dr. Richard Uhlig and his team for their time.

 
