Quantum Machines and Nvidia use machine learning to get closer to an error-corrected quantum computer


About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would bring together Nvidia’s DGX Quantum computing platform and Quantum Machines’ advanced quantum control hardware. We didn’t hear much about the results of this partnership for a while, but it’s now starting to bear fruit, bringing the industry one step closer to the holy grail of an error-corrected quantum computer.

In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated.

Yonatan Cohen, the co-founder and CTO of Quantum Machines, noted how his company has long sought to use general classical compute engines to control quantum processors. Those compute engines were small and limited, but that’s not a problem with Nvidia’s extremely powerful DGX platform. The holy grail, he said, is to run quantum error correction. We’re not there yet. Instead, this collaboration focused on calibration, and specifically calibrating the so-called “π pulses” that control the rotation of a qubit inside a quantum processor.
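For context, a π pulse is a resonant drive that rotates the qubit’s state by an angle of π on the Bloch sphere, flipping |0⟩ to |1⟩. The relation the calibration loop has to maintain can be written as follows (our formulation, not one given in the presentation):

```latex
% Rotation angle accumulated by a resonant drive with Rabi rate \Omega(t; A),
% where A is the tunable pulse amplitude:
\theta(A) = \int_0^{T} \Omega(t; A)\, dt
% A \pi pulse is the pulse whose envelope satisfies \theta(A) = \pi,
% flipping the qubit: R_x(\pi)\,\lvert 0 \rangle = -i\,\lvert 1 \rangle
```

As the system parameters drift, θ(A) drifts with them, and the pulse amplitude has to be re-tuned to keep the rotation angle at exactly π.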

At first glance, calibration may seem like a one-shot problem: You calibrate the processor before you start running the algorithm on it. But it’s not that simple. “If you look at the performance of quantum computers today, you get some high fidelity,” Cohen said. “But then, the users, when they use the computer, it’s typically not at the best fidelity. It drifts all the time. If we can frequently recalibrate it using these kinds of techniques and underlying hardware, then we can improve the performance and keep the fidelity [high] over a long time, which is what’s going to be needed in quantum error correction.”

Image: Quantum Machines’ all-in-one OPX+ quantum control system. Image credits: Quantum Machines

Constantly adjusting those pulses in near real time is an extremely compute-intensive task, but because every quantum system drifts and behaves slightly differently, it is also a control problem that lends itself to being solved with the help of reinforcement learning.
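To make that framing concrete, here is a minimal toy sketch, not the companies’ actual code, of how π-pulse calibration can be cast as a reinforcement learning environment. The PiPulseCalibrationEnv class, its response model, and every number in it are illustrative assumptions:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class PiPulseCalibrationEnv(gym.Env):
    """Toy stand-in for the real control loop: the agent nudges the pulse
    amplitude while the 'hardware' slowly drifts underneath it."""

    def __init__(self, drift_scale=0.002):
        super().__init__()
        # Action: a small correction applied to the current pulse amplitude.
        self.action_space = spaces.Box(-0.05, 0.05, shape=(1,), dtype=np.float32)
        # Observation: measured excited-state population after one pulse.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.drift_scale = drift_scale

    def _measure(self):
        # A perfectly calibrated pi pulse (amplitude == optimal) gives population 1.
        pop = np.sin(np.pi * self.amplitude / (2.0 * self.optimal)) ** 2
        return np.array([pop], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.optimal = 1.0  # "true" amplitude for an exact pi rotation
        self.amplitude = 1.0 + self.np_random.uniform(-0.1, 0.1)
        return self._measure(), {}

    def step(self, action):
        self.t += 1
        self.amplitude += float(action[0])
        self.optimal += self.np_random.normal(0.0, self.drift_scale)  # slow drift
        obs = self._measure()
        reward = float(obs[0])  # higher population -> better calibrated pulse
        return obs, reward, False, self.t >= 200, {}
```

The agent never needs a model of the hardware; it only sees measurement outcomes and learns which corrections keep the reward high as the system drifts.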

“As quantum computers are scaling up and improving, there are all these problems that become bottlenecks, that become really compute-intensive,” said Sam Stanwyck, Nvidia’s group product manager for quantum computing. “Quantum error correction is really a huge one. This is necessary to unlock fault-tolerant quantum computing, but also how to apply exactly the right control pulses to get the most out of the qubits.”

Stanwyck also stressed that there was no system before DGX Quantum that would enable the kind of minimal latency necessary to perform these calculations.

Image: A quantum computer. Image credits: Quantum Machines

As it turns out, even a small improvement in calibration can lead to massive improvements in error correction. “The return on investment in calibration in the context of quantum error correction is exponential,” explained Quantum Machines Product Manager Ramon Szmuk. “If you calibrate 10% better, that gives you an exponentially better logical error [performance] in the logical qubit that is composed of many physical qubits. So there’s a lot of motivation here to calibrate very well and fast.”
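Szmuk’s “exponential” claim matches the textbook scaling of the surface code, which the presentation did not spell out; as a rough illustration, the logical error rate of a distance-d code falls off as a power of the physical error rate p:

```latex
% p_th is the code's error threshold; A is a constant prefactor:
p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}
```

With d = 11 the exponent is 6, so calibrating the physical error rate down by just 10% (p → 0.9p) cuts the logical error rate by a factor of 0.9⁶ ≈ 0.53, nearly half, and that leverage grows exponentially with code distance.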

It’s worth stressing that this is just the start of this optimization process and collaboration. What the team actually did here was simply take a handful of off-the-shelf algorithms and look at which one worked best (TD3, in this case). All in all, the actual code for running the experiment was only about 150 lines long. Of course, this relies on all of the work the two teams also did to integrate the various systems and build out the software stack. For developers, though, all of that complexity can be hidden away, and the two companies expect to create more and more open source libraries over time to take advantage of this larger platform.
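For a sense of what roughly 150 lines can look like, here is a hedged sketch of training an off-the-shelf TD3 agent on the toy environment above, using the stable-baselines3 library; the environment and all hyperparameters are our illustrative assumptions, not the team’s actual setup:

```python
from stable_baselines3 import TD3

# Train TD3 on the toy calibration environment sketched earlier.
env = PiPulseCalibrationEnv()
model = TD3("MlpPolicy", env, learning_rate=1e-3, verbose=0)
model.learn(total_timesteps=20_000)

# Deploy: the trained policy keeps nudging the amplitude as the system drifts.
obs, _ = env.reset()
for _ in range(50):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
```

Benchmarking a few such off-the-shelf agents mirrors what the team describes; TD3 is designed for continuous action spaces like small amplitude corrections, which is plausibly why it came out on top.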

Szmuk stressed that for this project, the team only worked with a very basic quantum circuit but that the approach generalizes to deep circuits as well. “If you can do this with one gate and one qubit, you can also do it with a hundred qubits and 1,000 gates,” he said.

“I’d say the individual result is a small step, but it’s a small step towards solving the most important problems,” Stanwyck added. “Useful quantum computing is going to require the tight integration of quantum computing with accelerated supercomputing, and that may be the most difficult engineering challenge. So being able to do this for real on a quantum computer and tune up a pulse in a way that is not just optimized for a small quantum computer but is a scalable, modular platform, we think we’re really on the way to solving some of the most important problems in quantum computing with this.”

Stanwyck also said that the two companies plan to continue this collaboration and get these tools into the hands of more researchers. With Nvidia’s Blackwell chips becoming available next year, they’ll have an even more powerful computing platform for this project.

