“Rapid calculation, all right,” went on the inventor. “It has to try out in a certain formula about ten million numbers. Each number would take a man at least six minutes to examine, which comes to sixty million minutes, or about a million hours. A man could not work at this sort of thing more than ten hours a day, so that gives a hundred thousand days. One could do it in three hundred years if he did not get stale.” “How fast is the machine working on this list of ten million numbers?” some one asked. “About a hundred thousand a minute,” replied the young man. “It may take an hour and a half to clean up the problem. With a larger driving motor we could make it in twenty minutes. The electric eye would catch it if it were going five times as fast.”
Suddenly, click! The power was shut off. “It must have seen something.” The machine was turned slowly back till a tell-tale beam of light appeared through the little hole before which the electric eye had been watching. Then some reading of dials and a little grinding of a computing machine and two numbers were found such that the square of one of them plus seven times the square of the other were equal to the number under examination. …
The machine had done its duty. … A few minutes computation still remained, and thus it was, while coffee was being served on one of the working tables in the laboratory the big number was broken up into the factors 59,957 and 88,114,244,437. These are the two hidden numbers which when multiplied together will give the sixteen digit number under examination. It may seem to the man in the street an odd thing to get excited about, but on this occasion
All Rome sent forth a rapturous cry,
And even the ranks of Tuscany
Could scarce forbear to cheer.
— Derrick N. Lehmer, Hunting Big Game in the Theory of Numbers (1932)
The problem of integer factorization has come up repeatedly throughout this dissertation, so I hope that the reader will forgive me for beginning the conclusion with a somewhat silly anecdote about it. Long before Shor’s algorithm, and long before even the RSA cryptosystem based on factoring, number theorists had a strong interest in factoring integers. It is such a straightforward problem—the inverse of multiplication, an operation which has been known since time immemorial—yet it has for so long remained stubbornly inefficient. Before the turn of the 20th century, the endeavor of factoring numbers essentially consisted of coming up with successively cleverer methods of doing so by hand. One such method involved the realization that factoring $N$ could be accomplished by choosing a few small numbers $p_i$ (say, $13, 17, 19, 23, 25$), and finding an integer $x$ for which $x \bmod p_i$ (the remainder when divided by each of the small numbers) landed on one of a few “good” values, for all $p_i$ simultaneously. (The set of “good” values for each $p_i$ is a function of $N \bmod p_i$; for a nice overview of how this works, see [155] or any article on the “Lehmer sieve.”)

Of course, simply searching exhaustively through integers by hand in an effort to find such an $x$ is extremely slow. So, in the late 1920s, Derrick H. Lehmer went into the student shop of the physics building at UC Berkeley (precisely the same building in which the work presented in this dissertation was performed!) and built a device consisting of an axle with several gears on it, a loop of bicycle chain hanging off of each gear, and some basic electrical components [231]. The number of links in each bicycle chain was set to correspond to each $p_i$, and a small piece of metal was attached to the chain links that corresponded to the “good” set of values modulo $p_i$. As the axle rotated, it incremented a counter mechanism that displayed a value $x$; the bike chain loops, having length $p_i$, naturally tracked the values of $x \bmod p_i$. The device was designed such that when the attached metal pieces all simultaneously aligned—corresponding to an $x \bmod p_i$ in the “good” set for all $p_i$—it completed an electric circuit, stopping the axle’s rotation and revealing $x$ on the counter and, with an easy further calculation, the factors of $N$. (The story recounted in the quote from Lehmer above, with its “electric eye,” corresponds to a later iteration of the machine, which used a photoelectric detector as a more reliable way to detect when to stop the axle. Despite its fantastical-sounding nature, as far as I know it is a real story. If only it were still acceptable to write scientific papers with such flowery language…)

It is not an overstatement to say this machine revolutionized the practice of factoring numbers. It smashed the records of its time for the largest numbers ever factored by orders of magnitude, and kicked off a new era of using machines, rather than pen and paper, to tackle this and other challenging problems.
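To make the sieve’s logic concrete, here is a minimal Python sketch of a Fermat-style variant: it hunts for an $x$ with $x^2 - N$ a perfect square $y^2$, so that $N = (x-y)(x+y)$. (The quote above describes a different quadratic form, $x^2 + 7y^2$; the function name and choice of moduli here are mine, purely for illustration.)

```python
import math

def lehmer_sieve_factor(N, moduli=(13, 17, 19, 23, 25)):
    """Toy simulation of the sieve's logic, Fermat-style: find x with
    x^2 - N = y^2, giving N = (x - y)(x + y). Requiring x^2 - N to be
    a square modulo each small p restricts x mod p to a "good" set of
    residues -- the condition the bicycle chains encoded mechanically."""
    squares = {p: {(r * r) % p for r in range(p)} for p in moduli}
    good = {p: {r for r in range(p) if (r * r - N) % p in squares[p]}
            for p in moduli}
    x = math.isqrt(N) + 1
    while True:
        # The "chains align": x mod p is good for every p simultaneously.
        if all(x % p in good[p] for p in moduli):
            d = x * x - N
            y = math.isqrt(d)
            if y * y == d:  # genuinely a perfect square, not just plausible
                return x - y, x + y
        x += 1

print(lehmer_sieve_factor(8051))  # (83, 97): 8051 = 83 * 97
```

Just as with the physical machine, nearly every candidate $x$ is rejected by the cheap modular test, and the expensive perfect-square check runs only on the rare survivors.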
At some level I feel that despite the dramatic difference in the hardware available to us, modern computing research, and especially quantum computing, is in the same spirit as Lehmer’s work. As researchers in these fields, our task is to take the physical laws of the world in which we live—making use of the inventions of those who came before us, whether they be bicycle chains or semiconductors—and use them to process information. I am honored to have had the chance, in this dissertation work, to contribute small steps forward in this endeavor, alongside many colleagues all across the world.
Of course, the work continues. For the reasons laid out in Section 1.1 of the introduction, I expect classical numerical study to remain an important tool in the analysis and development of quantum systems for years to come. Classical computing hardware is advancing rapidly, and leveraging new technologies—whether GPU acceleration as discussed in Chapter 2, or perhaps even newer types of processors just appearing on the horizon, such as tensor processing units (TPUs) [HMB+21, MHB+22]—will require constantly upgrading the software implementing our numerical techniques. As shown in Chapter 3, tuning numerical techniques for the specific problem at hand has the potential to yield considerable benefits; it would be interesting to explore which of those innovations can be applied more generally, or at least ported to more general software packages like the dynamite library presented in Chapter 2. One specific direction which has already begun to be explored (in as-yet-unpublished work) is the potential to apply the techniques of Chapter 3 to new quantum systems with similar structure, such as the Heisenberg-plus-random-field model in two dimensions.
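To fix ideas, here is a minimal sparse-matrix sketch of that model, assuming the standard convention $H = J\sum_{\langle ij\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + \sum_i h_i S_i^z$ with fields $h_i$ drawn uniformly from $[-W, W]$. The function names, couplings, and open boundary conditions are illustrative choices of mine; the actual studies would presumably use dynamite or the specialized code of Chapter 3 to reach meaningful system sizes.

```python
import numpy as np
import scipy.sparse as sp

# Spin-1/2 operators, S = sigma / 2.
sx = sp.csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex) / 2)
sy = sp.csr_matrix(np.array([[0, -1j], [1j, 0]]) / 2)
sz = sp.csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex) / 2)
I2 = sp.identity(2, format="csr", dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator at `site` into an n-spin Hilbert space."""
    out = op if site == 0 else I2
    for k in range(1, n):
        out = sp.kron(out, op if k == site else I2, format="csr")
    return out

def heisenberg_2d(Lx, Ly, J=1.0, W=5.0, seed=0):
    """H = J sum_<ij> S_i . S_j + sum_i h_i S_i^z on an Lx-by-Ly lattice,
    with open boundaries and random fields h_i ~ Uniform[-W, W]."""
    n = Lx * Ly
    idx = lambda x, y: x * Ly + y
    h = np.random.default_rng(seed).uniform(-W, W, size=n)
    H = sp.csr_matrix((2**n, 2**n), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            i = idx(x, y)
            # bonds to the right and downward neighbors (open boundaries)
            for j in ([idx(x + 1, y)] if x + 1 < Lx else []) + \
                     ([idx(x, y + 1)] if y + 1 < Ly else []):
                for s in (sx, sy, sz):
                    H = H + J * (site_op(s, i, n) @ site_op(s, j, n))
            H = H + h[i] * site_op(sz, i, n)
    return H

H = heisenberg_2d(3, 3)  # 512 x 512 sparse Hamiltonian for a 3x3 lattice
```

Even this toy version makes the scaling challenge plain: the Hilbert space dimension $2^{L_x L_y}$ grows so quickly with lattice size that two-dimensional systems of even modest extent demand exactly the kinds of optimized numerical techniques developed in Chapter 3.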
In the field of quantum cryptographic protocols, there are too many open research directions to list them all here. The most direct extension of the work in Part II is to continue the push to implement an efficiently-verifiable demonstration of quantum advantage at scale. One idea which to my knowledge has not been explored much is to relax our definition of “efficient” verification. The quantum advantage protocols discussed in Part II of this dissertation can be verified by a classical machine in polynomial time—but this may be overkill. A protocol which is exponentially hard to reproduce classically, and also exponentially hard to verify, could still be useful if the verification exponent is considerably smaller. For example, if reproducing the results takes $2^n$ classical operations, but verifying them takes only $2^{n/2}$, it would be possible to run an experiment that is classically infeasible to reproduce but can still be verified with some effort.
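To put illustrative numbers on this (the figures here are mine, not drawn from any particular protocol’s analysis): at $10^{12}$ classical operations per second,
$$
2^{80} \approx 1.2\times10^{24}\ \text{operations} \approx 38{,}000\ \text{years},
\qquad
2^{40} \approx 1.1\times10^{12}\ \text{operations} \approx 1\ \text{second},
$$
so at $n = 80$ such a protocol would be far beyond classical reach to spoof, while its verification would remain a merely modest computation.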
Taking a broader perspective, demonstrations of quantum advantage will remain interesting for only so long—once efficiently-verifiable ones have been convincingly implemented, they will not have much direct use. But quantum cryptographic protocols with similar structure have already been shown to be useful for a number of more interesting tasks, such as certifiable randomness generation, verifiable remote state preparation, and verifiable delegated quantum computing [BCM+21, GV19, Mah18]. A clear next step forward is to explore other types of practically-useful tasks to which these protocols may be applied.
Looking at the field of quantum computing more generally, it is anyone’s guess what the future holds. On the theoretical side, there are few explicit proofs constraining how things may go. The number of problems for which a truly radical superpolynomial quantum speedup has been explicitly shown is not large—and this set consists largely of somewhat abstract number-theoretic problems, which are only considered important to people’s lives due to their use in cryptography. Furthermore, from the perspective of complexity theory, there is not even any real evidence that factoring, for example, is classically hard: our best evidence so far is that a lot of very smart people have tried to figure out how to do it efficiently, and nobody has succeeded. On a somewhat similar note, quantum computers are widely conjectured (or hoped, depending on your perspective) to be useful for crucial real-world applications such as quantum chemistry—but concrete evidence for these claims is hard to come by [LLZ+23]. Given these points, I see three broad paths that the future of quantum computing may follow.
In one (perhaps unlikely, but I think not implausible) future, an efficient classical algorithm (or algorithms) will be discovered for problems like factoring, which quantum computers have been conjectured to speed up dramatically. This is the world in which the complexity classes BQP and BPP—informally, the problems efficiently solvable by quantum and classical computers, respectively—are equal. In this case we would be stuck with only polynomial speedups from quantum computers, and getting any useful advantage from them in solving real-world problems would require astounding advances in quantum hardware (see Section 1.2 of the introduction). Hopefully such hardware advances will be achieved, but that seems likely to take many, many years of work. In this world, quantum computers can still find use as extremely precisely programmable physics experiments, with the ability to implement large classes of quantum Hamiltonians with the push of a button.
In another future, problems like factoring remain classically hard, but no new “killer” applications are found beyond the broad categories of quantum speedup already explicitly known. In this case, quantum computers will eventually force the world’s cryptography to move to post-quantum secure algorithms, but aside from that, the outlook doesn’t actually look much different from that of the BQP = BPP world. Building quantum computers applicable to other real-world problems will take an enormous amount of effort.
In the third and most optimistic future, new applications of quantum computers, in which they show considerable and practical advantage over classical ones, will be discovered in the next few years. It seems that this future is the one that many people are hoping for (and in some cases betting their money on). It is not a terrifically implausible scenario: as discussed in the introduction, it is very difficult to discover new algorithms if you can only run them in your imagination. Hopefully, with the increasing availability of programmable quantum systems of considerable size, exploring their capabilities will lead to unexpected new directions. Such a situation would not be without precedent: an example directly relevant to Chapter 2 is the case of classical Krylov subspace algorithms for the matrix exponential. To quote Sidje [Sid98], “It seems these techniques were long established among chemical physicists without much justification other than their satisfactory behavior.” That is, someone just tried it, and it seemed to work pretty well! Only later was it rigorously understood why the error remained well-bounded. Perhaps new quantum algorithms will follow this path as well, with meaningful impacts on the world’s gravest problems such as climate change.
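That classical Krylov technique is itself simple enough to sketch in a few lines. Below is a minimal, illustrative Python version of the standard Arnoldi-based approximation $e^{tA}v \approx \beta\, Q_m e^{tH_m} e_1$; the function name is mine, real-valued $A$ is assumed for simplicity, and production implementations (e.g., Sidje’s Expokit) add the adaptive error control and time-stepping that this sketch omits.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_multiply(A, v, m=30, t=1.0):
    """Approximate exp(t*A) @ v using an m-dimensional Krylov subspace:
    project A onto span{v, Av, ..., A^(m-1) v} via Arnoldi, exponentiate
    the small projected matrix H, and lift the result back."""
    n = len(v)
    m = min(m, n)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    Q[:, 0] = v / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # "happy breakdown": invariant subspace
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * Q[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# Quick check against the dense matrix exponential on a small test case.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / 20
v = rng.standard_normal(200)
err = np.linalg.norm(krylov_expm_multiply(A, v) - expm(A) @ v)
print(err)  # small: the Krylov subspace captures exp(A) v well here
```

The appeal for quantum dynamics is that only matrix-vector products with $A$ are needed, so $A$ can be a sparse (or entirely matrix-free) Hamiltonian far too large to exponentiate directly.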
With that, I would like to conclude with a reminder that we ought to consider not only what problems technologies like quantum computing could solve, but also whether they are the right solution to pursue. Take climate change, for example: as just discussed, there is a small chance that quantum computers will lead to a breakthrough in carbon capture or, say, the production of fertilizer, that meaningfully helps to avert climate catastrophe. But the thing is, we already know how to avert climate catastrophe—by cutting down on the many wildly inefficient and unnecessary sources of greenhouse gas emissions that permeate our society. We even already know how to capture carbon—by preserving and restoring the world’s forest and ocean ecosystems which serve as massive natural carbon sinks. To be clear, I am not suggesting that investment in technology like quantum computing is useless. I certainly think it is worth pursuing, and I care on a personal level about its advancement. I simply hope that we can move it forward without ignoring the other, potentially less flashy, yet very important solutions to the critical problems of our world.