In a major departure from conventional computer technology, International Business Machines (IBM) Corporation today introduced its first computer using a main memory made entirely of monolithic circuits.
To store its data and instructions, the new IBM System/370 Model 145 uses silicon memory chips, rather than the magnetic core technology that has been the mainstay of computer memories for the past 15 years. The machine boasts a 2.5 MHz processor, 500 kilobytes of main memory, and 233 megabytes of disk storage.
Purchase price ranges from $US4.3 million to $US10.8 million. – IBM press release, September 23, 1970
The Law of Accelerating Returns
THE computer has come a long way since 1970. The IBM System/370 was large enough to fill an entire room but, by today’s standards, would have only enough capacity to store and access a small album of photos (and rather slowly at that). By comparison, your most basic smartphone in 2018 is at least a thousand times faster, can store over 80 times the amount of data, and can be purchased with an average week’s wages.
These drastic advances in capability and accessibility are largely due to our ability to create smaller and smaller integrated circuits that can store, process, and transmit data. As our manufacturing capabilities enabled us to further miniaturise the transistors that physically represent this data, we could fit more and more of them into the microprocessors that perform the critical functions of everything from the smallest handheld devices to the largest supercomputers.
Transistor count per chip has effectively doubled every eighteen months since the 1970s, a trend known as Moore’s law: early microprocessors contained a few thousand transistors, whereas those found in smartphones and laptops today contain billions.
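That doubling rate can be sanity-checked with some quick arithmetic. A minimal sketch, using illustrative endpoints (a few thousand transistors for an early-1970s microprocessor, a few billion for a modern chip) rather than figures from this article:

```python
import math

# How many 18-month doublings does it take to go from a few
# thousand transistors per chip to a few billion?
start = 2_300            # rough count for an early-1970s microprocessor
end = 4_000_000_000      # rough count for a modern smartphone chip
doublings = math.log2(end / start)
years = doublings * 18 / 12
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

The endpoints are only illustrative; the point is that a steady doubling compounds from thousands to billions within a few decades.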
The more transistors, the more processing power, the more complex the logic operations that can be performed, and the more data that can be stored. But advancement cannot continue at this pace forever. As populations grow, and advanced computer technology becomes ubiquitous, more powerful processors doing increasingly complex tasks will be added to electricity grids at an exponential rate, creating serious energy consumption concerns.
Further miniaturisation also presents problems: once the length scale of a transistor reaches tens of nanometres, there are only a small number of atoms to work with, and suddenly you’re up against the enigmatic laws of quantum mechanics.
So what is the next step for computing? How do we continue creating smaller devices that can perform more complex operations, yet use less energy? One solution is to completely rethink the fundamental building blocks of circuits, exploiting quantum mechanical phenomena to create a revolutionary new device known as a quantum computer.
The Quantum Leap
TO understand why quantum computers are such a huge deal, we should take a quick look at how classical computers work. Traditional computing systems, such as the one I’m writing this story on, operate via binary code, assigning a pattern of digits to each character. The digits are known as ‘bits’ and can either be a ‘0’ or a ‘1’ (for example, a capital ‘A’ is encoded as ‘1000001’, while a question mark [?] is ‘00111111’).
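Those two bit patterns are simply the characters’ ASCII code points written in binary, which is easy to verify:

```python
# The bit patterns quoted above are ASCII codes written in binary.
print(format(ord('A'), 'b'))    # prints '1000001'  (decimal 65)
print(format(ord('?'), '08b'))  # prints '00111111' (decimal 63, padded to 8 bits)
```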
Computers calculate by using circuits called ‘logic gates’, made from a number of transistors connected together, that compare patterns of bits stored in temporary memories called ‘registers’. These are then turned into new patterns of bits – essentially the computer equivalent of addition, subtraction, or multiplication.
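As a toy illustration of the idea (not any particular processor’s circuitry), a ‘half adder’, one of the simplest arithmetic circuits, combines two bits using just an XOR gate and an AND gate:

```python
# A half adder: the smallest circuit that adds two bits.
# XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={carry}")
```

Chaining such circuits together into full adders is how a processor’s arithmetic unit adds multi-bit numbers.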
Quantum computers, thanks to their quantum mechanical components, operate outside the familiar, everyday principles that govern their classical counterparts. Rather than a binary system, where each bit must hold one of two definite values, quantum bits (qubits) can exist in a state known as quantum superposition; in layperson’s terms, a qubit can be a ‘1’ or a ‘0’, or a mixture of both at once.
Much like the ‘wave–particle duality’ of quantum theory – where a photon can behave both as a well-defined single particle and as a light wave spread out over space that may interfere with other waves – a single qubit can exist in a well-defined classical state (i.e. 0 or 1), or as a wave state spread out over classical states that may interfere with other qubits.
Fulbright Scholar/quantum computer whiz Noah Johnson explains this notion using the (rather morbid) thought experiment known as Schrödinger’s Cat:
“So there’s this cat in a box with a small vial of poisonous radioactive material and a Geiger counter.
“Within the course of its half-life, there is a chance that the radioactive material has begun to decay, activating the Geiger counter, releasing the poison and killing the cat. However, there is an equal probability that the material has not decayed, and the cat is still alive.
“The ‘quantum mechanical’ side of this is, until you open the box and see it, technically you don’t know whether the cat is alive or dead, so – in terms of quantum behaviour and superposition – the cat is actually both alive and dead at the same time.
“So this is the same as these qubits; until we actually look at them and read the information they’re storing, they’re simultaneously in ‘0’ and ‘1’, so you can encode a tremendous amount more data. With 300 qubits, you could, theoretically, sustain more parallel computations than there are atoms in the universe.”
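The arithmetic behind that last claim checks out: 300 qubits span 2^300 basis states, while a commonly quoted rough estimate puts the number of atoms in the observable universe at around 10^80.

```python
states = 2 ** 300        # number of basis states for 300 qubits
atoms = 10 ** 80         # common rough estimate of atoms in the observable universe
print(len(str(states)))  # prints 91: 2**300 is a 91-digit number
print(states > atoms)    # prints True
```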
Have we lost you yet? Don’t fret! One of the very few universally understood facts about quantum theory is that the classical laws of physics we take for granted in our everyday world no longer automatically apply. What is important to understand is that qubits represent a revolutionary new technology for storing and processing data, enabling quantum computers not only to perform certain calculations much, much faster, but to also solve certain problems that would be fundamentally impossible on classical computers.
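For readers who want one step more detail, the standard textbook way to describe a single qubit is with two complex amplitudes, one per classical state; the squared magnitude of each amplitude gives the probability of measuring that outcome. A minimal sketch of that formalism only (not of how real hardware works):

```python
import math

# An equal superposition of |0> and |1>: amplitudes (1/sqrt(2), 1/sqrt(2)).
alpha = 1 / math.sqrt(2)  # amplitude of the |0> state
beta = 1 / math.sqrt(2)   # amplitude of the |1> state

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # prints P(0) = 0.50, P(1) = 0.50
```

Measuring collapses the superposition to a definite 0 or 1; the amplitudes, which can be negative or complex, are what allow the interference described above.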
The Fulbright Connection
Here I should give Noah Johnson a proper introduction, as he is the reason I’m writing this article.
Noah is a Fulbright Postgraduate Scholar from the University of Wisconsin at Madison, where he majored in physics and mathematics. He is currently in Australia, studying at the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T) at UNSW, under the globally renowned professor of Electrical Engineering and Telecommunications, Andrea Morello.
Noah at the 2018 Fulbright Gala Presentation Dinner
Late last year, Professor Morello’s team announced the discovery of a novel architecture for the microscopic processors that will be used in quantum computers (quantum chips), potentially enabling them to be manufactured more cheaply and easily than previously thought possible.
Noah’s work at CQC2T involves testing quantum chips at the near absolute zero temperatures required to avoid unwanted environmental interactions and to minimise errors. This process, in itself, is pretty amazing.
“Andrea once told me that if you look at a heat map of space, the coldest places in the known universe are right here on Earth, due to the extreme temperatures we need to artificially induce for these experiments,” said Noah.
“The low temperatures enable us to accurately determine the specific quantum state of the qubit. We create the qubit from two states with different energy levels – temperature can be thought of as the average amount of energy in a system.
“So if the temperature is hotter than the energy difference of the qubit, then there is a chance the qubit can start to occupy the higher energy state when you don’t want it to.
“Temperature can also lead to specific problems for us, as we use a spin in a solid-state system. Higher temperatures cause vibrations in the crystal lattice, which can couple to the spin system and destroy quantum information. In simple terms, you can think of these cryogenic systems as the equivalent of the fans in your computer that dissipate unwanted heat.”
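Noah’s point about temperature can be made quantitative. Using purely illustrative numbers – a qubit transition frequency of 5 GHz and a dilution-refrigerator temperature of 20 millikelvin, both assumptions rather than figures from the CQC2T experiments – we can compare the qubit’s energy splitting h·f with the thermal energy k_B·T:

```python
# Compare a qubit's energy splitting (h*f) with the thermal energy (k_B*T).
h = 6.62607015e-34    # Planck constant, J*s
k_B = 1.380649e-23    # Boltzmann constant, J/K

f = 5e9               # assumed qubit frequency: 5 GHz
T = 0.020             # assumed fridge temperature: 20 millikelvin

splitting = h * f
thermal = k_B * T
print(f"h*f / (k_B*T) = {splitting / thermal:.1f}")  # prints 12.0
```

A ratio well above 1 means thermal fluctuations only rarely carry enough energy to kick the qubit into its higher state, which is exactly why these experiments run at millikelvin temperatures.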
The Quantum Conundrum
SO why don’t we have mass-produced, consumer-priced quantum computers yet? Well, there are a number of barriers standing in the way.
Previously, for a silicon spin-based quantum computer, it was thought necessary to place with extreme precision the atoms within silicon that ‘trap’ the electrons used as qubits. This is because of the close proximity needed to couple two qubits, and the large variations in that coupling that result if the placement is not correct.
It is therefore difficult to foresee building up qubits in this way, as the electronics needed to control and read out each qubit take up a lot of room at this scale. A large array of closely placed qubits would require a new way of reading out and controlling individual qubits, and fabricating devices with that accuracy is extremely difficult and expensive.
One solution is simply getting better at placing atoms in a silicon lattice. This is very difficult, but is currently being attempted by Scientia Professor Michelle Simmons, also at CQC2T. [At the time of publication, Professor Simmons’ team announced a major breakthrough, reporting the first observation of controllable interactions between two qubits.]
Another solution is the flip-flop qubit that Noah is working on, which uses both the electron and the nuclear spin as the qubit. Utilising a new method, the electric dipole interaction, to mediate coupling between two qubits allows for larger spacing between them. This larger spacing leaves room for all of the necessary electronics without requiring more precise fabrication of devices.
This way, you can rely on all the methods built up to control and read out single qubits, while still having a method to couple larger numbers of qubits. In layperson’s terms, this means the circuitry required for this quantum computer architecture could be manufactured more easily, cheaply, and quickly.
An electrical component that is utilised to help prevent unwanted electromagnetic signals from reaching the qubit
While the flip-flop qubit idea is a huge breakthrough, an actual flip-flop qubit device has yet to be created and there are still some issues that need to be addressed.
“In our system, one thing that would be very helpful for scaling up from one qubit is a way to tune the read out time of our qubits.
“This would drastically help device performance and yield, which is increasingly important as our device fabrication processes become more complex. Many of the donors we implant in silicon are too far from, or too close to, our readout device, which makes them unusable as qubits. Having a method to tune the readout time would allow us to use more donors, and thus more devices.
“This could also allow us to tune the individual qubit readout times to better discern which qubit we are reading out in multi-qubit structures.”
Translation: Noah is looking into ways to better calibrate the devices they use to read the data stored on the qubits. This could help his team to scale up the manufacturing of the devices, as well as potentially creating a more efficient system.
“At this point in time we have a much greater understanding of the theory of quantum computation than the experimental understanding required for building a large-scale quantum computer.
“This discrepancy is basically a result of the difficulty of closely communicating with and reading out the qubit to the user in the outside environment, while that same process introduces a large amount of possible error due to destructive interactions between the qubit and the environment.
“This phenomenon, known as quantum decoherence, prevents a quantum system from interfering with itself and, as a result, destroys the possible superposition of quantum states and causes a loss of the ‘quantum’ nature of the information that we want.
“The challenge of building an instrument capable of quantum computation depends on roughly five requirements, including: well-defined two-level quantum states, or qubits; reliable preparation of these qubits in different states; low decoherence; accurate quantum gate operations; and a reliable method of accurately measuring stored information.”
In other words, there are still significant challenges ahead for Noah and Professor Morello’s team at CQC2T before flip-flop qubit-based quantum computers can be commercialised. However, their work is at the forefront of the field, and theirs is one of the most promising breakthroughs in computing in decades.