Berkeley Quantum Information & Computation Center

BQIC seminars are typically held in the CQCS seminar room, 101D Campbell Hall. We welcome a wide range of talks related to quantum information and computation. Please contact Jeffrey Epstein at epstein@berkeley.edu if you would like to be added to the mailing list or are interested in giving a seminar.

Machine learning techniques have recently been proposed as a way to efficiently represent certain quantum states, with applications in state tomography and ground state estimation. In this talk, we present a practically usable deep architecture for representing and sampling from probability distributions of quantum states. Our representation is based on variational autoencoders, a type of generative model in the form of a neural network. We show that this model is able to learn efficient representations of states that are easy to simulate classically, and can compress states that are not classically tractable. Specifically, we consider the learnability of a class of quantum states that, under plausible computational complexity assumptions, are provably hard to sample from for classical computers but not for quantum ones. The good level of compression achieved for hard states suggests that these methods may be suitable for characterising states of the size expected in NISQ devices.
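As a rough illustration (not the authors' implementation), the generative half of such a model can be sketched in NumPy: an encoder maps a measurement bitstring to a latent mean and log-variance, and a decoder maps a latent Gaussian sample to Bernoulli probabilities over measurement outcomes. The weights below are random and untrained, standing in for a model fit to measurement data; all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 4          # measurement outcomes are 4-bit strings
n_latent = 2          # latent dimension of the autoencoder
n_hidden = 8

# Random (untrained) weights -- this sketches the architecture only.
W_enc = rng.normal(size=(n_hidden, n_qubits)) * 0.1
W_mu  = rng.normal(size=(n_latent, n_hidden)) * 0.1
W_lv  = rng.normal(size=(n_latent, n_hidden)) * 0.1
W_dec = rng.normal(size=(n_qubits, n_latent)) * 0.1

def encode(x):
    # Recognition half: bitstring -> latent mean and log-variance
    h = np.tanh(W_enc @ x)
    return W_mu @ h, W_lv @ h

def decode(z):
    # Generative half: latent -> Bernoulli probability per measured bit
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))

def sample_bitstring():
    # Sample a latent vector from the prior, then sample the bits
    z = rng.normal(size=n_latent)
    p = decode(z)
    return (rng.random(n_qubits) < p).astype(int)

samples = np.array([sample_bitstring() for _ in range(1000)])
```

After training, `samples` would approximate draws from the measurement distribution of the target quantum state.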

To understand the fundamental trade-offs between training stability, temporal dynamics, and architectural complexity of recurrent neural networks (RNNs), we directly analyze RNN architectures using numerical methods for ordinary differential equations (ODEs). We define a general family of RNNs - the ODERNNs - by relating the composition rules of RNNs to integration methods for ODEs at discrete time steps. We show that the degree of functional nonlinearity n and the range of temporal memory t of an RNN representation can be mapped to the number of stages of the Runge-Kutta recursion and the order of the time derivatives in the ODEs. We prove that popular RNN architectures, such as the LSTM and the URNN, fit into different orders of n-t-ODERNNs. This exact correspondence between RNNs and ODEs helps us establish sufficient conditions for RNN training stability, and facilitates more flexible top-down design of new RNN architectures using existing knowledge of ODE solutions. We provide such an example: the Quantum-inspired Universal computing Neural Network (QUNN), which reduces the required number of training parameters from polynomial in both data length and temporal memory length to linear in temporal memory length alone.
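The basic correspondence can be sketched concretely: a residual-style RNN update is a forward-Euler (one-stage Runge-Kutta) step of an underlying ODE, and a multi-stage integrator gives a higher-order cell. A minimal NumPy sketch with hypothetical dynamics f and random weights (not the paper's ODERNN construction):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                              # hidden state dimension
W = rng.normal(size=(d, d)) * 0.3  # recurrent weights (random, illustrative)
U = rng.normal(size=(d,)) * 0.3    # input weights

def f(h, x):
    # Continuous-time dynamics dh/dt = f(h, x)
    return np.tanh(W @ h + U * x) - h

def euler_step(h, x, dt):
    # One-stage Runge-Kutta (forward Euler): a "residual" RNN cell
    return h + dt * f(h, x)

def rk2_step(h, x, dt):
    # Two-stage Runge-Kutta (midpoint rule): a higher-stage cell
    k1 = f(h, x)
    k2 = f(h + 0.5 * dt * k1, x)
    return h + dt * k2

h_euler = np.zeros(d)
h_rk2 = np.zeros(d)
xs = np.sin(np.linspace(0, 2 * np.pi, 50))  # toy input sequence
for x in xs:
    h_euler = euler_step(h_euler, x, 0.1)
    h_rk2 = rk2_step(h_rk2, x, 0.1)
```

Stability of the discrete recursion then inherits from the stability region of the chosen integrator, which is the lever the ODE view provides.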

Deep neural nets are a family of function approximators that have shown tremendous empirical success in machine learning in recent years. Despite this surge in popularity, the architecture design of these models remains equal parts art and science, often relying on partially formed intuition and heuristics in the absence of any cohesive, centralized framework within which to contextualize them. As a result, the behavior of neural nets at atypical inputs and in different problem domains, their optimization methodology, and a growing wealth of negative results about their generalizability remain poorly understood. In this talk, we will survey an intriguing line of research pioneered by researchers at Google Brain and Stanford's neural dynamics lab. We will explore a theoretical apparatus for treating neural nets as dynamical systems that iteratively distort their inputs. We will use mean-field theory to understand the limiting behavior of these systems, and use this understanding to explore the duality between input data flowing forward through a net and error signals flowing backward during gradient-based optimization of the net. Finally, we will leverage this perspective to derive bounds on the trainability of these models based on their capacity to permit information to flow through them, a phenomenon that can be proven to occur only for models initialized sufficiently close to the edge of chaos.
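The mean-field recursion at the heart of this analysis is easy to reproduce numerically. For a wide random tanh network, the preactivation variance q obeys q_{l+1} = sigma_w^2 E_z[tanh^2(sqrt(q_l) z)] + sigma_b^2 with z standard normal; iterating it exhibits the ordered phase (variance collapses) versus the chaotic phase (nonzero fixed point), with the edge of chaos in between. A Monte Carlo sketch:

```python
import numpy as np

def variance_map(q, sigma_w, sigma_b, n_mc=200_000, seed=0):
    # One layer of the mean-field recursion for a random tanh network:
    # q_{l+1} = sigma_w^2 * E_z[tanh^2(sqrt(q_l) z)] + sigma_b^2
    z = np.random.default_rng(seed).normal(size=n_mc)
    return sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2

def fixed_point(sigma_w, sigma_b, q0=1.0, n_iter=50):
    q = q0
    for _ in range(n_iter):
        q = variance_map(q, sigma_w, sigma_b)
    return q

# Ordered phase (small weights): the variance collapses toward sigma_b^2.
q_ordered = fixed_point(sigma_w=0.5, sigma_b=0.0)
# Chaotic phase (large weights): the variance settles at a nonzero fixed point.
q_chaotic = fixed_point(sigma_w=2.0, sigma_b=0.0)
```

The same machinery, applied to the correlation between two inputs, yields the depth scales that bound trainability near the critical line.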

Various spectral properties, such as the absorption spectrum or magnetic resonance spectra, have important applications in chemistry and condensed matter physics. The molecular vibronic spectrum, exponentially hard to calculate on a classical computer, is an important example that relates to the absorption properties of a molecule. Here we present a quantum algorithm for calculating the vibronic spectrum of a molecule in polynomial time. Both zero- and finite-temperature algorithms are described. Previous quantum computation proposals for simulating the same problem have focused on the use of a boson sampling device, whereas ours is based partly on the quantum phase estimation (QPE) algorithm. Our approach provides several potential advantages over previous quantum-based proposals for this problem. First, classically insurmountable (but chemically relevant) anharmonic effects can be included with only polynomial overhead. Second, after measurement, relevant information in the quantum state is preserved for further analysis. We include a discussion of various boson-to-qubit mappings and generalizations of the spectral calculations. Our algorithmic strategies for calculating full spectra are applicable to other problems of interest in chemistry and condensed matter physics.
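While the molecular details are beyond a short sketch, the QPE core of such an algorithm is easy to simulate. After the controlled-U^(2^j) layers act on an eigenstate with eigenvalue e^(2*pi*i*phi), the ancilla register holds a uniform superposition with phases e^(2*pi*i*phi*k); the inverse QFT (an inverse DFT on the amplitudes) then concentrates probability near phi. A toy simulation with an assumed phase:

```python
import numpy as np

def qpe_distribution(phi, n_ancilla):
    # Ancilla state after the controlled-U^(2^j) layers:
    # (1/sqrt(N)) * sum_k e^(2*pi*i*phi*k) |k>.  Applying the inverse QFT
    # is an inverse DFT on the amplitudes; measuring gives k ~ phi * N.
    N = 2 ** n_ancilla
    k = np.arange(N)
    amps = np.exp(2j * np.pi * phi * k) / np.sqrt(N)
    out = np.fft.fft(amps) / np.sqrt(N)   # matches the inverse-QFT sign convention
    return np.abs(out) ** 2

probs = qpe_distribution(phi=0.3, n_ancilla=8)
estimate = np.argmax(probs) / 2 ** 8
```

With 8 ancilla qubits the most likely outcome pins down phi to within one part in 256, which is the usual QPE precision/ancilla trade-off.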

I will discuss Renyi entropies in quantum error correcting codes and compare the answer to the cosmic brane prescription for computing them in AdS/CFT. We find that general operator algebra codes have a similar, more general prescription. Notably, for the AdS/CFT code to match the specific cosmic brane prescription, the code must have maximal entanglement within eigenspaces of the area operator. This gives us an improved definition of the area operator, and establishes a stronger connection between the Ryu-Takayanagi area term and the edge modes in lattice gauge theory. We also propose a new interpretation of existing holographic tensor networks as area eigenstates instead of smooth geometries. This interpretation would explain why tensor networks have historically had trouble modeling the Renyi entropy spectrum of holographic CFTs, and it suggests a method to construct holographic networks with the correct spectrum.
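The quantities involved are straightforward to compute for small examples. A flat entanglement spectrum (maximal entanglement within a subspace, as the matching condition requires) makes all Renyi entropies coincide, while a non-flat spectrum makes them depend on the Renyi index n. A small NumPy check with hypothetical spectra:

```python
import numpy as np

def renyi_entropy(rho, n):
    # S_n(rho) = log(Tr[rho^n]) / (1 - n)
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return np.log(np.sum(evals ** n)) / (1 - n)

# Flat spectrum (maximally mixed on a 4-dim subspace): all S_n equal log 4.
rho_flat = np.eye(4) / 4
# Non-flat spectrum: S_n varies with n.
rho_tilt = np.diag([0.7, 0.2, 0.07, 0.03])

s2_flat, s3_flat = renyi_entropy(rho_flat, 2), renyi_entropy(rho_flat, 3)
s2_tilt, s3_tilt = renyi_entropy(rho_tilt, 2), renyi_entropy(rho_tilt, 3)
```

The n-independence of the flat case is exactly why a fixed-spectrum tensor network struggles to reproduce the nontrivial Renyi spectrum of a holographic CFT.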

We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist?

We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of \([[N,k,d,\epsilon]]\) approximate QLDPC codes that encode \(k=\widetilde{\Omega}(N)\) logical qubits into \(N\) physical qubits with distance \(d=\widetilde{\Omega}(N)\) and approximation infidelity \(\epsilon=\mathcal{O}(1/\textrm{polylog}(N))\). The code space is stabilized by a set of 10-local noncommuting projectors, with each physical qubit only participating in \(\textrm{polylog}(N)\) projectors. We prove the existence of an efficient encoding map, and we show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Finally, we show that the spectral gap of the code Hamiltonian is \(\widetilde{\Omega}(N^{-3.09})\) by analyzing a spacetime circuit-to-Hamiltonian construction for a bitonic sorting network architecture that is spatially local in \(\textrm{polylog}(N)\) dimensions.
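For contrast, the defining sparsity of an exact stabilizer QLDPC code can be checked explicitly. The toric code (constant-weight checks and constant qubit degree, though not constant rate) is the standard example; the sketch below builds its star and plaquette supports on an L x L torus and verifies the LDPC property and stabilizer commutativity:

```python
import numpy as np

def toric_code_checks(L):
    # Star (X-type) and plaquette (Z-type) stabilizer supports on an
    # L x L toric lattice with 2*L*L qubits, one per edge.
    n = 2 * L * L
    def h_edge(r, c): return (r % L) * L + (c % L)          # horizontal edges
    def v_edge(r, c): return L * L + (r % L) * L + (c % L)  # vertical edges
    stars, plaqs = [], []
    for r in range(L):
        for c in range(L):
            s = np.zeros(n, dtype=int)
            # star at vertex (r, c): its four incident edges
            for e in (h_edge(r, c), h_edge(r, c - 1), v_edge(r, c), v_edge(r - 1, c)):
                s[e] = 1
            stars.append(s)
            p = np.zeros(n, dtype=int)
            # plaquette at face (r, c): its four boundary edges
            for e in (h_edge(r, c), h_edge(r + 1, c), v_edge(r, c), v_edge(r, c + 1)):
                p[e] = 1
            plaqs.append(p)
    return np.array(stars), np.array(plaqs)

stars, plaqs = toric_code_checks(4)
row_weights = np.concatenate([stars.sum(axis=1), plaqs.sum(axis=1)])
col_weights = np.vstack([stars, plaqs]).sum(axis=0)
```

Every check touches exactly 4 qubits and every qubit sits in exactly 4 checks, and X-type and Z-type supports always overlap on an even number of edges (so the stabilizers commute) -- precisely the commuting structure the approximate codes above relax.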

Adiabatic quantum computing (AQC) has attracted considerable attention over the years, largely driven by the commercial availability of quantum annealing systems. Though AQC is known to be polynomially equivalent in computational power to circuit-model quantum computation, it has proved challenging to protect adiabatic computations from noise. In this informal talk, I'll begin with a brief overview of AQC and quantum annealers. I'll then discuss a family of [[6k,2k,2]] error-detecting quantum codes that can be used to suppress errors in adiabatic quantum computers. If there's time, I'll close by summarizing the challenges that arise when trying to implement a fully error-corrected adiabatic quantum computation.
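The detection mechanism behind distance-2 codes can be illustrated with the standard [[4,2,2]] code (a different code than the [[6k,2k,2]] family of the talk, used here only as a small worked example): every single-qubit Pauli error anticommutes with at least one stabilizer, so an energy penalty on the stabilizers raises the energy of, and thereby flags, any such error.

```python
import numpy as np

# Stabilizers of the [[4,2,2]] code in (X|Z) symplectic form: XXXX and ZZZZ.
stabilizers = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],   # X X X X
    [0, 0, 0, 0, 1, 1, 1, 1],   # Z Z Z Z
])

def symplectic_product(a, b):
    # 0 if the two Paulis commute, 1 if they anticommute
    n = len(a) // 2
    return (a[:n] @ b[n:] + a[n:] @ b[:n]) % 2

def detected(error):
    # Detected iff the error anticommutes with some stabilizer,
    # i.e. it is penalized by the corresponding energy-penalty term.
    return any(symplectic_product(s, error) == 1 for s in stabilizers)

def single_qubit_paulis(n=4):
    for q in range(n):
        for x, z in ((1, 0), (0, 1), (1, 1)):   # X, Z, Y on qubit q
            e = np.zeros(2 * n, dtype=int)
            e[q], e[n + q] = x, z
            yield e

all_detected = all(detected(e) for e in single_qubit_paulis())
```

X errors are caught by ZZZZ, Z errors by XXXX, and Y errors by both, which is exactly the distance-2 guarantee: detect, but not correct, any single-qubit error.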

Harnessing the unique properties of quantum mechanics offers the possibility of delivering new technologies that can fundamentally outperform their classical counterparts. These technologies only deliver advantages when their components operate with performance beyond specific thresholds. For optical quantum metrology, the biggest challenge impacting performance is optical loss.

In this talk I will discuss how including an optical delay and an optical switch in a feed-forward configuration reduces the detector efficiency required to enable quantum-enhanced sensing when using biphoton pair sources. I will also discuss how combining feed-forward with a high-detector-efficiency CCD camera can boost this advantage, enabling a demonstration of an absolute quantum advantage per photon probe in the precision of optical direct transmission measurements and imaging.
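The "advantage per photon probe" has a simple idealized illustration (ignoring loss and detector inefficiency, so this is a sketch of the principle rather than the experiment): estimating a transmission T with N single photons gives estimator variance T(1-T)/N, beating the T/N of a coherent probe with the same mean photon number. A hypothetical Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)
T_true = 0.8          # transmission of the sample (assumed value)
N = 100_000           # photons per trial
trials = 200

def estimate_with_fock(n_photons):
    # Single-photon probes: each photon transmits independently (Bernoulli),
    # so the transmitted count is binomial with variance T(1-T)*N.
    return rng.binomial(n_photons, T_true) / n_photons

def estimate_with_coherent(mean_photons):
    # Coherent probe: transmitted counts are Poisson with mean T * <n>,
    # so the estimator variance is T / <n>.
    return rng.poisson(T_true * mean_photons) / mean_photons

var_fock = np.var([estimate_with_fock(N) for _ in range(trials)])
var_coherent = np.var([estimate_with_coherent(N) for _ in range(trials)])
```

At T = 0.8 the sub-Poissonian photon statistics cut the variance per photon by a factor of five; imperfect detection erodes this, which is why the feed-forward scheme's relaxed efficiency threshold matters.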

Adiabatic Quantum Computing (AQC) is an attractive paradigm for solving hard integer polynomial optimization problems. Available hardware restricts the Hamiltonians to a structure that allows only pairwise interactions. This requires that the original optimization problem first be converted -- from its polynomial form -- into a quadratic unconstrained binary optimization (QUBO) problem, which we frame as a problem in algebraic geometry. Additionally, the hardware graph in which such a QUBO Hamiltonian needs to be embedded -- assigning variables of the problem to the qubits of the physical optimizer -- is not a complete graph, but rather one with limited connectivity. This "problem graph to hardware graph" embedding can also be framed as the problem of computing a Groebner basis of a certain specially constructed polynomial ideal. We develop a systematic computational approach that prepares a given polynomial optimization problem for AQC in three steps. The first step reduces an input polynomial optimization problem to a QUBO through the computation of the Groebner basis of a toric ideal generated from the monomials of the input objective function. The second step computes feasible embeddings. The third step computes the spectral gap of the adiabatic Hamiltonian associated with a given embedding. These steps are applicable well beyond integer polynomial optimization. Our paper provides the first general-purpose computational procedure that can be used directly as a translator to solve polynomial integer optimization. Alternatively, it can be used as a test-bed (with small-size problems) to help design efficient heuristic quantum compilers by studying various choices of reductions and embeddings in a systematic and comprehensive manner. An added benefit of our framework is in designing Ising architectures through the study of Y minor universal graphs.
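The polynomial-to-QUBO step can also be illustrated with the standard substitution method (simpler than the Groebner-basis machinery of the talk): replace a product x1*x2 by a new binary variable y, enforced by the Rosenberg penalty x1*x2 - 2*x1*y - 2*x2*y + 3*y, which vanishes exactly when y = x1*x2 and is at least 1 otherwise. A brute-force check on a cubic monomial:

```python
import itertools

P = 10  # penalty weight; any P larger than the objective's range works

def cubic(x1, x2, x3):
    # Original degree-3 objective on binary variables
    return x1 * x2 * x3

def qubo(x1, x2, x3, y):
    # Quadratized objective: y stands in for the product x1*x2,
    # with the Rosenberg penalty enforcing the identification.
    penalty = x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y
    return y * x3 + P * penalty

# Minimizing over the auxiliary y reproduces the cubic on every input.
ok = all(
    min(qubo(x1, x2, x3, y) for y in (0, 1)) == cubic(x1, x2, x3)
    for x1, x2, x3 in itertools.product((0, 1), repeat=3)
)
```

Each such substitution trades one degree of the polynomial for one auxiliary qubit, which is why the choice of reduction matters for hardware with few qubits and limited connectivity.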

We present a quantum machine learning model inspired by convolutional neural networks (CNNs). After a brief discussion of classical CNNs, we describe the concrete circuit model/architecture for quantum CNNs and discuss how learning is performed. While quantum CNNs can be applied to classical or quantum input, we focus on the latter case. Specifically, we discuss how quantum CNNs can be used to detect quantum phase transitions and present results of numerical simulations for identifying a 1D SPT phase. Finally, we provide a theoretical explanation for the success of quantum CNNs, by relating our quantum CNN model to renormalization-group flow using the multi-scale entanglement renormalization ansatz (MERA) and quantum error correction.
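Assuming, for illustration, that each convolution-plus-pooling layer discards half of the qubits (the talk's construction may differ in detail), the circuit depth in layers is logarithmic in the input size, mirroring the MERA-like coarse-graining:

```python
def qcnn_layers(n_qubits):
    # Each convolution + pooling layer keeps half of the qubits,
    # so the number of layers (circuit depth) is O(log N).
    widths = []
    n = n_qubits
    while n > 1:
        widths.append(n)
        n //= 2
    return widths

widths = qcnn_layers(16)
depth = len(widths)
```

This logarithmic depth is what makes the architecture's parameter count modest and ties it to renormalization-group flow.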

Emerging reinforcement learning techniques using deep neural networks have shown great promise in control optimization. They harness non-local regularities of noisy control trajectories and facilitate transfer learning between tasks. To leverage these powerful capabilities for quantum control optimization, we propose a new control framework to simultaneously optimize the speed and fidelity of quantum computation against both leakage and stochastic control errors. For a broad family of two-qubit unitary gates that are important for quantum simulation of many-electron systems, we improve control robustness by adding control noise to the training environments of reinforcement learning agents trained with trust-region policy optimization. The agents' control solutions demonstrate a two-order-of-magnitude reduction in average gate error over baseline stochastic-gradient-descent solutions, and up to a one-order-of-magnitude reduction in gate time relative to optimal gate synthesis counterparts. We will briefly discuss the extension to multi-qubit controls and the accompanying challenges.
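As a toy stand-in for the optimizers discussed here (finite-difference gradient descent rather than reinforcement learning, a single qubit rather than two, and no noise), one can optimize a piecewise-constant control pulse toward an X gate; all parameters below are illustrative.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def expm_herm(H, dt):
    # exp(-i H dt) for a Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def evolve(amps, dt=0.1):
    # Piecewise-constant control Hamiltonian: H_k = amps[k] * X on slice k
    U = np.eye(2, dtype=complex)
    for a in amps:
        U = expm_herm(a * X, dt) @ U
    return U

def infidelity(amps, target):
    # Phase-insensitive gate infidelity 1 - |Tr(V† U)|^2 / d^2
    U = evolve(amps)
    return 1.0 - abs(np.trace(target.conj().T @ U)) ** 2 / 4.0

target = X
amps = np.full(8, 0.1)        # small nonzero start avoids a zero-gradient point
eps, lr = 1e-6, 2.0
for _ in range(150):
    grad = np.zeros_like(amps)
    for k in range(len(amps)):
        step = np.zeros_like(amps)
        step[k] = eps
        grad[k] = (infidelity(amps + step, target)
                   - infidelity(amps - step, target)) / (2 * eps)
    amps -= lr * grad

final_infidelity = infidelity(amps, target)
```

An RL agent replaces this local gradient descent with a learned policy, which is what buys robustness to the control noise injected during training.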

We, as quantum-information scientists, have been lulled into a false sense of security by ubiquitous depictions of quantum state space as the child-friendly Bloch ball. Immediately upon leaving the safety of the qubit, danger appears and the state space acquires all manner of protrusions. These higher-dimensional complications have a profound impact on the performance of quantum-state tomography protocols. I will present a particular, operationally relevant geometric quantity that captures this effect and describe its behavior, focusing on the simpler but illuminating case of a three-level system. Motivated by the applications in tomography, I will also discuss the geometric consequences of the restrictions quantum mechanics places on measurements.
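One way to see the "protrusions" numerically: starting from the maximally mixed state, the distance to the boundary of state space is the same in every direction for a qubit (a round ball) but direction-dependent for a qutrit. A sketch with randomly sampled traceless Hermitian directions:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_traceless_hermitian(d):
    # Random direction in the space of traceless Hermitian matrices,
    # normalized so that Tr(H^2) = 1.
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2
    H -= (np.trace(H).real / d) * np.eye(d)
    return H / np.sqrt(np.trace(H @ H).real)

def boundary_radius(H):
    # Largest r with I/d + r*H >= 0: distance from the maximally mixed
    # state to the state-space boundary along direction H.
    d = H.shape[0]
    lam_min = np.linalg.eigvalsh(H)[0]
    return -1.0 / (d * lam_min)

radii = {d: [boundary_radius(random_traceless_hermitian(d)) for _ in range(2000)]
         for d in (2, 3)}

spread_qubit = np.std(radii[2])    # ~0: the qubit state space is a round ball
spread_qutrit = np.std(radii[3])   # > 0: the qutrit body bulges unevenly
```

For d = 2 every direction hits the boundary at radius 1/sqrt(2), while for d = 3 the radius varies by roughly a factor of two between directions, which is exactly the anisotropy that complicates tomography beyond the qubit.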

Superconducting qubits are among the most promising candidates for building a quantum computer. In this talk, we discuss high-power measurement of their currently popular flavor, the transmon qubit. For measurement, the qubit is dispersively coupled to a detuned readout resonator; this forms a Jaynes-Cummings (JC) ladder of energies between the two. We explore this JC ladder to study measurement at large powers and to explain experimental observations performed on tunable transmon qubits. In particular, we introduce the dressed squeezed state as an approximation of the joint state of the qubit and its readout resonator, and show that (usually neglected) non-RWA couplings lead to abrupt qubit-state deterioration with increasing measurement microwave power.
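The JC ladder itself is easy to diagonalize numerically. The sketch below (with illustrative, hypothetical parameters) builds the Jaynes-Cummings Hamiltonian in a truncated Fock basis; within each excitation manifold the dressed-state splitting follows the familiar sqrt(Delta^2 + 4g^2(n+1)) form:

```python
import numpy as np

def jc_hamiltonian(wr, wq, g, n_max):
    # Jaynes-Cummings model: H = wr a†a + (wq/2) σz + g (a σ+ + a† σ-)
    # Basis |n, s>: photon number n, qubit state s (0 = ground, 1 = excited)
    dim = 2 * n_max
    H = np.zeros((dim, dim))
    def idx(n, s): return 2 * n + s
    for n in range(n_max):
        for s in (0, 1):
            H[idx(n, s), idx(n, s)] = wr * n + wq * (s - 0.5)
    for n in range(n_max - 1):
        # coupling |n, e> <-> |n+1, g> with matrix element g*sqrt(n+1)
        c = g * np.sqrt(n + 1)
        H[idx(n + 1, 0), idx(n, 1)] = c
        H[idx(n, 1), idx(n + 1, 0)] = c
    return H

# Dispersive regime: qubit detuned from the resonator, weak coupling.
wr, wq, g = 6.0, 5.0, 0.1     # illustrative GHz-scale numbers
E = np.linalg.eigvalsh(jc_hamiltonian(wr, wq, g, n_max=30))
```

Because the RWA Hamiltonian conserves the excitation number, it block-diagonalizes into 2x2 manifolds; the non-RWA terms discussed in the talk break this structure and mix manifolds at high drive power.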

One of the most important achievements in the field of quantum computation is the discovery of quantum error-correcting codes and other methods for fault-tolerant quantum computation, which have found far-reaching applications in other areas of physics, from topological order to quantum gravity. However, the theory of fault-tolerant quantum computation has mostly been developed for the standard circuit model of quantum computation and cannot be directly applied to other models. In this talk, I will discuss different aspects of fault tolerance in Adiabatic Quantum Computation and, in particular, present results on the performance and limitations of error suppression with energy penalties. In this approach, information is encoded in the ground subspace (or the low-energy sector) of a Hamiltonian with a large energy gap that penalizes errors from the environment. I will present a no-go theorem for quantum error suppression with two-local commuting Hamiltonians.
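The energy-penalty idea has a minimal toy version: a 2-local commuting penalty Hamiltonian for the 3-qubit repetition code, whose degenerate ground space holds the codewords and whose gap penalizes single bit flips. (The talk's no-go theorem concerns the ultimate limits of exactly this kind of 2-local commuting construction.)

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def on(op, q, n=3):
    # Operator op acting on qubit q of an n-qubit register
    return kron_all([op if i == q else I2 for i in range(n)])

# Penalty Hamiltonian for the 3-qubit repetition code:
# ground space spanned by |000> and |111>; bit flips cost energy.
H = -(on(Z, 0) @ on(Z, 1) + on(Z, 1) @ on(Z, 2))

E = np.linalg.eigvalsh(H)
gap = E[2] - E[0]                 # energy gap above the 2-fold ground space

ground = np.zeros(8)
ground[0] = 1.0                   # |000>
flipped = on(X, 0) @ ground       # a single bit-flip error
penalty = flipped @ H @ flipped - ground @ H @ ground
```

Any single bit flip raises the energy by the gap, so weak environmental couplings are suppressed; the no-go result shows that such 2-local commuting penalties cannot be scaled up to full-strength error suppression.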