Quantum Neural Networks (QNNs)

Overview

Quantum Neural Networks (QNNs) represent a hybrid paradigm that combines quantum computing with the principles of artificial neural networks (ANNs). They are designed to harness the computational power of quantum mechanics—superposition, entanglement, and interference—to build models that can potentially outperform classical deep learning for specific classes of problems.


What are QNNs?

QNNs aim to mimic the behavior of classical neural networks but operate on quantum states instead of classical data. While a classical neural network uses neurons, activations, and weighted sums, QNNs use:

  • Qubits instead of neurons
  • Parameterized quantum gates instead of weights
  • Measurement outcomes instead of activations

This architecture lets QNNs explore function classes that may be hard for classical models to represent efficiently.


Structure of a QNN

A typical QNN consists of:

  • Encoding Layer: Classical data is encoded into quantum states using parameterized gates.
  • Parameterized Quantum Circuit (PQC): The quantum equivalent of hidden layers, where tunable gates apply transformations based on learnable parameters.
  • Measurement Layer: Qubits are measured to extract classical information, serving as the output.

Diagram: Structure of a Variational Quantum Circuit (used in QNNs)


Mathematical Formulation

In QNNs, the learnable parameters are embedded into rotation gates. For example:

  • A rotation gate on qubit i:

\[R_y(\theta) = \exp(-i\theta Y / 2)\]

where θ acts as a weight, and Y is the Pauli-Y operator.
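This identity can be checked numerically: for the Pauli-Y operator, exp(−iθY/2) reduces to the familiar 2×2 rotation matrix, since exp(−iaY) = cos(a) I − i sin(a) Y. A minimal NumPy check:

```python
import numpy as np

theta = 0.7
Y = np.array([[0, -1j], [1j, 0]])
I = np.eye(2)

# exp(-i theta Y / 2) via the identity exp(-i a Y) = cos(a) I - i sin(a) Y
Ry_exp = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Y

# Standard matrix form of the Ry gate
Ry_matrix = np.array([
    [np.cos(theta / 2), -np.sin(theta / 2)],
    [np.sin(theta / 2),  np.cos(theta / 2)],
])

print(np.allclose(Ry_exp, Ry_matrix))  # True
```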

The QNN forward pass:

  1. Input Encoding: Data x is encoded as a quantum state \(|\psi(x)\rangle\).
  2. Parameterized Transformation: Apply unitary transformations U(θ) with learnable parameters:

\[|\psi(\theta, x)\rangle = U(\theta)\,|\psi(x)\rangle\]

  3. Measurement: Expectation values of observables provide the outputs:

\[y = \langle\psi(\theta, x)|\,M\,|\psi(\theta, x)\rangle\]

This plays a role analogous to the activation function in classical networks.
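The three steps of the forward pass can be traced by hand on a single qubit with plain NumPy, taking U(θ) = Ry(θ) and M = Z as a minimal concrete choice:

```python
import numpy as np

def ry(angle):
    # Matrix of the single-qubit Y rotation R_y(angle)
    return np.array([
        [np.cos(angle / 2), -np.sin(angle / 2)],
        [np.sin(angle / 2),  np.cos(angle / 2)],
    ])

Z = np.diag([1.0, -1.0])
x, theta = 0.8, 0.3

# 1. Input encoding: |psi(x)> = Ry(x)|0>
psi = ry(x) @ np.array([1.0, 0.0])

# 2. Parameterized transformation, here U(theta) = Ry(theta)
psi = ry(theta) @ psi

# 3. Measurement: y = <psi| Z |psi>
y = psi @ Z @ psi
print(y)  # for this particular circuit, equals cos(x + theta)
```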


Training Workflow

Training a QNN involves a hybrid quantum-classical loop:

  1. Initialize parameters θ.
  2. Forward pass: Run the quantum circuit with θ.
  3. Measure output and compute loss against target.
  4. Classical optimizer (e.g., gradient descent, Adam) updates θ.
  5. Repeat until convergence.

Workflow Diagram: Training Loop of QNNs (Classical Optimizer ↔ Quantum Circuit)
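Step 4 glosses over one quantum-specific detail: how the classical optimizer obtains gradients of a circuit it can only sample from. A common answer (and PennyLane's default on real hardware) is the parameter-shift rule, which recovers the exact gradient of an expectation value from two extra circuit evaluations. A sketch, using the closed-form expectation ⟨Z⟩ = cos(θ) of Ry(θ)|0⟩ as a stand-in for a real circuit evaluation:

```python
import numpy as np

def expval(theta):
    # Stand-in for running the circuit: <Z> = cos(theta) for Ry(theta)|0>
    return np.cos(theta)

def parameter_shift_grad(f, theta, s=np.pi / 2):
    # Parameter-shift rule: exact gradient from two shifted evaluations
    return (f(theta + s) - f(theta - s)) / (2 * np.sin(s))

theta = 0.4
grad = parameter_shift_grad(expval, theta)
print(grad)  # matches the analytic derivative -sin(theta)
```

Unlike finite differences, this is not an approximation: for expectation values of gates of the form exp(−iθG/2), the shifted evaluations give the derivative exactly.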


Advantages of QNNs

  • Quantum parallelism may enable exploration of larger hypothesis spaces.
  • Natural representation of quantum systems (e.g., molecules, physics simulations).
  • Potential for exponential speedups in certain tasks.

Challenges of QNNs

  • Noise in quantum hardware limits circuit depth.
  • Gradient vanishing (barren plateaus) can hinder training.
  • Scalability remains an open research problem.

Applications

  • Quantum chemistry: Simulating molecular structures.
  • Finance: Portfolio optimization.
  • Healthcare: Drug discovery and quantum-enhanced diagnostics.
  • AI acceleration: Faster and more expressive learning models.

Step-by-Step Example: A Simple Quantum Neural Network

To make QNNs more concrete, let’s build a 1-qubit QNN that tries to learn the simple mapping \(f(x) = \cos(x)\).

Step 1: Input Encoding

We encode classical input x into a quantum state using a rotation gate:

\[|\psi(x)\rangle = R_y(x)\,|0\rangle\]

This rotates the qubit by angle x around the Y-axis.

Step 2: Parameterized Layer (Weights)

We add a trainable parameter θ:

\[|\psi(x, \theta)\rangle = R_y(\theta)\,R_y(x)\,|0\rangle\]

Here, θ is our weight, adjusted during training.

Step 3: Measurement (Output)

We measure in the Z-basis to get the expectation value:

\[y(x, \theta) = \langle\psi(x, \theta)|\,Z\,|\psi(x, \theta)\rangle\]

This produces outputs between −1 and +1, similar to an activation function.

Step 4: Loss Function

We compare the prediction y(x, θ) with the target f(x) using the mean squared error:

\[L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\bigl(y(x_i, \theta) - f(x_i)\bigr)^2\]
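In code, this loss is an ordinary mean squared error over the predictions. For this particular circuit there is even a closed form: Ry(θ)Ry(x)|0⟩ = Ry(x+θ)|0⟩, so y(x, θ) = cos(x + θ), and θ = 0 fits the target exactly:

```python
import numpy as np

def y_pred(xs, theta):
    # Closed-form <Z> for Ry(theta) Ry(x) |0>: cos(x + theta)
    return np.cos(xs + theta)

def loss(theta, xs, targets):
    preds = y_pred(xs, theta)
    return np.mean((preds - targets) ** 2)

xs = np.linspace(0, np.pi, 10)
targets = np.cos(xs)

print(loss(0.0, xs, targets))  # 0.0: theta = 0 reproduces cos(x) exactly
print(loss(0.5, xs, targets))  # positive: any other theta incurs error
```

This also explains what training should find in the hands-on example: the optimizer drives θ toward 0 (mod 2π).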

Step 5: Training (Hybrid Loop)

  1. Classical side picks some training points xi.
  2. Quantum side runs the circuit for each xi with current θ.
  3. Measure → compute predictions.
  4. Classical optimizer updates θ.
  5. Repeat until the loss converges.

Mini Workflow Diagram (Specific to This Example)

  1. Input x → Apply Ry(x)
  2. Apply trainable gate Ry(θ)
  3. Measure qubit (Pauli-Z) → Output y(x,θ)
  4. Compute loss against cos(x)
  5. Classical optimizer updates θ

✅ This toy 1-qubit QNN captures the essence of larger QNNs while staying mathematically simple. Learners can even implement this in Qiskit or PennyLane to see training in action.


Hands-On Example: Training a Simple QNN with PennyLane

We’ll build the 1-qubit QNN from the toy example above, which learns \(f(x) = \cos(x)\).

```python
import pennylane as qml
from pennylane import numpy as np

# Step 1: Set up a 1-qubit device (simulator)
dev = qml.device("default.qubit", wires=1)

# Step 2: Define the QNN circuit
@qml.qnode(dev)
def qnn(x, theta):
    # Encode input x into quantum state
    qml.RY(x, wires=0)
    # Trainable weight (theta)
    qml.RY(theta, wires=0)
    # Measurement in Z basis
    return qml.expval(qml.PauliZ(0))

# Step 3: Define the cost (loss function)
def cost(theta, X, Y):
    predictions = [qnn(x, theta) for x in X]
    return np.mean((np.array(predictions) - Y) ** 2)

# Training data: Learn cos(x)
X = np.linspace(0, np.pi, 10)   # inputs
Y = np.cos(X)                   # targets

# Initialize weight (theta)
theta = np.random.randn()

# Step 4: Optimizer
opt = qml.GradientDescentOptimizer(stepsize=0.1)

# Training loop
for epoch in range(50):
    theta, loss = opt.step_and_cost(lambda t: cost(t, X, Y), theta)
    if epoch % 10 == 0:
        print(f"Epoch {epoch}: Loss = {loss:.4f}, Theta = {theta:.4f}")

# Final check
print("Final predictions:", [qnn(x, theta) for x in X])
print("Targets (cos x):", Y)
```

What this does:

  1. Encodes input x into a qubit.
  2. Applies a trainable rotation (θ) = weight.
  3. Measures output expectation value = activation.
  4. Uses a classical optimizer to update θ.
  5. Trains until predictions ≈ cos(x).

This small QNN actually learns a cosine function using quantum gates + classical optimization!


In summary: QNNs are a powerful approach to merge deep learning with quantum mechanics, with the potential to revolutionize fields requiring high-dimensional learning and quantum-native problem-solving.

➡️ Next: Quantum Boltzmann Machines (QBMs)