Divide-and-Conquer Implementation of QAOA
0.1 Introduction
The Max Cut problem is an optimization problem defined as follows: given a graph $G$, find a maximum cut of $G$, where a maximum cut (max cut) of a graph is a partition of the vertices into two disjoint sets such that the number of edges between the two sets is maximized. Max Cut is an NP-hard problem, and there is a rich body of research on classical and quantum algorithms that solve or approximate the max cut for large subclasses of graphs. Some of these algorithms fall under the divide-and-conquer paradigm: divide and conquer breaks a large problem into smaller problems which are simple enough to be solved directly. Additionally, this paradigm lends itself to parallel computation, since the smaller problems can often be solved independently. Recently, this approach has been applied to the Quantum Approximate Optimization Algorithm (QAOA) for max cut (arXiv:2205.11762, arXiv:2101.07813, arXiv:2304.03037, arXiv:2009.06726, and arXiv:2406.17383). In the literature, the divide-and-conquer approach to max cut with QAOA is sometimes referred to as QAOA-in-QAOA or QAOA². In this tutorial, we will introduce this algorithm and implement parallel computation with CUDA-Q.
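To make the definition concrete, the cut value of a partition can be checked by brute force on a tiny graph. The graph below (a 4-cycle with one chord) is an illustrative choice, not one from the labs; brute force scales as $2^n$, which is precisely why heuristics such as QAOA are of interest for larger graphs:

```python
from itertools import product

# Example graph for illustration: a 4-cycle with one chord, as an edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_size(assignment, edges):
    """Count the edges whose endpoints fall in different sets
    of the partition encoded by `assignment` (a 0/1 tuple)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Brute force over all 2^n bipartitions -- feasible only for small n.
best = max(product([0, 1], repeat=n), key=lambda a: cut_size(a, edges))
print(best, cut_size(best, edges))  # the partition {0, 2} vs {1, 3} cuts 4 edges
```

Note that the chord (0, 2) creates a triangle, so no partition can cut all five edges; four is the maximum here.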
This lab covers the basics necessary to understand more advanced topics such as ADAPT-QAOA, Adaptive Circuit Knitting, and QAOA-GPT.
This tutorial begins in Lab 1 with a demonstration of solving the Max Cut problem for a small graph with QAOA. To set the groundwork for the remaining labs, Lab 1 ends with a preview of the divide-and-conquer paradigm. In Lab 2, we walk through one level of the divide-and-conquer algorithm, and we follow that up in Lab 3 with the recursive implementation for much larger and denser graphs. Additionally, in Labs 2 and 3, we experiment with running quantum circuits in parallel on GPUs. Finally, learners can take the assessment in Lab 4, in which they are challenged to approximate the max cut of a weighted graph.
The learning objectives of this tutorial are:
Execute the QAOA algorithm to find approximate max cuts of a given graph using CUDA-Q
Understand the limitations of the QAOA algorithm for solving max cut in the NISQ era
Apply QAOA to find an approximate solution to Quadratic Unconstrained Binary Optimization problems
Make various adjustments to the QAOA algorithm to improve results
Simulate quantum circuits in parallel on multiple GPUs to speed up overall run time using CUDA-Q
Pre-requisites:
Familiarity with Python with enough comfort to refer to Python package documentation, specifically NetworkX, as needed
Completion of the Quick Start to Quantum Computing series or equivalent familiarity with variational quantum algorithms (e.g., VQE or QAOA)
0.2 Getting Started with CUDA-Q
The Jupyter notebooks in this folder are designed to run in an environment with CUDA-Q and Python. For instructions on how to install CUDA-Q on your machine, check out this guide. A Dockerfile and requirements.txt are also included in this folder to help get you set up.
For links to run the notebooks in qBraid, CoCalc, or Google Colab, please see the READ_ME.md file in this directory.
0.3 Getting Started with Jupyter Notebooks
For this hands-on tutorial, we will be using Jupyter notebooks. The notebooks, including this one, consist of a series of content and code cells. To execute code in a code cell, press Shift+Enter or click the "Run" button in the menu bar above while the cell is highlighted. Sometimes a content cell will get switched to editing mode; pressing Shift+Enter will switch it back to a readable form.
Try executing the simple print statement in the cell below.
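The print cell referenced above is not reproduced in this text extract; a minimal stand-in such as the following serves the same purpose (the message itself is arbitrary):

```python
# A simple statement to confirm the notebook environment is working.
message = "Hello, CUDA-Q!"
print(message)
```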
0.4 Hardware requirements
Most of the code in the notebooks can be run on a CPU. The last section on distributed computing in Lab 2 and Lab 3 requires a GPU. Lab 4 also requires a GPU, but could be adapted to run on a CPU similar to the first part of Lab 3. If you have qBraid Lab credits, you can use them to start a GPU instance via their Compute Manager and run GPU-enabled code. If you have Google Colab credits, you can use them to run the GPU portion of the notebooks in Colab.
0.5 Next Steps
Ready to make the cut? Open up the Lab 1 notebook!