NVIDIA
GitHub Repository: NVIDIA/cuda-q-academic
Path: blob/main/qaoa-for-max-cut/04_Assessment.ipynb
Kernel: Python 3 (ipykernel)
# SPDX-License-Identifier: Apache-2.0 AND CC-BY-NC-4.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Instructions for Google Colab. You can ignore this cell if you have cuda-q set up.
# Run this portion of the notebook in a CPU runtime
# Uncomment the lines below and execute the cell to install cuda-q and fetch the images
# !pip install cudaq
#!wget -q https://github.com/nvidia/cuda-q-academic/archive/refs/heads/main.zip
#!unzip -q main.zip
#!mv cuda-q-academic-main/qaoa-for-max-cut/images ./images

Divide-and-Conquer Implementation of QAOA

Lab 4 Assessment

\renewcommand{\ket}[1]{|{#1}\rangle} \renewcommand{\bra}[1]{\langle{#1}|}

4.1 Lab Description

Congratulations on making it this far! We hope you enjoyed conquering a large max cut problem while picking up a few skills along the way.

For this assessment, the challenge is to adapt the code that we have created for the max cut problem and apply it to the weighted max cut problem. As we described at the end of Lab 3, there are many options for coding QAOA that can improve performance and accuracy. We encourage you to experiment with at least one of these to achieve a max cut approximation of a weighted version of the example graph from Lab 2. We chose this moderately sized graph for the sake of time, but we do give you the option to experiment with other graphs.

The learning objectives of this tutorial are:

  • Execute the QAOA algorithm to find approximate max cuts of a given graph using CUDA Quantum

  • Understand the limitations of the QAOA algorithm for solving max cut in the NISQ era

  • Make adjustments to the divide-and-conquer QAOA algorithm through selection of initial parameter values, increased layers, choice of optimizer, or other methods

  • Simulate quantum circuits in parallel on multiple GPUs to speed up overall run time using CUDA Quantum

Let's get started!

4.2 Weighted Max Cut Problem

The weighted max cut problem is a variation of the max cut problem. The weighted version of the problem aims to identify a partition of a graph's nodes into two sets which maximizes the sum of the weights of the edges between the two sets. We continue with the notation established in the previous labs. The only difference between this problem and the max cut problem from before is that we now want to maximize:

$$\sum_{\substack{(u,v)\in E\\ u\in V_0,\ v\in V_1}} w_{u,v},$$

where $w_{u,v}$ is the weight of the edge connecting vertex $u$ to vertex $v$. As before, $E$ is the set of edges of the graph, and $V_0$ and $V_1$ define a partition of the vertices of the graph.
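
For a concrete sense of the objective, the short sketch below computes this weighted cut value classically for a hypothetical 4-node weighted graph and one candidate partition (neither is the Lab 2 example):

import networkx as nx

# Hypothetical weighted graph and a candidate partition of its nodes
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 0.5), (2, 3, 1.5), (3, 0, 1.0)])
V0, V1 = {0, 2}, {1, 3}

# Sum the weights of the edges with one endpoint in V0 and the other in V1
weighted_cut = sum(data['weight'] for u, v, data in G.edges(data=True)
                   if (u in V0) != (v in V0))
print(weighted_cut)  # 2.0 + 0.5 + 1.5 + 1.0 = 5.0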

4.3 Adapting our code from the previous labs

We can use most of the code that we've already developed. There are a few changes that need to be made at the divide, conquer, and merge stages of the QAOA divide-and-conquer algorithm.

# Instructions for Google Colab. You can ignore this cell if you have cuda-q set up.
# Run this portion of the notebook in a GPU runtime
# Uncomment the line below and execute the cell to install cuda-q
# !pip install cudaq
# Necessary packages
import networkx as nx
from networkx import algorithms
from networkx.algorithms import community
import cudaq
from cudaq import spin
from cudaq.qis import *
import numpy as np
import matplotlib.pyplot as plt
from typing import List

4.3.1 Divide

Since we now have a weighted graph, we want to take the edge weights into account when identifying the subgraph partition. We've made this adjustment to the subgraphpartition function below; it may produce a different partitioning of our weighted graph than if we had ignored the weights.

def subgraphpartition(G, n, name, globalGraph):
    """Divide the graph up into at most n subgraphs

    Parameters
    ----------
    G: networkX.Graph
        Graph that we want to subdivide, which lives inside of or is equal to globalGraph
    n : int
        n is the maximum number of subgraphs in the partition
    name : str
        prefix for the graphs (in our case we'll use 'Global')
    globalGraph: networkX.Graph
        original problem graph

    Returns
    -------
    dict of str : networkX.Graph
        Dictionary of networkX graphs with a string as the key
    """
    # Weighted greedy modularity maximization determines the communities
    greedy_partition = community.greedy_modularity_communities(G, weight='weight', resolution=1.1, cutoff=1, best_n=n)
    number_of_subgraphs = len(greedy_partition)

    graph_dictionary = {}
    graph_names = []
    for i in range(number_of_subgraphs):
        subgraphname = name + ':' + str(i)
        graph_names.append(subgraphname)

    for i in range(number_of_subgraphs):
        nodelist = sorted(list(greedy_partition[i]))
        graph_dictionary[graph_names[i]] = nx.subgraph(globalGraph, nodelist)

    return graph_dictionary
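
As a quick check of the divide stage, the sketch below applies subgraphpartition to a small, hypothetical weighted graph (not the Lab 2 example) and prints the nodes in each subgraph:

# Minimal usage sketch of subgraphpartition on a hypothetical weighted graph
sample_graph = nx.Graph()
sample_graph.add_weighted_edges_from(
    [(0, 1, 2.0), (1, 2, 1.5), (2, 0, 2.5), (3, 4, 2.0), (4, 5, 1.0), (5, 3, 1.5), (2, 3, 0.2)])
sample_subgraphs = subgraphpartition(sample_graph, 2, 'Global', sample_graph)
for name, subgraph in sample_subgraphs.items():
    print(name, sorted(subgraph.nodes()))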

4.3.2 Conquer

To adapt the divide-and-conquer QAOA algorithm to handle a weighted graph, we will need to change the Hamiltonian function. We refer you to section 1.4.1 of Lab 1 to derive the Hamiltonian for the weighted max cut problem. Below we've copied and adapted the code of the hamiltonian_max_cut function from the previous labs by adding a new function argument for the weights. You'll need to fix the indicated line of code to take into account the weights of the edges.

HINT: You'll need to consider the weight of each edge, which we have computed for you in the edge_weight variable.

Exercise: Edit the line commented with ##FIX_ME##

# Exercise
def hamiltonian_max_cut(sources : List[int], targets : List[int], weights : List[float]):
    """Hamiltonian for finding the max cut for the graph with edges defined by the pairs generated by source and target edges

    Parameters
    ----------
    sources: List[int]
        list of the source vertices for edges in the graph
    targets: List[int]
        list of the target vertices for the edges in the graph
    weights : List[float]
        list of the weight of the edge determined by the source and target with the same index

    Returns
    -------
    cudaq.SpinOperator
        Hamiltonian for finding the max cut of the graph defined by the given edges
    """
    hamiltonian = 0
    # The sources and targets lists already record the qubit index of each edge endpoint
    for i in range(len(sources)):
        # Add a term to the Hamiltonian for the edge (u,v)
        qubitu = sources[i]
        qubitv = targets[i]
        edge_weight = weights[i]
        hamiltonian += ##FIX_ME##
    return hamiltonian
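
As a reminder of where to start, the unweighted hamiltonian_max_cut from the earlier labs built each edge term from spin.z operators; a sketch of that unweighted version is below, assuming the 0.5*(Z_u Z_v - I) form derived in Lab 1. For the weighted problem, each edge's contribution should be scaled by its weight.

# For reference: the unweighted Hamiltonian from the earlier labs (assumed form),
# where each edge (u, v) contributes (1/2) * (Z_u Z_v - I) to the cost Hamiltonian.
def hamiltonian_max_cut_unweighted(sources: List[int], targets: List[int]):
    hamiltonian = 0
    for qubitu, qubitv in zip(sources, targets):
        hamiltonian += 0.5 * spin.z(qubitu) * spin.z(qubitv) - 0.5
    return hamiltonian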

Since we've changed the function arguments for the hamiltonian_max_cut function, we've edited the code from the previous labs that calls this function.

def find_optimal_parameters(G, layer_count, seed):
    """Function for finding the optimal parameters of QAOA for the max cut of a graph

    Parameters
    ----------
    G: networkX graph
        Problem graph whose max cut we aim to find
    layer_count : int
        Number of layers in the QAOA circuit
    seed : int
        Random seed for reproducibility of results

    Returns
    -------
    list[float]
        Optimal parameters for the QAOA applied to the given graph G
    """
    parameter_count: int = 2 * layer_count

    # Problem parameters
    nodes = sorted(list(nx.nodes(G)))
    qubit_src = []
    qubit_tgt = []
    weights = []
    for u, v in nx.edges(G):
        # We can use the index() command to read out the qubits associated with the vertex u and v.
        qubit_src.append(nodes.index(u))
        qubit_tgt.append(nodes.index(v))
        weights.append(G.edges[u, v]['weight'])
    # The number of qubits we'll need is the same as the number of vertices in our graph
    qubit_count: int = len(nodes)
    # Each layer of the QAOA kernel contains 2 parameters
    parameter_count: int = 2 * layer_count

    # Specify the optimizer and its initial parameters.
    optimizer = cudaq.optimizers.COBYLA()
    np.random.seed(seed)
    cudaq.set_random_seed(seed)
    optimizer.initial_parameters = np.random.uniform(-np.pi, np.pi, parameter_count)

    # Pass the kernel, spin operator, and optimizer to `cudaq.vqe`.
    optimal_expectation, optimal_parameters = cudaq.vqe(
        kernel=kernel_qaoa,
        spin_operator=hamiltonian_max_cut(qubit_src, qubit_tgt, weights),
        argument_mapper=lambda parameter_vector: (qubit_count, layer_count, qubit_src, qubit_tgt, parameter_vector),
        optimizer=optimizer,
        parameter_count=parameter_count)

    return optimal_parameters
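
Note that find_optimal_parameters relies on the kernel_qaoa kernel defined in the previous labs (and in Example-04.py); since the weights enter only through the Hamiltonian, the kernel itself does not need to change. For reference, here is a sketch of that kernel, assuming the standard one-parameter-per-layer form used earlier; check it against your own kernel_qaoa before running.

# Sketch of the QAOA kernel assumed by find_optimal_parameters (form from the earlier labs):
# a layer of Hadamards, then alternating problem (ZZ) and mixer (RX) layers.
@cudaq.kernel
def kernel_qaoa(qubit_count: int, layer_count: int, qubits_src: List[int],
                qubits_tgt: List[int], thetas: List[float]):
    qreg = cudaq.qvector(qubit_count)

    # Prepare the uniform superposition over all bitstrings
    h(qreg)

    for i in range(layer_count):
        # Problem layer: exp(-i * theta * Z_u Z_v) for each edge, via CNOT-RZ-CNOT
        for edge in range(len(qubits_src)):
            qubitu = qubits_src[edge]
            qubitv = qubits_tgt[edge]
            x.ctrl(qreg[qubitu], qreg[qubitv])
            rz(2.0 * thetas[i], qreg[qubitv])
            x.ctrl(qreg[qubitu], qreg[qubitv])
        # Mixer layer: RX rotation on every qubit
        for j in range(qubit_count):
            rx(2.0 * thetas[i + layer_count], qreg[j])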

4.3.3 Merge

The weights of the edges between subgraphs will impact the merger stage of the algorithm as well.

Exercise: Edit the code block by replacing FIX_ME with the appropriate values to compute the penalties associated with each edge of the merger graph.

# Exercise
# Compute the penalties for edges in the supplied mergerGraph
# for the subgraph partitioning of graph G
def merger_graph_penalties(mergerGraph, subgraph_dictionary, G):
    """Compute penalties for the edges in the mergerGraph and add them as edge attributes.

    Parameters
    ----------
    mergerGraph : networkX.Graph
        Graph of connections between vertices in distinct subgraphs of G
    subgraph_dictionary : dict of networkX graph with str as keys
        subgraphs of G that are represented as nodes in the mergerGraph
    G : networkX.Graph
        graph whose vertices have an attribute 'color'

    Returns
    -------
    networkX.Graph
        Merger graph containing penalties
    """
    nx.set_edge_attributes(mergerGraph, int(0), 'penalty')
    for i, j in mergerGraph.edges():
        penalty_ij = 0
        for u in nx.nodes(subgraph_dictionary[i]):
            for neighbor_u in nx.all_neighbors(G, u):
                if neighbor_u in nx.nodes(subgraph_dictionary[j]):
                    if G.nodes[u]['color'] != G.nodes[neighbor_u]['color']:
                        penalty_ij += ### FIX_ME
                    else:
                        penalty_ij += ### FIX_ME
        mergerGraph[i][j]['penalty'] = penalty_ij
    return mergerGraph

Finally, since our cut value now depends on the weights of the edges, we will need to edit the cutvalue function that computes the cut of the graph based on the coloring of the nodes.

Exercise: Edit the FIX_ME line of the code block below.

# Exercise
def cutvalue(G):
    """Returns the cut value of G based on the coloring of the nodes of G

    Parameters
    ----------
    G: networkX.Graph
        Graph with weighted edges and with binary value colors assigned to the vertices

    Returns
    -------
    float
        cut value of the graph determined by the vertex colors and edge weights
    """
    cut = 0
    for u, v in G.edges():
        if G.nodes[u]['color'] != G.nodes[v]['color']:
            cut += ##FIX_ME
    return cut

4.4 Weighted Max Cut using a modified Divide-and-Conquer QAOA

If you have not already done so, download Example-04.py from the repository and save it to your working directory. Add the modifications you made in the exercises above to Example-04.py, which loads the example graph from Lab 2 with random weights assigned to the edges. In particular, fill in your code between the lines # Edit the code above and # Edit the code below for the functions hamiltonian_max_cut, merger_graph_penalties, and cutvalue. Make sure to save the file. Then run the MPI call below to see how the algorithm performs. You may notice that, as is, the results are not competitive with classical methods.

For the assessment, modify Example-04.py to improve its performance using some of the adjustments discussed at the end of Lab 3. Here are a few recommendations:

  • Modify the layer count for the QAOA max cut (line 822) and the QAOA merger calls (line 505).

  • Try different seeds to generate different initial parameters for the optimizer for the QAOA for max cut (line 823) and for the merger stage (line 507). Better yet, replace the random initial parameters of the optimizer with the optimal parameters found in earlier runs of the algorithm; see the sketch after this list. We've added a print command to Example-04.py to view the optimal parameters of the max cut QAOA calls at each stage. For instance, try initializing the optimizer with ( [-1.8964004059756836, 1.0646218219788401]*layer_count).

  • Swap out the COBYLA optimizer with another optimizer supported by CUDA Quantum on line 184 and line 520. Depending on your choice of optimizer, you may need to add in a variable for the gradient and make adjustments to the vqe calls (lines 191 and 526).

  • Replace the QAOA kernel with a multi-angle kernel. In addition to editing the kernel_qaoa function (line 113), you will need to adjust the parameter_count variables (lines 181 and 340) accordingly.

Feel free to experiment with one or all of these suggestions, or try out your own ideas! You can also play around with different graph instances by editing the lines 709 to 750.
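
To illustrate the first two suggestions above, here is a minimal sketch of how the optimizer set-up in Example-04.py might be adjusted: it swaps COBYLA for NelderMead (another gradient-free optimizer shipped with CUDA Quantum) and seeds it with parameters printed by an earlier run. The layer_count value is a placeholder, and the parameter values are simply the ones quoted in the recommendations above.

# Sketch: adjusting the optimizer set-up (placeholder values, not tuned settings)
import cudaq

layer_count = 2                    # placeholder: increase for more QAOA layers
parameter_count = 2 * layer_count  # two parameters per layer

# Swap COBYLA for another gradient-free optimizer supported by CUDA Quantum
optimizer = cudaq.optimizers.NelderMead()

# Seed the optimizer with parameters recovered from an earlier run
# (the values below are the example parameters quoted above)
optimizer.initial_parameters = [-1.8964004059756836, 1.0646218219788401] * layer_count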

Important Before proceeding, you will need to switch to a runtime with access to a GPU. If you do restart your kernel, make sure to reload the packages below. If you are running on Google Colab and switch to a GPU runtime, you'll need to reinstall CUDA-Q by uncommenting the indicated code.

# Instructions for Google Colab. You can ignore this cell if you already have cuda-q set up
# and are working in a GPU runtime with all the necessary files
# Run this cell in a GPU runtime
#!pip install cudaq
#!wget -q -O Example-04.py https://raw.githubusercontent.com/NVIDIA/cuda-q-academic/main/qaoa-for-max-cut/Example-04.py
#@title Execute this cell to reload the necessary packages
import networkx as nx
from networkx import algorithms
from networkx.algorithms import community
import cudaq
from cudaq import spin
from cudaq.qis import *
import numpy as np
import matplotlib.pyplot as plt
from typing import List
import sys
#@title Execute this cell to install mpi4py if necessary
%pip install mpi4py
# MPI call
print(sys.executable)
python_path = sys.executable
!mpiexec -np 4 --oversubscribe --allow-run-as-root {python_path} Example-04.py

4.5 Next

To learn more about CUDA Quantum, check out our online tutorials.

import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)