Path: blob/main/qaoa-for-max-cut/04_Assessment-Solution.ipynb
Divide-and-Conquer Implementation of QAOA
Lab 4 Assessment
4.1 Lab Description
Congratulations on making it this far! We hope you enjoyed conquering a large max cut problem while picking up a few skills along the way.
For this assessment, the challenge is to adapt the code that we have created for the max cut problem and apply it to the weighted max cut problem. As we described at the end of Lab 3, there are many options for coding QAOA that can improve performance and accuracy. We encourage you to experiment with at least one of these to achieve a max cut approximation of a weighted version of the example graph from Lab 2. We chose this moderately sized graph for the sake of time, but we do give you the option to experiment with other graphs.
The learning objectives of this tutorial are:
Execute the QAOA algorithm to find approximate max cuts of a given graph using CUDA Quantum
Understand the limitations of the QAOA algorithm for solving max cut in the NISQ era
Make adjustments to the divide-and-conquer QAOA algorithm through selection of initial parameter values, increased layers, choice of optimizer, or other methods
Simulate quantum circuits in parallel on multiple GPUs to speed up overall run time using CUDA Quantum
Let's get started!
4.2 Weighted Max Cut Problem
The weighted max cut problem is a variation of the max cut problem. The weighted version of the problem aims to identify a partition of a graph's nodes into two sets which maximizes the sum of the weights of the edges between the two sets. We continue with the notation established in the previous labs. The only difference between this problem and the max cut problem from before is that we now want to maximize:

$$\sum_{\substack{(u,v)\in E\\ u\in V_1,\ v\in V_2}} w_{uv},$$

where $w_{uv}$ is the weight of the edge connecting vertex $u$ to vertex $v$. As before, $E$ is the set of the edges of the graph, and $V_1$ and $V_2$ define a partition of the vertices of the graph.
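To make the objective concrete, here is a small classical brute-force check of the weighted cut value. The toy graph and its weights are hypothetical (not the Lab 2 graph) and this is only a reference for small instances, not part of the QAOA workflow.

```python
# A brute-force reference for the weighted max cut objective on a toy graph;
# the graph and its weights here are hypothetical, not the Lab 2 graph.
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 0.5), (1, 2, 2.0), (0, 2, 1.5), (2, 3, 1.0)])

best_value, best_partition = 0.0, None
for bits in itertools.product([0, 1], repeat=G.number_of_nodes()):
    # bits[v] = 0 places vertex v in one set, bits[v] = 1 in the other.
    cut = sum(w for u, v, w in G.edges(data="weight") if bits[u] != bits[v])
    if cut > best_value:
        best_value, best_partition = cut, bits

print("max weighted cut value:", best_value, "achieved by", best_partition)
```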
4.3 Adapting our code from the previous labs
We can use most of the code that we've already developed. There are a few changes that need to be made at the divide, conquer, and merge stages of the QAOA divide-and-conquer algorithm.
4.3.1 Divide
Since we now have a weighted graph, we will want to take these weights into account when identifying the subgraph partition. We've made the adjustment to the subgraph_partition function below. This may produce a different partitioning of our weighted graph than if we had ignored the weights.
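As a rough illustration, a weight-aware divide step could rely on networkx's greedy modularity community detection, which accepts a weight keyword. The sketch below mirrors the role of subgraph_partition, but the choice of community algorithm and the exact signature are assumptions, not necessarily what the lab's function does.

```python
# A sketch of a weight-aware partition; the function name mirrors the labs,
# but the use of greedy modularity communities here is an assumption.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def subgraph_partition(G):
    """Split G into induced subgraphs, letting edge weights guide the split."""
    # weight="weight" makes the modularity computation respect edge weights
    # instead of counting every edge equally.
    communities = greedy_modularity_communities(G, weight="weight")
    return [G.subgraph(nodes).copy() for nodes in communities]
```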
4.3.2 Conquer
To adapt the divide-and-conquer QAOA algorithm to handle a weighted graph, we will need to change the Hamiltonian function. We refer you to Section 1.4.1 of Lab 1 to derive the Hamiltonian for the weighted max cut problem. Below we've copied and adapted the code from the hamiltonian_max_cut function from the previous labs by adding a new function argument for the weights. You'll need to fix the indicated line of code to take into account the weights of the edges.
HINT: You'll need to consider the weight of each edge, which we have computed for you in the edge_weight variable.
Exercise: Edit the line commented with ###FIX_ME###
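For reference, here is one way the weighted edge term might look, written as a self-contained sketch with CUDA-Q's spin operators. The argument layout (parallel edge lists) is an assumption about the lab's hamiltonian_max_cut; the key point is that each edge's term is scaled by its weight.

```python
# A sketch of a weighted max cut Hamiltonian with CUDA-Q spin operators;
# the argument layout (parallel edge lists) is an assumption about the labs.
from cudaq import spin

def hamiltonian_max_cut(sources, targets, weights):
    """Build H = sum over edges (u, v) of 0.5 * w_uv * (Z_u Z_v - I)."""
    hamiltonian = 0
    for i in range(len(sources)):
        qubitu = sources[i]
        qubitv = targets[i]
        edge_weight = weights[i]
        # The weighted edge term: scale the unweighted term by edge_weight.
        hamiltonian += 0.5 * edge_weight * (
            spin.z(qubitu) * spin.z(qubitv) - spin.i(qubitu) * spin.i(qubitv)
        )
    return hamiltonian
```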
Since we've changed the function arguments for the hamiltonian_max_cut function, we've edited the code from the previous labs that calls this function.
4.3.3 Merge
The weights of the edges between subgraphs will impact the merger stage of the algorithm as well.
Exercise: Edit the code block by replacing FIX_ME with the appropriate values to compute the penalties associated with each edge of the merger graph.
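To illustrate where the weights enter, here is a heavily hedged sketch of a per-edge penalty for the merger graph. The sign convention (same-colored border endpoints contribute +w, differently colored ones -w) is one plausible reading of the labs' merger stage, and all names are illustrative rather than the actual Example-04.py code.

```python
# A sketch of a weighted merger penalty; the sign convention and names are
# assumptions about the labs' merger stage, not the actual Example-04.py code.
def merger_edge_penalty(border_edges, coloring, weight):
    """border_edges : edges (u, v) joining two different subgraphs
    coloring     : dict mapping each vertex to its current color (0 or 1)
    weight       : dict mapping each edge (u, v) to its weight
    """
    penalty = 0.0
    for u, v in border_edges:
        w = weight[(u, v)]
        # A border edge whose endpoints currently agree could gain w by
        # flipping one side; one that already disagrees would lose w.
        penalty += w if coloring[u] == coloring[v] else -w
    return penalty
```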
Finally, since our cut value now depends on the weights of the edges, we will need to edit the cutvalue function that computes the cut of the graph based on the coloring of the nodes.
Exercise: Edit the FIX_ME line of the code block below.
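For orientation, a weight-aware cut value might look like the sketch below. It assumes the coloring is stored as a node attribute named color and the weight as an edge attribute named weight, which may differ from the lab's cutvalue function.

```python
# A sketch of a weighted cut value; the "color" and "weight" attribute names
# are assumptions and may differ from the lab's cutvalue function.
import networkx as nx

def cutvalue(G: nx.Graph) -> float:
    """Sum the weights of edges whose endpoints received different colors."""
    cut = 0.0
    for u, v, data in G.edges(data=True):
        if G.nodes[u]["color"] != G.nodes[v]["color"]:
            # Weighted version: add the edge's weight rather than counting 1.
            cut += data["weight"]
    return cut
```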
4.4 Weighted Max Cut using a modified Divide-and-Conquer QAOA
For the sake of time, we have added the modifications made in the exercises above to Example-04.py, which loads the example graph from Lab 2 with random weights assigned to the edges. Run the MPI call below to see how the algorithm performs. You may notice that, as is, the results are not competitive with the classical methods.
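For context, the multi-GPU pattern the labs use follows CUDA-Q's MPI interface. The sketch below is only a schematic of that pattern under our assumptions about how the work is split across ranks; it is not the actual contents of Example-04.py.

```python
# A schematic of the MPI/multi-GPU pattern, not the actual Example-04.py.
# Typically launched with something like:  mpiexec -np 4 python3 script.py
import cudaq

cudaq.mpi.initialize()
cudaq.set_target("nvidia")  # GPU-accelerated simulator; one GPU per MPI rank

rank = cudaq.mpi.rank()
num_ranks = cudaq.mpi.num_ranks()

# Hypothetical list of subgraphs from the divide stage; each rank handles
# every num_ranks-th subgraph.
subgraphs = []  # e.g. the output of subgraph_partition(G)
my_subgraphs = subgraphs[rank::num_ranks]

# ... run QAOA on my_subgraphs, then gather the results on rank 0 ...

cudaq.mpi.finalize()
```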
For the assessment, make modifications to Example-04.py to improve its performance using some of the adjustments discussed at the end of Lab 3. Here are a few recommendations:
Modify the layer count for the QAOA max cut (line 822) and the QAOA merger calls (line 505).
Try different seeds when generating the random initial parameters for the optimizer, both for the max cut QAOA (line 823) and for the merger stage (line 507). Better yet, replace the random initial parameters with the optimal parameters found in earlier runs of the algorithm; we've added a print command to Example-04.py so you can view the optimal parameters of the max cut QAOA calls at each stage. For instance, try initializing the optimizer with [-1.8964004059756836, 1.0646218219788401]*layer_count (see the sketch after this list).
Swap out the COBYLA optimizer with another optimizer supported by CUDA Quantum on lines 184 and 520. Depending on your choice of optimizer, you may need to add a variable for the gradient and make adjustments to the vqe calls (lines 191 and 526).
Replace the QAOA kernel with a multi-angle kernel. In addition to editing the kernel_qaoa function (line 113), you will need to adjust the parameter_count variables (lines 181 and 340) accordingly.
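Here is a small sketch of two of these tweaks using CUDA-Q's built-in optimizers. The layer count and the seed are placeholders for values set in Example-04.py; the only non-placeholder numbers are the parameters quoted above.

```python
# A sketch of seeding the initial parameters and reusing earlier optima;
# layer_count and the seed are placeholders for values set in Example-04.py.
import numpy as np
import cudaq

layer_count = 2
optimizer = cudaq.optimizers.COBYLA()

# Option 1: a fresh random starting point from a different seed
# (standard QAOA uses 2 parameters per layer).
np.random.seed(13)
optimizer.initial_parameters = list(np.random.uniform(-np.pi, np.pi, 2 * layer_count))

# Option 2: reuse optimal parameters printed by an earlier run, tiled per layer.
optimizer.initial_parameters = [-1.8964004059756836, 1.0646218219788401] * layer_count

# A gradient-free alternative to COBYLA that CUDA-Q also provides:
# optimizer = cudaq.optimizers.NelderMead()
```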
Feel free to experiment with one or all of these suggestions, or try out your own ideas! You can also play around with different graph instances by editing lines 709 to 750.
Important: Before proceeding, you will need to switch to a runtime with access to a GPU. If you do restart your kernel, make sure to reload the packages below. If you are running on Google Colab and switch to a GPU runtime, you'll need to reinstall CUDA-Q by uncommenting the indicated code.
4.5 Next
To learn more about CUDA Quantum, check out our online tutorials.