GitHub Repository: quantum-kittens/platypus
Path: blob/main/notebooks/summer-school/2021/lec7.1.ipynb
Kernel: Python 3

Quantum Kernels in Practice

In this lecture, Jen Glick explains how you can use quantum kernels in practice. In machine learning, and more specifically in classification problems, kernels are functions built from a feature map applied to the data; they highlight interesting features of the dataset and enable a better distinction between the existing classes. You can construct the quantum counterpart of such a kernel and use a quantum computer to estimate it. To gain a computational advantage, the quantum kernel must be hard to estimate classically (otherwise we could just compute it on a classical computer), but being hard to simulate does not by itself make the kernel useful. One approach to using quantum kernels in practice is to design them to exploit structure in the data that a classical computer cannot exploit efficiently, using properties such as entanglement and superposition. In fact, the DLOG kernel exploits group structure in the data with a proven speedup over any classical counterpart. In the rest of the lecture, Jen Glick presents and comments on a simple example of a quantum kernel applied to a dataset with group structure.
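To make the kernel estimation step concrete, here is a minimal sketch of computing a quantum kernel matrix by statevector simulation. It assumes Qiskit is installed and uses the ZZFeatureMap from qiskit.circuit.library purely as an illustrative feature map on a toy dataset; it is not the kernel or the data discussed in the lecture.

```python
# A minimal sketch (not the lecture's code): estimate the quantum kernel matrix
# K[i, j] = |<phi(x_i)|phi(x_j)>|^2 by statevector simulation with Qiskit.
# The ZZFeatureMap and the toy data below are illustrative assumptions.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector

def quantum_kernel_matrix(X, feature_map):
    """Kernel matrix from the fidelities between feature-map states."""
    # Bind each data point to the feature map and simulate the resulting state
    states = [Statevector.from_instruction(feature_map.assign_parameters(x))
              for x in X]
    n = len(X)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.abs(np.vdot(states[i].data, states[j].data)) ** 2
    return K

X = np.array([[0.1, 0.4], [0.5, 0.9], [1.2, 0.3]])  # toy 2-feature dataset
fmap = ZZFeatureMap(feature_dimension=2, reps=2)
print(np.round(quantum_kernel_matrix(X, fmap), 3))
```

On hardware, the same fidelities would be estimated from measurement outcomes rather than from exact statevectors, which is where the question of classical hardness comes in.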

FAQ

What kind of practical data can have group structure? It is hard to point to a specific practical dataset. However, if the data has some kind of symmetry, it is very likely to have a group structure, and looking for symmetry is a good way to find problems that fit into the framework of quantum kernels that exploit group structure. The field and the idea are still very young, so anyone could find a dataset with such structural properties.
How can you tell whether a quantum kernel is hard to estimate classically before starting to work on it? Hard to estimate classically means you can prepare and execute the quantum circuit efficiently, whereas any classical implementation would take exponentially longer to compute. There are known classes of quantum circuits with these properties. Conversely, a circuit with many qubits or a lot of entanglement is not necessarily hard to estimate classically. The known classes are therefore a good starting point for building a quantum kernel that brings a speedup.
How do we select the right quantum kernel for a particular problem or dataset? The million-dollar question. A first approach is brute force: try a number of different kernels and compare how well they perform (see the sketch after this FAQ), although this does not guarantee any meaningful advantage over classical kernels. Another approach is to find a way to optimize a given kernel for the dataset. Tuning a kernel to a given dataset is directly linked to the generalization bound it will achieve on other datasets from similar problems.
Does the group structure refer only to the training data, or is it also expected of the test data and other examples? In this framework, the assumption is that the full dataset has this underlying structure, because the kernel is designed around it. Indeed, a kernel trained under this assumption would perform poorly on other datasets with a different group structure, because it is tailored to the specific training and test data.
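
As a companion to the brute-force approach mentioned in the FAQ, here is a minimal sketch of comparing several precomputed kernels on the same dataset with a support vector classifier. The toy dataset, the choice of kernels, and the holdout split are all illustrative assumptions; a quantum kernel matrix (for example the one from the earlier sketch) could be added to the dictionary in the same way.

```python
# A minimal sketch (illustrative assumptions throughout): compare precomputed
# kernels on the same dataset with scikit-learn's SVC(kernel="precomputed").
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(40, 2))               # toy features
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0).astype(int)   # toy labels

train, test = np.arange(30), np.arange(30, 40)             # simple holdout split

candidate_kernels = {
    "linear": linear_kernel(X, X),
    "rbf": rbf_kernel(X, X),
    # "quantum": quantum_kernel_matrix(X, fmap),            # from the earlier sketch
}

for name, K in candidate_kernels.items():
    # With a precomputed kernel, fit takes the train-train Gram matrix
    # and score takes the test-train block.
    clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
    acc = clf.score(K[np.ix_(test, train)], y[test])
    print(f"{name}: holdout accuracy = {acc:.2f}")
```

Accuracy numbers from such a toy comparison only tell you which kernel fits this particular dataset; as noted above, they do not by themselves establish any advantage over classical kernels.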

Other resources