QML+ 2018 – Day 2

We’re back. The first two talks were great, and there was another interesting one that’s worth a mention.

Opening Talk: Artificial Intelligence & Quantum Computing

by Aske Plaat

Even though it had a reasonably uninspiring title, this talk was actually excellent, and should’ve, in fact, been the opening talk of the conference.

Aske gave some motivation and introduced a few simplifying assumptions about AI, to cut off the typical arguments about what it means to be “intelligent”. His working notion was that “to be intelligent is to act intelligently”, which a number of people later found quite controversial.

He had a unifying way of explaining why we’ve seen such a boom in AI recently:

  1. Algorithms
  2. Data
  3. Speed

I think this is a nice way of phrasing it. He then dove into the various parts in more detail, starting with algorithms.

He introduced what he sees as two main camps of machine learning:

  • Connectionist AI, and,
  • Symbolic AI.

Connectionist AI, as he sees it, is the kind that we all know and love: Deep learning, neural networks, “bottom-up” reasoning, function approximation, etc.

Symbolic AI, as he sees it, is more related to philosophy, logics, ontologies, expert systems, planning, Q-Learning, and other kinds of pre-defined “slow-thinking/high-level reasoning” ideas.

The main point he makes with the distinction is that more merging between the two schools of thought may need to take place. He gives the example of AlphaGo as a case where the two ideas merged. Another one I thought of is Algebraic Machine Learning, which certainly makes some grand claims, but is at least mildly interesting for its ideas.

He then made some comments about how speed is also relevant, and that without it we wouldn’t have seen such a boom. Again, this is of interest to quantum-computing types, because being faster than classical computers is fundamentally what the field is all about, and that’s where a lot of the recent focus has been (i.e. quantum speedups over classical algorithms).

Aske also noted the abundance of benchmarks for classical machine learning, which became a theme in several of the questions during question time. In particular, we discussed who, if anyone, might come up with good benchmark datasets and problems for quantum machine learning, and how. At present, no one has anything good along those lines.

He then noted some challenges in classical ML, and observed that simply achieving a speedup won’t solve these problems (for example, adversarial attacks, or the delayed credit-assignment problem). The claim is that we need to put some effort into what truly quantum algorithms might look like.

The main thing I got out of the talk was the idea that we should be thinking about making new benchmarks for quantum machine learning.

Quantum-Assisted Machine Learning in Near-Term Quantum Devices

by Alejandro Perdomo-Ortiz

Alejandro is very experienced in this field, it turns out. He led a team at NASA working on QML for the last five years, and has now moved to Rigetti, where he’s conducting research on the frontiers of quantum machine learning (also, Rigetti has a quantum cloud service coming …)

At NASA, his role was to drive interest in the practical usage of the quantum devices that NASA had purchased (in particular, the D-Wave).

He noted that quantum chemistry, and the simulation of quantum systems more generally, is the most natural application, and everyone should be looking at it. But he was also tasked with thinking of other problems that could be mapped to these particular optimisation devices. Naturally, one idea is straightforward discrete optimisation: finding an assignment of variables that minimises some particular cost function. He conducted some early work here, mapping protein folding to an optimisation problem of this kind.
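
To make the kind of mapping he’s describing concrete, here’s a minimal sketch of phrasing a toy discrete optimisation problem (a three-vertex max-cut, my own choice, not an example from the talk) as a QUBO, the cost-function form that annealers like the D-Wave natively minimise; the brute-force solver at the end is just a classical stand-in for the device.

```python
import itertools

import numpy as np

# Toy problem: max-cut on a triangle graph, phrased as a QUBO.
# Each binary variable x_i says which side of the cut vertex i is on;
# the cost C(x) = x^T Q x is what an annealer would minimise.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

Q = np.zeros((n, n))
for i, j in edges:
    # Cutting edge (i, j) contributes -(x_i + x_j - 2 x_i x_j) to the cost.
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

def cost(x):
    return x @ Q @ x

# A real device would return low-energy assignments; brute force over all
# 2^n bit-strings is the classical stand-in here.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda bits: cost(np.array(bits)))
print("best assignment:", best, "cost:", cost(np.array(best)))
```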

He echoed Aske’s thoughts and said that we should be focusing on designing new algorithms, rather than simply on speed.

One of the most memorable quotes from his talk was: “Look for the intractable, the more intractable the better, for me”.

One thing he spent a bit of time on was using the D-Wave to again implement one of these Hopfield networks (he called it a “fully-visible model” here), on a simplified digit dataset. It turned out to work! This essentially demonstrated that it is indeed possible to map an ML problem onto the device and have the device learn its own weights (couplings, here), allowing it to do well at generating new digits!
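
I don’t know the details of their training procedure, but the general shape of a fully-visible model like this can be sketched classically: the couplings below play the role of the D-Wave’s couplers, and Gibbs sampling stands in for drawing samples from the hardware. The toy data and learning rule are my own illustration, not theirs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary patterns standing in for the simplified digit dataset.
data = rng.integers(0, 2, size=(50, 6)) * 2 - 1    # spins in {-1, +1}
n = data.shape[1]

W = np.zeros((n, n))                               # couplings (symmetric, zero diagonal)

def gibbs_sample(W, steps=200):
    """Stand-in for drawing a sample from the device: classical Gibbs sampling."""
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        p = 1.0 / (1.0 + np.exp(-2.0 * W[i] @ s))  # P(s_i = +1 | rest)
        s[i] = 1 if rng.random() < p else -1
    return s

lr = 0.05
for epoch in range(100):
    data_corr = data.T @ data / len(data)              # <s_i s_j> under the data
    samples = np.array([gibbs_sample(W) for _ in range(20)])
    model_corr = samples.T @ samples / len(samples)    # <s_i s_j> under the model
    W += lr * (data_corr - model_corr)                 # maximum-likelihood gradient step
    np.fill_diagonal(W, 0)
```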

Following this work, they observed that they could in fact train an autoencoder entirely classically, and then use the embedding vectors of all the training data, as in the setup above, to train a kind of hybrid generative system.

I must say that I found this both interesting and confusing. It’s interesting because it’s a great way to use the complicated device to do “real” work, even though it alone has a very small number of input nodes (something like 46 here, for this device). But it’s also confusing because most of the “juice” in the network is in the classical weights, not in the embedding vector itself. When I asked Alejandro about this, he said that it was mainly a way to demonstrate the hybrid setup, and that over time the idea is to make more regions quantum and see how that changes things. I find it very interesting to think about how one would even go about jointly training a hybrid quantum-classical system.
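
Here’s roughly how I picture the hybrid pipeline, as a hedged sketch rather than their actual setup: a classical autoencoder (PCA here, purely for brevity) compresses the data down to a small binary code, a generative model over the codes plays the role of the quantum device, and the classical decoder turns sampled codes back into images.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Train a small autoencoder entirely classically.
#    A linear autoencoder fit via PCA is a lightweight stand-in here.
images = rng.random((200, 64))                    # toy 8x8 "digit" images
mean = images.mean(axis=0)
k = 8                                             # size of the latent code
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ Vt[:k].T                  # image -> k-dim embedding

def decode(z):
    return z @ Vt[:k] + mean                      # embedding -> image

# 2. Binarise the embeddings so they fit on a small device
#    (the talk mentioned something like 46 usable nodes).
codes = (encode(images) > 0).astype(int)

# 3. Train a generative model over the binary codes.
#    In the hybrid scheme this is the quantum device; empirical code
#    frequencies are a trivial classical stand-in.
unique, counts = np.unique(codes, axis=0, return_counts=True)
probs = counts / counts.sum()

# 4. Generate: sample a code from the generative model, map {0,1} -> {-1,+1},
#    and push it back through the classical decoder (a crude reconstruction).
sampled_code = unique[rng.choice(len(unique), p=probs)]
generated_image = decode(sampled_code * 2.0 - 1.0)
print(generated_image.shape)
```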

The next idea he covered was the learning of quantum circuits themselves (see also Differentiable Learning of Quantum Circuit Born Machine).

I think this is a particularly great idea, and his approach was to focus on generating certain kinds of entangled states, with great results: with this scheme they managed to find a state with more entanglement than the standard one. They also made some interesting observations about how circuit depth affects expressive power, and what kinds of states circuits of a given depth can possibly prepare.
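
As a toy illustration of the circuit Born machine idea (not the circuits or training scheme from the talk or the paper), here is a two-qubit parameterised circuit whose measurement distribution is pushed, by finite-difference gradient descent, towards Bell-state statistics.

```python
import numpy as np

# Single-qubit rotation and CNOT: enough for a tiny parameterised circuit.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def born_distribution(params):
    """Measurement distribution of the circuit (RY(a) x RY(b)) followed by CNOT."""
    a, b = params
    state = CNOT @ np.kron(ry(a), ry(b)) @ np.array([1.0, 0.0, 0.0, 0.0])
    return state ** 2                        # Born probabilities over 00, 01, 10, 11

# Target: Bell-state statistics, i.e. P(00) = P(11) = 1/2.
target = np.array([0.5, 0.0, 0.0, 0.5])

def loss(params):
    return np.sum((born_distribution(params) - target) ** 2)

# Train by finite-difference gradient descent (real schemes use smarter gradients).
params = np.array([0.1, 0.2])
eps, lr = 1e-4, 0.2
for step in range(1000):
    grad = np.array([
        (loss(params + eps * np.eye(2)[i]) - loss(params - eps * np.eye(2)[i])) / (2 * eps)
        for i in range(2)
    ])
    params -= lr * grad

print("trained params:", params, "distribution:", born_distribution(params))
```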

His final insights were:

  1. Focus on the hardest problems of interest to ML experts, as this will be the quickest path to demonstrating a quantum advantage in the near term.
  2. Focus on novel hybrid quantum-classical approaches.

Lunch

For lunch, Gala and I decided to enjoy the beautiful park right next to the venue! By chance, there was a beer garden inside!

Machine learning for designing new quantum experiments

by Alexey Melnikov

This talk essentially described another kind of program-synthesis problem, this time in the language of optical elements. The idea is: given some set of optical elements and some number of qubits, how can we find all the possible sequences of operations that produce entangled states?

Their new idea is to use a reinforcement-learning-inspired framework called “Projective Simulation”. I must say that I found the framework a little odd, but they did get good results, and it’s available as a Python library for you to experiment with!
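
Projective Simulation has its own clip-network machinery that I won’t try to reproduce here, but the flavour of the search problem can be sketched with a much cruder stand-in: an agent that composes short sequences of elementary operations (two-qubit gates here, rather than optical elements) and reinforces the ones that lead to entangled states.

```python
import numpy as np

rng = np.random.default_rng(1)

# Elementary operations (gates standing in for optical elements).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
actions = {"H0": np.kron(H, I), "H1": np.kron(I, H),
           "X0": np.kron(X, I), "X1": np.kron(I, X), "CNOT": CNOT}

def is_entangled(state, tol=1e-6):
    """A two-qubit pure state is entangled iff its Schmidt rank exceeds 1."""
    return np.linalg.matrix_rank(state.reshape(2, 2), tol=tol) > 1

# Crude RL-flavoured search: action weights are reinforced whenever a sampled
# sequence produces an entangled state.
weights = {name: 1.0 for name in actions}
found = set()
for episode in range(2000):
    state = np.array([1.0, 0.0, 0.0, 0.0])          # start in |00>
    seq = []
    for _ in range(3):                               # sequences of length 3
        names = list(actions)
        p = np.array([weights[n] for n in names])
        name = rng.choice(names, p=p / p.sum())
        state = actions[name] @ state
        seq.append(str(name))
    if is_entangled(state):
        found.add(tuple(seq))
        for name in seq:
            weights[name] += 0.1                     # reinforce useful actions

print(len(found), "entangling sequences found, e.g.", sorted(found)[:3])
```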

The Quantum Way of Doing Computations: New Technologies for the Quantum Age

by Rainer Blatt

This talk was a bit oddly placed. It was an overview of how quantum computing works, and an introduction to the trapped-ion approach to quantum computing.

Panel

The last event of the day was a very large panel of most of the speakers (~10 people), with a bunch of questions prepared by the organisers that aimed to be thought-provoking. The best comment of the entire discussion came from Matthias Troyer:

“That’s how you get a quantum advantage with zero qubits”

He was describing the recent work by Ewin Tang: A quantum-inspired classical algorithm for recommendation systems (which we actually already covered here).

Open areas of investigation

Here’s the list of open/interesting topics from today:

  1. Quantum datasets; benchmark problems,
    • Detecting entanglement?
    • Creation of states?
    • Classifying entangled vs. non-entangled?
    • How to generalise across qubit sizes?
    • Reinforcement learning problems? Quantum games? Quantum chess? Communication games?
  2. What does truly quantum ML look like? Let’s stop trying to map classical algorithms to quantum ones, and just make up new ones.
  3. This hybrid idea of linking parts of a network to qubits and parts being classical
    • How do you train these things?
  4. Near-term, if we want to do quantum ML, we should focus on what actual hardware will be used, and target our approaches to those models.

That’s it!

I hope you found this useful. I found most of these talks to be quite inspiring and full of ideas, and I’m really looking forward to the talks of Day 3!
