QML+ 2018 – Day 4 & Final Thoughts

Well, it’s Thursday night, and I’ve just finished up my last day at the conference. Tomorrow, we’ll be heading on our (short) holiday! While QML+ formally has two more talks tomorrow, they are less relevant to me personally, plus we need to get a head start to make it to some waterfalls!

Here’s my summary of the talks I attended today!

Opening Talk: ML+Q

by Matthias Troyer

Matthias, who gave an amusingly-titled talk, is a master of classical simulation algorithms for quantum processes. He spends most of his time working on the software side, trying to demonstrate practical quantum speedups for optimisation problems.

As with most of the other talks, he described several pieces of work. The first was a neural network that could be used to learn a quantum wave function, and then be used to find the phases and amplitudes of given states and to compute other properties.

Their setup was the (seemingly standard) Restricted Boltzmann Machine, where the input was whether or not there is a z-rotation on the given qubit, and the output was the inner product with some state.
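
To make this concrete, here’s a minimal sketch of what such an RBM wave-function ansatz typically looks like. The complex-weight form below is the standard one from this line of work; the shapes, names, and parameter scales are my own illustrative choices, and the talk’s actual model may differ in detail.

```python
import numpy as np

class RBMWavefunction:
    """Restricted Boltzmann Machine ansatz for a quantum wave function:

        psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i)

    where s is a spin configuration in {-1, +1}^n_visible. Complex
    parameters let the same network represent both amplitude and phase.
    """

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.01  # small random initial weights (illustrative choice)
        shape = (n_visible, n_hidden)
        self.a = scale * (rng.standard_normal(n_visible) + 1j * rng.standard_normal(n_visible))
        self.b = scale * (rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden))
        self.W = scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

    def log_psi(self, s):
        """Log-amplitude of one spin configuration s (array of +/- 1)."""
        theta = self.b + s @ self.W
        return s @ self.a + np.sum(np.log(2 * np.cosh(theta)))

# Example: the (unnormalised) amplitude of one configuration of 4 spins.
rbm = RBMWavefunction(n_visible=4, n_hidden=8)
print(np.exp(rbm.log_psi(np.array([1, -1, 1, 1]))))
```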

But, non-standardly, the weights of this network are found using what they refer to as “reinforcement learning”, but which is actually something called “Stochastic Reconfiguration”. Once they find initial values for the weights by looking at the Hamiltonian of the particular system, they then fine-tune them if they want to compute time-dependent properties. It’s a little bit involved, to say the least.
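
For the curious, here’s my understanding of the basic Stochastic Reconfiguration update; this is the standard variational-Monte-Carlo form, and the talk’s variant may differ in details. With variational parameters $\theta$, log-derivatives $O_k(s) = \partial_{\theta_k} \ln \psi_\theta(s)$, and local energies $E_{\mathrm{loc}}(s) = \langle s|H|\psi_\theta\rangle / \langle s|\psi_\theta\rangle$, all averaged over samples drawn from $|\psi_\theta|^2$:

$$
S_{kk'} = \langle O_k^* O_{k'} \rangle - \langle O_k^* \rangle \langle O_{k'} \rangle,
\qquad
F_k = \langle O_k^* E_{\mathrm{loc}} \rangle - \langle O_k^* \rangle \langle E_{\mathrm{loc}} \rangle,
\qquad
\theta \leftarrow \theta - \eta\, S^{-1} F.
$$

In other words, it’s a natural-gradient-style descent on the energy, with $S$ acting as a metric on parameter space.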

Anyway, having done this, they do achieve some nice results: they’re able to use their neural network to compute various properties quite accurately.

Later, they applied an RBM again, but without the weird “Stochastic Reconfiguration”, and were able to get very good results in learning quantum states.

He then spent a bit of time covering his work on quantum annealing. In particular, in that work they observe that quantum annealers seem fated to always produce unfair samples of the potential states; i.e. not every state appears with equal probability. Ingeniously, they came up with a classical simulation of quantum annealing that is actually faster and more accurate. Even more ingeniously, they show that, in fact, they can implement the classical simulation as a quantum process and again get a speedup, for a total of a quartic (twice quadratic) speedup!
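
If I’ve understood the arithmetic, “quartic” here just means two quadratic speedups composed: if the original classical approach takes time $T$,

$$
T \;\longrightarrow\; T^{1/2} \;\longrightarrow\; \big(T^{1/2}\big)^{1/2} = T^{1/4},
$$

one square root coming from the faster classical simulation, and a second from running its quantised version.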

All this resulted in the numerical rule of thumb that if there’s any quantum annealing problem you’re running classically for more than a day, you’ll go faster using a quantum annealer.

One of the interesting conclusions from this part of the work was that, when you’re classically simulating something evolving adiabatically and you want tunnelling, it’s sometimes better to evolve very fast (the adiabatic theorem would tell us we need to evolve slowly). He demonstrated this in an amusing way by saying that he could tunnel through the wall of the room if we would just close our eyes for 30 seconds instead of 30 microseconds.

His other summaries were:

  • Neural nets are great for learning efficient representations of wave functions; and probably more
  • Stoquastic quantum annealing can be well mimicked by a classical laptop
  • Sampling bias makes quantum annealers bad for this task
  • Gate-model quantum computers will accelerate quantum-inspired algorithms

Early Lunch/Hike

Unfortunately, a talk I was looking forward to, by Francesco Petruccione (probably related to this work), was cancelled, so Gala and I went for a long hike instead. We almost got lost, found beautiful forests, found a beautiful field that reminded me of Jurassic Park, lost faith in ever seeing the bottom of the mountain again, and eventually made it back to the lifts alive.

This impromptu adventure meant we just got back in time for the last talk of the day.

Quantum speedup in testing causal hypotheses

by Giulio Chiribella

This talk was essentially based around this paper.

The main point is to think about a framework for causal hypotheses, and then see how classical and quantum approaches compare. The setup is as follows:

We think of curly-C as some kind of unknown process (for example, node.js, ha ha ha), and then ask ourselves: What is the causal relationship between B and A? And between C and A?

The setting Giulio proposes is that we want to be able to determine exactly, from a given set of hypotheses, which one is correct. Here, imagine the following:

  • Hypothesis 1: B is dependent on A, and C is uniformly random.
  • Hypothesis 2: C is dependent on A, and B is uniformly random.

The question is: who can do better, as a function of the number of trials, at determining which hypothesis is right? To be able to make progress, we allow ourselves interventions; i.e. we can feed chosen inputs into the process, and use what we see to shape subsequent queries to curly-C.
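
To get a feel for the classical side of this game, here’s a toy simulation I put together. The copy-via-permutation process, the dimension `d`, and the “query the same input twice” intervention strategy are all my own illustrative choices, not the paper’s construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                     # dimension of each variable (illustrative choice)
perm = rng.permutation(d)  # the unknown invertible map from A to its effect

def run_process(a, hypothesis):
    """One use of the black-box process under the given hypothesis."""
    if hypothesis == 1:
        return perm[a], rng.integers(d)  # B = f(A), C uniformly random
    else:
        return rng.integers(d), perm[a]  # C = f(A), B uniformly random

def discriminate(n_trials, true_hypothesis):
    """Intervene with the same A twice per trial: a uniformly random
    output will eventually disagree with itself, while a deterministic
    (invertible) function of A never will."""
    for _ in range(n_trials):
        a = rng.integers(d)
        b1, c1 = run_process(a, true_hypothesis)
        b2, c2 = run_process(a, true_hypothesis)
        if b1 != b2:
            return 2  # B can't be a function of A, so C must be
        if c1 != c2:
            return 1  # C can't be a function of A, so B must be
    return 1  # undecided after all trials (probability ~ d^-n): just guess

print(discriminate(10, true_hypothesis=2))  # prints 2 with high probability
```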

For reasons I don’t really understand, in the paper they claim that classically, if the dimension of all the variables is finite and fixed, then B (or C) being dependent on A means that the function mapping A to B is invertible. With that constraint, it’s easy to see that it’s possible to tell the two hypotheses apart. The quantity of interest to them is the “discrimination rate” as the number of experiments grows. They compute this rate for the best classical strategy, and then show that, quantumly, the two hypotheses can be differentiated at a strictly higher discrimination rate; in the theory they’ve developed, this is exponentially better than the classical case. Great!
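
For reference, the “discrimination rate” is (as I understand it) the usual error exponent from hypothesis testing: if $p_{\mathrm{err}}(N)$ is the probability of picking the wrong hypothesis after $N$ uses of the process, then

$$
R = \lim_{N \to \infty} \left( -\tfrac{1}{N} \log p_{\mathrm{err}}(N) \right),
\qquad \text{i.e.} \qquad
p_{\mathrm{err}}(N) \approx 2^{-RN},
$$

so a larger rate means the error probability vanishes exponentially faster with the number of experiments.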

I left this talk a little bit confused, but at least vaguely interested in the idea of quantum causal modelling.

Open areas of investigation

The interesting questions that came to me today were:

  • How to combine SciNet with the neural-nets for determining wave functions?
  • What’s the relationship between tensor networks and these neural nets? and Conv nets?
    • Duality of Graphical Models and Tensor Networks
    • Neural-Network Quantum States, String-Bond States, and Chiral Topological States

Final thoughts on the conference

Overall, I’m inspired by quantum machine learning. I feel like there’s heaps of cool things to do.

Unfortunately, I’m disappointed by some things about this conference. Having come from so far away, and wanting to make the best use of my time, I found it frustrating that even the titles of most of the talks weren’t known in advance.

I found the conference events and overall feeling to be very non-inclusive. There was lots of referring to people working in ML/QC as “guys”; there were lots of in-crowds; and, while there was plenty of talk of wanting to mix with the “machine learning crowd”, people were somewhat skeptical of me for not being associated with any university, and there were attempts to sort everyone into either a “machine learning/classical” or a “quantum” camp. Further, there was no mention of a code of conduct.

Sarah Moran, of Girl Geek Academy, once gave a talk about “micro positive-actions” (or something; I can’t remember the exact name), but the ones that stuck out to me were:

  • Name tags, always (this conference did well at this);
  • Always leave a spot open in a talking circle.

These are great rules of thumb for any organisers to keep in mind. If you have more, please let me know!

Overall, it would be great to see these academic conferences put significant effort into making their events feel much more welcoming to all kinds of people.

That’s it!

This is my final report from Austria. I’ll see you when we’re back!
