Recent Industry News – November 2016

Here’s a collection of recent machine-learning, artificial-intelligence, and software-engineering papers, posts, and press releases that have caught our attention. If some of these weren’t written recently, then they were at least recently discovered by us!

Rewrite: Neural Style Transfer For Chinese Fonts

Rather than taking on the herculean challenge of fleshing out a design for 26,000 characters when creating a new Chinese-language font, why not let deep learning do the heavy lifting for you? That is the premise behind “Rewrite” – a style-transfer network that learns a representation of the “radical” concept from Chinese glyphs, deconstructs an example font into the style of these radicals, and then applies that style to the radicals of a “universal” character set. In practice, you need only design a few example characters in your new font, and the network extrapolates from these to the remaining characters!

… Shared by Noon

MLDB: The Machine Learning Database

MLDB is an open-source database designed for machine learning. You can install it wherever you like and send it commands over a RESTful API to store data, explore it using SQL, train machine-learning models, and expose those models as APIs.
This might be just what’s required to scale your training, inference, and data engineering from proof of concept to production.
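To give a feel for the workflow, here is a minimal sketch of driving an MLDB-style RESTful API from Python. The endpoint paths and payload shapes below are illustrative assumptions in the spirit of MLDB's API, not a verbatim copy of its documentation; the code only builds the requests, so it runs without a server.

```python
import json

# Hypothetical base URL for a local MLDB-style instance (an assumption).
BASE = "http://localhost:8080/v1"

def create_dataset_request(dataset_id):
    """Build the (method, url, body) triple for creating a dataset."""
    return ("PUT", f"{BASE}/datasets/{dataset_id}",
            json.dumps({"type": "sparse.mutable"}))

def query_request(sql):
    """Build the request for exploring stored data with SQL."""
    return ("GET", f"{BASE}/query?q={sql}", None)

method, url, body = create_dataset_request("example")
print(method, url)
print(query_request("SELECT count(*) FROM example")[1])
```

In practice you would hand these triples to an HTTP client such as `requests`, and follow the same pattern to train a model and expose it as an endpoint.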

… Shared by Noon

Surpassing Gradient Descent Provably

This paper introduces the Double Incremental Aggregated Gradient (DIAG) method, a new optimization technique that surpasses traditional gradient-descent algorithms not just experimentally, but provably. The iterates of the proposed DIAG method use averages of both iterates and gradients, as opposed to classic incremental methods, which utilize gradient averages but not iterate averages.
The paper is very heavy on theory, but if the claims pan out then you may see DIAG appearing in the palette of optimizer options in the likes of TensorFlow some time soon.
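The core idea – average the stored iterates as well as the stored gradients – can be sketched on a toy problem. The step size, the cyclic update order, and the sum-of-quadratics objective below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def diag_minimize(a, b, alpha, steps):
    """Minimize (1/n) * sum_i 0.5 * a[i] * (x - b[i])**2 with a DIAG-style loop."""
    n = len(a)
    y = np.zeros(n)        # table of past iterates, one slot per component f_i
    g = a * (y - b)        # gradient of each f_i evaluated at its stored iterate
    x = 0.0
    for k in range(steps):
        # The update averages BOTH iterates and gradients; classic incremental
        # aggregated methods would use only the gradient average here.
        x = y.mean() - alpha * g.mean()
        i = k % n                      # cyclic choice of component to refresh
        y[i] = x
        g[i] = a[i] * (x - b[i])
    return x

a = np.array([1.0, 2.0, 4.0])
b = np.array([1.0, 2.0, 3.0])
x_star = (a * b).sum() / a.sum()       # closed-form minimizer for comparison
x = diag_minimize(a, b, alpha=0.2, steps=2000)
print(x, x_star)
```

On this tiny quadratic the iterate converges to the closed-form minimizer, which is an easy sanity check before reaching for the paper's convergence proofs.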

… Shared by Noon


DataEngConf

“DataEngConf is the first engineering conference that bridges the gap between data engineers and data scientists.”
The conference looks to be well attended, with many interesting speakers and companies taking part.

DataEngConf ran on November 3-4, 2016 in NYC.

… Shared by Noon

Progressive Neural Networks

The word “progressive” here refers to a progression of “network” columns where pre-learned features can be re-used by future network-iterations. The architecture addresses the persistent issue that modern networks have with “forgetting”. The architecture is demonstrated on several Atari games with network performance beating “single-column” style implementations. It is unclear what the penalty for this multi-column architecture is in terms of speed of inference and learning, but the convergence properties certainly make it worthy of closer inspection.
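The column idea is easy to see in code: a minimal numpy sketch where a frozen first column feeds its hidden features into a second column via a lateral adapter, so old features are reused rather than overwritten. The MLP columns, layer sizes, and the single lateral connection are illustrative assumptions – the paper uses deeper columns and per-layer adapters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Column 1: trained on task 1, then frozen (weights never updated again).
W1 = rng.normal(size=(4, 8))

# Column 2: its own weights, plus a lateral adapter U that reads
# column 1's hidden features when learning task 2.
W2 = rng.normal(size=(4, 8))
U = rng.normal(size=(8, 8))

def forward(x):
    h1 = relu(x @ W1)            # frozen column-1 features
    h2 = relu(x @ W2 + h1 @ U)   # column 2 combines its own path + lateral input
    return h1, h2

x = rng.normal(size=(2, 4))
h1, h2 = forward(x)
print(h1.shape, h2.shape)
```

Because only `W2` and `U` would receive gradients on the new task, column 1's task-1 behaviour cannot be forgotten – at the cost of parameters growing with each new column.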

… Shared by Noon

Law-Bot

The idea behind Law-Bot was to bring quasi-legal advice to the masses through an intelligent chat-bot. Sounds great! Unfortunately, the execution at this point leaves a lot to be desired, with the bot defaulting to requests for clarification and regularly exposing the limitations of its knowledge base. Of course, this doesn’t have to be a permanent state of affairs, and the application of bots to law is an interesting enough concept to warrant taking notice.

… Shared by Lyndon

Non-Maximum Suppression for Object Detection in Python

Although non-maximum suppression is an older technique, and the post is from 2014, we recently found ourselves needing exactly this capability to consolidate bounding boxes in the object- and pedestrian-detection demonstrations we have been putting together. The post describes the technique and provides Python implementations that let the reader get up and running quickly with good intuition. If you find yourself bounding the same object multiple times, check out NMS.
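The technique fits in a few lines of numpy: greedily keep the highest-scoring box, drop any remaining box whose overlap (IoU) with it exceeds a threshold, and repeat. This is a compact sketch in the spirit of the post, not a copy of its code; the `[x1, y1, x2, y2]` box format and the 0.5 threshold are assumptions.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over boxes given as [x1, y1, x2, y2]; returns kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]       # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with every remaining candidate.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily-overlapping boxes
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # → [0, 2]
```

The two near-duplicate boxes around the origin collapse to the higher-scoring one, while the distant box survives untouched.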

… Shared by Amanda

Pano2CAD: Layout From A Single Panorama

Input a panoramic photo; output a 3D model. Simple!

There are several limitations, as expected, the most striking being the requirement that rooms have “Manhattan” geometry (walls meeting at right angles); however, this is certainly a step up from the previous Img2CAD work in this problem domain.

… Shared by Noon

Xception: Deep Learning with Depthwise Separable Convolutions

Xception takes the observation that
A depthwise separable convolution can be understood as an Inception module with a maximally large number of towers…
and uses the correspondence to rebuild Inception V3 using depthwise separable convolutions. Performance on ImageNet is slightly improved, but the real payoff is on larger datasets with more classes:
[Xception] significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes.
The takeaway for me is that this kind of analogous fast-and-loose reasoning provides good intuition for experimentation with “modded” architectures.
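The arithmetic behind why depthwise separable convolutions are attractive is worth seeing: a standard convolution mixes channels and spatial positions in one big tensor of weights, while the separable version splits this into a cheap per-channel spatial filter plus a 1×1 pointwise mix. The channel counts below are illustrative, and biases are omitted.

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution: every output channel sees every input channel."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) + pointwise 1x1 channel mixing."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3
print(standard_conv_params(c_in, c_out, k))   # → 294912
print(separable_conv_params(c_in, c_out, k))  # → 33920
```

Roughly a 9× reduction at these sizes – parameter budget that Xception reinvests in width and depth.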

… Shared by Noon

Deep Learning Adversarial Examples – Clarifying Misconceptions

This post could also be titled “Eight Myths About Adversarial Examples”. There are some good tips in here regarding false-intuitions, especially in the context of security. A couple of these include:
Adversarial examples could easily be solved with standard regularization techniques…
An attacker must have access to the model to generate adversarial examples…
Someone should put together a list of “Myths X-Profession Believe about X-Domain”.

… Shared by Lyndon

“med_hack” – November 28-29 | General Assembly, Melbourne

“med_hack” is a two-day health-care hackathon running on November 28-29 at General Assembly, Melbourne. There are a wide variety of interesting sponsors and health-care industry luminaries participating.

… Shared by Noon

Bartosz Milewski – Category Theory 10.2: Monoid in the category of endofunctors

This is the final video of Bartosz Milewski’s “Category Theory for Programmers”. The lectures have been released regularly over the last year, and the finale brings together the concepts introduced in the previous lectures to give a fully grounded explanation of the now-famous pithy definition of monads – “A monad is just a monoid in the category of endofunctors, what’s the problem?”

… Shared by Lyndon

Nightmare Machine by MIT Media Lab

Just in time for Halloween, the Nightmare Machine is offered up by the MIT Media Lab as a spooky alternative to Google’s Deep Dream. It’s hard to glean implementation details from the stylish presentation of the style-transfer results, but it’s a fun diversion from the density of the torrent of deep-learning papers appearing ever more regularly these days.

… Shared by Alice

Λ◦λ : Functional Lattice Cryptography

Homomorphic encryption is super-cool. Lattices are a super-cool way to implement homomorphic encryption. “Λ◦λ” is presented as a functional framework for developing this technique, and the paper is extensive in its background, theory, implementation, and investigation of the implications. Although the paper emerged in a “pure” cryptographic context, the implications for blockchain-style protocols are immediately apparent: plaintext-dependent computation has stark limitations, and transporting the computation function itself is also a source of insecurity. The framework’s ability to operate not only in a “Functor” context (traditional homomorphic encryption) but also in an “Applicative” context enables “better than end-to-end security”: the data remains secure in others’ hands while being computationally operable by third parties, and computation can even be delegated without requiring trust in the operator.
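For readers new to the area, the core property – a third party computing on ciphertexts without ever seeing the plaintexts – can be illustrated with a deliberately toy additive scheme. To be clear, this is NOT lattice cryptography and NOT secure; it is a one-time-pad-style construction chosen only to make the homomorphism visible.

```python
# Toy additively homomorphic scheme: encryption is addition of a secret key
# modulo N, so adding two ciphertexts yields a valid ciphertext of the sum
# under the combined key. Insecure by design; for illustration only.
N = 1_000_003

def encrypt(m, k):
    return (m + k) % N

def decrypt(c, k):
    return (c - k) % N

k1, k2 = 42, 1337
a, b = 15, 27
c = (encrypt(a, k1) + encrypt(b, k2)) % N   # a third party adds ciphertexts…
print(decrypt(c, k1 + k2))                  # → 42, i.e. a + b
```

Lattice schemes achieve the same shape of guarantee (and multiplication too) with actual security, at the cost of the noise management and parameter machinery the Λ◦λ paper formalizes.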

… Shared by Lyndon

Building a Deep Learning Powered GIF Search Engine

… Shared by Lyndon

Universal adversarial perturbations

The abstract says it better than I could:
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
There are several very interesting implications here.
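The shape of the algorithm is easy to demonstrate on a stand-in model: loop over data points, and whenever the current shared perturbation fails to fool the classifier on a point, nudge it by the minimal step that pushes that point across the decision boundary. The linear classifier, the 1.05 overshoot factor, and the toy data below are illustrative assumptions – the paper does this with DeepFool steps on deep networks, plus a norm projection.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.0, -2.0, 0.5])               # stand-in linear classifier sign(w @ x)
X = rng.normal(size=(50, 3)) + 1.0           # toy "natural images"
labels = np.sign(X @ w)

def fooling_rate(v):
    """Fraction of points whose prediction flips under the shared perturbation v."""
    return float(np.mean(np.sign((X + v) @ w) != labels))

v = np.zeros(3)                              # the universal perturbation
for _ in range(5):                           # a few aggregation passes over the data
    for x in X:
        if np.sign((x + v) @ w) == np.sign(x @ w):
            # Minimal step carrying x+v across the hyperplane, slightly overshot
            # so the point lands on the wrong side rather than exactly on it.
            r = -1.05 * ((x + v) @ w) / (w @ w) * w
            v = v + r
print(fooling_rate(v))
```

Even this crude loop produces a single vector that fools a nontrivial fraction of the points, which is the surprising phenomenon the paper establishes at scale for real networks.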

… Shared by Lyndon

Partnership on Artificial Intelligence to Benefit People and Society

A new partnership has been formed by some of the biggest players in AI. This was huge news recently, as the participants have been very active in publishing new fundamental and applied research and code, and the partnership looks set to further increase this output.

The members of the “Partnership on Artificial Intelligence to Benefit People and Society” so far include: Amazon, DeepMind, Google, Facebook, IBM and Microsoft.

… Shared by Lyndon

DeepMind and Blizzard to release StarCraft II as an AI research environment

Sick of developing sophisticated RNNs trained with advanced reinforcement-learning techniques, only to have them playing “Space Invaders”? Well, now they can play StarCraft instead!

DeepMind and Blizzard have teamed up to release StarCraft II as an experimental AI research platform, with a plethora of ecosystem support including competitions. Sounds fun 🙂

LipNet: Sentence-level Lipreading

About all you need to hear to get you to read the rest of this is…
LipNet achieves 93.4% accuracy
That’s shockingly good.


Where do we find these kinds of things?
