My PhD

In three minutes

A PhD thesis promo recorded at the regional final of the "Ma thèse en 180" contest. The speech is in French, but it takes less than 180 seconds :)

10 Days Of Grad

Haskell Deep Learning Blog

Today we will talk about one of the most important deep learning architectures, the "master algorithm" in computer vision. That is what François Chollet, the author of Keras, calls convolutional neural networks (CNNs). A convolutional network is an architecture that, like other artificial neural networks, has the neuron as its core building block. It is also differentiable, so the network is conveniently trained via backpropagation. The distinctive feature of CNNs, however, is their connection topology, which results in sparsely connected convolutional layers whose neurons share their weights.
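To make weight sharing concrete, here is a minimal Haskell sketch (illustrative only; `conv1d` is a hypothetical name, not code from the post): a one-dimensional "valid" convolution in which the very same kernel is applied at every position of the input.

```haskell
-- The same kernel (shared weights) is applied at every input position:
-- this is the weight sharing that distinguishes convolutional layers.
-- Like most deep learning frameworks, we do not flip the kernel,
-- so strictly speaking this computes a cross-correlation.
conv1d :: Num a => [a] -> [a] -> [a]
conv1d kernel xs =
  [ sum (zipWith (*) kernel w) | w <- windows (length kernel) xs ]
  where
    windows n ys
      | length ys < n = []
      | otherwise     = take n ys : windows n (tail ys)

-- > conv1d [1, 0, -1] [1, 2, 4, 7, 11]   -- a crude edge detector
-- [-3, -5, -7]
```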


What purpose do neural networks serve? Neural networks are learnable models whose ultimate goal is to approach or even surpass human cognitive abilities. As Richard Sutton puts it, 'The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective'. In his essay, Sutton argues that only models free of encoded human knowledge can outperform human-centric approaches. Indeed, neural networks are general enough, and they leverage computation.


Now that we have seen how neural networks work, we realize that understanding how gradients flow is essential for survival. Therefore, we will revisit our strategy at the lowest level. As neural networks become more complicated, however, calculating gradients by hand becomes a murky business. Yet fear not, young padawan, there is a way out! I am very excited that today we will finally get acquainted with automatic differentiation, an essential tool in your deep learning arsenal.
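As a taste of what is coming, here is a minimal sketch of forward-mode automatic differentiation with dual numbers in Haskell (the `Dual` type and `diff` helper are hypothetical names for illustration, not the blog's API): every value carries its derivative along, and the chain rule is applied mechanically by the arithmetic instances.

```haskell
-- Forward-mode automatic differentiation: a value is paired with its
-- derivative, and each arithmetic operation propagates both.
data Dual = Dual Double Double   -- value and derivative

instance Num Dual where
  Dual x x' + Dual y y' = Dual (x + y) (x' + y')
  Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')  -- product rule
  negate (Dual x x')    = Dual (negate x) (negate x')
  abs    (Dual x x')    = Dual (abs x) (x' * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0          -- constants

-- Differentiate f at x by seeding the derivative component with 1.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = let Dual _ d = f (Dual x 1) in d

-- > diff (\x -> x * x + 3 * x) 2   -- d/dx (x^2 + 3x) at x = 2
-- 7.0
```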


In the previous article, we introduced the concept of learning in a single-layer neural network. Today, we will learn about the benefits of multi-layer neural networks and how to properly design and train them. Sometimes I discuss neural networks with students who have just started discovering machine learning techniques: "I have built a handwritten digit recognition network, but my accuracy is only Y." "That seems to be well below the state of the art," I contemplate.
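To fix the idea of stacking layers, a minimal Haskell sketch (types and names are illustrative, not from the article): a dense layer is a function from inputs to activations, and a two-layer network is then plain function composition.

```haskell
-- A minimal two-layer (one hidden layer) forward pass.  Each dense
-- layer computes, per neuron, a weighted sum of all inputs plus a
-- bias, squashed by a nonlinearity.
type Vector = [Double]
type Matrix = [[Double]]   -- one row of weights per neuron

sigmoid :: Double -> Double
sigmoid z = 1 / (1 + exp (negate z))

dense :: Matrix -> Vector -> Vector -> Vector
dense weights biases xs =
  zipWith (\row b -> sigmoid (sum (zipWith (*) row xs) + b)) weights biases

-- The output of the hidden layer feeds the output layer.
forward :: (Matrix, Vector) -> (Matrix, Vector) -> Vector -> Vector
forward (w1, b1) (w2, b2) = dense w2 b2 . dense w1 b1
```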


Neural networks are a topic that recurrently appears throughout my life. Once, when I was a BSc student, I became obsessed with the idea of building an "intelligent" machine. I spent a couple of sleepless nights thinking. I read a few essays shedding some light on this philosophical subject, among which the most prominent, perhaps, are Marvin Minsky's writings. As a result, I came across the idea of neural networks. It was 2010, and deep learning was not nearly as popular as it is now.


Selected Publications

Neural networks are currently transforming the field of computer algorithms, yet their emulation on current computing substrates is highly inefficient. Reservoir computing was successfully implemented on a large variety of substrates and gave new insights into overcoming this implementation bottleneck. Despite its success, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and demonstrate that these conceptually correspond to deep convolutional neural networks. Convolution is intrinsically realized at the substrate level by generic drive-response properties of dynamical systems. The resulting novelty is the avoidance of vector-matrix products between layers, which are the cause of low efficiency in today's substrates. Compared to singleton time-delay reservoirs, our deep network achieves accuracy improvements of at least an order of magnitude in Mackey-Glass and Lorenz time-series prediction.
In PRL, 2019

Photonic delay systems have revolutionized the hardware implementation of Recurrent Neural Networks and Reservoir Computing in particular. The fundamental principles of Reservoir Computing strongly favor a realization in such complex analog systems. Delay systems especially, potentially providing large numbers of degrees of freedom even in simple architectures, can be efficiently exploited for information processing. The numerous demonstrations of their performance have led to a revival of photonic Artificial Neural Networks. Today, an astonishing variety of physical substrates, implementation techniques, and network architectures based on this approach have been successfully employed. Important fundamental aspects of analog hardware Artificial Neural Networks have been investigated, and multiple high-performance applications have been demonstrated. Here, we introduce and explain the most relevant aspects of Artificial Neural Networks and delay systems, the seminal experimental demonstrations of Reservoir Computing in photonic delay systems, and the most recent and advanced realizations.
JAP, 2018

A chimera state is a rich and fascinating class of self-organized solutions developed in high-dimensional networks. Necessary features of the network for the emergence of such complex but structured motions are non-local and symmetry-breaking coupling. An accurate understanding of chimera states is expected to bring important insights into the deterministic mechanisms occurring in many structurally similar high-dimensional dynamics, such as living systems, brain operation principles, and even turbulence in hydrodynamics. Here we report on a powerful and highly controllable experiment based on optoelectronic delayed feedback applied to a wavelength-tunable semiconductor laser, with which a wide variety of chimera patterns can be accurately investigated and interpreted. We uncover a cascade of higher-order chimeras as a pattern transition from N to N+1 clusters of chaoticity. Finally, we follow visually, as the gain increases, how a chimera state is gradually destroyed on the way to apparent turbulence-like system behaviour.
In Nat. Commun., 2015

Recent Publications

Coupled Nonlinear Delay Systems As Deep Convolutional Neural Networks. In PRL, 2019.


Stochastic Computing for Hardware Implementation of Binarized Neural Networks. IEEE Access, 2019.


Efficient Design of Hardware-Enabled Reservoir Computing in FPGAs. JAP, 2018.


Tutorial: Photonic Neural Networks in Delay Systems. JAP, 2018.


Spatio-temporal complexity in dual delay nonlinear laser dynamics: chimeras and dissipative solitons. Chaos, 2018.


Laser chimeras as a paradigm for multistable patterns in complex systems. In Nat. Commun., 2015.


Projects

Energy-efficient AI

We bring AI to the edge, to battery-powered devices, and away from the cloud.

Bioinspired computing

Bioinspired computing aims to produce hardware that is more efficient in terms of speed and energy consumption.

Delay Differential Equations

A fast and flexible library for solving delay differential equations.
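As a conceptual illustration of what any such solver must do (a sketch of the idea, not this library's API): a delay equation needs a history buffer so the delayed state x(t - tau) can be looked up. Below, a fixed-step Euler integration of the classic Mackey-Glass equation in Haskell.

```haskell
-- Euler integration of the Mackey-Glass delay equation
--   x'(t) = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t)
-- where the delay tau is an integer number of time steps.  The
-- history list (newest first) doubles as the delayed-state lookup.
mackeyGlass :: Double -> Double -> Double -> Double
            -> Double -> Int -> Int -> [Double]
mackeyGlass beta gamma n dt x0 delaySteps steps =
  go (replicate (delaySteps + 1) x0) steps
  where
    go hist 0 = reverse hist                -- chronological order
    go hist k =
      let x    = head hist                  -- current state x(t)
          xTau = hist !! delaySteps         -- delayed state x(t - tau)
          dx   = beta * xTau / (1 + xTau ** n) - gamma * x
      in go (x + dt * dx : hist) (k - 1)

-- > take 5 (mackeyGlass 0.2 0.1 10 0.1 1.2 170 1000)
```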

HMEP

Multi Expression Programming (MEP) is a genetic programming variant that encodes multiple solutions in the same chromosome.
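A minimal sketch of the decoding step in Haskell (illustrative only, not HMEP's actual API): since a gene may only reference earlier genes, a single lazy pass over the chromosome evaluates every encoded sub-expression at once.

```haskell
-- In MEP, gene i is either a terminal or an operator whose operands
-- are indices of *earlier* genes, so a chromosome of length k encodes
-- k candidate expressions; fitness later selects the best of them.
data Gene = Var Int                                  -- input variable
          | Const Double                             -- constant
          | Op (Double -> Double -> Double) Int Int  -- earlier gene indices

-- Laziness lets 'vals' refer to itself: operand indices are strictly
-- smaller than the gene's own index, so the recursion terminates.
evalChromosome :: [Gene] -> [Double] -> [Double]
evalChromosome genes inputs = vals
  where
    vals = map eval genes
    eval (Var i)    = inputs !! i
    eval (Const c)  = c
    eval (Op f a b) = f (vals !! a) (vals !! b)

-- > evalChromosome [Var 0, Const 1, Op (+) 0 1, Op (*) 0 2] [3]
-- [3.0, 1.0, 4.0, 12.0]   -- gene 3 encodes x * (x + 1)
```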