My PhD

In three minutes

PhD thesis promo recorded at the regional final of the "Ma thèse en 180 secondes" contest. The speech is in French, but it takes less than 180 seconds :)

10 Days Of Grad

Haskell Deep Learning Blog

More Posts

Last week, Apple acquired the startup XNOR.ai for an impressive $200 million. The startup is known for promoting binarized neural network algorithms that save energy and computational resources. That is definitely the way to go for mobile devices, and Apple has just acknowledged that it is a great deal for them too. I feel now is a good time to explain what binarized neural networks are so that you can better appreciate their value to the industry.
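To give a taste of the idea: in a binarized network, full-precision weights and activations are constrained to {-1, +1}, so the expensive multiply-accumulate operations reduce to XNOR gates plus a popcount in hardware. Here is a minimal NumPy sketch of my own (the blog posts themselves use Haskell); the `binarize` helper is an illustrative name, not XNOR.ai's code.

```python
import numpy as np

def binarize(x):
    # Deterministic binarization: map every value to -1 or +1 by its sign.
    return np.where(x >= 0, 1, -1).astype(np.int8)

# A full-precision dot product between weights and activations...
w = np.array([0.7, -1.2, 0.3, -0.4])
a = np.array([-0.5, 0.9, 1.1, -2.0])

# ...becomes a dot product over {-1, +1}. On hardware this needs no
# multipliers at all: XNOR the sign bits, then count the matches.
print(int(binarize(w) @ binarize(a)))  # 1*(-1) + (-1)*1 + 1*1 + (-1)*(-1) = 0
```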

CONTINUE READING

Today we will talk about one of the most important deep learning architectures, the "master algorithm" in computer vision. That is what François Chollet, the author of Keras, calls convolutional neural networks (CNNs). A convolutional network is an architecture that, like other artificial neural networks, has the neuron as its core building block. It is also differentiable, so the network is conveniently trained via backpropagation. The distinctive feature of CNNs, however, is their connection topology, which results in sparsely connected convolutional layers whose neurons share their weights.
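To make the weight-sharing idea concrete, here is a small sketch of a 1D "valid" convolution (strictly speaking, the cross-correlation that CNN layers actually compute): the same small kernel slides across the input, so every output neuron reuses identical weights. The `conv1d_valid` helper is an illustrative name of my own, not code from the post.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    # Slide the same kernel across the input ("valid" positions only):
    # every output neuron uses the identical, shared weights, just shifted.
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])  # a tiny edge-detecting kernel

print(conv1d_valid(x, w))  # [-2. -2. -2.]: the slope is constant everywhere
```

Note that a single 3-weight kernel produced all three outputs; a fully connected layer would have needed 3 × 5 independent weights for the same mapping.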

CONTINUE READING

What purpose do neural networks serve? Neural networks are learnable models. Their ultimate goal is to approach or even surpass human cognitive abilities. As Richard Sutton puts it, 'The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective'. In his essay, Sutton argues that only models free of encoded human knowledge can outperform human-centric approaches. Indeed, neural networks are general enough, and they leverage computation.

CONTINUE READING

Now that we have seen how neural networks work, we realize that understanding how gradients flow is essential for survival. Therefore, we will revise our strategy at the lowest level. However, as neural networks become more complicated, calculating gradients by hand becomes a murky business. Yet fear not, young padawan, there is a way out! I am very excited that today we will finally get acquainted with automatic differentiation, an essential tool in your deep learning arsenal.
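As a preview of the idea (an illustrative sketch of my own, not the post's implementation, which is in Haskell): forward-mode automatic differentiation can be built from dual numbers, values that carry their derivative alongside them, so every arithmetic operation applies the chain rule mechanically. The `Dual` class and `f` below are hypothetical names for this sketch.

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers:
    carry a value and its derivative together through every operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # analytically, f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))    # seed the derivative dx/dx = 1
print(y.val, y.der)      # 17.0 14.0, i.e. f(2) and f'(2)
```

No symbolic manipulation and no finite differences are involved: the derivative falls out exactly, one chain-rule step per operation.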

CONTINUE READING

In the previous article, we introduced the concept of learning in a single-layer neural network. Today, we will learn about the benefits of multi-layer neural networks and how to properly design and train them. Sometimes I discuss neural networks with students who have just started discovering machine learning techniques: "I have built a handwritten digit recognition network. But my accuracy is only Y." "That seems to be much less than the state of the art," I contemplate.

CONTINUE READING

Selected Publications

Neural networks are currently transforming the field of computer algorithms, yet their emulation on current computing substrates is highly inefficient. Reservoir computing has been successfully implemented on a large variety of substrates and has given new insight into overcoming this implementation bottleneck. Despite its success, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and demonstrate that these conceptually correspond to deep convolutional neural networks. Convolution is intrinsically realized on the substrate level by generic drive-response properties of dynamical systems. The resulting novelty is the avoidance of vector-matrix products between layers, which cause low efficiency on today's substrates. Compared to singleton time-delay reservoirs, our deep network achieves accuracy improvements of at least an order of magnitude in Mackey-Glass and Lorenz time-series prediction.
In PRL, 2019

Photonic delay systems have revolutionized the hardware implementation of Recurrent Neural Networks and of Reservoir Computing in particular. The fundamental principles of Reservoir Computing strongly favor a realization in such complex analog systems. Delay systems especially, which potentially provide large numbers of degrees of freedom even in simple architectures, can be efficiently exploited for information processing. The numerous demonstrations of their performance have led to a revival of photonic Artificial Neural Networks. Today, an astonishing variety of physical substrates, implementation techniques, and network architectures based on this approach have been successfully employed. Important fundamental aspects of analog hardware Artificial Neural Networks have been investigated, and multiple high-performance applications have been demonstrated. Here, we introduce and explain the most relevant aspects of Artificial Neural Networks and delay systems, the seminal experimental demonstrations of Reservoir Computing in photonic delay systems, and the most recent and advanced realizations.
In JAP, 2018

We demonstrate for a nonlinear photonic system that two highly asymmetric feedback delays can induce a variety of emergent patterns which are highly robust during the system's global evolution. Explicitly, two-dimensional chimeras and dissipative solitons become visible upon a space-time transformation. Switching between chimeras and dissipative solitons requires adjusting only two system parameters, demonstrating self-organization based exclusively on the system's dynamical properties. Experiments were performed using a tunable semiconductor laser's transmission through a Fabry-Perot resonator, resulting in an Airy function as the nonlinearity. The resulting dynamics were band-pass filtered and propagated along two feedback paths whose time delays differ by two orders of magnitude. Excellent agreement between the experimental results and a theoretical model based on modified Ikeda equations was achieved.
In Chaos, 2018

A chimera state is a rich and fascinating class of self-organized solutions that develop in high-dimensional networks. Necessary features of the network for the emergence of such complex but structured motions are non-local and symmetry-breaking coupling. An accurate understanding of chimera states is expected to bring important insights into the deterministic mechanisms at work in many structurally similar high-dimensional dynamics, such as living systems, brain operating principles, and even turbulence in hydrodynamics. Here we report on a powerful and highly controllable experiment based on optoelectronic delayed feedback applied to a wavelength-tunable semiconductor laser, with which a wide variety of chimera patterns can be accurately investigated and interpreted. We uncover a cascade of higher-order chimeras as a pattern transition from N to N+1 clusters of chaoticity. Finally, we follow visually, as the gain increases, how a chimera state is gradually destroyed on the way to apparent turbulence-like system behaviour.
In Nat. Commun., 2015

Recent Publications

In-Memory Resistive RAM Implementation of Binarized Neural Networks for Medical Applications. At DATE, 2020.


Digital Biologically Plausible Implementation of Binarized Neural Networks with Differential Hafnium Oxide Resistive Memory Arrays. In Front. Neurosci., 2020.


Coupled Nonlinear Delay Systems As Deep Convolutional Neural Networks. In PRL, 2019.


Stochastic Computing for Hardware Implementation of Binarized Neural Networks. In IEEE Access, 2019.


Efficient Design of Hardware-Enabled Reservoir Computing in FPGAs. In JAP, 2018.


Tutorial: Photonic Neural Networks in Delay Systems. In JAP, 2018.


Spatio-temporal complexity in dual delay nonlinear laser dynamics: chimeras and dissipative solitons. In Chaos, 2018.


Laser chimeras as a paradigm for multistable patterns in complex systems. In Nat. Commun., 2015.


Virtual Chimera States for Delayed-Feedback Systems. In PRL, 2013.


Projects

Energy-efficient AI

We bring AI to the edge, onto battery-powered devices and away from the cloud.

Synchronization patterns

Chimera states and dissipative solitons as synchronization patterns for optical memory applications.

Bio-inspired computing

Conceiving next-generation computing principles inspired by biological systems such as the human brain.

Delay Differential Equations

A fast and flexible library for solving delay differential equations.

HMEP

Multi-expression programming is a genetic programming variant that encodes multiple solutions in the same chromosome.