Neural networks are currently transforming the field of computer algorithms, yet their emulation on conventional computing substrates remains highly inefficient. Reservoir computing has been implemented successfully on a large variety of substrates and has provided new insight into overcoming this implementation bottleneck. Despite these successes, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and show that these conceptually correspond to deep convolutional neural networks. Convolution is realized intrinsically at the substrate level by generic drive-response properties of dynamical systems. The key novelty is that vector-matrix products between layers, the main source of inefficiency on today's substrates, are avoided entirely. Compared to a single time-delay reservoir, our deep network improves prediction accuracy by at least an order of magnitude on Mackey-Glass and Lorenz time-series prediction tasks.
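To make the single-layer building block concrete, the sketch below emulates a time-delay reservoir in discrete time on one-step-ahead Mackey-Glass prediction. It is a minimal illustration, not the paper's setup: all parameter values (number of virtual nodes, feedback and input scalings, the ridge regularizer) are assumptions chosen for demonstration, and the element-wise virtual-node update is a common simplification of the continuous delay-line dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mackey-Glass series via Euler integration of the delay differential equation
# dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t)
def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    hist = int(tau / dt)
    x = np.full(n_steps + hist, 1.2)
    for t in range(hist, n_steps + hist - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - hist] / (1 + x[t - hist] ** n)
                                - gamma * x[t])
    return x[hist:]

series = mackey_glass(4000)
series = (series - series.mean()) / series.std()  # standardize

# Single time-delay reservoir, discretized: N virtual nodes per clock cycle.
# Each node is driven by the masked input and its own state one delay ago.
N = 100                       # virtual nodes along the delay line (assumed)
mask = rng.uniform(-1, 1, N)  # random input mask
eta, nu = 0.5, 0.5            # feedback and input scaling (assumed values)

def run_reservoir(u):
    states = np.zeros((len(u), N))
    prev = np.zeros(N)
    for k, uk in enumerate(u):
        prev = np.tanh(eta * prev + nu * mask * uk)
        states[k] = prev
    return states

# One-step-ahead prediction: input u(k), target u(k+1)
X = run_reservoir(series[:-1])
y = series[1:]

# Linear readout trained by ridge regression (washout discarded)
wash, split = 200, 3000
A = X[wash:split]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y[wash:split])
pred = X[split:] @ w
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
print(f"one-step NRMSE: {nrmse:.4f}")
```

Only the linear readout is trained; the reservoir itself stays fixed, which is what allows the deep extension described above to replace trained inter-layer vector-matrix products with the substrate's own drive-response dynamics.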