12/8 Deep Learning for COVID-19/Fox
Deep Learning for Time Series Illustrated by COVID-19 Infection Studies
We show that one can study several sets of sequences or time series in terms of an underlying evolution operator that can be learned with a deep learning network. We use the language of geospatial time series, as this is a common application type, but the series can be any sequence, and the sequences can lie in any collection (bag), not just Euclidean space-time: we need only sequences that are labeled in some way and have properties that follow from this label (their position in an abstract space). This problem has been tackled successfully by deep learning in many ways and in many fields. The most advanced work is probably in natural language processing and transportation (ride-hailing). The latter, involving traffic and the number of people needing rides, is a geospatial problem with significant constraints from spatial locality. As in many problems, the data here typically consist of space-time-stamped events, but these can be converted into spatial time series by binning in space and time.
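The binning step mentioned above can be sketched with NumPy. The event layout, grid sizes, and counts below are illustrative assumptions, not the paper's actual data pipeline:

```python
import numpy as np

# Hypothetical space-time-stamped events (e.g. case reports), one per row,
# with columns x, y, t all scaled into [0, 1). Purely illustrative data.
rng = np.random.default_rng(0)
events = rng.random((1000, 3))

nx, ny, nt = 4, 4, 10  # spatial grid cells and number of time bins

# Map each event to a (spatial cell, time bin) pair.
ix = np.minimum((events[:, 0] * nx).astype(int), nx - 1)
iy = np.minimum((events[:, 1] * ny).astype(int), ny - 1)
it = np.minimum((events[:, 2] * nt).astype(int), nt - 1)
cell = ix * ny + iy  # flatten the 2-D grid into one label per location

# Count events per (cell, time bin); np.add.at handles repeated indices.
series = np.zeros((nx * ny, nt), dtype=int)
np.add.at(series, (cell, it), 1)

# Each row of `series` is now a labeled spatial time series for one cell.
assert series.sum() == len(events)
```

Each row of the resulting array is one of the labeled sequences the abstract refers to, with the row index playing the role of the spatial label.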
Comparing deep learning for such time series with the coupled ordinary differential equations used to describe multi-particle systems motivates the introduction of an evolution operator that describes the time dependence of complex systems. With an appropriate training process, we interpret deep learning applied to spatial time series as a particular approach to finding the time evolution operator of the complex system that gives rise to the series. Whimsically, we view this training process as determining hidden variables that represent the theory (as in Newton's laws) of the complex system.
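As a minimal sketch of the evolution-operator idea, one can fit a linear operator A such that x(t+1) ≈ A x(t) from observed state pairs by least squares; the deep network in the abstract plays the role of A as a learned nonlinear map. The system, dimensions, and data below are illustrative assumptions:

```python
import numpy as np

# A hypothetical "true" linear evolution operator for a 3-component state.
rng = np.random.default_rng(1)
A_true = 0.9 * np.eye(3) + 0.05 * rng.standard_normal((3, 3))

X0 = rng.standard_normal((50, 3))   # observed states x(t), one per row
X1 = X0 @ A_true.T                  # successor states x(t+1) = A x(t)

# Solve min_B ||X0 B - X1||: rows obey x(t+1)^T = x(t)^T B, so A = B^T.
A_fit = np.linalg.lstsq(X0, X1, rcond=None)[0].T

# With clean linear data, least squares recovers the operator exactly.
assert np.allclose(A_fit, A_true)
```

Replacing the matrix A with a recurrent or attention-based network, and least squares with gradient-based training, gives the nonlinear analogue the abstract describes.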
We formulate this problem in general and present an open-source package, FFFFWNPF, as a Jupyter notebook for training and inference using either recurrent neural networks or a variant of the transformer (multi-headed attention) approach. This assumes an outside data-engineering step that prepares data for ingestion into FFFFWNPF.
We present the approach and a comparison of transformer and LSTM networks on time series of COVID-19 infection and fatality data from 314 cities, as well as hydrology data from 671 locations. The paper concludes with a discussion of future applications, including earthquake science, logistics (job scheduling), and epidemiology, as well as other important neural network architectures: graph, convolutional, and ConvLSTM networks. We expect to use this technology with MLPerf datasets. We intend to understand how complex systems of different types (with different membership linkages) are described by different types of deep learning operators. Geometric structure in space and multi-scale behavior in both time and space will be important. We anticipate that the current forecasting formulation extends naturally to sequence-to-sequence problems.
Fox received a Ph.D. in Theoretical Physics from Cambridge University, where he was Senior Wrangler. He is now a Distinguished Professor of Engineering, Computing, and Physics at Indiana University, where he is the director of the Digital Science Center. He previously held positions at Caltech, Syracuse University, and Florida State University after postdoctoral appointments at the Institute for Advanced Study in Princeton, Lawrence Berkeley Laboratory, and Peterhouse, Cambridge. He has supervised the Ph.D. theses of 73 students and published around 1500 papers (550 with at least ten citations) in physics and computing, with an h-index of 83 and over 39,000 citations. He received the High-Performance Parallel and Distributed Computing (HPDC) Achievement Award and, in 2019, the ACM/IEEE-CS Ken Kennedy Award for foundational contributions to parallel computing. He is a Fellow of the APS (Physics) and the ACM (Computing) and works on the interdisciplinary interface between computing and applications. He is involved in several projects to enhance the capabilities of Minority Serving Institutions. He has experience in online education and its use in MOOCs for areas such as Data and Computational Science. He is active in the industry consortium MLPerf.