

When: January 18, 2022 @ 12:00 pm – 12:45 pm EST


Abstract: A wide variety of problems in machine learning involve sequence-to-sequence transformations, i.e., nonlinear operators that map an input sequence to an output sequence. Traditionally, such input-output maps have been modeled using discrete-time recurrent neural nets. However, there has been a recent shift in sequence-to-sequence modeling from recurrent network architectures to autoregressive convolutional network architectures. These temporal convolutional nets (TCNs) allow for easily parallelizable training and operation, while still achieving competitive performance. In this talk, I will discuss TCNs from the perspective of universal approximation for causal and time-invariant input-output maps that have approximately finite memory. I will present quantitative approximation rates for deep TCNs with rectified linear unit (ReLU) activation functions. While sounding rather modern, these results can be traced back to seminal work by Irwin Sandberg, which I will review and place in the current context. By way of examples, I will show how to apply these results to input-output maps with incrementally stable nonlinear state-space realizations. Parts of this talk are based on joint work with Joshua Hanson.
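To make the causality constraint in the abstract concrete, here is a minimal sketch (not from the talk) of a single TCN-style layer in NumPy: a causal 1-D convolution, where the output at time t depends only on inputs up to time t, followed by a ReLU activation. The kernel values and input sequence are arbitrary illustrative choices.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: y[t] depends only on x[0..t].

    Causality is enforced by left-padding the input with k-1 zeros,
    so the filter window never looks ahead of the current time step.
    """
    k = len(kernel)
    x_padded = np.concatenate([np.zeros(k - 1), x])
    # y[t] = sum_j kernel[j] * x[t - j], i.e. kernel[0] weights the
    # current sample and kernel[j] weights the sample j steps in the past.
    return np.array([np.dot(kernel[::-1], x_padded[t:t + k])
                     for t in range(len(x))])

def relu(z):
    return np.maximum(z, 0.0)

# One TCN-style layer: causal convolution followed by ReLU.
x = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, 0.25])
y = relu(causal_conv1d(x, kernel))
print(y)  # [0.5, 1.25, 2.0, 2.75]
```

Because every output depends on a fixed-length window of past inputs, all time steps can be computed in parallel during training, in contrast with the step-by-step recurrence of an RNN; depth and dilation then extend the effective memory of the map.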

Bio: Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions at Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Associate Professor with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science. Prof. Raginsky’s interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, and artificial intelligence.
