TRIPODS Winter School & Workshop: Jason Lee

Posted January 4, 2021

When:
January 13, 2021 @ 1:30 pm – 2:15 pm EST

Title: Provable Representation Learning in Deep Learning

Abstract: Deep representation learning seeks to learn a data representation that transfers to downstream tasks. In this talk, we study two forms of representation learning: supervised pre-training and self-supervised learning. Supervised pre-training uses a large labeled source dataset to learn a representation, then trains a classifier on top of that representation. We prove that supervised pre-training can pool the data from all source tasks to learn a good representation, which transfers to downstream tasks with few labeled examples. Self-supervised learning instead creates auxiliary pretext tasks that require no labeled data. These pretext tasks are constructed solely from the input features, such as predicting a missing image patch, recovering the color channels of an image, or predicting missing words. Surprisingly, predicting this already-known information yields a representation that is effective for downstream tasks. We prove that, under a conditional independence assumption, self-supervised learning provably learns useful representations.
