Keynotes


Yann LeCun

Silver Professor of Computer Science and Neural Science
The Courant Institute of Mathematical Sciences and Center for Neural Science
New York University
http://yann.lecun.com
Title: Learning Feature Hierarchies [Slides]
Abstract:
Intelligent perceptual tasks such as vision and audition require the construction of good internal representations. Machine Learning has been very successful at producing classifiers, but the next big challenge for ML is to devise learning algorithms that can learn features and internal representations automatically.
Theoretical and empirical evidence suggests that the perceptual world is best represented by a multi-stage hierarchy in which features in successive stages are increasingly global, invariant, and abstract. An important question is how to devise "deep learning" methods for multi-stage architectures that can automatically learn feature hierarchies from labeled and unlabeled data.
We will demonstrate the use of deep learning methods, based on unsupervised sparse coding, to train convolutional networks (ConvNets). ConvNets are biologically inspired architectures consisting of multiple stages of filter banks interspersed with non-linear operations and spatial pooling operations.
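For illustration only (this is not material from the talk), a single stage of this filter-bank / non-linearity / pooling pipeline can be sketched in a few lines of NumPy; the kernel size, the tanh non-linearity, and the 2x2 max-pooling are arbitrary choices made for the sketch:

    import numpy as np

    def conv_stage(image, filters, pool=2):
        # One ConvNet stage: filter bank -> non-linearity -> spatial pooling.
        # image: 2-D array (H x W); filters: list of square 2-D kernels.
        k = filters[0].shape[0]
        H, W = image.shape
        maps = []
        for f in filters:
            # Filter bank: slide each kernel over the image
            # (cross-correlation, i.e. convolution up to a kernel flip).
            out = np.empty((H - k + 1, W - k + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + k, j:j + k] * f)
            out = np.tanh(out)  # non-linear operation
            # Spatial pooling: non-overlapping pool x pool max-pooling.
            ph, pw = out.shape[0] // pool, out.shape[1] // pool
            maps.append(out[:ph * pool, :pw * pool]
                        .reshape(ph, pool, pw, pool).max(axis=(1, 3)))
        return maps

    # 8 random 5x5 filters on a 32x32 input -> 8 feature maps of size 14x14.
    stage1 = conv_stage(np.random.rand(32, 32),
                        [np.random.randn(5, 5) for _ in range(8)])

Stacking several such stages, with the later stages operating on the feature maps of the earlier ones, yields the multi-stage hierarchy described above.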
A number of applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a pedestrian detector, a system that recognizes human activities in videos, and a trainable vision system for off-road mobile robot navigation. Specialized hardware architectures that implement these algorithms will also be described.


Bio:
Yann LeCun is Silver Professor of Computer Science and Neural Science at New York University. He is also a Fellow of the NEC Research Institute (now NEC Labs America) in Princeton, NJ. He received a Diplôme d'Ingénieur from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE), Paris in 1983, a Diplôme d'Études Approfondies (DEA) from Université Pierre et Marie Curie, Paris in 1984, and a PhD in Computer Science from the same university in 1987. His PhD thesis was entitled "Modèles connexionnistes de l'apprentissage" (connectionist learning models) and introduced an early version of the back-propagation algorithm for gradient-based machine learning.





Peter A. Flach

Professor of Artificial Intelligence
Intelligent Systems Laboratory
Department of Computer Science University of Bristol
http://www.cs.bris.ac.uk/~flach/
Title: Scale Matters: on the many uses of calibration in machine learning [Slides]

Abstract:
Calibration is the process of adjusting measurements to a standard scale. In machine learning it is most commonly understood in relation to the class probability estimates of a probabilistic classifier: we say that a classifier is well-calibrated if, among all instances receiving a probability estimate p for a particular class, the proportion of instances having the class in question is approximately p. The advantage of a well-calibrated classifier is that near-optimal decision thresholds can be derived directly from the operating condition (class and cost distribution). In this talk I explore various methods for classifier calibration, including the isotonic regression method that relates to ROC analysis. I will discuss how these methods can be applied to single features, resulting in a very general framework in which features carry class information and categorical features can be turned into real-valued ones and vice versa. I will also discuss an alternative notion of calibration whereby a classifier's score quantifies the proportion of positive predictions it makes at that threshold. I will introduce the ROL curve, a close companion of the ROC curve that makes it possible to quantify the loss at a particular predicted positive rate. Rate-calibrated classifiers have an expected loss that is linearly related to AUC, which vindicates AUC as a coherent measure of classification performance (contrary to recent claims in the literature).
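As a small illustration of the calibration definition above (a sketch of mine, not the speaker's code; the function name and the choice of ten bins are arbitrary), one can bin a classifier's probability estimates and compare, within each bin, the mean estimate against the observed fraction of positives:

    import numpy as np

    def reliability(scores, labels, n_bins=10):
        # For a well-calibrated classifier, the mean predicted probability
        # in each bin should match the observed fraction of positives.
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=float)
        # Map each score in [0, 1] to a bin; clip so 1.0 lands in the top bin.
        bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
        for b in range(n_bins):
            in_bin = bins == b
            if in_bin.any():
                print("[%.1f, %.1f): mean score %.3f vs. positive rate %.3f"
                      % (b / n_bins, (b + 1) / n_bins,
                         scores[in_bin].mean(), labels[in_bin].mean()))

    # Labels sampled with P(y=1) = p are well-calibrated by construction,
    # so the two columns printed per bin should roughly agree.
    p = np.random.rand(5000)
    y = (np.random.rand(5000) < p).astype(int)
    reliability(p, y)

Calibration methods such as the isotonic regression mentioned in the abstract fit a non-decreasing map from raw scores to such empirical proportions.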


Bio:
Peter Flach is professor of Artificial Intelligence at the University of Bristol. He has published widely on inductive logic programming, multi-relational data mining, and machine learning. He was PC co-chair of ILP'99 and ECML'01, is on the steering committee of the recent ECML/PKDD conferences, and is a regular PC member of all major machine learning and data mining conferences including ICML, ECML/PKDD, ILP, ICDM, SDM, and PAKDD. Prof Flach is the editor-in-chief of Machine Learning, and serves on the editorial boards of Journal of Machine Learning Research, Journal of Artificial Intelligence Research, and Artificial Intelligence Communications.




Oren Etzioni
Professor of Computer Science & Engineering
University of Washington
http://www.cs.washington.edu/homes/etzioni/

Title: Open Information Extraction at Web Scale [Slides]
Abstract:
Information Extraction (IE) is the task of mapping natural-language sentences to machine-processable representations of the sentences' content. IE is often formulated as a machine-learning problem where an extractor for a particular relation (e.g., seminar speaker) is learned from labeled training examples. My talk will describe Open IE, an approach to scaling IE to the Web where the set of potential relations is not known in advance, making a standard machine-learning approach impossible. I will describe various bootstrapping approaches that enable us to apply machine learning at Web scale.
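To make the task concrete, here is a toy sketch (mine, not the speaker's system) of the Open IE idea: a sentence is mapped to an (argument, relation, argument) tuple in which the relation phrase is read off the sentence itself rather than drawn from a predefined set. Real Open IE systems learn their extractors; the hand-written pattern below is purely illustrative:

    import re

    # arg1 and arg2 are capitalized spans, and the relation phrase is
    # whatever lowercase text links them, so the relation vocabulary is
    # open rather than fixed in advance.
    PATTERN = re.compile(
        r"^(?P<arg1>[A-Z][\w. ]*?)\s+"
        r"(?P<rel>[a-z][\w ]*?)\s+"
        r"(?P<arg2>[A-Z][\w. ]*)\.$")

    def extract(sentence):
        m = PATTERN.match(sentence)
        return (m.group("arg1"), m.group("rel"), m.group("arg2")) if m else None

    print(extract("Einstein was born in Ulm."))
    # -> ('Einstein', 'was born in', 'Ulm')
    print(extract("Paris is the capital of France."))
    # -> ('Paris', 'is the capital of', 'France')

Bootstrapping, as described in the talk, replaces such brittle hand-written patterns with extractors learned from automatically generated examples.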


Bio:
Oren Etzioni is the Washington Research Foundation Entrepreneurship Professor at the University of Washington's Computer Science Department. He received his bachelor's degree in Computer Science from Harvard University in June 1986, where he was the first Harvard student to "major" in Computer Science. Etzioni received his Ph.D. from Carnegie Mellon University in January 1991 and joined the University of Washington's faculty in February 1991, where he is now a Professor of Computer Science. Etzioni received a National Young Investigator Award in 1993 and was selected as an AAAI Fellow a decade later. In 2007, he received the Robert S. Engelmore Memorial Award. He is the founder and director of the University of Washington's Turing Center.
Etzioni is the author of over 100 technical papers in a wide range of conferences including AAAI, ACL, CIDR, COLING, EMNLP, FOCS, HLT, ICML, IJCAI, ISWC, IUI, KDD, KR, SIGIR, and WWW. He is a founder of three companies and a Venture Partner at the Madrona Venture Group. His work has been featured in the New York Times, Wall Street Journal, NPR, SCIENCE, The Economist, TIME Magazine, Business Week, Newsweek, Discover Magazine, Forbes Magazine, Wired, NBC Nightly News, and even Pravda.