Human computation is the study of systems where humans perform a major part of the computation or are an integral part of the overall computational system. With the growth of the web, human computation systems, e.g., Games With A Purpose (the ESP Game), crowdsourcing marketplaces (Amazon Mechanical Turk, oDesk), and identity verification tasks (reCaptcha), can now leverage the abilities of an unprecedented number of people to solve complex problems that are beyond the scope of existing AI algorithms.
The first half of the tutorial is an extended version of the AAAI 2011 tutorial: it highlights the core research questions in human computation, discusses the design of mechanisms, algorithms and interfaces for tackling each of these questions, and outlines emerging topics and recent developments in the field. The second half provides a broader perspective on human computation by drawing connections to other models of computation that involve humans in the loop, such as interactive machine learning, learning by demonstration/imitation, mixed-initiative learning, complementary computing, and active learning from human oracles. We expect participants to leave with a bird's-eye view of the research landscape, as well as some tools to begin their own investigations.
Note: All attendees will receive a free copy of Edith's book, Human Computation.
Edith Law is a CRCS postdoctoral fellow at the School of Engineering and Applied Sciences at Harvard University. She graduated from Carnegie Mellon University in 2012 with a Ph.D. in Machine Learning, where she studied human computation systems that harness the joint efforts of machines and humans.
She was a Microsoft Graduate Research Fellow, co-authored the book "Human Computation" in the Morgan & Claypool Synthesis Lectures on Artificial Intelligence and Machine Learning, co-organized the Human Computation Workshop (HCOMP) series at KDD and AAAI from 2009 to 2012, and helped create the first AAAI International Conference on Human Computation. Her work on games with a purpose and large-scale collaborative planning has received best paper honorable mentions at CHI.
Automatically building an Encyclopedia of the World, one that contains not only high-level information of the kind found in Wikipedia but also specific facts such as "Who appeared in a concert at the Hollywood Bowl last night?" or "What causes tumors to shrink?", is a challenging problem that remains unsolved despite many efforts. This tutorial will provide an overview of state-of-the-art and recent methods for information gathering, sifting and organization that can rapidly, accurately and completely cover any area of interest by mining unstructured texts on the Web and social media. The algorithms automatically acquire semantic classes, instances and a diverse set of relations that interlink them. The tutorial will also teach the audience about the strengths, limitations and challenges of the field, and will touch upon natural language applications that use the extracted knowledge.
Zornitsa Kozareva is a Research Assistant Professor at the University of Southern California and a Research Scientist in the Natural Language group at the Information Sciences Institute. She received her PhD cum laude from the University of Alicante, Spain. Her research interests lie in Web-based knowledge acquisition, text mining, lexical semantics, ontology population and multilingual information extraction. Zornitsa has co-organized four SemEval scientific challenges on topics such as paraphrases, sentiment analysis, causality detection and relation identification. She led the team that won the answer validation challenge (AVE-2006) for French and Italian, and was a member of the team that won the Spanish Geographic Information Retrieval (GeoClef-2006) challenge.