Users: Can’t Work With Them, Can’t Work Without Them?
Alistair Moffat
Full Professor, The University of Melbourne, Australia
Thursday, July 7, 2:30 PM
Online
Shane Culpepper
If we could design the ideal IR “effectiveness” experiment (as distinct from an IR “efficiency” experiment), what would it look like? It would probably be a lab-based observational study involving multiple search systems masked behind a uniform interface, and with hundreds (or thousands) of users each progressing some “real” search activity they were interested in. And we’d plan to (non-intrusively, somehow) capture per-snippet, per-document, per-SERP, and per-session annotations and satisfaction responses. The collected data could then be compared against a range of measured “task completion quality” indicators, and also against search effectiveness metric scores computed from the elements contained in the SERPs that were served by the systems.

That’s a tremendously big ask! So we often use offline evaluation techniques instead, employing test collections, static qrels sets, and effectiveness metrics. We abstract the user into a deterministic evaluation script, supposing for pragmatic reasons that we know what query they would issue, and at the same time assuming that we can apply an effectiveness metric to calculate how much usefulness (or satisfaction) they will derive from any given SERP. The great advantage of this approach is that, aside from the process of collecting the qrels, it is free of the need for users, meaning that it is repeatable. Indeed, we often do repeat, iterating to set parameters (and to rectify programming errors). Then, once metric scores have been computed, we carry out one or more paired statistical tests and draw conclusions as to relative system effectiveness.
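As a concrete illustration of that offline pipeline, here is a minimal sketch assuming a standard setup: per-topic qrels, two systems’ ranked lists, rank-biased precision (RBP, Moffat and Zobel’s metric) as the effectiveness measure, and a paired t-test over per-topic scores. The qrels, runs, and persistence value p = 0.8 are invented for illustration, not taken from the talk.

```python
from scipy import stats

def rbp(ranking, relevant, p=0.8):
    """Rank-biased precision: (1 - p) * sum over ranks i of rel_i * p**(i - 1)."""
    return (1 - p) * sum(p ** i for i, doc in enumerate(ranking)
                         if doc in relevant)

# Toy qrels (topic -> set of relevant documents) and two runs (topic -> ranking).
qrels = {"q1": {"d2", "d5"}, "q2": {"d1"}}
run_a = {"q1": ["d2", "d7", "d5"], "q2": ["d1", "d9"]}
run_b = {"q1": ["d7", "d2", "d5"], "q2": ["d9", "d3", "d1"]}

# The "deterministic evaluation script": one metric score per topic per system.
scores_a = [rbp(run_a[q], qrels[q]) for q in qrels]
scores_b = [rbp(run_b[q], qrels[q]) for q in qrels]

# Paired statistical test over the per-topic scores.
t, pval = stats.ttest_rel(scores_a, scores_b)
print(f"mean RBP: A={sum(scores_a)/len(scores_a):.3f}, "
      f"B={sum(scores_b)/len(scores_b):.3f}, p-value={pval:.3f}")
```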
Alistair Moffat has been a faculty member at the University of Melbourne since 1986, and has been active in the area of information retrieval since 1990. During that period he has coauthored two books (Managing Gigabytes, 1994 and 1999; Compression and Coding Algorithms, 2002) and more than 250 research papers on topics spanning text and integer compression, index representations and structures, processing algorithms for top-k queries, and mechanisms for retrieval effectiveness evaluation; a body of work for which he was inducted into the SIGIR Academy in 2021. During his four decades as an academic, Alistair has also introduced more than 25,000 undergraduates to the joys and frustrations of programming.
Searching for a New and Better Future of Work
Jaime Teevan
Chief Scientist & Technical Fellow, Microsoft
Tuesday, July 12, 9:30 AM
Room: «Teatro Francisco de Rojas», Floor: 2. Overflow Room: «Sala de Columnas»
Ben Carterette
Search engines were one of the first intelligent cloud-based applications that people used to get things done, and they have since become an extremely important productivity tool. This is in part because much of what a person is doing when they search is thinking. Search engines do not merely support that thinking, however; they can actually shape it. For example, some search results are more likely than others to spur learning and to influence a person’s future queries. This means that the new approaches we develop as information retrieval researchers can actively shape the future of work.
The world is now in the middle of the most significant change to work practices in a generation, and it is one that will make search technology even more central to work in the years to come. For millennia, space was the primary technology that people used to get things done. The coming Hybrid Work Era, however, will be shaped by digital technology. The recent rapid shift to remote work significantly accelerated the digital transformation already underway at many companies, and new types of work-related data are now being generated at an unprecedented rate. For example, the use of meeting recordings in Microsoft Stream more than doubled between March 2020 and February 2022.
Knowledge exists in this newly captured data, but figuring out how to make sense of it is overwhelming. The information retrieval community knows how to process large amounts of data, build usable intelligent systems, and learn from behavioral data, and we have a unique opportunity right now to apply this expertise to new corpora and in new scenarios in a meaningful way. In this talk I will give an overview of what research tells us about emerging work practices, and explore how the SIGIR community can build on these findings to help create a new – and better – future of work.
Jaime Teevan is Chief Scientist and Technical Fellow at Microsoft, where she is responsible for driving research-backed innovation in the company’s core products. Jaime is an advocate for finding smarter ways for people to make the most of their time, and believes in the positive impact that breaks and recovery have on productivity. She leads Microsoft's New Future of Work initiative, which brings researchers from Microsoft, LinkedIn, and GitHub together to study how the pandemic has changed the way people work. Previously she was Technical Advisor to CEO Satya Nadella and led the Productivity team at Microsoft Research. Jaime was recently inducted into the SIGIR and SIGCHI Academies, and has received numerous awards for her research, including the TR35, Borg Early Career, Karen Spärck Jones, and SIGIR Test of Time awards. She holds a Ph.D. in AI from MIT and a B.S. from Yale, and is affiliate faculty at the University of Washington.
Few-shot Information Extraction is Here: Pre-train, Prompt and Entail
Eneko Agirre
Full Professor, HiTZ Center, University of the Basque Country, Spain
Wednesday, July 13, 1:30 PM
Room: «Teatro Francisco de Rojas», Floor: 2. Overflow Room: «Sala de Columnas»
Julio Gonzalo
Deep Learning has made tremendous progress in Natural Language Processing (NLP), where large pre-trained language models (PLMs) fine-tuned on the target task have become the predominant tool. More recently, in a process called prompting, NLP tasks are rephrased as natural language text, allowing us to better exploit the linguistic knowledge learned by PLMs and resulting in significant improvements. Still, PLMs have limited inference ability. In the Textual Entailment task, systems need to decide whether the truth of a textual hypothesis follows from a given premise text. Manually annotated entailment datasets covering multiple inference phenomena have been used to infuse inference capabilities into PLMs.
This talk will review these recent developments, and will present an approach that combines prompts and PLMs fine-tuned for textual entailment that yields state-of-the-art results on Information Extraction (IE) using only a small fraction of the annotations. The approach has additional benefits, like the ability to learn from different schemas and inference datasets. These developments enable a new paradigm for IE where the expert can define the domain-specific schema using natural language and directly run those specifications, annotating a handful of examples in the process. A user interface based on this new paradigm will also be presented.
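To make the paradigm concrete, here is a minimal sketch, not the speaker’s actual system, of entailment-based relation extraction using an off-the-shelf NLI model from the Hugging Face transformers library; the example sentence, candidate relations, and verbalization templates are hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Off-the-shelf NLI (textual entailment) model; any MNLI-trained PLM works.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Marie Curie was born in Warsaw."  # the sentence to extract from
# Each candidate relation is verbalized as a natural-language hypothesis
# (hypothetical templates, written by the schema designer in plain English).
hypotheses = {
    "born_in": "Marie Curie was born in Warsaw.",
    "works_for": "Marie Curie works for Warsaw.",
}

for relation, hypothesis in hypotheses.items():
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    p_entail = logits.softmax(dim=-1)[0, 2].item()
    print(f"{relation}: P(entailment) = {p_entail:.3f}")
```

Predicting the relation whose verbalization is most strongly entailed (or no relation if none passes a threshold) is what allows a handful of templates and annotated examples to stand in for thousands of labeled instances.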
Beyond IE, inference capabilities could be acquired from, and applied to, other tasks, opening a new research avenue where entailment and downstream task performance improve in tandem.
Eneko Agirre is a Professor of Informatics and Head of the HiTZ Basque Center for Language Technology at the University of the Basque Country, UPV/EHU, in San Sebastian, Spain. He has been active in Natural Language Processing and Computational Linguistics for decades. He received the Spanish Informatics Research Award in 2021, and is one of the 74 fellows of the Association for Computational Linguistics (ACL). He was President of the ACL's SIGLEX, a member of the editorial boards of Computational Linguistics and the Journal of Artificial Intelligence Research, and an action editor for the Transactions of the ACL. He is co-founder of the Joint Conference on Lexical and Computational Semantics (*SEM). He is the recipient of three Google Research Awards and five best paper awards and nominations. Dissertations under his supervision received best PhD awards from EurAI, the Spanish NLP society, and the Spanish Informatics Scientific Association. He has over 200 publications across a wide range of NLP and AI topics. His research spans topics such as Word Sense Disambiguation, Semantic Textual Similarity, Unsupervised Machine Translation, and resources for Basque. Most recently his research focuses on inference and deep learning language models.
Intelligent Conversational Agents for Ambient Computing
Ruhi Sarikaya
Director, Alexa Intelligent Decisions organization, Amazon
Thursday, July 14, 2:30 PM
Room: «Teatro Francisco de Rojas», Floor: 2. Overflow Room: «Sala de Columnas»
Gabriella Kazai
We are in the midst of an AI revolution. Three primary disruptive changes set off this revolution: 1) the increase in compute power, 2) the mobile internet, and 3) advances in deep learning. The next decade is expected to be about the proliferation of Internet-of-Things (IoT) devices and sensors, which will generate exponentially larger amounts of data to reason over and pave the way for ambient computing. This will also give rise to new forms of interaction patterns with these systems. Users will have to interact with these systems under increasingly rich context and in real time. Conversational AI has a critical role to play in this revolution, but only if it delivers on its promise of enabling natural, frictionless, and personalized interactions in any context the user is in, while hiding the complexity of these systems through ambient intelligence. However, current commercial conversational AI systems are trained primarily with a supervised learning paradigm, which is difficult, if not impossible, to scale by manually annotating data for increasingly complex sets of contextual conditions. Inherent ambiguity in natural language further complicates the problem. We need to devise new forms of learning paradigms and frameworks that will scale to this complexity. In this talk, we present some early steps we are taking with Alexa, Amazon’s Conversational AI system, to move from supervised learning to self-learning methods, where the AI relies on customer interactions for supervision on our journey to ambient intelligence.
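As a purely illustrative aside, one commonly described source of such interaction-based supervision is the user’s own rephrase: when a request fails and a quick reformulation succeeds, the pair can serve as a correction label without manual annotation. The sketch below is hypothetical; the field names, success signal, and 30-second threshold are assumptions, not Amazon’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    user_id: str
    timestamp: float   # seconds since epoch
    utterance: str
    succeeded: bool    # e.g., task completed; no barge-in or "cancel"

def mine_rephrase_pairs(turns, max_gap=30.0):
    """Yield (failed utterance, successful rephrase) supervision pairs from a
    time-ordered log of turns: a toy stand-in for learning from customer
    interactions instead of manually annotated data."""
    pairs = []
    for prev, cur in zip(turns, turns[1:]):
        if (prev.user_id == cur.user_id
                and cur.timestamp - prev.timestamp <= max_gap
                and not prev.succeeded and cur.succeeded):
            pairs.append((prev.utterance, cur.utterance))
    return pairs
```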
Ruhi Sarikaya has been a Director at Amazon Alexa since 2016. He built and leads the Intelligent Decisions organization, which is one of the three pillars of Alexa AI. With his team, he has been building core AI capabilities around ranking, relevance, natural language understanding, dialog management, contextual understanding, personalization, self-learning, metrics, and analytics for Alexa. Prior to that, he was a principal science manager and the founder of the language understanding and dialog systems group at Microsoft between 2011 and 2016. His group built the language understanding and dialog management capabilities of Cortana, Xbox One, and the underlying platform. Before Microsoft, he was a research staff member and team lead in the Human Language Technologies Group at the IBM T.J. Watson Research Center for ten years. He received his BS degree from Bilkent University, MS degree from Clemson University, and Ph.D. degree from Duke University, all in electrical and computer engineering. He has published over 130 technical papers in refereed journal and conference proceedings and is the inventor of over 80 issued or pending patents. Dr. Sarikaya has served on the IEEE SLTC, as general co-chair of IEEE SLT'12, as publicity chair of IEEE ASRU'05, and as an associate editor of IEEE Transactions on Audio, Speech, and Language Processing and IEEE Signal Processing Letters. He has given keynotes at major AI, Web, and language technology conferences. Dr. Sarikaya is an IEEE Fellow.