After the tutorials, some presenters made their materials available; these include:
Morning of 7th July 2014
Please note: This is part 1 of a 2-part tutorial. The companion session will run in the afternoon. Each session will be self-contained; you can therefore register for this session independently, or register for both.
All research projects begin with a goal, for instance to describe search behavior, to predict when a person will enter a second query, or to show that a particular IR system performs better than another. The research goal suggests different research approaches, such as a field study, lab study, or an online experiment. In this tutorial, we will provide an overview of different goals of research, common evaluation approaches used to address these goals, and constraints that each approach entails, in particular providing an understanding of the benefits and limitations of these research approaches.
This first part of a two-part tutorial will investigate experimental approaches to research, specifically controlled laboratory studies (smaller-scale user studies) and field experiments (i.e., A/B testing). These types of studies seek to describe, predict and explain, and are essential for explanatory research that seeks to evaluate causal models. In both laboratory and field experiments, the groups studied can be characterized as ‘treatment groups’ because they are constructed by the researcher, who brings participants together so that their behavior can be studied under conditions that the researcher manipulates or creates. Using our own research as anchors, in addition to illustrations from the literature, we will describe the available experimental approaches, revealing the difficult choices and trade-offs researchers make when designing and conducting IR studies, including the benefits and limitations of each.
Diane Kelly is an Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill. Her research and teaching interests are in interactive information search and retrieval, information search behavior, and research methods. Kelly is the recipient of the 2013 British Computer Society’s IRSG Karen Spärck Jones Award, the 2009 ASIST/Thomson Reuters Outstanding Information Science Teacher Award and the 2007 SILS Outstanding Teacher of the Year Award. She is the current ACM SIGIR Treasurer. Kelly received a Ph.D. from Rutgers University and B.A. from the University of Alabama.

Filip Radlinski is an Applied Researcher at Microsoft in Cambridge, UK contributing to the Bing search engine. He is also an honorary lecturer at University College London, UK. His research interests include information retrieval, machine learning, online evaluation and understanding user behavior. He received the best paper award at WSDM 2013 and KDD 2005. He previously presented tutorials at SIGIR 2011 and ECIR 2013 on Practical Online Retrieval Evaluation. Radlinski received a Ph.D. in computer science from Cornell University and a B.S. from the Australian National University.

Jaime Teevan is a Senior Researcher at Microsoft Research and an Affiliate Assistant Professor at the University of Washington. Working at the intersection of information retrieval, human computer interaction, and social media, she studies people’s information seeking activities. Jaime was named a Technology Review (TR35) Young Innovator for her research on personalized search and developed the first personalized search algorithm used by Bing. She has presented several tutorials on large-scale log analysis, including at HCIC 2010 and CHI 2011, and wrote a book chapter on the topic. Jaime received a Ph.D. from MIT and a B.S. from Yale University.
Morning of 7th July 2014
Large-scale web search engines rely on massive compute infrastructures to be able to cope with the continuous growth of the Web and their user bases. In such search engines, achieving scalability and efficiency requires making careful architectural design choices while devising algorithmic performance optimizations. Unfortunately, most details about the internal functioning of commercial web search engines remain undisclosed due to their financial value and the high level of competition in the search market. The main objective of this tutorial is to provide an overview of the fundamental scalability and efficiency challenges in commercial web search engines, bridging the existing gap between the industry and academia.
Berkant Barla Cambazoglu received his BS, MS, and PhD degrees, all in computer engineering, from the Computer Engineering Department of Bilkent University in 1997, 2000, and 2006, respectively. He then worked as a postdoctoral researcher in the Biomedical Informatics Department of the Ohio State University. He is currently employed as a senior researcher at Yahoo Labs, where he heads the web retrieval group. He has published many papers in prestigious journals, including IEEE TPDS, JPDC, Inf. Syst., ACM TWEB, and IP&M, as well as in top-tier conferences such as SIGIR, CIKM, WSDM, WWW, and KDD.

Ricardo Baeza-Yates is VP of Yahoo Labs for Europe and Latin America, leading the labs in Barcelona, Spain and Santiago, Chile. Until 2005, he was the director of the Center for Web Research at the Department of Computer Science of the Engineering School of the University of Chile, and ICREA Professor at the Dept. of Information and Communication Technologies of University Pompeu Fabra in Barcelona, Spain. He is co-author of the bestselling textbook Modern Information Retrieval (Addison-Wesley), first published in 1999 with a second edition in 2011, as well as co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 400 other publications. He has been PC Chair of the most important conferences in the field of Web Search and Web Mining. He has given tutorials at most major conferences many times, including SIGIR, WWW and VLDB. He has won several awards and is both an ACM and an IEEE Fellow.
Morning of 7th July 2014
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference), and the increased practice of statistical hypothesis testing to determine whether measured improvements can be ascribed to something other than random chance. Together these create a very useful standard for reviewers, program committees, and journal editors; work in information retrieval (IR) increasingly cannot be published unless it has been evaluated using a well-constructed test collection and shown to produce a statistically significant improvement over a good baseline.
But, as the saying goes, any tool sharp enough to be useful is also sharp enough to be dangerous. Statistical tests of significance are widely misunderstood. Most researchers and developers treat them as a "black box": evaluation results go in and a p-value comes out. Because significance is such an important factor in determining what research directions to explore and what is published, using p-values obtained without thought can have consequences for everyone doing research in IR. Ioannidis has argued that the main consequence in the biomedical sciences is that most published research findings are false; could that be the case in IR as well?
This tutorial will help researchers and developers gain a better understanding of how these tests work and how they should be interpreted, so that they can use them more effectively in their day-to-day work and better interpret them when reading the work of others. It is aimed at both new and experienced researchers and practitioners in IR: anyone who wishes to perform significance tests and desires a deeper understanding of what they are and how to interpret the information they provide.
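For readers who want a concrete anchor before the tutorial, the sketch below (our illustration, not material from the tutorial) runs a paired two-sided t-test over made-up per-topic scores for a baseline and an "improved" system; the score values and the 0.05 threshold are assumptions chosen purely for the example.

```python
# Illustrative sketch (not from the tutorial): a paired t-test comparing two
# systems' per-topic effectiveness scores, the most common significance test
# reported in IR papers. The scores below are made-up example values.
from scipy import stats

baseline = [0.21, 0.35, 0.10, 0.47, 0.28, 0.33, 0.19, 0.40]  # e.g. AP per topic
improved = [0.25, 0.34, 0.15, 0.50, 0.31, 0.38, 0.22, 0.44]

# Paired (dependent-samples) t-test over the per-topic differences.
t_stat, p_value = stats.ttest_rel(improved, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# The p-value is the probability of seeing a difference at least this large
# if the two systems were in fact equally effective; it is *not* the
# probability that the improvement is real, a misreading the tutorial targets.
if p_value < 0.05:  # conventional, but arbitrary, threshold
    print("difference is statistically significant at alpha = 0.05")
```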
Ben Carterette is an Assistant Professor of Computer and Information Sciences at the University of Delaware in Newark, Delaware, USA. His research primarily focuses on evaluation in Information Retrieval, including test collection construction, evaluation measures, and statistical testing. He has published over 60 papers in venues such as ACM TOIS, SIGIR, CIKM, WSDM, ECIR, and ICTIR, winning three Best Paper Awards for his work on evaluation. In addition, he has co-organized four workshops on IR evaluation and co-coordinated five TREC tracks. Most recently he served as General Co-Chair for WSDM 2014.
Morning of 7th July 2014
Speech media forms an increasingly important component of online archives, ranging from professional broadcast content to recordings of lectures and enterprise materials to personal archives. Speech search aims to enable effective search of these recordings. This tutorial will introduce the background to the challenges of speech search, the speech processing and interaction technologies underlying effective speech search systems, and examples of prototype systems. The tutorial will examine the reasons that speech search does not reduce to applying information retrieval to speech recognition transcripts, including the relationship between textual and spoken content and its impact on the technologies of speech search. The tutorial will provide background of interest to researchers beginning work in this area, but also to more experienced researchers working on projects in which speech search is a component, and to faculty members or those in industry who wish to understand the topic better in order to advise students or manage projects from a more informed perspective.
Gareth Jones is a Principal Investigator in the CNGL Centre for Global Intelligent Content and a faculty member of the School of Computing at Dublin City University (DCU). He has been working on projects involving speech search for more than 20 years, beginning with the groundbreaking Video Mail Retrieval using Voice project at the University of Cambridge, which received Best Paper Awards at ACM SIGIR and ACM Multimedia. More recently he co-authored a volume on Spoken Content Retrieval in the Foundations and Trends in Information Retrieval series. In 2010, he co-founded the MediaEval Multimedia Evaluation Benchmark. Within MediaEval he has been co-coordinator of tasks in Rich Speech Retrieval and Search and Hyperlinking for multimedia content, including internet archives and broadcast television materials. He is also a co-organiser of the NTCIR 11 Spoken Query and Spoken Document Retrieval task.
Morning of 7th July 2014
Axiomatic approaches have recently been shown to be an effective way of modeling relevance, diagnosing deficiencies of information retrieval (IR) models, and improving them. The basic idea is to model relevance more directly with formally defined mathematical constraints on retrieval functions. These constraints enable analytical comparison of retrieval functions to assess their effectiveness and provide guidance on developing more effective retrieval functions. While existing work has mostly explored this approach for optimizing retrieval models, the basic idea of axiomatic analysis is quite general and can potentially be useful for optimizing models and methods in many other problem domains.
The tutorial aims to promote axiomatic thinking: a systematic way to think about heuristics, identify the weaknesses of existing methods, and optimize those methods accordingly. We will introduce the basic idea and methodology of applying axiomatic thinking to develop effective retrieval models, summarize the research work done in this area, and discuss promising future research challenges and opportunities.
The tutorial should appeal to those who work on ranking problems in general and those who are interested in applying axiomatic analysis to optimize specific methods in many IR applications. Attendees will be assumed to know the basic concepts of IR.
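As a small, hedged illustration of axiomatic analysis (not the tutorial's own code), the sketch below checks a term-frequency constraint in the spirit of TFC1, that adding occurrences of a query term should not lower a document's score, against a standard BM25 term-scoring function; the parameter values and collection statistics are assumed toy numbers.

```python
# Sketch (illustrative only): checking a TFC1-style term-frequency constraint
# against a simple BM25 scoring function with assumed parameters.
import math

def bm25_term_score(tf, doc_len, avg_doc_len, df, num_docs, k1=1.2, b=0.75):
    """BM25 contribution of a single query term: idf times a saturating tf factor."""
    idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

def satisfies_tfc1(score_fn, doc_len=100, avg_doc_len=100, df=50, num_docs=10_000):
    """TFC1: with everything else fixed, more occurrences of a query term
    should never decrease the score."""
    scores = [score_fn(tf, doc_len, avg_doc_len, df, num_docs) for tf in range(1, 20)]
    return all(later >= earlier for earlier, later in zip(scores, scores[1:]))

print("BM25 satisfies TFC1 on this toy setting:", satisfies_tfc1(bm25_term_score))
```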
Hui Fang is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Delaware (UD), where she is also affiliated with the Department of Computer and Information Sciences, the Institute for Financial Services Analytics, and the Center for Bioinformatics and Computational Biology. She received her Ph.D. degree from the University of Illinois at Urbana-Champaign in 2007. Her primary research interest is information retrieval, with a focus on axiomatic retrieval models, enterprise search, search diversification and efficiency. She received the ACM SIGIR 2004 Best Paper Award, HP Labs Innovation Research Awards, and the College of Engineering Excellence in Teaching Award at UD.

ChengXiang Zhai is a Professor and Willett Faculty Scholar of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), where he is also affiliated with the Graduate School of Library and Information Science, the Institute for Genomic Biology, and the Department of Statistics. His research interests are in information retrieval, text mining, and their applications. He is an ACM Distinguished Scientist and has received numerous awards, including multiple best paper awards, an Alfred P. Sloan Research Fellowship, the Presidential Early Career Award for Scientists and Engineers (PECASE), and the Rose Award for Teaching Excellence from the College of Engineering at UIUC.
Afternoon of 7th July 2014
Please note: This is part 2 of a 2-part tutorial. The companion session will run in the morning. Each session will be self-contained; you can therefore register for this session independently, or register for both.
All research projects begin with a goal, for instance to describe search behavior, to predict when a person will enter a second query, or to show that a particular IR system performs better than another. The research goal suggests different research approaches, such as a field study, lab study, or an online experiment. In this tutorial, we will provide an overview of different goals of research, common evaluation approaches used to address these goals, and constraints that each approach entails, in particular providing an understanding of the benefits and limitations of these research approaches.
This second part of a two-part tutorial will investigate observational approaches to research, specifically field observations (where people know they are being studied) and large-scale log analysis. These types of studies primarily focus on description and prediction, and emphasize naturalness; as such, they focus on real groups of people in natural settings and seek to understand phenomena in context. Using our own research as anchors, in addition to illustrations from the literature, we will describe the available observational approaches, revealing the difficult choices and trade-offs researchers make when designing and conducting IR studies, including benefits and limitations of each.
Diane Kelly is an Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill. Her research and teaching interests are in interactive information search and retrieval, information search behavior, and research methods. Kelly is the recipient of the 2013 British Computer Society’s IRSG Karen Spärck Jones Award, the 2009 ASIST/Thomson Reuters Outstanding Information Science Teacher Award and the 2007 SILS Outstanding Teacher of the Year Award. She is the current ACM SIGIR Treasurer. Kelly received a Ph.D. from Rutgers University and B.A. from the University of Alabama.

Filip Radlinski is an Applied Researcher at Microsoft in Cambridge, UK contributing to the Bing search engine. He is also an honorary lecturer at University College London, UK. His research interests include information retrieval, machine learning, online evaluation and understanding user behavior. He received the best paper award at WSDM 2013 and KDD 2005. He previously presented tutorials at SIGIR 2011 and ECIR 2013 on Practical Online Retrieval Evaluation. Radlinski received a Ph.D. in computer science from Cornell University and a B.S. from the Australian National University.

Jaime Teevan is a Senior Researcher at Microsoft Research and an Affiliate Assistant Professor at the University of Washington. Working at the intersection of information retrieval, human computer interaction, and social media, she studies people’s information seeking activities. Jaime was named a Technology Review (TR35) Young Innovator for her research on personalized search and developed the first personalized search algorithm used by Bing. She has presented several tutorials on large-scale log analysis, including at HCIC 2010 and CHI 2011, and wrote a book chapter on the topic. Jaime received a Ph.D. from MIT and a B.S. from Yale University.
Afternoon of 7th July 2014
Evaluation metrics are not merely a tool to assess and compare Information Retrieval systems. In the space of solutions to a problem, they work like a GPS that tells researchers where the final destination is, providing the operational definition of what systems should do. Remarkably, IR researchers can choose among a set of over one hundred metrics, all pointing at different places on the map. And a wrong choice may mean falling off a cliff.
In this tutorial we will present, review, and compare the most popular evaluation metrics for some of the most salient information-related tasks, covering: (i) Information Retrieval, (ii) Clustering, and (iii) Filtering. The tutorial will place special emphasis on the specification of constraints that suitable metrics should satisfy in each of the three tasks, and on the systematic comparison of metrics according to how well they satisfy such constraints. This comparison provides criteria for selecting the most adequate metric or set of metrics for each specific information access task. The last part of the tutorial will investigate the challenge of combining and weighting metrics.
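To make the "different metrics point at different places on the map" point concrete, here is a small illustrative sketch (ours, with made-up relevance labels) in which Precision@3 and Average Precision disagree about which of two rankings is better.

```python
# Illustrative sketch: two common IR metrics can disagree about which
# ranking is better. Relevance labels (1 = relevant) are made-up examples.
def precision_at_k(rels, k):
    """Fraction of the top-k results that are relevant."""
    return sum(rels[:k]) / k

def average_precision(rels):
    """Mean of the precision values at each rank where a relevant document appears."""
    hits, precisions = 0, []
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

run_a = [1, 0, 0, 0, 1, 1, 1, 0]   # relevant items spread through the ranking
run_b = [0, 1, 1, 0, 0, 0, 0, 1]   # fewer relevant items, but more near the top

for name, run in [("A", run_a), ("B", run_b)]:
    print(name, "P@3 =", round(precision_at_k(run, 3), 3),
          "AP =", round(average_precision(run), 3))
# P@3 prefers run B, while AP prefers run A.
```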
Enrique Amigo (UNED, Madrid, Spain) is an associate professor at UNED and a member of the nlp.uned.es research group. He has published several papers in venues such as SIGIR, ACL, EMNLP, the Journal of Artificial Intelligence Research and the Information Retrieval Journal on evaluation methodologies and metrics for Information Retrieval, Text Summarization, Machine Translation, Text Clustering, Document Filtering, and related tasks. He is currently co-organizing RepLab 2014, a competitive evaluation exercise for Online Reputation Management Systems.

Julio Gonzalo (UNED, Madrid, Spain) is head of nlp.uned.es, the UNED research group in Natural Language Processing and Information Retrieval. He was recently CLEF 2011 general co-chair, an area chair for EACL 2012 and a senior PC member of ECIR 2014. His research interests include Online Reputation Management technologies, Evaluation Methodologies and Metrics in Information Access, Entity-Oriented and Semantic Search, and Cross-Language and Interactive IR.

Stefano Mizzaro has been an associate professor at Udine University since 2006, and in 2014 he is spending his sabbatical at RMIT in Melbourne. Although, broadly speaking, his current research interests include IR, digital libraries, and mobile contextual information access, he has been focusing on IR evaluation for the last 20 years. He has worked on foundations, user evaluation, novel effectiveness metrics, and analysis of test collection data. He has published almost 100 refereed papers, several as a single author, has received two international best paper awards, and has had an active role in several research projects.
Afternoon of 7th July 2014
http://www.dynamic-ir-modeling.org/
Dynamic aspects of Information Retrieval (IR), including changes found in data, users and systems, are increasingly being utilized in search engines and information filtering systems. Examples include large datasets containing sequential data that capture document dynamics, and modern IR systems that observe user dynamics through interactivity. Existing IR techniques are limited in their ability to optimize over changes, learn with a minimal computational footprint, and be responsive and adaptive. This tutorial will provide a comprehensive and up-to-date introduction to Dynamic Information Retrieval Modeling, the statistical modeling of IR systems that can adapt to change. It is a natural follow-up to previous statistical IR modeling tutorials, with a fresh look at state-of-the-art dynamic retrieval models and their applications, including session search and online advertising. The tutorial will cover techniques ranging from classic relevance feedback to the latest applications of partially observable Markov decision processes (POMDPs), along with a handful of useful algorithms and tools for solving IR problems that incorporate dynamics.
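As one hedged, minimal example of the "adapt over interactions" idea behind dynamic IR models (our illustration, not the tutorial's own material), the sketch below runs an epsilon-greedy multi-armed bandit that learns, from simulated click feedback with assumed click probabilities, which of three candidate rankings to prefer.

```python
# Sketch (illustrative assumptions only): epsilon-greedy bandit choosing among
# candidate rankings based on simulated click feedback, one simple instance of
# learning from user interaction over time.
import random

random.seed(0)
true_click_prob = [0.10, 0.14, 0.08]   # assumed, unknown to the learner
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]               # running mean reward per candidate
epsilon = 0.1

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(len(values))                     # explore
    else:
        arm = max(range(len(values)), key=values.__getitem__)   # exploit
    reward = 1.0 if random.random() < true_click_prob[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]          # incremental mean

print("estimated click rates:", [round(v, 3) for v in values])
print("candidate chosen most often:", counts.index(max(counts)))
```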
Dr. Grace Hui Yang is an Assistant Professor in the Department of Computer Science at Georgetown University. Grace's current research interests include session search, search engine evaluation, privacy-preserving information retrieval, and information organization. Prior to this, she conducted research on question answering, ontology construction, near-duplicate detection, multimedia information retrieval and opinion and sentiment detection. The results of her research have been published in SIGIR, CIKM, ACL, TREC, ECIR, and WWW since 2002. Grace co-chairs the SIGIR 2013 and SIGIR 2014 Doctoral Consortia, and serves as an area chair for SIGIR and as a program committee member for SIGIR, ACL, EMNLP, CIKM, WSDM, and KDD. Home page: http://www.cs.georgetown.edu/~huiyang

Marc Sloan is a senior graduate student completing his PhD thesis at University College London. His background is in information retrieval, machine learning and financial computing, and his thesis is on dynamic IR. His research interests in information retrieval include applying reinforcement learning techniques such as multi-armed bandits and POMDPs to IR systems that learn over time, contextual session search, and query suggestion. Marc has published and presented research on reinforcement learning in IR at conferences and workshops such as WWW, WSCD and the Large-scale Online Learning and Decision Making Workshop. Homepage: http://mediafutures.cs.ucl.ac.uk/people/MarcSloan/

Dr. Jun Wang is a Senior Lecturer (Associate Professor) in Computer Science at University College London and the Founding Director of the MSc/MRes Web Science & Big Data Analytics programme. His main research interests are statistical modelling of information retrieval, collaborative filtering and computational advertising. He was a recipient of the SIGIR Doctoral Consortium Award in 2006 and the Microsoft Beyond Search award in 2007, and won Best Paper Prizes at ECIR 2009 and ECIR 2012. Jun’s recent service includes (senior) PC membership for SIGIR, CIKM, WSDM, and RecSys, and presenting an ECIR 2011 tutorial on Risk Management in Information Retrieval, a CIKM 2011 tutorial on statistical IR modeling and a CIKM 2013 tutorial on Computational Advertising. Home page: http://www.cs.ucl.ac.uk/staff/J.Wang/
Afternoon of 7th July 2014
Retrievability is an important and interesting indicator that can be used in a number of ways to analyse search systems and document collections. Rather than focusing on relevance, retrievability examines what is retrieved, how often it is retrieved, and whether a user is likely to retrieve it or not. This is central to Information Retrieval because a document needs to be retrieved before it can be judged for relevance.
In this tutorial, we shall define the concept of retrievability, introduce a number of retrievability measures, and explain how retrievability can be estimated and used for analysis. Since retrieval precedes relevance, we shall also provide an overview of how retrievability relates to effectiveness, describing some of the insights that researchers have discovered thus far. We shall also show how retrievability relates to efficiency, and how the theory of retrievability can be used to improve both effectiveness and efficiency. We shall then provide an overview of some of the different applications of retrievability, such as measuring Search Engine Bias and undertaking Corpus Profiling. As part of the tutorial, participants are invited to bring along their own research topics, and we will discuss how to apply retrievability in such contexts, with the aim of constructing the basis of an experimental analysis. The final session of the day will wrap up with an overview of the challenges and opportunities that retrievability affords.
This tutorial is ideal for: (i) researchers curious about retrievability and wanting to see how it can impact their research, (ii) researchers who would like to expand their set of analysis techniques, and/or (iii) researchers who would like to use retrievability to perform their own analysis.
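For readers new to the measure, the sketch below is a minimal illustration, with a toy corpus, toy queries, a toy retrieval function and an assumed rank cutoff, of a cumulative retrievability count (how many queries retrieve each document within the top c) together with a Gini coefficient as one common way to summarise retrieval bias across a collection; it is not code from the tutorial.

```python
# Sketch with toy data: cumulative retrievability r(d) = number of queries that
# retrieve document d within rank cutoff c, plus a Gini coefficient as a simple
# inequality (bias) summary over the scores. All inputs are illustrative.
docs = {
    "d1": "cheap flights to rome and paris",
    "d2": "rome travel guide museums and food",
    "d3": "paris nightlife",
}
queries = ["rome", "paris", "food", "travel guide", "flights"]
c = 2  # rank cutoff: a document only "counts" if it appears in the top c

def score(query, text):
    """Toy retrieval function: number of query terms present in the document."""
    return sum(term in text.split() for term in query.split())

retrievability = {d: 0 for d in docs}
for q in queries:
    ranking = sorted(docs, key=lambda d: score(q, docs[d]), reverse=True)
    for d in ranking[:c]:
        if score(q, docs[d]) > 0:          # ignore documents the query misses
            retrievability[d] += 1

def gini(values):
    """Gini coefficient: 0 = every document equally retrievable, 1 = maximal bias."""
    vals = sorted(values)
    n, total = len(vals), sum(vals)
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n if total else 0.0

print(retrievability, "Gini =", round(gini(retrievability.values()), 3))
```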
Dr. Leif Azzopardi is a Senior Lecturer in the School of Computing Science at the University of Glasgow. He received his Ph.D. in Computing Science from the University of Paisley / West Scotland in 2006, and a First Class Honours Degree in Information Science from the University of Newcastle, Australia, in 2001. In 2010, he received a Post-Graduate Certificate in Academic Practice and has been lecturing at the University of Glasgow ever since. His research focuses on building formal models for Information Retrieval and Interactive Information Retrieval, often drawing upon different disciplines for inspiration, such as Quantum Mechanics, Operations Research, Microeconomics, Transportation Planning and Gamification. In 2008, he took concepts from Transportation Planning and applied them to Information Retrieval to develop the concept of Retrievability. Since then, he has published over 20 papers on the topic and has given numerous invited talks on Retrievability throughout the world.