The Best Paper Award is presented to the individual(s) judged by an awards committee to have written the best paper appearing in the annual conference proceedings.
- "Should I Follow the Crowd? A Probabilistic Analysis of the Effectiveness of Popularity in Recommender Systems"
- "BitFunnel: Revisiting Signatures for Search"
- Frank E. Pollick: "Understanding Information Need: an fMRI Study"
- Franco Maria Nardini: "QuickScorer: A Fast Algorithm to Rank Documents with Additive Ensembles of Regression Trees"
  Citation: For a paper that develops a novel representation of binary regression trees based on bitvectors, together with an algorithm that performs a fast, interleaved, cache-aware traversal of the trees, and that demonstrates significant efficiency gains on publicly available learning-to-rank data sets with a variety of regression-tree ensemble models.
- "Partitioned Elias-Fano indexes"
  Citation: For a paper that significantly improves on an already excellent compression technique while preserving query-time efficiency, and that exhibits the best trade-off between compression ratio and processing speed.
- 2013 – Ryen W. White: "Beliefs and Biases in Web Search"
  Citation: For a paper that explores the impact of preconceived biases in health-domain search, using a combination of surveys, human labeling of search results, and large-scale search-log analysis.
- "Time-based calibration of effectiveness measures"
  Citation: Selected for proposing a novel IR evaluation framework in which cumulative gain is computed with consideration of the time users spend examining search results, enabling better modeling of user effort in quantitative IR evaluation.
- "Find It If You Can: A Game for Modeling Different Types of Web Search Success Using Interaction Data"
- 2010 – Ryen W. White: "Assessing the Scenic Route: Measuring the Value of Search Trails in Web Logs"
- "Sources of evidence for vertical selection"
- "Algorithmic Mediation for Collaborative Exploratory Search"
- "Studying the Use of Popular Destinations to Enhance Web Search Interaction"
- "Minimal Test Collections for Retrieval Evaluation"
- "Learning to Estimate Query Difficulty (Including Applications to Missing Content Detection and Distributed Information Retrieval)"
- "A Formal Study of Information Retrieval Heuristics"
- 2003 – Ian Ruthven: "Re-examining the potential effectiveness of interactive query expansion"
- "Novelty and redundancy detection in adaptive filtering"
- "Temporal Summaries of News Topics"
- "IR evaluation methods for retrieving highly relevant documents"
- "Cross-language information retrieval based on automatic mining of parallel texts from the web"
- 1998 – Warren Greiff: "A theory of term weighting based on exploratory data analysis"
- "Feature selection, perceptron learning, and a usability case study for text categorization"
- Karen Spärck Jones: "Retrieving spoken documents by combining multiple index sources"

- "IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models"
- Heng Tao Shen: "Classification by Retrieval: Binarizing Data and Classifiers"
- 2016 – Ido Guy: "Searching by Talking: Analysis of Voice Queries on Mobile Web Search"
- "Discrete Collaborative Filtering"
- "The Benefits of Magnitude Estimation for Relevance Assessment"
- "An Eye-Tracking Study of Query Reformulation"
- "Incorporating Non-sequential Behavior into Click Models"
- 2014 – Leif Azzopardi: "Modelling interaction with economic models of search"