The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR). The 45th ACM SIGIR conference will be held as a hybrid conference, in person in Madrid, Spain, with support for remote participation, from July 11 to 15, 2022. We welcome contributions related to any aspect of information retrieval and access, including theories, foundations, algorithms, evaluation, analysis, and applications. The conference and program chairs invite those working in areas related to IR to submit high-impact original papers for review.
This year we continue a special track for resource papers, separate from regular full and short papers. The conference and program chairs invite those working in areas related to IR to submit original resource papers, which will be presented as posters at the conference. Resources include, but are not restricted to, information retrieval test collections, labelled datasets for machine learning, and software tools and services for information access. Submissions will be peer reviewed, and accepted resource papers will be published in the main conference proceedings.
Time zone: Anywhere on Earth (AoE)
Resource paper abstracts due: February 14, 2022
Resource papers due: February 21, 2022
Resource papers notification: April 4, 2022
Camera-ready versions of accepted resource papers due: April 24, 2022
Resource paper authors are required to submit an abstract by midnight February 14, 2022 AoE. Paper submission (deadline: midnight February 21, 2022 AoE) is not possible without a submitted abstract. Immediately after the abstract deadline, the PC Chairs will desk reject submissions that lack informative titles and abstracts (“placeholder abstracts”).
What is a resource?
The Resources Track seeks submissions from both academia and industry that describe resources available to the community, the process and methodology of building those resources, and/or the lessons learned. Resources include, but are not restricted to:
- test collections for information retrieval and access tasks;
- documentation of designs and protocols of evaluation tasks (e.g., novel task designs implemented at evaluation forums);
- labelled datasets for machine learning;
- software tools and services for information retrieval and access; and
- software tools and services for evaluating and analyzing information retrieval and access systems.
Resource submissions are welcome in any area of IR, as identified in the Call for Full Papers.
Authors should be aware that the resource paper track uses review criteria that differ from those used for regular full and short papers. These criteria are outlined below.
- What is new about this resource?
- Does the resource represent an incremental advance or something more dramatic?
- Is the resource available to the reviewer at the time of review?
- Are there discrepancies between what is described and what is available?
- If the resource is data collected from people, do appropriate human subjects control board procedures appear to have been followed?
- Is the resource well documented? What level of expertise do you expect is required to make use of the resource?
- Are there tutorials or examples? Do they resemble actual uses or are they toy examples?
- If the resource is data, are appropriate tools provided for loading that data?
- If the resource is data, are the provenance (source, preprocessing, cleaning, aggregation) stages clearly documented?
- What IR research activity is enabled by the availability of this resource?
- Does the resource advance a well-established research area or a brand new one?
- Do you expect that this resource will be useful for a long time, or will it need to be curated or updated? If the latter, is that planned?
- How large is the (anticipated) research user community? Will that grow or shrink in the next few years?
The resource paper review process is single-blind, which means that reviewers are aware of the names and affiliations of paper authors. Therefore, unlike other paper tracks in SIGIR 2022, there is no need to hide author information in the submission. Authors should carefully review ACM’s authorship policy before submitting a paper. Submissions that violate the preprint policy, length, or formatting requirements, or that are plagiarized, are subject to desk rejection by the chairs. It is also NOT permitted to submit the same content to both the resource track and another track of SIGIR 2022 (e.g., a resource paper describing the construction of Dataset A and a full paper containing the construction process of Dataset A in its experiment section).
To support identification of reviewers with conflicts of interest, the full author list must be specified at submission time. Authors should note that changes to the author list after the submission deadline are not allowed without permission from the PC Chairs.
Resource papers must describe original work that has not been previously published, not accepted for publication elsewhere, and not simultaneously submitted or currently under review in another journal or conference.
Submissions of resource papers must be in English, in PDF format, and at most 9 pages in length (including figures, tables, proofs, appendixes, acknowledgments, and any content except references), with unrestricted space for references, in the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM website (use the “sigconf” proceedings template for LaTeX and the Interim Template for Word). ACM’s CCS concepts and keywords are not required for review but may be required if the paper is accepted and published by the ACM.
For LaTeX, the following should be used:
Papers should be submitted electronically via EasyChair to the resource track:
At least one author of each accepted paper is required to register for, and present the work at, the conference.
Desk Rejection Policy
Submissions that violate the preprint policy, length, or formatting requirements, or are determined to violate ACM’s policies on academic dishonesty, including plagiarism, author misrepresentation, falsification, etc., are subject to desk rejection by the chairs. Any of the following may result in desk rejection:
- Figures, tables, proofs, appendixes, acknowledgments, or any other content after page 9 of the submission.
- Formatting not in line with the guidelines provided above.
- Addition of authors after abstract submission.
- Content that has been determined to have been copied from other sources.
- Any form of academic fraud or dishonesty.
- Lack of topical fit for SIGIR.
Resource Paper Chairs
- Jeff Dalton, University of Glasgow, UK
- Noriko Kando, National Institute of Informatics, Japan
For any questions about resource paper submissions, you may contact the Resource Paper Chairs by email to sigir22-resource AT easychair DOT org.