IMPORTANT: The Workshop on Next-Generation Test Collections has been cancelled
The SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation (CSE2010) solicits submissions on topics including but not limited to the following areas:
The workshop especially calls for innovative solutions in the area of search evaluation involving significant use of a crowdsourcing platform such as Amazon's Mechanical Turk, Crowdflower, LiveWork, etc. Novel applications of crowdsourcing are of particular interest. This includes but is not restricted to the following tasks:
For example, does the inherent geographic dispersal of crowdsourcing enable better assessment of a query's local intent, its locale-specific facets, or the diversity of returned results? Could crowdsourcing be employed in near real-time to better assess query intent for breaking news and relevant information?
Most Innovative Awards Sponsored by Microsoft Bing
As a further incentive to participate, authors of the most novel and innovative crowdsourcing-based search evaluation techniques (e.g., using Amazon's Mechanical Turk, LiveWork, Crowdflower, etc.) will be recognized with “Most Innovative Awards” as judged by the workshop organizers. Selection will be based on the creativity, originality, and potential impact of the described proposal, and we expect the winners to describe risky, ground-breaking, and unexpected ideas. The awards are made possible by generous support from Microsoft Bing, and their number and nature will depend on the quality of the submissions and the overall availability of funds. All valid submissions to the workshop will be considered for the awards.
Submissions should report new (unpublished) research results or ongoing research. Long paper submissions (up to 8 pages) will primarily target oral presentations. Short paper submissions can be up to 4 pages long and will primarily target poster presentations. Papers should be formatted in double-column ACM SIG proceedings format (http://www.acm.org/sigs/publications/proceedings-templates). Papers must be submitted as PDF files. Submissions should not be anonymized.
Email the organizers at firstname.lastname@example.org
Current search systems are not adequate for individuals with specific needs: children, older adults, people with visual or motor impairments, and people with intellectual disabilities or low literacy. Search services are typically created for average users (young or middle-aged adults without physical or mental disabilities), and information retrieval methods are likewise based on those users' perception of relevance. The workshop will be the first to raise the discussion of how to make search engines accessible to different types of users, including those with difficulties in reading, writing, or comprehending complex content. Search accessibility means that people whose abilities differ considerably from those of average users will be able to use search systems successfully.
The objective of the workshop is to provide a forum and initiate collaborations between academics and industrial practitioners interested in making search more usable for users in general and for users with specific needs in particular. We encourage presentation and participation from researchers working at the intersection of information retrieval, natural language processing, human-computer interaction, ambient intelligence and related areas.
The workshop will be a mix of oral presentations for long papers (maximum of 8 pages), a session for posters (maximum of 2 pages) and a panel discussion. All submissions will be reviewed by at least two PC members. Workshop proceedings will be available at the workshop.
Desktop search refers to the process of searching within one’s personal space of information. The information searched during a desktop search can include content that resides on one's personal computer (e.g., documents, emails, visited Web pages, and multimedia files), and may extend to content on other personal devices, such as music players and mobile phones. Despite recent research interest, desktop search is under-explored compared to other search domains such as the web, semi-structured data, or flat text.
Problems with existing desktop search tools include performance issues, an over-reliance on good query formulation, and a failure to fit within the user’s work flow or the user’s mental model. Evaluation of desktop search tools is difficult. There are no established or standardized baselines or evaluation metrics, and no commonly available test collections. Privacy concerns, the challenges of working with personal collections, and the individual differences in behaviour between users all must be addressed to advance research in this domain.
This workshop will bring together academics and industrial practitioners interested in desktop search with the goal of fostering collaborations and addressing the challenges faced in this area. The workshop will be structured to encourage group discussion and active collaboration among attendees. We encourage participation from people in the fields of information retrieval, personal information management, natural language processing, human-computer interaction, and related areas.
This workshop aims to explore the use of Simulation of Interactions to enable automated evaluation of Interactive Information Retrieval Systems and Applications.
Standard test collections enable only a very limited type of interaction to be evaluated (i.e., query–response). This is largely due to the high costs involved in going beyond this limited interaction and to problems associated with the replicability and repeatability of experiments.
Arguably, Simulation of Interaction provides a cost-effective way to construct and repeat evaluations of interactive systems and applications. This powerful automated evaluation technique provides a high degree of control and ensures that experiments can be replicated — but we need your help in developing “standardized” methodologies for simulations, techniques for simulations, models and methods for simulations, measures of performance given simulations, and more.
Sign up for this workshop if you are interested in interactive IR and the modeling of users, systems, interactions, and behaviors, and in how they can (or cannot) be simulated within automated evaluation methodologies for IR. The workshop will be lively and highly interactive (both online and offline), comprising discussions and debates aimed at producing valuable community resources and references on simulation in IR.
Understanding the user's intent or information need that underlies a query has long been recognized as a crucial part of effective information retrieval. Despite this, retrieval models, in general, have not focused on explicitly representing intent, and query processing has been limited to simple transformations such as stemming or spelling correction. With the recent availability of large amounts of data about user behavior and queries in web search logs, there has been an upsurge in interest in new approaches to query understanding and representing intent.
This workshop has the goal of bringing together the different strands of research on query understanding, increasing the dialogue between researchers working in this relatively new area, and developing some common themes and directions, including definitions of tasks and evaluation methodology. We hope the workshop will bring together researchers from IR, ML, NLP, and other areas of computer and information science who are working on or interested in this area, and provide a forum for them to identify the issues and challenges, share their latest research results, express a diverse range of opinions about the topic, and discuss future directions.
This workshop aims to bring together both experienced and young researchers from distributed IR, including work on P2P search and efficiency of distributed systems for information processing. This edition of the workshop will favor novel, perhaps even outrageous ideas as opposed to finished research work, thus strongly encouraging the submission of position papers in addition to research papers. Position papers are important to foster discussion upon controversial and intriguing ideas on new ways of building distributed infrastructures for information processing.
This workshop has been cancelled
Over the last 15 years, Information Retrieval research corpora have experienced more than a thousand-fold increase in size: from the 1990s TIPSTER collections of hundreds of thousands of full-text articles to the 2009 ClueWeb collection of over a billion web pages, researchers are now working with a nearly unimaginable amount of text. The standard evaluation methodology—the Cranfield paradigm of calculating evaluation measures using test collections—has struggled to keep up, as research shows that even test collections for terabyte-sized corpora suffer from unforeseen judgment bias and reusability challenges.
This workshop invites cutting-edge research on tackling the problem of building test collections at the multi-terabyte scale that are realistic, fair, and reusable. The goal of the workshop is to map out the critical research questions that need to be asked and the types of collections we need to consider building in order to answer them.
Modern information retrieval systems facilitate information access at unprecedented scale and level of sophistication. However, in many cases the underlying representation of text remains quite simple, often limited to a weighted bag of words. Over the years, several approaches to automatic feature generation have been proposed (such as Latent Semantic Indexing, Explicit Semantic Analysis, Hashing, and Latent Dirichlet Allocation), yet their application in large-scale systems remains the exception rather than the rule. On the other hand, numerous studies in NLP and IR resort to manually crafting features, which is a laborious and expensive process. Such studies often focus on one specific problem, and consequently many of the features they define are task- or domain-dependent, so little knowledge transfer to other problem domains is possible. This limits our understanding of how to reliably construct informative features for new tasks.
An area of machine learning concerned with feature generation (or constructive induction) studies methods that endow computers with the ability to modify or enhance the representation language. Feature generation techniques search for new features that describe the target concepts better than the attributes supplied with the training instances. Complementary to feature generation, the issue of feature selection arises. It aims to retain only the most informative features, e.g., in order to reduce noise and to avoid overfitting, and is essential when numerous features are automatically constructed.
We believe that much can be done in the quest for automatic feature generation for text processing, for example, using large-scale knowledge bases as well as the sheer amounts of textual data easily accessible today. The purpose of this workshop is to bring together researchers from many related areas (including information retrieval, machine learning, statistics, and natural language processing) to address these issues and seek cross-pollination among the different fields.
The aim of the workshop is to bring together a group of leaders in information retrieval and language modeling to discuss the challenges in information retrieval and how language modeling approaches may help address some of them. At the workshop we will focus on the use of n-gram models to further research in areas such as document representation and content analysis (e.g., clustering, classification, information extraction), query analysis (e.g., query suggestion, query reformulation), retrieval models and ranking, and spelling correction, as well as on access to n-grams as an enabler of experimental design. A frequently discussed issue in the research community is the lack of large-scale datasets and benchmarks for running experiments. This workshop will address this issue by bringing together the community of researchers who use the n-grams already made available by Yahoo and Google/LDC, along with a new Web N-gram service through which Microsoft Research, in partnership with Microsoft Bing, is providing the research community access to petabytes of Web n-grams via a cloud-based platform.
The Web N-gram service, currently in beta at http://research.microsoft.com/web-ngram, will be made available to the participants of the workshop, with properties as follows:
In this workshop, we encourage researchers to use the Microsoft Web N-gram service to explore novel applications of language models (e.g., long-tail effects) and the use of these data to enhance the search experience (e.g., using anchor text as a proxy for queries). We will also consider other applications, such as machine translation and speech.
We also encourage research and experiments that use or compare different n-gram data sets, with the ultimate goal of establishing at the workshop a useful n-gram baseline for the research community, in terms of n-gram attributes such as size, access, content, and the model types researchers need.