CLEF promotes the systematic evaluation of information access systems, primarily through experimentation on shared tasks.
Ten labs are offered at CLEF 2013.
Nine labs will follow a "campaign-style" evaluation practice
for specific information access problems in the tradition of past CLEF campaign tracks:
- CHiC - Cultural Heritage in CLEF: a benchmarking activity to investigate the systematic, large-scale evaluation of cultural heritage digital libraries and information access systems
- CLEFeHealth - CLEF eHealth Evaluation Lab: a benchmarking activity aiming to develop processing methods and resources for enriching difficult-to-understand health texts, together with their evaluation setting
- CLEF-IP - Retrieval in the Intellectual Property Domain: a benchmarking activity to investigate IR techniques in the patent domain
- ImageCLEF - Cross-Language Image Annotation and Retrieval: a benchmarking activity on the experimental evaluation of image classification and retrieval, focusing on the combination of textual and visual evidence
- INEX - INitiative for the Evaluation of XML retrieval: builds evaluation benchmarks for search with rich structure - such as document structure, semantic metadata, entities, or genre/topical structure - which is of increasing importance on the web and in professional search
- PAN - Uncovering Plagiarism, Authorship, and Social Software Misuse: a benchmarking activity on uncovering plagiarism, authorship, and social software misuse
- QA4MRE - Question Answering for Machine Reading Evaluation: a benchmarking activity on the evaluation of machine reading systems through question answering and reading comprehension tests
- QALD-3 - Question Answering over Linked Data: a benchmarking activity on question answering over linked data
- RepLab 2013 - Online Reputation Management: the second CLEF lab on online reputation management
One lab will be run as a workshop, organized as a speaking and discussion session, to explore issues of evaluation methodology, metrics, and processes in
information access and closely related fields:
- CLEF-ER - Entity Recognition @ CLEF: a workshop on the multilingual annotation of named entities and the acquisition of terminology resources