Author(s)
Ionescu Bogdan; Müller Henning; Péteri Renaud; Ben Abacha Asma; Datla Vivek; Hasan Sadid A.; Demner-Fushman Dina; Kozlovski Serge; Liauchuk Vitali; Dicente Cid Yashin; Kovalev Vassili; Pelka Obioma; Friedrich Christoph M.; García Seco de Herrera Alba; Ninh Van-Tu; Le Tu-Khiem; Zhou Liting; Piras Luca; Riegler Michael; Halvorsen Pål; Tran Minh-Triet; Lux Mathias; Gurrin Cathal; Dang-Nguyen Duc-Tien; Chamberlain Jon; Clark Adrian; Campello Antonio; Fichou Dimitri; Berari Raul; Brie Paul; Dogariu Mihai; Ștefan Liviu-Daniel; Constantin Mihai Gabriel
Institution
Univ Politehn, Bucharest, Romania; Univ Appl Sci Western Switzerland (HES-SO), Sierre, Switzerland; Univ La Rochelle, La Rochelle, France; Natl Lib Med, Bethesda, MD, USA; Philips Res Cambridge, Cambridge, MA, USA; CVS Health, Monroeville, PA, USA; United Inst Informat Problems, Minsk, Belarus; Univ Warwick, Coventry, W Midlands, England; Univ Appl Sci & Arts Dortmund, Dortmund, Germany; Univ Essex, Colchester, Essex, England; Dublin City Univ, Dublin, Ireland; Pluribus One, Cagliari, Italy; Univ Cagliari, Cagliari, Italy; Univ Oslo, Oslo, Norway; Univ Sci, Ho Chi Minh City, Vietnam; Klagenfurt Univ, Klagenfurt, Austria; Univ Bergen, Bergen, Norway; Wellcome Trust Res Labs, London, England; TeleportHQ, Cluj-Napoca, Romania
Abstract
This paper presents an overview of the ImageCLEF 2020 lab, organized as part of the Conference and Labs of the Evaluation Forum (CLEF Labs 2020). ImageCLEF is an ongoing evaluation initiative (first run in 2003) that promotes the evaluation of technologies for annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2020, the 18th edition of ImageCLEF ran four main tasks: (i) a medical task grouping three previous tasks, i.e., caption analysis, tuberculosis prediction, and medical visual question answering and question generation; (ii) a lifelog task (videos, images, and other sources) on daily activity understanding, retrieval, and summarization; (iii) a coral task on segmenting and labeling collections of coral reef images; and (iv) a new Internet task addressing the problem of identifying hand-drawn user interface components. Despite the pandemic, the benchmark campaign received strong participation, with over 40 groups submitting more than 295 runs.
Access
Closed Access
Type of Publication
Book Chapter
Publisher
Springer, Experimental IR Meets Multilinguality, Multimodality, and Interaction (Lecture Notes in Computer Science)