EAD2019 Challenge review by travel grant recipient Maxime Kayser
Maxime Kayser has written a great blog post about Sharib Ali's EAD2019 Challenge, held at ISBI in Venice earlier this year.
Endoscopic Artefact Detection (EAD) challenge recap by Maxime Kayser
The Endoscopic Artefact Detection (EAD) challenge aims to address the issue of image artefacts in endoscopic video frames. The challenge provided an extensive dataset of more than 2,000 images with around 18,000 annotated artefacts, making it unprecedented in both scope and diversity and hence enabling new breakthroughs in automated detection systems for endoscopy. Sponsored by Cancer Research UK and MedIAN, the challenge was held as a workshop at the IEEE International Symposium on Biomedical Imaging (ISBI 2019).
The training data consisted of both bounding box annotations and segmentation masks, and participants could choose to enter the object detection sub-challenge, the segmentation sub-challenge, or both. For object detection, there were over 2,000 training images, and each model was then evaluated on its performance on around 370 test images. The dataset was extremely diverse: images were selected to cover different lighting modes as well as different kinds of endoscopic interventions.
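Scoring in object detection challenges like this one generally rests on comparing predicted bounding boxes to the ground-truth annotations via intersection over union (IoU). As a minimal illustration of that idea (a generic sketch, not the challenge's official scoring code), IoU for two axis-aligned boxes in (x1, y1, x2, y2) format can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds some threshold (0.5 is a common choice), which is what metrics like mean average precision build on.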
I stumbled across the EAD challenge when I was looking for a research question for my bachelor thesis in the field of machine learning for medical applications. The challenge was a perfect match, as it addressed a pressing issue in endoscopy, provided extensive training data, and allowed me to directly compare the effectiveness of my work with other participants.
Prior to this challenge, my machine learning experience was limited to three data science projects with structured data, where I applied basic models such as linear regression, gradient-boosted trees, and random forests. Hence this challenge provided an ideal opportunity for me to familiarize myself with deep learning and computer vision.
To prepare for this task, I watched the publicly available CS231n: Convolutional Neural Networks for Visual Recognition lectures from Stanford University. They provided a comprehensive overview of the most up-to-date methodologies in deep learning-based computer vision and allowed me to jump right into the EAD challenge. I participated only in the object detection sub-challenge.
My workflow consisted mainly of researching the current state of the art in object detection and, more specifically, in endoscopic object detection. I decided to go with RetinaNet, a simple one-stage object detection framework. Part of the reason for choosing this algorithm was the abundance of resources and information available on it. I built my model off a Keras codebase from GitHub.
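The defining idea behind RetinaNet is the focal loss (Lin et al., 2017), which makes a one-stage detector viable by down-weighting the many easy background examples so training concentrates on hard ones. A minimal sketch of the binary form of that loss (an illustration of the published formula, not code from my actual model):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class (0 < p < 1)
    y: ground-truth label, 1 or 0
    alpha, gamma: the balancing and focusing parameters from the paper
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t) ** gamma factor shrinks the loss of well-classified
    # examples, focusing training on hard, misclassified ones.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 1 this reduces to standard cross-entropy; increasing gamma suppresses the contribution of easy examples, which is what lets a dense one-stage detector cope with extreme foreground-background class imbalance.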
Deep learning requires extensive computing resources, which can become quite expensive. Luckily, Google Colab now provides free K80 GPUs to anyone, and I relied on that exclusively. The only limitations are that runtimes are capped at 12 hours and that you cannot let a training process run unattended (Colab detects if you are away from the screen for a long time and will disconnect your session). However, I managed to build an amateurish system of automatic clickers and page reloaders that allowed my training process to keep running even when I was away from the keyboard. This saved a significant amount of time, e.g. by letting me train my models overnight. The 12-hour runtime limit was not a major issue, as my models didn't need to run much longer than 10 hours. This was because the training set was limited to around 2,000 images and my model weights were already pre-trained on larger datasets such as ImageNet and MS COCO.
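Working within session limits like Colab's usually comes down to checkpointing: saving training state periodically so an interrupted run can resume where it left off rather than start over (in Keras this is what the ModelCheckpoint callback is for). A hypothetical, framework-free sketch of the pattern, with a dummy training loop and an invented checkpoint file name, looks like this:

```python
import os
import pickle

CKPT = "train_state.pkl"  # hypothetical checkpoint path

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"epoch": 0, "weights": 0.0}

def save_state(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

def train(total_epochs=10):
    state = load_state()
    for epoch in range(state["epoch"], total_epochs):
        state["weights"] += 0.1   # stand-in for one epoch of real training
        state["epoch"] = epoch + 1
        save_state(state)         # survive a disconnect at any point
    return state
```

If the session dies mid-run, calling `train` again picks up from the last completed epoch instead of epoch zero, which is what makes long trainings feasible under a hard runtime cap.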
One of the great things about the EAD challenge was that the organizers were constantly trying to improve the challenge and were open to dialogue, allowing the participants to provide their own input. As such, things like the scoring metric evolved over the course of the challenge. Whilst this required adjustments from the participants, it ultimately improved the challenge and increased the usability of our contributions. The organizers also published extensive code to help us visualize and evaluate our data, and they were extremely helpful and responsive whenever participants ran into any sort of trouble.
The EAD challenge was held as a workshop at the ISBI conference in Venice. This provided a great opportunity for participants to meet each other, share their experiences, exchange ideas and, last but not least, explore beautiful Venice. As I came in third in the object detection challenge, I was one of the three participants who were asked to give an in-depth presentation of their work. I was extremely grateful that my travels to Venice were financially supported by MedIAN.
The workshop was a very rewarding experience. I was able to share my findings and learn about other interesting approaches from my competitors. We had interesting discussions on the necessity of this research problem, and I look forward to further investigating it in the context of my bachelor thesis at the Technical University of Munich.
All in all, the EAD challenge has provided an excellent platform for diving deeper into medical computer vision as well as making a tangible impact. I am extremely grateful to everyone involved in organizing and sponsoring this challenge. By providing these extensive datasets, a competition leaderboard, and exhaustive support, they created an ideal environment for its participants to learn, connect, and contribute to the advancement of automated detection in endoscopy.
#machinelearning #endoscopy #detection #artefact #venice #ISBI