This chapter gives a brief overview of the VISCERAL Registration System, which is used for all VISCERAL Benchmarks and is released as open source on GitHub. The system is accessed by both participants and administrators; it reduces direct participant–organizer interaction and manages the documentation available for each benchmark organized by VISCERAL. It also integrates the upload of the VISCERAL usage and participation agreements, as well as the allocation of the virtual machines through which participants take part in the VISCERAL Benchmarks. The second part summarizes the steps of the continuous evaluation chain, which mainly consists of submission, algorithm execution, storage of results and their evaluation. The final part describes the cloud infrastructure, covering the definition of requirements, the selection of a cloud solution provider, the setup of the infrastructure and the running of the benchmarks. The chapter concludes with a short experience report outlining the challenges encountered and the lessons learned.
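To make the continuous evaluation chain mentioned above more concrete, the following is a minimal, purely illustrative Python sketch of such a chain (submission, algorithm execution, result storage, evaluation). All names in it (Submission, EvaluationChain, the accuracy measure, the case IDs) are hypothetical and do not correspond to the actual VISCERAL implementation, which runs participant algorithms inside cloud virtual machines.

```python
# Hypothetical sketch of a continuous evaluation chain:
# submission -> algorithm execution -> result storage -> evaluation.
# None of these names are taken from the VISCERAL code base.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Submission:
    participant: str                     # submitting team
    algorithm: Callable[[str], str]      # maps an input case ID to a predicted label


@dataclass
class EvaluationChain:
    ground_truth: Dict[str, str]                               # case ID -> reference label
    stored_results: Dict[str, Dict[str, str]] = field(default_factory=dict)

    def execute(self, submission: Submission) -> None:
        """Run the submitted algorithm on every case and store its output."""
        results = {case: submission.algorithm(case) for case in self.ground_truth}
        self.stored_results[submission.participant] = results

    def evaluate(self, participant: str) -> float:
        """Compare stored results against the ground truth (simple accuracy)."""
        results = self.stored_results[participant]
        correct = sum(results[c] == ref for c, ref in self.ground_truth.items())
        return correct / len(self.ground_truth)


if __name__ == "__main__":
    chain = EvaluationChain(ground_truth={"case01": "liver", "case02": "kidney"})
    chain.execute(Submission("team_a", algorithm=lambda case: "liver"))
    print("team_a accuracy:", chain.evaluate("team_a"))
```

In the sketch, evaluation can be repeated whenever new submissions arrive, which is the sense in which the chain is "continuous"; the real system additionally handles registration, agreements and virtual machine allocation as described above.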