MEMO: A Multimodal EMA and OCTA Retinal Image Dataset


Chiao-Yi Wang, Faranguisse Kakhi Sadrieh, Yi-Ting Shen, Shih-En Chen, Sarah Kim,

Victoria Chen, Achyut Raghavendra, Dongyi Wang, Osamah Saeedi, and Yang Tao

University of Maryland (College Park) and University of Maryland (Baltimore)


The MEMO dataset was established for the development and evaluation of multimodal retinal image registration methods, and provides both large vessel segmentation ground truth and registration ground truth.

Dataset Description

The MEMO dataset contains 30 pairs of EMA and OCTA images. For each image pair, 6 corresponding point pairs were manually annotated. In addition, each EMA image comes with a carefully annotated vessel segmentation mask.

Raw EMA and OCTA images are included for each image pair.

For our own experiments, we scaled all OCTA SVP images to 256 × 256 pixels and scaled the EMA stacked images by the same scaling factor. Therefore, we also provide these scaled images.
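
For reference, a minimal sketch of this scaling step using OpenCV is shown below; the file names are hypothetical (not the dataset's actual naming scheme), and a square OCTA SVP image is assumed:

    import cv2

    # Hypothetical file names for one EMA-OCTA image pair.
    octa = cv2.imread("pair01_OCTA_SVP.png", cv2.IMREAD_GRAYSCALE)
    ema = cv2.imread("pair01_EMA_stacked.png", cv2.IMREAD_GRAYSCALE)

    # Scale the OCTA SVP image to 256 x 256.
    target = 256
    scale = target / octa.shape[1]  # assumes a square OCTA image
    octa_scaled = cv2.resize(octa, (target, target))

    # Scale the EMA stacked image by the same factor.
    ema_scaled = cv2.resize(ema, None, fx=scale, fy=scale)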

Six corresponding point pairs for every EMA and OCTA pair are provided in RegistrationGT.csv. This file lists the pixel coordinates (x and y) of the corresponding point pairs. For example, the pixel coordinate (OCTA_x1, OCTA_y1) on the OCTA image is matched to the pixel coordinate (EMA_x1, EMA_y1) on the EMA stacked image. All pixel coordinates refer to the scaled images.
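
A minimal sketch of reading these correspondences is shown below. The column names follow the pattern described above; the one-row-per-image-pair layout is an assumption, and the homography fit is only an example use of the six point pairs:

    import csv

    import cv2
    import numpy as np

    with open("RegistrationGT.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    row = rows[0]  # assumed: one row of correspondences per EMA-OCTA pair
    octa_pts = np.array([[float(row[f"OCTA_x{i}"]), float(row[f"OCTA_y{i}"])]
                         for i in range(1, 7)], dtype=np.float32)
    ema_pts = np.array([[float(row[f"EMA_x{i}"]), float(row[f"EMA_y{i}"])]
                        for i in range(1, 7)], dtype=np.float32)

    # The six point pairs can be used, for example, to fit a homography
    # between the two modalities (least-squares fit with OpenCV).
    H, _ = cv2.findHomography(octa_pts, ema_pts)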


[Figure: dataset example]


Dataset Download:

MEMO Dataset

Paper:

MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences

Citation:

If you use this dataset for your research, please cite our paper.

@misc{wang2023memo,
      title={MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences},
      author={Chiao-Yi Wang and Faranguisse Kakhi Sadrieh and Yi-Ting Shen and Shih-En Chen and Sarah Kim and Victoria Chen and Achyut Raghavendra and Dongyi Wang and Osamah Saeedi and Yang Tao},
      year={2023},
      eprint={2309.14550},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

Contact Info:

Chiao-Yi Wang (cyiwang@umd.edu)