The MEMO dataset was established for the development and evaluation of multimodal retinal image registration, and provides both large vessel segmentation ground truth and registration ground truth.
Dataset Description
The MEMO dataset contains 30 pairs of EMA and OCTA images. For each image pair, 6 corresponding point pairs were manually annotated. In addition, each EMA image comes with a carefully annotated vessel segmentation mask.
The following raw images are included for each image pair:
- An EMA stacked image (Folder path: /MEMO/Original EMA stack)
- A folder for EMA sequence images (Under preparation)
- A folder for three OCTA projection images, including SVP, ICP and DCP layer projection images (Folder path: /MEMO/Original OCTA)
For our own experiments, we scaled all OCTA SVP images to 256 × 256, and the EMA stacked images were scaled by the same scaling factor (a minimal resizing sketch follows the list below). Therefore, we also provide the following scaled images:
- Scaled EMA stacked image (Folder path: /MEMO/Scaled EMA stack)
- Scaled OCTA SVP image (Folder path: /MEMO/Scaled OCTA SVP)
- A vessel segmentation mask of the EMA image (Folder path: /MEMO/Scaled EMA stack seg ground truth)
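The snippet below is a minimal sketch of the resizing described above, not the authors' exact preprocessing script; the file names and the interpolation choice are assumptions for illustration only.

```python
# Minimal sketch of the scaling step (file names and interpolation are assumptions).
import cv2

octa = cv2.imread("MEMO/Original OCTA/pair01_SVP.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
ema = cv2.imread("MEMO/Original EMA stack/pair01_EMA.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# OCTA SVP images are rescaled to 256 x 256.
target = 256
scale = target / octa.shape[1]  # same scaling factor is reused for the EMA image
octa_scaled = cv2.resize(octa, (target, target), interpolation=cv2.INTER_AREA)

# The EMA stacked image is resized with the same scaling factor.
ema_scaled = cv2.resize(ema, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
```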
Six corresponding point pairs for every EMA and OCTA image pair are provided in RegistrationGT.csv. This file lists the pixel coordinates (x and y) of the corresponding points; for example, the pixel coordinate (OCTA_x1, OCTA_y1) on the OCTA image is matched to the pixel coordinate (EMA_x1, EMA_y1) on the EMA stacked image. All pixel coordinates are given with respect to the scaled images.
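As an illustration of how this file can be used, the sketch below reads one row of RegistrationGT.csv and fits an affine transform from the six correspondences. The file path and exact column headers are assumptions inferred from the naming pattern above (verify them against the actual CSV header), and cv2.estimateAffine2D is just one possible fitting choice, not part of the dataset.

```python
# Minimal sketch: load one pair's ground-truth correspondences and fit an affine transform.
# Column names follow the pattern described in the text (OCTA_x1/OCTA_y1, EMA_x1/EMA_y1, ...).
import numpy as np
import pandas as pd
import cv2

gt = pd.read_csv("MEMO/RegistrationGT.csv")  # path is an assumption
row = gt.iloc[0]                             # ground truth for the first image pair

# Collect the six corresponding points on the OCTA (source) and EMA (target) images.
octa_pts = np.float32([[row[f"OCTA_x{i}"], row[f"OCTA_y{i}"]] for i in range(1, 7)])
ema_pts = np.float32([[row[f"EMA_x{i}"], row[f"EMA_y{i}"]] for i in range(1, 7)])

# Fit an affine transform mapping OCTA coordinates to EMA coordinates
# (both are expressed in the scaled image coordinate systems).
affine, inliers = cv2.estimateAffine2D(octa_pts, ema_pts)
print(affine)
```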
Dataset Download:
Paper:
Citation:
If you use this dataset for your research, please cite our paper.
@misc{wang2023memo,
  title={MEMO: Dataset and Methods for Robust Multimodal Retinal Image Registration with Large or Small Vessel Density Differences},
  author={Chiao-Yi Wang and Faranguisse Kakhi Sadrieh and Yi-Ting Shen and Shih-En Chen and Sarah Kim and Victoria Chen and Achyut Raghavendra and Dongyi Wang and Osamah Saeedi and Yang Tao},
  year={2023},
  eprint={2309.14550},
  archivePrefix={arXiv},
  primaryClass={eess.IV}
}
Contact Info:
Chiao-Yi Wang (cyiwang@umd.edu)