UROP Proceedings 2022-23

School of Engineering
Department of Computer Science and Engineering

Multimodal Learning for Cancer Diagnosis and Prognosis
Supervisor: CHEN, Hao / CSE
Student: KWOK, Sze Heng Douglas / MATH-CS
Course: UROP1100, Spring

Multimodal learning applies machine learning tools to training data drawn from a combination of modalities (e.g., images, genomic data). Current multimodal learning research often assumes that data from all modalities are always available. However, this is frequently not the case in real-world situations, where multimodal datasets have missing modalities, which hinders the performance of multimodal models; researchers refer to this as the Missing Modality Problem. Throughout this semester, I investigated numerous methods for tackling the Missing Modality Problem by reviewing research papers. To showcase my learning progress, I reflect on how I came to understand the Missing Modality Problem and provide reviews of the literature I read throughout this project.

Multimodal Learning for Cancer Diagnosis and Prognosis
Supervisor: CHEN, Hao / CSE
Student: LIU, Yang / COMP
Course: UROP1100, Fall

Breast cancer is prevalent among women. Early detection of breast cancer can greatly increase the cure rate and survival rate. Computer-aided diagnosis is frequently employed, and within this process image segmentation plays a significant role. This report introduces automatic segmentation of ultrasound and DCE-MRI images, summarizing public datasets, discussing related work, and presenting a case study using Attention U-Net to segment breast tumors on the BUSI dataset.
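The Attention U-Net used in the case study above re-weights skip-connection features with additive attention gates before they are concatenated in the decoder. A minimal NumPy sketch of one such gate is shown below; the shapes, weight names, and random inputs are illustrative assumptions, not details from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate, with 1x1 convolutions written as
    channel-mixing matrix products over flattened spatial positions.

    x   : skip-connection features, shape (C, N) with N = H*W
    g   : coarser gating features upsampled to the same grid, shape (C, N)
    W_x : (F, C) projection of x; W_g: (F, C) projection of g
    psi : (1, F) projection producing one attention coefficient per pixel
    """
    q = np.maximum(W_x @ x + W_g @ g, 0.0)  # ReLU of summed projections
    alpha = sigmoid(psi @ q)                # (1, N) coefficients in (0, 1)
    return alpha * x                        # suppress irrelevant skip features

# Toy example: 8 channels, 16 spatial positions, 4 intermediate features.
C, N, F = 8, 16, 4
x = rng.normal(size=(C, N))
g = rng.normal(size=(C, N))
W_x = rng.normal(size=(F, C))
W_g = rng.normal(size=(F, C))
psi = rng.normal(size=(1, F))

out = attention_gate(x, g, W_x, W_g, psi)
print(out.shape)  # prints (8, 16)
```

Because each attention coefficient lies in (0, 1), the gate can only attenuate skip features, which is how the network learns to focus on tumor regions rather than background tissue.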
Deep Learning for Ophthalmology Image Analysis
Supervisor: CHEN, Hao / CSE
Student: CHEN, Siyu / ISD
Course: UROP3100, Fall

Applications of deep learning methods to high-quality (HQ) medical images have shown promising results; however, medical images often suffer from degradations that create situations very different from those of HQ images. In clinical practice, researchers tend to filter out low-quality (LQ) images to improve the performance of deep learning methods, ignoring the significant value of those LQ images. In this work, we raise the problem of image quality-aware diagnosis (IQAD), which aims to use LQ images together with image-quality labels to achieve better performance on medical images. We propose a new method for this problem, denoted the Meta-knowledge Assistance Network (MACNet), which outperformed other methods.