School of Engineering
Department of Computer Science and Engineering

Deep Learning for Medical Image Analysis
Supervisor: CHEN Hao / CSE
Student: ZHOU Taichang / COMP
Course: UROP1100, Summer

Explainable AI (XAI) refers to artificial intelligence whose decisions or predictions humans can understand, opening up the 'black box' of deep learning models. Deep learning models are widely used these days and perform well in many areas. However, in certain fields, such as medical image analysis and financial investment, the demand for explainable models is growing because mistakes, if any, can have severe consequences. Here, we introduce some common post-hoc XAI methods for CNN models, covering both gradient-based and non-gradient-based approaches, as well as local and global ones.

Data-Efficient, Domain Generalizable and Interpretable Deep Learning
Supervisor: CHEN Hao / CSE
Student: CHENG Yuhan / COSC
Course: UROP1000, Summer

Deep learning builds a computing model by constructing an intricate neural network, and training it requires a large amount of data. However, it is impossible to collect data from every domain, so a trained model may not work well enough on unseen data. Domain generalization (DG) was born to solve this issue: it aims to make a deep learning model generalize to unseen datasets using only the available source data. DG has been applied widely in computer vision, medical image analysis, natural language processing, and so on. In this project, we mainly focus on medical image analysis, especially diabetic retinopathy images. This report presents some basic knowledge of domain generalization and introduces several commonly used deep learning methods. In addition, it records my learning progress in this project during the summer and outlines some follow-up work that remains to be done.
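The first abstract mentions gradient-based post-hoc methods. The simplest of these, vanilla saliency, scores each input pixel by the magnitude of the gradient of the predicted class score with respect to that pixel. As a minimal sketch, assuming a hypothetical one-hidden-layer ReLU network with random weights (not a real trained CNN), the forward and backward passes can be written by hand:

```python
import numpy as np

# Minimal sketch of a gradient-based post-hoc explanation (vanilla saliency).
# The network and weights below are illustrative placeholders, not a model
# trained on medical images.

rng = np.random.default_rng(0)
x = rng.normal(size=16)           # a flattened toy "image" of 16 pixels
W1 = rng.normal(size=(8, 16))     # hidden-layer weights (hypothetical)
W2 = rng.normal(size=(3, 8))      # output-layer weights, 3 classes

# Forward pass
h_pre = W1 @ x
h = np.maximum(h_pre, 0.0)        # ReLU
scores = W2 @ h
c = int(np.argmax(scores))        # explain the predicted class

# Backward pass: gradient of the class-c score w.r.t. the input pixels
dh = W2[c]                        # d score_c / d h
dh_pre = dh * (h_pre > 0)         # ReLU gate
saliency = np.abs(W1.T @ dh_pre)  # |d score_c / d x|, one value per pixel

print(saliency.shape)             # (16,) -- one importance score per pixel
```

For a real CNN the same quantity is obtained by automatic differentiation; pixels with large saliency are the ones the model's prediction is most sensitive to.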
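The second abstract describes generalizing to unseen domains from available source data. A common way to evaluate this is the leave-one-domain-out protocol: train on all source domains except one, then test on the held-out domain. A minimal sketch, using hypothetical domain names and placeholder samples:

```python
# Leave-one-domain-out split for domain generalization evaluation.
# Domain names and samples are hypothetical placeholders, e.g. retinal
# images labelled by diabetic-retinopathy grade from different hospitals.
domains = {
    "hospital_A": [("img_a1", 0), ("img_a2", 1)],
    "hospital_B": [("img_b1", 1)],
    "hospital_C": [("img_c1", 0)],
}

def leave_one_domain_out(domains, target):
    """Return (train_set, test_set) with `target` held out as the unseen domain."""
    train = [s for d, samples in domains.items() if d != target for s in samples]
    test = list(domains[target])
    return train, test

train, test = leave_one_domain_out(domains, "hospital_C")
print(len(train), len(test))  # 3 1
```

The model never sees any sample from the target domain during training, which is what distinguishes domain generalization from domain adaptation.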