School of Engineering
Department of Computer Science and Engineering

Neural Rendering
Supervisor: CHEN Qifeng / CSE
Student: KIM Hyeonjae / COSC
Course: UROP1100, Spring

In the past couple of years, a variety of methods for representing scenes in a continuous manner have been proposed. In particular, neural implicit representations have received tremendous attention due to their potential impact on super-resolution and other downstream tasks. In this work, we focus on learning a non-discrete (i.e., continuous) representation of images using multilayer perceptrons (MLPs) with nontrivial activation layers. Specifically, influenced by recent research on implicit neural representations of images, we propose a way to improve the Local Implicit Image Function (LIIF) by adopting Sinusoidal Representation Networks (SIREN). Through experiments on the CelebA-HQ dataset, we show that this simple modification leads to significant improvements in performance, both qualitatively and quantitatively.

Neural Rendering
Supervisor: CHEN Qifeng / CSE
Student: PHAM Trung Kien / DSCT
         TOH Magdalene Youjun / COMP
         ZHAO Jiachen / COSC
Course: UROP1100, Spring

Video inpainting by internal learning predicts the texture of the masked regions in a sequence of video frames, relying only on the coherence and consistency among the frames. Our goal is to improve the performance of an inpainting model on a special type of video: stereo video. To achieve this, we propose new losses, named the disparity loss and the edge loss, and apply effective methods such as mask augmentation. Our model is trained on different segments of stereo video and obtains visually pleasing results. Compared to the previous model, which was trained on only a single view, our model trained on both views performs better, and the improvement is substantial.
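The SIREN modification described in the first abstract replaces an MLP's usual activations with sine functions. A minimal sketch of one such layer is below, using the initialization scheme from the SIREN paper (first layer uniform in (-1/fan_in, 1/fan_in); hidden layers uniform in (-sqrt(6/fan_in)/w0, sqrt(6/fan_in)/w0) with frequency factor w0 = 30). The function names, the tiny network sizes, and the coordinate-to-RGB setup are illustrative assumptions, not the abstract's actual implementation.

```python
import numpy as np

def siren_layer(x, w, b, w0=30.0):
    """One sine-activated layer: sin(w0 * (x @ w + b))."""
    return np.sin(w0 * (x @ w + b))

def init_siren_weights(fan_in, fan_out, w0=30.0, first=False, rng=None):
    """SIREN initialization: a narrower uniform range for hidden layers,
    scaled down by the frequency factor w0."""
    if rng is None:
        rng = np.random.default_rng(0)
    if first:
        bound = 1.0 / fan_in                  # first layer
    else:
        bound = np.sqrt(6.0 / fan_in) / w0    # hidden layers
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

# Map a 2-D pixel coordinate in [-1, 1]^2 to an RGB value with a
# tiny two-layer network (sine hidden layer, linear output).
coords = np.array([[0.5, -0.5]])
w1 = init_siren_weights(2, 16, first=True)
w2 = init_siren_weights(16, 3)
hidden = siren_layer(coords, w1, np.zeros(16))
rgb = hidden @ w2
print(rgb.shape)
```

A continuous image representation of this kind can be queried at arbitrary coordinates, which is what makes LIIF-style models usable for super-resolution.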