UROP Proceedings 2020-21

School of Engineering
Department of Computer Science and Engineering

Deep Video Super-Resolution
Supervisor: CHEN Qifeng / CSE
Student: LI Kwan To / COMP
Course: UROP1100, Spring

We present a simple deep learning model for image background removal. By incorporating a hint background image, we are able to achieve reasonably good performance on the segmentation task. The technique could be useful for extracting the foreground person, as required by the virtual background functionality in conferencing software. We adopted a variant of U-Net, an architecture widely used for segmentation tasks in the literature. Experiments were conducted to carefully tune the hyperparameters and to explore regularization techniques that improve accuracy. Our approach strikes a balance between low-quality simple segmentation and high-quality background matting.

Deep Video Super-Resolution
Supervisor: CHEN Qifeng / CSE
Student: ZHOU Ji / CPEG
Course: UROP2100, Spring

This term, the project focuses on deep learning methods for high dynamic range (HDR) image and video synthesis. HDR is a technology for capturing and reproducing scenes whose brightness spans a wide dynamic range. By applying deep learning methods, we might be able to reconstruct HDR images or videos without raw HDR source material. To achieve this goal, we start from multi-view HDR image synthesis, move on to single-image HDR reconstruction, and finally reach HDR video synthesis, following the sequence studied this semester. This report introduces the basic concepts of deep HDR imaging of dynamic scenes and of HDR image reconstruction from a single exposure, both drawn from image synthesis theory. It then turns to HDR video reconstruction and our progress in handling real-world benchmark datasets.

Deep Video Super-Resolution
Supervisor: CHEN Qifeng / CSE
Student: WU Chi-hsuan / DSCT
Course: UROP1100, Summer

Low-resolution (LR) textual images are often produced by file compression. Recognizing the blurry text in LR images is challenging, especially when neighboring text cannot provide enough context. Super-resolution (SR) techniques were introduced to recover a high-resolution (HR) image from an LR image. Although SR techniques have been developed for years, research on SR for documentary textual images remains insufficient. In this report, we create a documentary textual image dataset and evaluate the effect of SR on these images with a scene text recognition (STR) model. Our goal is to assess the effectiveness of a scene text SR model on documentary textual images and to provide a reference for future textual image SR studies.
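For the background-removal project above (LI Kwan To), the abstract describes a U-Net variant that takes the current frame together with a hint background image. Below is a minimal sketch of such a two-input network, not the project's actual architecture: the channel widths, depth, and the simple concatenation of frame and hint background are illustrative assumptions.

```python
# A minimal sketch (assumed architecture): a small U-Net-style network whose
# input is the frame and a hint background image concatenated into 6 channels,
# and whose output is a per-pixel foreground logit map.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class HintUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(6, 32)      # frame (3) + hint background (3)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)    # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)     # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, 1, 1)    # 1-channel foreground logits

    def forward(self, frame, hint_bg):
        x = torch.cat([frame, hint_bg], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits; e.g. train with BCEWithLogitsLoss

# Usage: mask_logits = HintUNet()(frame, hint_bg)
# where frame and hint_bg are (N, 3, H, W) tensors with H, W divisible by 4.
```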
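For the HDR synthesis project above (ZHOU Ji), a loss commonly used when training deep HDR reconstruction networks is an L1 distance computed in a tone-mapped domain, for example the mu-law compressor popularized by Kalantari and Ramamoorthi's work on deep HDR imaging of dynamic scenes. The sketch below only illustrates that general idea; the project's actual objective is not specified in the abstract.

```python
# A minimal sketch of a tone-mapped reconstruction loss for HDR prediction.
# The mu value of 5000 follows common practice for linear radiance in [0, 1];
# it is an assumption here, not a value taken from this project.
import torch

def mu_law(x: torch.Tensor, mu: float = 5000.0) -> torch.Tensor:
    """Compress linear HDR values into a perceptually more uniform range."""
    return torch.log1p(mu * x) / torch.log1p(torch.tensor(mu))

def tonemapped_l1(pred_hdr: torch.Tensor, gt_hdr: torch.Tensor) -> torch.Tensor:
    """L1 loss computed in the tone-mapped domain rather than on linear radiance."""
    return torch.abs(mu_law(pred_hdr) - mu_law(gt_hdr)).mean()
```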
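For the documentary textual image SR project above (WU Chi-hsuan), the evaluation idea is to compare text recognition accuracy with and without super-resolution. The sketch below shows one plausible way to wire that up; sr_model and str_model are assumed, hypothetical callables (e.g. a scene text SR network and an off-the-shelf recognizer), and the crop/label pairing is illustrative rather than the authors' actual protocol.

```python
# A minimal sketch, not the authors' exact pipeline: score SR quality by the
# recognition accuracy of an STR model on super-resolved versus raw LR crops.
import torch
import torch.nn.functional as F

def downsample(hr: torch.Tensor, scale: int = 2) -> torch.Tensor:
    """Create an LR crop from an HR crop with bicubic downsampling."""
    return F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)

@torch.no_grad()
def str_accuracy(crops, labels, sr_model=None, str_model=None):
    """Fraction of text crops whose transcription matches the ground truth.

    crops:     list of (1, C, H, W) image tensors (LR text crops)
    labels:    list of ground-truth strings
    sr_model:  optional callable, LR image -> SR image
    str_model: callable, image -> predicted string
    """
    correct = 0
    for crop, label in zip(crops, labels):
        img = sr_model(crop) if sr_model is not None else crop
        pred = str_model(img)
        correct += int(pred.strip().lower() == label.strip().lower())
    return correct / max(len(labels), 1)

# Usage (hypothetical models):
#   base = str_accuracy(lr_crops, labels, sr_model=None, str_model=recognizer)
#   sr   = str_accuracy(lr_crops, labels, sr_model=text_sr_net, str_model=recognizer)
#   print(f"STR accuracy without SR: {base:.3f}, with SR: {sr:.3f}")
```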

RkJQdWJsaXNoZXIy NDk5Njg=