Applied Sciences
Image-To-Image Translation Using a Cross-Domain Auto-Encoder and Decoder
YongSuk Choi 1, Jaechang Yoo 1, Heesong Eom 1
[1] Department of Computer Science, Hanyang University, Seoul 04763, Korea
Keywords: image-to-image translation; encoder-decoder; deep learning; feature mapping layer
DOI: 10.3390/app9224780
Source: DOAJ
Abstract
Recently, several studies have focused on image-to-image translation. However, the quality of the translation results is still lacking in certain respects. We propose a new image-to-image translation method that reduces these shortcomings using an auto-encoder and an auto-decoder. The method pre-trains two auto-encoder and decoder pairs, one for the source image domain and one for the target image domain, cross-connects the two pairs, and adds a feature mapping layer between them. Our method is simple and straightforward to adopt yet very effective in practice, and we experimentally demonstrate that it can significantly enhance the quality of image-to-image translation. We evaluated it on the well-known cityscapes, horse2zebra, cat2dog, maps, summer2winter, and night2day datasets, and it shows qualitative and quantitative improvements over existing models.
License
Unknown
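
As a rough illustration of the pipeline described in the abstract, the following PyTorch sketch pre-trains one auto-encoder/decoder pair per domain with a reconstruction loss, then cross-connects the source encoder to the target decoder through a feature mapping layer. The network sizes, the 1x1-convolution mapping layer, and the training loop are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of the cross-domain auto-encoder/decoder idea from the abstract.
# Layer sizes, the 1x1-conv feature mapping layer, and the loop are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, channels=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, channels=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, channels, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

def pretrain_autoencoder(encoder, decoder, batches, epochs=1, lr=1e-3):
    """Stage 1: pre-train one encoder/decoder pair on a single domain
    with a plain reconstruction (L1) loss."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for x in batches:
            opt.zero_grad()
            loss = loss_fn(decoder(encoder(x)), x)
            loss.backward()
            opt.step()

class CrossDomainTranslator(nn.Module):
    """Stage 2: cross-connect the source encoder and target decoder,
    with a feature mapping layer bridging the two latent spaces."""
    def __init__(self, enc_src, dec_tgt, feat=64):
        super().__init__()
        self.enc_src = enc_src
        self.dec_tgt = dec_tgt
        # A 1x1 convolution is an assumed minimal choice of mapping layer here.
        self.mapping = nn.Conv2d(feat * 2, feat * 2, kernel_size=1)
    def forward(self, x):
        return self.dec_tgt(self.mapping(self.enc_src(x)))

if __name__ == "__main__":
    enc_a, dec_a = Encoder(), Decoder()   # source domain (e.g., horse)
    enc_b, dec_b = Encoder(), Decoder()   # target domain (e.g., zebra)
    fake_a = [torch.randn(4, 3, 64, 64)]  # stand-in batches for real datasets
    fake_b = [torch.randn(4, 3, 64, 64)]
    pretrain_autoencoder(enc_a, dec_a, fake_a)
    pretrain_autoencoder(enc_b, dec_b, fake_b)
    a2b = CrossDomainTranslator(enc_a, dec_b)
    translated = a2b(fake_a[0])           # source image rendered in the target domain
    print(translated.shape)               # torch.Size([4, 3, 64, 64])

In this reading of the abstract, only the mapping layer (and optionally the cross-connected pair) would be tuned in the second stage; how much of the pre-trained pairs is fine-tuned is a design choice not specified in this excerpt.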