Journal article details
Computational Visual Media
Co-occurrence based texture synthesis
Hadar Averbuch-Elor [1], Anna Darzi [2], Ashutosh Taklikar [2], Shai Avidan [2], Itai Lang [2]
[1] Cornell Tech, Cornell University, NYC, NY 10044, USA;
[2] Tel Aviv University, Tel Aviv 6997801, Israel;
Keywords: co-occurrence; texture synthesis; deep learning; generative adversarial networks (GANs)
DOI: 10.1007/s41095-021-0243-7
Source: Springer
【 Abstract 】

As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive, and interpretable latent representation for texture synthesis, which can be used to generate smooth texture morphs between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture by directly using the co-occurrence values.
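To illustrate the differentiable co-occurrence loss mentioned in the abstract, the following is a minimal PyTorch-style sketch, not the authors' implementation: grayscale values are soft-assigned to intensity bins with a Gaussian kernel, co-occurring bin pairs are accumulated over a single neighbourhood offset, and generated and target co-occurrence matrices are compared with an L1 distance. The function names (soft_cooccurrence, cooccurrence_loss), the bin count, kernel width, and offset are illustrative assumptions.

# Minimal sketch of a differentiable co-occurrence statistic and loss
# (assumed soft Gaussian binning and a single pixel offset; not the paper's exact code).
import torch
import torch.nn.functional as F

def soft_cooccurrence(img, n_bins=32, sigma=0.02, offset=(0, 1)):
    """img: (B, 1, H, W) tensor with values in [0, 1].
    Returns a (B, n_bins, n_bins) normalised soft co-occurrence matrix."""
    B, _, H, W = img.shape
    centers = torch.linspace(0.0, 1.0, n_bins, device=img.device)
    # Soft assignment of each pixel to every intensity bin.
    diff = img.unsqueeze(-1) - centers                    # (B, 1, H, W, n_bins)
    w = torch.exp(-0.5 * (diff / sigma) ** 2)
    w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)
    dy, dx = offset
    a = w[:, 0, :H - dy, :W - dx, :]                      # reference pixels
    b = w[:, 0, dy:, dx:, :]                              # neighbours at the given offset
    M = torch.einsum('bhwi,bhwj->bij', a, b)              # accumulate co-occurring bin pairs
    return M / (M.sum(dim=(1, 2), keepdim=True) + 1e-8)   # normalise to a distribution

def cooccurrence_loss(fake, real):
    """L1 distance between co-occurrence matrices of generated and target patches."""
    return F.l1_loss(soft_cooccurrence(fake), soft_cooccurrence(real))

In a framework of this kind, such a loss term would be added to the adversarial objective so that the generator is penalised when the synthesized texture's co-occurrence statistics drift from the input condition.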

【 License 】

CC BY   

【 Preview 】
Attachments
File                        Size       Format
RO202203043645054ZK.pdf     17440 KB   PDF