P Isola, JY Zhu, T Zhou, AA Efros - arXiv preprint arXiv:1611.07004, 2016 - arxiv.org

Cited by: 349  https://arxiv.org/pdf/1611.07004

PyTorch implementation: https://github.com/sunshineatnoon/Paper-Implementations

Image-to-Image Translation with Conditional Adversarial Networks_201611

Figure 1: Many problems in image processing, graphics, and vision amount to translating an input image into an output image. In every case shown here, the same architecture and objective are used; the model is simply trained on different data.

Figure 4: Different loss functions produce noticeably different outputs. With the L1 loss alone, the results preserve mostly low-frequency content (hence the blur); the cGAN term is what recovers high-frequency detail.
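The combined objective behind this comparison can be sketched as the cGAN loss plus a weighted L1 reconstruction term. Below is a minimal PyTorch sketch (not the authors' exact code); `generator_loss` and its arguments are illustrative names, and `lambda_l1=100` follows the weight reported in the paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial (cGAN) term
l1 = nn.L1Loss()              # reconstruction term

def generator_loss(fake_pred, generated, target, lambda_l1=100.0):
    # Adversarial term: push the discriminator's logits on fakes toward "real" (1).
    adv = bce(fake_pred, torch.ones_like(fake_pred))
    # L1 term: keeps low-frequency structure close to the ground truth.
    recon = l1(generated, target)
    return adv + lambda_l1 * recon
```

Dropping the `adv` term reproduces the "L1 only" column of Figure 4, while setting `lambda_l1=0` gives the "cGAN only" column.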

Figure 5: When the generator has U-Net-style skip connections, the outputs are of visibly higher quality than with a plain encoder-decoder.
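The skip connections concatenate each encoder feature map onto the matching decoder feature map, so low-level detail can bypass the bottleneck. A toy one-level sketch (much shallower than the paper's generator; `TinyUNet` is an illustrative name):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Conv2d(3, ch, 4, stride=2, padding=1)                # H -> H/2
        self.bottleneck = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec = nn.ConvTranspose2d(ch * 2, 3, 4, stride=2, padding=1)   # H/2 -> H

    def forward(self, x):
        e = torch.relu(self.enc(x))
        b = torch.relu(self.bottleneck(e))
        # Skip connection: concatenate encoder features along the channel axis,
        # doubling the decoder's input channels.
        d = self.dec(torch.cat([b, e], dim=1))
        return torch.tanh(d)
```

Without the `torch.cat`, this collapses to the plain encoder-decoder that Figure 5 compares against.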

Figure 6: Effect of varying the discriminator's patch size.
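The patch size is the receptive field of the PatchGAN discriminator: a small fully convolutional net whose output is a grid of real/fake logits, each scoring one N×N patch of the stacked (input, output) pair. A shallow sketch under assumed layer sizes (the paper's 70×70 variant is deeper; `PatchDiscriminator` is an illustrative name):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=6, ch=64):  # 6 = input and output images stacked
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1),  # per-patch logit map
        )

    def forward(self, x, y):
        # Condition on the input by channel-stacking it with the candidate output.
        return self.net(torch.cat([x, y], dim=1))
```

Adding or removing strided conv layers changes the receptive field, which is exactly the axis Figure 6 varies: from 1×1 (PixelGAN) up to full-image (ImageGAN).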

Figure 10: Applying cGANs to semantic segmentation.
