3. LADN
3.1. Problem Formulation
Define two domains: X ⊂ R^{H×W×3} for before-makeup faces and Y ⊂ R^{H×W×3} for after-makeup faces.
The dataset consists of {x_i}_{i=1,…,M} with x_i ∈ X and {y_j}_{j=1,…,N} with y_j ∈ Y.
Goal: learn a makeup-transfer mapping Φ_Y : (x_i, y_j) → ỹ_i and a makeup-removal mapping Φ_X : y_j → x̃_j.
Note that makeup transfer requires a reference image as a condition, whereas makeup removal does not.
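The asymmetry between the two mappings can be made concrete with a toy sketch (not the paper's code): here an image is modeled as an (identity, makeup_style) pair, so Φ_Y needs a reference to copy a style from, while Φ_X only needs its input. All names are illustrative assumptions.

```python
# Toy sketch of the two mapping interfaces; an image is a pair
# (identity, makeup_style), and "no_makeup" marks a bare face.

NO_MAKEUP = "no_makeup"

def phi_Y(x, y_ref):
    """Makeup transfer: keep x's identity, take the reference's style."""
    identity, _ = x
    _, ref_style = y_ref
    return (identity, ref_style)

def phi_X(y):
    """Makeup removal: keep the identity, strip the style (no condition)."""
    identity, _ = y
    return (identity, NO_MAKEUP)

x_i = ("alice", NO_MAKEUP)
y_j = ("bella", "red_lips")

print(phi_Y(x_i, y_j))  # ('alice', 'red_lips')
print(phi_X(y_j))       # ('bella', 'no_makeup')
```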
3.2. Network Architecture
Core idea: separate the makeup-style latent variable from non-makeup features (identity, facial structure, head pose, etc.) and generate new images by recombining these latent variables.
To this end, the paper builds on DRIT, a disentanglement framework.

Definitions: an attribute space A that captures the makeup-style latent, and a content space S that contains the non-makeup features.
Networks: for the two domains, content encoders {E^c_X, E^c_Y}, style encoders {E^a_X, E^a_Y} (the superscript a marks them as attribute encoders), and generators {G_X, G_Y}.
The encoders extract attribute and content features from x_i and y_j:
E^a_X(x_i) = A_i,  E^a_Y(y_j) = A_j,  E^c_X(x_i) = C_i,  E^c_Y(y_j) = C_j
These are then fed to the generators to produce the de-makeup result x̃_j and the makeup-transfer result ỹ_i:
G_X(A_i, C_j) = x̃_j,  G_Y(A_j, C_i) = ỹ_i    (1)
The encoders and decoders follow a U-Net structure. The latent variables A and C are concatenated at the bottleneck, and skip connections link the content encoder to the generator. This structure helps retain more identity details from the source in the generated image.
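The data flow of this recombination path can be sketched structurally in plain Python (purely illustrative; a "feature map" is just a list of numbers, and all operations are toy stand-ins for convolutions): the content encoder returns its bottleneck code plus skip features, and the generator concatenates the attribute and content codes at the bottleneck while reusing the skips, which is how identity details bypass the bottleneck.

```python
# Structural sketch only, not the paper's network.

def content_encoder(img):
    skips = [img]                        # features kept for the skip path
    bottleneck = [v * 0.5 for v in img]  # toy "downsampling"
    return bottleneck, skips

def attribute_encoder(img):
    return [sum(img) / len(img)]         # toy global style code

def generator(a_code, c_code, skips):
    z = a_code + c_code                  # concatenation at the bottleneck
    # Skip connection: decoder output merges the encoder's early features,
    # so fine identity detail need not pass through the bottleneck.
    return [v + z[0] for v in skips[0]]

C_i, skips_i = content_encoder([1.0, 2.0])  # content/skips of source x_i
A_j = attribute_encoder([4.0, 6.0])         # style code of reference y_j
y_tilde = generator(A_j, C_i, skips_i)      # recombined "transfer result"
print(y_tilde)  # [6.0, 7.0]
```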
For the two domains there are two discriminators {D_X, D_Y}, giving the adversarial loss L^domain_adv = L^X_adv + L^Y_adv:
L^X_adv = E_{x∼P_X}[log D_X(x)] + E_{x̃∼G_X}[log(1 − D_X(x̃))]
L^Y_adv = E_{y∼P_Y}[log D_Y(y)] + E_{ỹ∼G_Y}[log(1 − D_Y(ỹ))]    (2)
3.3. Local Style Discriminator
For each unpaired sample (x_i, y_j), a synthetic ground truth W(x_i, y_j) is generated by warping and blending y_j onto x_i according to their facial landmarks.
Of course, the warped result contains artifacts, and the network is relied upon to fix them.
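Only the blending half of this step is easy to show compactly; the sketch below assumes the landmark-based warp has already aligned y_j to x_i and applies a per-pixel soft mask over the makeup region (grayscale floats; all names and values are illustrative).

```python
# Toy alpha-blend of an aligned after-makeup row onto a before-makeup row.

def blend(x_row, warped_y_row, mask_row):
    """mask m = 1 inside the makeup region, 0 outside, soft at the edge."""
    return [m * wy + (1 - m) * x
            for x, wy, m in zip(x_row, warped_y_row, mask_row)]

x_row  = [0.2, 0.2, 0.2]   # before-makeup pixels
wy_row = [0.8, 0.8, 0.8]   # warped after-makeup pixels
mask   = [0.0, 0.5, 1.0]   # outside, boundary, inside the makeup region

print(blend(x_row, wy_row, mask))  # [0.2, 0.5, 0.8]
```

The hard blend at a mask boundary is exactly where the visible artifacts come from, which is why the synthetic W(x_i, y_j) is guidance rather than true ground truth.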
Note: is the cost of this generation too high? If X has 1k images and Y has 1k images, the full pairwise cross product requires synthesizing one million images.
The benefit: although the synthetic results cannot serve as the real ground truth of the final outputs, they provide guidance to the makeup-transfer network on what the generated results should look like.