StarGAN v2 - Official PyTorch Implementation

StarGAN v2: Diverse Image Synthesis for Multiple Domains
Yunjey Choi*, Youngjung Uh*, Jaejun Yoo*, Jung-Woo Ha
(* indicates equal contribution)

Abstract: A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain variations. The code, pre-trained models, and dataset are available at clovaai/stargan-v2.

Teaser video
Click the figure to watch the teaser video.

The TensorFlow implementation of StarGAN v2 by our team member junho can be found at clovaai/stargan-v2-tensorflow.

Evaluation commands:

    python main.py --mode eval --num_domains 2 --w_hpf 1 \
                   --checkpoint_dir expr/checkpoints/celeba_hq \

    python main.py --mode eval --num_domains 3 --w_hpf 0 \

The following table shows the calculated values for both latent-guided and reference-guided synthesis. In the paper, we reported the average of values from 10 measurements using different seed numbers. Note that the evaluation metrics are calculated using random latent vectors or reference images, both of which are selected by the seed number.
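The seed averaging described above can be sketched as a small Python helper. This is a hypothetical illustration, not part of the repository: the function name `summarize_over_seeds` and the numbers are made up, standing in for metric values produced by separate evaluation runs with different seeds.

```python
import statistics

# Hypothetical helper (not from the StarGAN v2 codebase): reduce metric
# values measured under different random seeds to a mean and spread.
def summarize_over_seeds(metric_values):
    mean = statistics.mean(metric_values)
    # Sample standard deviation needs at least two measurements.
    stdev = statistics.stdev(metric_values) if len(metric_values) > 1 else 0.0
    return mean, stdev

# Example with three illustrative (made-up) metric values.
mean, stdev = summarize_over_seeds([13.7, 14.1, 13.9])
print(round(mean, 2), round(stdev, 2))  # → 13.9 0.2
```

Reporting the mean over many seeds matters here because both the random latent vectors and the sampled reference images depend on the seed, so a single run can be noticeably lucky or unlucky.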