Image Generation on CelebA-HQ 256×256
Evaluation metric: FID (Fréchet Inception Distance, lower is better)
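FID compares the distribution of Inception-v3 features extracted from generated images with that of real CelebA-HQ images; lower values indicate samples statistically closer to the real data. The sketch below is only a reminder of the standard formula, not the exact evaluation pipeline used by the papers on this leaderboard (which may differ in feature-extractor version and sample count); the feature matrices are random placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu_r, sigma_r, mu_g, sigma_g):
    """Fréchet distance between Gaussians N(mu_r, sigma_r) and N(mu_g, sigma_g):
    ||mu_r - mu_g||^2 + Tr(sigma_r + sigma_g - 2 (sigma_r sigma_g)^(1/2))."""
    diff = mu_r - mu_g
    covmean = sqrtm(sigma_r @ sigma_g)    # matrix square root of the covariance product
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# Toy usage with random 64-dim "features"; real FID uses 2048-dim Inception-v3
# pool features of tens of thousands of real and generated 256x256 images.
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(1000, 64))
feats_gen = rng.normal(loc=0.1, size=(1000, 64))
mu_r, sigma_r = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
mu_g, sigma_g = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
print(frechet_distance(mu_r, sigma_r, mu_g, sigma_g))
```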
Evaluation results

Results of each model on this benchmark:
| Model Name | FID | Paper Title | Repository |
|---|---|---|---|
| LFM | 5.26 | Flow Matching in Latent Space | - |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder | - |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | - |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models | - |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset | - |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | - |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators | - |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models | - |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | - |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis | - |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models | - |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | - |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching | - |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation | - |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space | - |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator | - |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization | - |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models | - |
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions | - |