Atari Games On Atari 2600 Bowling
Evaluation Metric
Score
Evaluation Results
Performance of each model on this benchmark. A minimal sketch of how this score is typically measured follows the results table.
Model Name | Score | Paper Title | Repository
Advantage Learning | 57.41 | Increasing the Action Gap: New Operators for Reinforcement Learning | -
GDI-I3 | 201.9 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | -
DDQN (tuned) hs | 69.6 | Deep Reinforcement Learning with Double Q-learning | -
DNA | 181 | DNA: Proximal Policy Optimization with a Dual Network Architecture | -
A3C LSTM hs | 41.8 | Asynchronous Methods for Deep Reinforcement Learning | -
CGP | 85.8 | Evolving simple programs for playing Atari games | -
GDI-H3 | 205.2 | Generalized Data Distribution Iteration | -
IMPALA (deep) | 59.92 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | -
Duel noop | 65.5 | Dueling Network Architectures for Deep Reinforcement Learning | -
ASL DDQN | 62.4 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | -
DQN noop | 50.4 | Deep Reinforcement Learning with Double Q-learning | -
Ape-X | 17.6 | Distributed Prioritized Experience Replay | -
IQN | 86.5 | Implicit Quantile Networks for Distributional Reinforcement Learning | -
Duel hs | 65.7 | Dueling Network Architectures for Deep Reinforcement Learning | -
QR-DQN-1 | 77.2 | Distributional Reinforcement Learning with Quantile Regression | -
A3C FF hs | 35.1 | Asynchronous Methods for Deep Reinforcement Learning | -
DDQN (tuned) noop | 68.1 | Dueling Network Architectures for Deep Reinforcement Learning | -
RUDDER | 179 | RUDDER: Return Decomposition for Delayed Rewards | -
Persistent AL | 71.59 | Increasing the Action Gap: New Operators for Reinforcement Learning | -
Gorila | 54 | Massively Parallel Methods for Deep Reinforcement Learning | -
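The Score column reports the raw game score an agent achieves in Atari 2600 Bowling, averaged over evaluation episodes; the "noop" and "hs" suffixes on model names generally denote the 30-no-op-starts and human-starts evaluation protocols used in the DQN and Double DQN papers. The snippet below is a minimal sketch of such an evaluation, assuming the Gymnasium + ale-py stack with Atari ROMs installed; the random `policy` is a placeholder to be swapped for a trained agent, and it does not reproduce any specific paper's protocol.

```python
# Minimal sketch: estimating the Score metric on Atari 2600 Bowling.
# Assumes `gymnasium` and `ale-py` are installed and Atari ROMs are available.
import ale_py
import gymnasium as gym
import numpy as np

gym.register_envs(ale_py)  # registers ALE/* environments on recent gymnasium/ale-py versions


def evaluate(policy, n_episodes: int = 30, seed: int = 0) -> float:
    """Return the mean undiscounted episode score over `n_episodes`."""
    env = gym.make("ALE/Bowling-v5")
    scores = []
    for ep in range(n_episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, score = False, 0.0
        while not done:
            action = policy(obs, env.action_space)
            obs, reward, terminated, truncated, _ = env.step(action)
            score += reward  # raw game reward, i.e. bowling score increments
            done = terminated or truncated
        scores.append(score)
    env.close()
    return float(np.mean(scores))


if __name__ == "__main__":
    # Placeholder policy: uniformly random actions; replace with a trained agent.
    mean_score = evaluate(lambda obs, space: space.sample())
    print(f"Mean Bowling score over 30 episodes: {mean_score:.1f}")
```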
The table above lists 20 of the 44 results reported for this benchmark.