Giraffe: Using Deep Reinforcement Learning to Play Chess

Type: Preprint

Publication Date: 2015-01-01

Citations: 68

DOI: https://doi.org/10.48550/arxiv.1509.01549

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Deep Pepper: Expert Iteration based Chess agent in the Reinforcement Learning Setting (2018) - G. Vijay Krishna, Kyle Goyette, Ahmad Chamseddine, Breandan Considine
  • DeepChess: End-to-End Deep Neural Network for Automatic Learning in Chess (2016) - Omid E. David, Nathan S. Netanyahu, Lior Wolf
  • Learning to Play 7 Wonders Duel Without Human Supervision (2024) - Giovanni Paolini, Lorenzo Moreschini, Francesco Veneziano, Alessandro Iraci
  • The Game of Tetris in Machine Learning (2019) - Simón Algorta, Özgür Şimşek
  • FlapAI Bird: Training an Agent to Play Flappy Bird Using Reinforcement Learning Techniques (2020) - Tai Anh Vu, Leon King Tran
  • KnightCap: A chess program that learns by combining TD(lambda) with game-tree search (1999) - Jonathan Baxter, Andrew Tridgell, Lex Weaver
  • Self-Play Learning Without a Reward Metric (2019) - Dan Schmidt, Nick Moran, Jonathan S. Rosenfeld, Jonathan Rosenthal, Jonathan S. Yedidia
  • Thinking Fast and Slow with Deep Learning and Tree Search (2017) - Thomas Anthony, Tian Zheng, David Barber
  • Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm (2017) - David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel
  • Neural Networks for Chess (2022) - Dominik Klein
  • Grandmaster-Level Chess Without Search (2024) - Anian Ruoss, Grégoire Delétang, Sourabh Medapati, Jordi Grau-Moya, Li Kevin Wenliang, Elliot Catt, John Reid, Tim Genewein
  • Creating Pro-Level AI for a Real-Time Fighting Game Using Deep Reinforcement Learning (2019) - In-Seok Oh, Seungeun Rho, Sangbin Moon, Seong‐Ho Son, Hyoil Lee, Jinyun Chung
  • Learning to Play the Chess Variant Crazyhouse Above World Champion Level With Deep Neural Networks and Human Data (2020) - Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz
  • Mastering Chinese Chess AI (Xiangqi) Without Search (2024) - Yu Chen, Juntong Lin, Zhou Shu
  • Neural Auto-Curricula (2021) - Xidong Feng, Oliver Slumbers, Ziyu Wan, Bo Liu, Stephen McAleer, Ying Wen, Jun Wang, Yaodong Yang
  • Checkmating One, by Using Many: Combining Mixture of Experts with MCTS to Improve in Chess (2024) - Felix Helfenstein, Jannis Blüml, Johannes Czech, Kristian Kersting
  • Towards Playing Full MOBA Games with Deep Reinforcement Learning (2020) - Deheng Ye, Gui-Bin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu

Works Cited by This (1)

  • Do Deep Nets Really Need to be Deep? (2013) - Jimmy Ba, Rich Caruana