Travelled to:
1 × Italy
Collaborated with:
Peter I. Cowling, K. Hofmann, Nick Sephton, Daniel Kudenko, Dino Stephen Ratcliffe, Hanting Xie, A. Zolotas, N. D. Matragkas, D. S. Kolovos, R. F. Paige, Anastasija Anspoka, Jeff Rollason, V. J. Hodge, Nicholas H. Slaven, Luke Harries, Sebastian Lee, Jaroslaw Rzepecki, Daniel Hernandez, Kevin Denamganaï, Y. Gao, Peter York, Spyridon Samothrakis, James Alfred Walker, Nikolaos Goumagias, Alberto Nucciarelli, Ignazio Cabras, Kiran Jude Fernandes 0001, F. Li
Talks about:
predict (2) learn (2) play (2) data (2) represent (1) generalis (1) framework (1) benchmark (1) reinforc (1) intellig (1)
Person: Sam Devlin
DBLP: Devlin:Sam
Contributed to:
Wrote 8 papers:
- ECMFA-2015-ZolotasMDKP #flexibility #modelling #type inference
- Type Inference in Flexible Model-Driven Engineering (AZ, NDM, SD, DSK, RFP), pp. 75–91.
- CIG-2014-DevlinCKGNCFL #game studies
- Game intelligence (SD, PIC, DK, NG, AN, IC, KJF0, FL), pp. 1–8.
- CIG-2015-XieDKC #data transformation #predict #representation
- Predicting player disengagement and first purchase with event-frequency based data representation (HX, SD, DK, PIC), pp. 230–237.
- AIIDE-2016-DevlinASCR #game studies #monte carlo
- Combining Gameplay Data with Monte Carlo Tree Search to Emulate Human Play (SD, AA, NS, PIC, JR), pp. 16–22.
- CIG-2016-SephtonCDHS #android #mining #predict #using
- Using association rule mining to predict opponent deck content in Android: Netrunner (NS, PIC, SD, VJH, NHS), pp. 1–8.
- CoG-2019-HarriesLRHD #3d #benchmark #learning #metric #named
- MazeExplorer: A Customisable 3D Benchmark for Assessing Generalisation in Reinforcement Learning (LH, SL, JR, KH, SD), pp. 1–4.
- CoG-2019-HernandezDGYDSW #framework #self
- A Generalized Framework for Self-Play Training (DH, KD, YG, PY, SD, SS, JAW), pp. 1–8.
- CoG-2019-RatcliffeHD #optimisation #performance #policy
- Win or Learn Fast Proximal Policy Optimisation (DSR, KH, SD), pp. 1–4.