Travelled to:
1 × Canada
1 × Slovenia
7 × USA
Collaborated with:
E. Wefald, A. Y. Ng, A. Zimdars, N. C. Oza, B. N. Grosof, M. S. Braverman, D. Harada, R. Musick, J. Catlett, Y. Erol, L. Li, B. Ramsundar
Talks about:
learn (3) reinforc (2) theoret (2) reward (2) decis (2) metareason (1) experiment (1) decomposit (1) comparison (1) transform (1)
Person: Stuart J. Russell
DBLP: Russell:Stuart_J=
Facilitated 1 volume:
Contributed to:
Wrote 10 papers:
- ICML-c3-2013-ErolLRR #parametricity
  - The Extended Parameter Filter (YE, LL, BR, SJR), pp. 1103–1111.
- ICML-2003-RussellZ #learning
  - Q-Decomposition for Reinforcement Learning Agents (SJR, AZ), pp. 656–663.
- KDD-2001-OzaR #online
  - Experimental comparisons of online and batch versions of bagging and boosting (NCO, SJR), pp. 359–364.
- ICML-2000-NgR #algorithm #learning
  - Algorithms for Inverse Reinforcement Learning (AYN, SJR), pp. 663–670.
- ICML-1999-NgHR #policy #theory and practice
  - Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping (AYN, DH, SJR), pp. 278–287.
- ICML-1993-MusickCR #database #induction #scalability
  - Decision Theoretic Subsampling for Induction on Large Databases (RM, JC, SJR), pp. 212–219.
- KR-1989-RussellW
  - Principles of Metareasoning (SJR, EW), pp. 400–411.
- ML-1989-GrosofR #bias #declarative
  - Declarative Bias for Structural Domains (BNG, SJR), pp. 480–482.
- ML-1989-WefaldR #adaptation #learning
  - Adaptive Learning of Decision-Theoretic Search Control Knowledge (EW, SJR), pp. 408–411.
- ML-1988-BravermanR #bound
  - Boundaries of Operationality (MSB, SJR), pp. 221–234.