Travelled to:
1 × Australia
1 × Finland
1 × Germany
1 × Italy
1 × United Kingdom
7 × USA
Collaborated with:
∅ C.Reddy A.Fern T.G.Dietterich S.Natarajan S.Ray M.Chisholm S.Seri D.Ok S.Mahadevan B.K.Natarajan J.R.Doppa C.Parker S.Roncagliolo N.Mehta A.Wilson E.Altendorf A.C.Restificar
Talks about:
learn (15) reinforc (5) rule (4) structur (3) hierarch (3) theori (3) model (3) decomposit (2) approxim (2) search (2)
Person: Prasad Tadepalli
DBLP: Tadepalli:Prasad
Contributed to:
Wrote 18 papers:
- ICML-2012-DoppaFT #predict
- Output Space Search for Structured Prediction (JRD, AF, PT), p. 107.
- ICML-2008-MehtaRTD #automation
- Automatic discovery and transfer of MAXQ hierarchies (NM, SR, PT, TGD), pp. 648–655.
- ICML-2007-ParkerFT #learning #performance #query #retrieval
- Learning for efficient retrieval of structured data with noisy queries (CP, AF, PT), pp. 729–736.
- ICML-2007-WilsonFRT #approach #learning #multi
- Multi-task reinforcement learning: a hierarchical Bayesian approach (AW, AF, SR, PT), pp. 1015–1022.
- ICML-2005-NatarajanT #learning #multi
- Dynamic preferences in multi-criteria reinforcement learning (SN, PT), pp. 601–608.
- ICML-2005-NatarajanTADFR #first-order #learning #modelling #probability
- Learning first-order probabilistic models with combining rules (SN, PT, EA, TGD, AF, ACR), pp. 609–616.
- ICML-2002-ChisholmT #learning #random
- Learning Decision Rules by Randomized Iterative Local Search (MC, PT), pp. 75–82.
- ICML-2002-SeriT #learning #modelling
- Model-based Hierarchical Average-reward Reinforcement Learning (SS, PT), pp. 562–569.
- ICML-1998-ReddyT #first-order #learning #source code
- Learning First-Order Acyclic Horn Programs from Entailment (CR, PT), pp. 472–480.
- ICML-1997-ReddyT #learning #using
- Learning Goal-Decomposition Rules using Exercises (CR, PT), pp. 278–286.
- ICML-1997-TadepalliD #learning
- Hierarchical Explanation-Based Reinforcement Learning (PT, TGD), pp. 358–366.
- ICML-1996-ReddyTR #composition #empirical #learning
- Theory-guided Empirical Speedup Learning of Goal Decomposition Rules (CR, PT, SR), pp. 409–417.
- ICML-1996-TadepalliO #approximate #domain model #learning #modelling #scalability
- Scaling Up Average Reward Reinforcement Learning by Approximating the Domain Models and the Value Function (PT, DO), pp. 471–479.
- ICML-1993-Tadepalli #bias #learning #query
- Learning from Queries and Examples with Tree-structured Bias (PT), pp. 322–329.
- ML-1991-Tadepalli #learning
- Learning with Inscrutable Theories (PT), pp. 544–548.
- ML-1989-Tadepalli #approximate
- Planning Approximate Plans for Use in the Real World (PT), pp. 224–228.
- ML-1988-MahadevanT #learning #on the
- On the Tractability of Learning from Incomplete Theories (SM, PT), pp. 235–241.
- ML-1988-NatarajanT #framework #learning
- Two New Frameworks for Learning (BKN, PT), pp. 402–415.