Travelled to:
1 × Australia
1 × Canada
1 × China
1 × Finland
1 × Germany
1 × Slovenia
2 × France
8 × USA
Collaborated with:
A.Beygelzimer B.Zadrozny S.Kakade S.Dasgupta L.Li N.Abe A.L.Strehl M.Kääriäinen A.Banerjee ∅ A.Agarwal M.Dudík R.Salakhutdinov T.Zhang J.Wortman M.Balcan L.v.Ahn N.J.Hopper M.J.Kearns M.Zinkevich M.W.Seeger N.Megiddo S.Thrun D.Fox J.O'Sullivan R.Caruana A.Blum K.Chang A.Krishnamurthy H.D.III K.Q.Weinberger A.Dasgupta A.J.Smola J.Attenberg E.Wiewiora M.L.Littman V.Dani T.P.Hayes D.Hsu S.Kale R.E.Schapire
Talks about:
learn (16) model (5) activ (4) reinforc (3) explor (3) bound (3) algorithm (2) approxim (2) summari (2) perform (2)
Person: John Langford
DBLP: Langford:John
Contributed to:
Wrote 27 papers:
- ICML-2015-ChangKADL #education #learning
- Learning to Search Better than Your Teacher (KWC, AK, AA, HDI, JL), pp. 2058–2066.
- ICML-c2-2014-AgarwalHKLLS #algorithm #performance
- Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits (AA, DH, SK, JL, LL, RES), pp. 1638–1646.
- ICML-2011-DudikLL #evaluation #learning #policy #robust
- Doubly Robust Policy Evaluation and Learning (MD, JL, LL), pp. 1097–1104.
- ICML-2009-BeygelzimerDL #learning
- Importance weighted active learning (AB, SD, JL), pp. 49–56.
- ICML-2009-BeygelzimerLZ #machine learning #reduction #summary #tutorial
- Tutorial summary: Reductions in machine learning (AB, JL, BZ), p. 12.
- ICML-2009-DasguptaL #learning #summary #tutorial
- Tutorial summary: Active learning (SD, JL), p. 18.
- ICML-2009-LangfordSZ #learning #modelling
- Learning nonlinear dynamic models (JL, RS, TZ), pp. 593–600.
- ICML-2009-WeinbergerDLSA #learning #multi #scalability
- Feature hashing for large scale multitask learning (KQW, AD, JL, AJS, JA), pp. 1113–1120.
- KDD-2009-BeygelzimerL #learning
- The offset tree for learning with partial labels (AB, JL), pp. 129–138.
- ICML-2008-LangfordSW
- Exploration scavenging (JL, ALS, JW), pp. 528–535.
- ICML-2006-BalcanBL #learning
- Agnostic active learning (MFB, AB, JL), pp. 65–72.
- ICML-2006-BeygelzimerKL #nearest neighbour
- Cover trees for nearest neighbor (AB, SK, JL), pp. 97–104.
- ICML-2006-StrehlLWLL #learning
- PAC model-free reinforcement learning (ALS, LL, EW, JL, MLL), pp. 881–888.
- KDD-2006-AbeZL #detection #learning
- Outlier detection by active learning (NA, BZ, JL), pp. 504–509.
- ICML-2005-BeygelzimerDHLZ #classification #fault #reduction
- Error limiting reductions between classification tasks (AB, VD, TPH, JL, BZ), pp. 49–56.
- ICML-2005-KaariainenL #bound #comparison #fault
- A comparison of tight generalization error bounds (MK, JL), pp. 409–416.
- ICML-2005-LangfordZ #classification #learning #performance
- Relating reinforcement learning performance to classification performance (JL, BZ), pp. 473–480.
- STOC-2005-AhnHL
- Covert two-party computation (LvA, NJH, JL), pp. 513–522.
- KDD-2004-AbeZL #learning #multi
- An iterative method for multi-class cost-sensitive learning (NA, BZ, JL), pp. 3–11.
- KDD-2004-BanerjeeL #clustering #evaluation
- An objective evaluation criterion for clustering (AB, JL), pp. 515–520.
- ICML-2003-KakadeKL #metric
- Exploration in Metric State Spaces (SK, MJK, JL), pp. 306–312.
- ICML-2002-KakadeL #approximate #learning
- Approximately Optimal Approximate Reinforcement Learning (SK, JL), pp. 267–274.
- ICML-2002-Langford #bound #testing
- Combining Training Set and Test Set Bounds (JL), pp. 331–338.
- ICML-2002-LangfordZK #analysis #trade-off
- Competitive Analysis of the Explore/Exploit Tradeoff (JL, MZ, SK), pp. 339–346.
- ICML-2001-LangfordSM #bound #classification #predict
- An Improved Predictive Accuracy Bound for Averaging Classifiers (JL, MWS, NM), pp. 290–297.
- ICML-2000-OSullivanLCB #algorithm #named #robust
- FeatureBoost: A Meta-Learning Algorithm that Improves Model Robustness (JO, JL, RC, AB), pp. 703–710.
- ICML-1999-ThrunLF #learning #markov #modelling #monte carlo #parametricity #probability #process
- Monte Carlo Hidden Markov Models: Learning Non-Parametric Models of Partially Observable Stochastic Processes (ST, JL, DF), pp. 415–424.