BibSLEIGH
Used together with: learn (11), bound (8), bandit (8), no (7), minim (7)

Stem regret$ (all stems)

38 papers:

VLDB-2015-AslayLB0L #social
Viral Marketing Meets Social Advertising: Ad Allocation with Minimum Regret (ÇA, WL, FB, AG, LVSL), pp. 822–833.
VLDB-2015-FaulknerBL #query
k-Regret Queries with Nonlinear Utilities (TKF, WB, AL), pp. 2098–2109.
STOC-2015-ZhuLO #matrix #multi
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates (ZAZ, ZL, LO), pp. 237–245.
ICML-2015-Bou-AmmarTE #learning #policy #sublinear
Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret (HBA, RT, EE), pp. 2361–2369.
ICML-2015-CarpentierV #infinity
Simple regret for infinitely many armed bandits (AC, MV), pp. 1133–1141.
ICML-2015-HugginsT
Risk and Regret of Hierarchical Bayesian Learners (JH, JT), pp. 1442–1451.
ICML-2015-KomiyamaHN #analysis #multi #probability #problem
Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays (JK, JH, HN), pp. 1152–1161.
ICML-2015-LakshmananOR #bound #learning
Improved Regret Bounds for Undiscounted Continuous Reinforcement Learning (KL, RO, DR), pp. 524–532.
VLDB-2014-ChesterTVW #set
Computing k-Regret Minimizing Sets (SC, AT, SV, SW), pp. 389–400.
STOC-2014-DekelDKP
Bandits with switching costs: T^{2/3} regret (OD, JD, TK, YP), pp. 459–467.
STOC-2014-FriggstadS #algorithm #approximate #bound
Approximation algorithms for regret-bounded vehicle routing and applications to distance-constrained vehicle routing (ZF, CS), pp. 744–753.
ICML-c1-2014-CombesP #algorithm #bound
Unimodal Bandits: Regret Lower Bounds and Optimal Algorithms (RC, AP), pp. 521–529.
ICML-c2-2014-DworkinKN
Pursuit-Evasion Without Regret, with an Application to Trading (LD, MK, YN), pp. 1521–1529.
ICML-c2-2014-KricheneDB #convergence #learning #on the
On the convergence of no-regret learning in selfish routing (WK, BD, AMB), pp. 163–171.
CHI-2013-SleeperCKUACS #quote #twitter
“I read my Twitter the next morning and was astonished”: a conversational perspective on Twitter regrets (MS, JC, PGK, BU, AA, LFC, NMS), pp. 3277–3286.
ICML-c1-2013-HallW #modelling #online #programming
Dynamical Models and tracking regret in online convex programming (ECH, RW), pp. 579–587.
ICML-c1-2013-MaillardNOR #bound #learning #representation
Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning (OAM, PN, RO, DR), pp. 543–551.
CAV-2013-EssenJ #program repair
Program Repair without Regret (CvE, BJ), pp. 896–911.
SIGMOD-2012-NanongkaiLSM #interactive
Interactive regret minimization (DN, AL, ADS, KM), pp. 109–120.
ICML-2012-BowlingZ #on the
On Local Regret (MB, MZ), p. 56.
ICML-2012-DekelTA #adaptation #learning #online #policy
Online Bandit Learning against an Adaptive Adversary: from Regret to Policy Regret (OD, AT, RA), p. 227.
ICML-2012-FreitasSZ #bound #exponential #process
Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations (NdF, AJS, MZ), p. 125.
ICML-2012-LanctotGBB #game studies #learning
No-Regret Learning in Extensive-Form Games with Imperfect Recall (ML, RGG, NB, MB), p. 135.
KDD-2012-OuyangG #adaptation #named #performance
NASA: achieving lower regrets and faster rates via adaptive stepsizes (HO, AGG), pp. 159–167.
ICML-2011-Scott #bound #classification
Surrogate losses and regret bounds for cost-sensitive classification with example-dependent costs (CS), pp. 153–160.
VLDB-2010-NanongkaiSLLX #database
Regret-Minimizing Representative Databases (DN, ADS, AL, RJL, J(X)), pp. 1114–1124.
ICML-2010-SrinivasKKS #design #optimisation #process
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design (NS, AK, SK, MWS), pp. 1015–1022.
STOC-2009-Even-DarMN #convergence #game studies #on the
On the convergence of regret minimization dynamics in concave games (EED, YM, UN), pp. 523–532.
STOC-2009-KleinbergPT #game studies #learning #multi
Multiplicative updates outperform generic no-regret learning in congestion games: extended abstract (RK, GP, ÉT), pp. 533–542.
ICML-2009-ReidW #bound
Surrogate regret bounds for proper losses (MDR, RCW), pp. 897–904.
KDD-2009-Delage #library #online #ranking
Regret-based online ranking for a growing digital library (ED), pp. 229–238.
MLDM-2009-Calliess #on the
On Fixed Convex Combinations of No-Regret Learners (JPC), pp. 494–504.
RecSys-2009-ViappianiB #recommendation #set
Regret-based optimal recommendation sets in conversational recommender systems (PV, CB), pp. 101–108.
STOC-2008-BlumHLR
Regret minimization and the price of total anarchy (AB, MH, KL, AR), pp. 373–382.
ICML-2008-GordonGM #game studies #learning
No-regret learning in convex games (GJG, AG, CM), pp. 360–367.
ICML-2005-ChangK #learning
Hedged learning: regret-minimization with learning experts (YHC, LPK), pp. 121–128.
ICML-2001-JafariGGE #equilibrium #game studies #learning #nash #on the
On No-Regret Learning, Fictitious Play, and Nash Equilibrium (AJ, AG, DG, GE), pp. 226–233.
ICML-1998-Cesa-BianchiF #bound #finite #multi #problem
Finite-Time Regret Bounds for the Multiarmed Bandit Problem (NCB, PF), pp. 100–108.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.