Proceedings of the 36th International Conference on Machine Learning

Kamalika Chaudhuri, Ruslan Salakhutdinov
ICML, 2019.

@proceedings{ICML-2019,
	editor        = "Kamalika Chaudhuri and Ruslan Salakhutdinov",
	ee            = "http://proceedings.mlr.press/v97/",
	publisher     = "{PMLR}",
	series        = "{Proceedings of Machine Learning Research}",
	title         = "{Proceedings of the 36th International Conference on Machine Learning}",
	volume        = 97,
	year          = 2019,
}

Contents (774 items)

ICML-2019-AbbatiWO0SB
AReS and MaRS Adversarial and MMD-Minimizing Regression for SDEs (GA, PW, MAO, AK0, BS, SB), pp. 1–10.
ICML-2019-AbelsRLNS #learning #multi
Dynamic Weights in Multi-Objective Deep Reinforcement Learning (AA, DMR, TL, AN, DS), pp. 11–20.
ICML-2019-Abu-El-HaijaPKA #architecture #graph #higher-order #named
MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing (SAEH, BP, AK, NA, KL, HH, GVS, AG), pp. 21–29.
ICML-2019-AcharyaCT
Communication-Constrained Inference and the Role of Shared Randomness (JA, CLC, HT), pp. 30–39.
ICML-2019-AcharyaSFS #communication #distributed #learning #sublinear
Distributed Learning with Sublinear Communication (JA, CDS, DJF, KS), pp. 40–50.
ICML-2019-AcharyaS #communication #complexity #estimation
Communication Complexity in Locally Private Distribution Estimation and Heavy Hitters (JA, ZS), pp. 51–60.
ICML-2019-AdamsJWS #fault #learning #metric #modelling
Learning Models from Data with Measurement Error: Tackling Underreporting (RA, YJ, XW, SS), pp. 61–70.
ICML-2019-AdelW #approach #learning #named #visual notation
TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning (TA, AW), pp. 71–81.
ICML-2019-AdigaKMRV
PAC Learnability of Node Functions in Networked Dynamical Systems (AA, CJK, MM, SSR, AV), pp. 82–91.
ICML-2019-Agarwal #automation
Static Automatic Batching In TensorFlow (AA), pp. 92–101.
ICML-2019-AgarwalBCHSZZ #adaptation #performance
Efficient Full-Matrix Adaptive Regularization (NA, BB, XC, EH, KS, CZ, YZ), pp. 102–110.
ICML-2019-AgarwalBHKS #online
Online Control with Adversarial Disturbances (NA, BB, EH, SMK, KS), pp. 111–119.
ICML-2019-AgarwalDW #algorithm
Fair Regression: Quantitative Definitions and Reduction-Based Algorithms (AA, MD, ZSW), pp. 120–129.
ICML-2019-AgarwalLS0 #learning
Learning to Generalize from Sparse and Underspecified Rewards (RA, CL, DS, MN0), pp. 130–140.
ICML-2019-AgrawalTHB #interactive #kernel #performance
The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions (RA, BLT, JHH, TB), pp. 141–150.
ICML-2019-AhmedR0S #comprehension #optimisation #policy
Understanding the Impact of Entropy on Policy Optimization (ZA, NLR, MN0, DS), pp. 151–160.
ICML-2019-AivodjiAFGHT #named
Fairwashing: the risk of rationalization (UA, HA, OF, SG, SH, AT), pp. 161–170.
ICML-2019-AkimotoSYUSN #adaptation #architecture #probability
Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search (YA, SS, NY, KU, SS, KN), pp. 171–180.
ICML-2019-AkrourP0N #algorithm #approximate #policy
Projections for Approximate Policy Iteration Algorithms (RA, JP, JP0, GN), pp. 181–190.
ICML-2019-AlaaS #modelling #validation
Validating Causal Inference Models via Influence Functions (AMA, MvdS), pp. 191–201.
ICML-2019-AlbuquerqueMDCF #generative #multi #network
Multi-objective training of Generative Adversarial Networks with multiple discriminators (IA, JM, TD, BC, THF, IM), pp. 202–211.
ICML-2019-AletJVRLK #adaptation #graph #memory management #network
Graph Element Networks: adaptive, structured computation and memory (FA, AKJ, MBV, AR, TLP, LPK), pp. 212–222.
ICML-2019-AllenH #comprehension #towards #word
Analogies Explained: Towards Understanding Word Embeddings (CA, TMH), pp. 223–231.
ICML-2019-AllenSST #infinity #learning #prototype
Infinite Mixture Prototypes for Few-shot Learning (KRA, ES, HS, JBT), pp. 232–241.
ICML-2019-Allen-ZhuLS #convergence #learning
A Convergence Theory for Deep Learning via Over-Parameterization (ZAZ, YL, ZS), pp. 242–252.
ICML-2019-AlviRCRO #optimisation
Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation (ASA, BXR, JPC, SJR, MAO), pp. 253–262.
ICML-2019-AminKMV #bound #difference #privacy #trade-off
Bounding User Contributions: A Bias-Variance Trade-off in Differential Privacy (KA, AK, AMM, SV), pp. 263–271.
ICML-2019-AnconaOG #algorithm #approximate #network #polynomial
Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Value Approximation (MA, CÖ, MHG), pp. 272–281.
ICML-2019-AndertonA #approach #scalability
Scaling Up Ordinal Embedding: A Landmark Approach (JA, JAA), pp. 282–290.
ICML-2019-AnilLG #approximate #sorting
Sorting Out Lipschitz Function Approximation (CA, JL, RBG), pp. 291–301.
ICML-2019-AntelmiARL #analysis #multi #semistructured data
Sparse Multi-Channel Variational Autoencoder for the Joint Analysis of Heterogeneous Data (LA, NA, PR, ML), pp. 302–311.
ICML-2019-ArazoOAOM #modelling
Unsupervised Label Noise Modeling and Loss Correction (EA, DO, PA, NEO, KM), pp. 312–321.
ICML-2019-AroraDHLW #analysis #fine-grained #network #optimisation
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks (SA, SSD, WH, ZL0, RW), pp. 322–332.
ICML-2019-AssadiBM #composition #distributed #random
Distributed Weighted Matching via Randomized Composable Coresets (SA, MB, VSM), pp. 333–343.
ICML-2019-AssranLBR #distributed #learning #probability
Stochastic Gradient Push for Distributed Deep Learning (MA, NL, NB, MR), pp. 344–353.
ICML-2019-AstudilloF #optimisation
Bayesian Optimization of Composite Functions (RA, PIF), pp. 354–363.
ICML-2019-AtasuM #approximate #distance
Linear-Complexity Data-Parallel Earth Mover's Distance Approximations (KA, TM), pp. 364–373.
ICML-2019-AwanKRS #exponential #functional
Benefits and Pitfalls of the Exponential Mechanism with Applications to Hilbert Spaces and Functional PCA (JA, AK, MR, ABS), pp. 374–384.
ICML-2019-AydoreTV #probability
Feature Grouping as a Stochastic Regularizer for High-Dimensional Structured Data (SA, BT, GV), pp. 385–394.
ICML-2019-AyedLC #behaviour #modelling #process #statistics
Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with double power-law behavior (FA, JL, FC), pp. 395–404.
ICML-2019-BackursIOSVW #clustering #scalability
Scalable Fair Clustering (AB, PI, KO, BS, AV, TW), pp. 405–413.
ICML-2019-BalajiHCF #approach #statistics
Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs (YB, HH, RC, SF), pp. 414–423.
ICML-2019-BalcanKT
Provable Guarantees for Gradient-Based Meta-Learning (MFB, MK, AT), pp. 424–433.
ICML-2019-BalduzziGB0PJG #game studies #learning #symmetry
Open-ended learning in symmetric zero-sum games (DB, MG, YB, WC0, JP, MJ, TG), pp. 434–443.
ICML-2019-BalinAZ #feature model #re-engineering
Concrete Autoencoders: Differentiable Feature Selection and Reconstruction (MFB, AA, JYZ), pp. 444–453.
ICML-2019-BansalLRSW #higher-order #logic #machine learning #named #proving #theorem proving
HOList: An Environment for Machine Learning of Higher Order Logic Theorem Proving (KB, SML, MNR, CS, SW), pp. 454–463.
ICML-2019-BapstSDSKBH #physics
Structured agents for physical construction (VB, ASG, CD, KLS, PK, PWB, JBH), pp. 464–474.
ICML-2019-BaranchukPSB #graph #learning #similarity
Learning to Route in Similarity Graphs (DB, DP, AS, AB), pp. 475–484.
ICML-2019-BarrosPW #memory management #personalisation #recognition
A Personalized Affective Memory Model for Improving Emotion Recognition (PVAB, GIP, SW), pp. 485–494.
ICML-2019-BartlettGHV #adaptation
Scale-free adaptive planning for deterministic dynamics & discounted rewards (PLB, VG, JH, MV), pp. 495–504.
ICML-2019-BasuGLS #classification #streaming
Pareto Optimal Streaming Unsupervised Classification (SB0, SG, BL, SS), pp. 505–514.
ICML-2019-BateniCEFMR #category theory #optimisation
Categorical Feature Compression via Submodular Optimization (MB, LC, HE, TF, VSM, AR), pp. 515–523.
ICML-2019-BatsonR #named #self
Noise2Self: Blind Denoising by Self-Supervision (JB, LR), pp. 524–533.
ICML-2019-BeatsonA #optimisation #performance #random
Efficient optimization of loops and limits with randomized telescoping sums (AB, RPA), pp. 534–543.
ICML-2019-BeckerPGZTN #network
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces (PB, HP, GHWG, CZ, CJT, GN), pp. 544–552.
ICML-2019-Becker-Ehmck0S #linear
Switching Linear Dynamics for Variational Bayes Filtering (PBE, JP0, PvdS), pp. 553–562.
ICML-2019-BehpourLZ #learning #predict #probability
Active Learning for Probabilistic Structured Prediction of Cuts and Matchings (SB, AL, BDZ), pp. 563–572.
ICML-2019-BehrmannGCDJ #network
Invertible Residual Networks (JB, WG, RTQC, DD, JHJ), pp. 573–582.
ICML-2019-BelilovskyEO #learning
Greedy Layerwise Learning Can Scale To ImageNet (EB, ME, EO), pp. 583–593.
ICML-2019-BenyahiaYBJDSM #multi
Overcoming Multi-model Forgetting (YB, KY, KBS, MJ, ACD, MS, CM), pp. 594–603.
ICML-2019-BenzingGMMS #approximate #learning #realtime
Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning (FB, MMG, AM, AM, AS), pp. 604–613.
ICML-2019-BertranMPQRRS #obfuscation
Adversarially Learned Representations for Information Obfuscation and Inference (MB, NM, AP, QQ, MRDR, GR, GS), pp. 614–623.
ICML-2019-BeygelzimerPSTW #algorithm #classification #linear #multi #performance
Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case (AB, DP, BS, DT, CYW, CZ), pp. 624–633.
ICML-2019-BhagojiCMC #learning #lens
Analyzing Federated Learning through an Adversarial Lens (ANB, SC, PM, SBC), pp. 634–643.
ICML-2019-BianB0
Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference (YAB, JMB, AK0), pp. 644–653.
ICML-2019-BibautMVL #evaluation #learning #performance
More Efficient Off-Policy Evaluation through Regularized Targeted Learning (AB, IM, NV, MJvdL), pp. 654–663.
ICML-2019-BiettiMCM #kernel #network
A Kernel Perspective for Regularizing Deep Neural Networks (AB, GM, DC, JM), pp. 664–674.
ICML-2019-BlauM #trade-off
Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff (YB, TM), pp. 675–685.
ICML-2019-BodaA #correlation #fault #how #online
Correlated bandits or: How to minimize mean-squared error online (VPB, PLA), pp. 686–694.
ICML-2019-BojchevskiG #graph
Adversarial Attacks on Node Embeddings via Graph Poisoning (AB, SG), pp. 695–704.
ICML-2019-BorsosCL0 #online #reduction
Online Variance Reduction with Mixtures (ZB, SC, KYL, AK0), pp. 705–714.
ICML-2019-BoseH #composition #constraints #graph
Compositional Fairness Constraints for Graph Embeddings (AJB, WLH), pp. 715–724.
ICML-2019-BouthillierLV #research
Unreproducible Research is Reproducible (XB, CL, PV), pp. 725–734.
ICML-2019-BraunPTW
Blended Conditional Gradients (GB, SP, DT, SW), pp. 735–743.
ICML-2019-BravermanJKW #clustering #order
Coresets for Ordered Weighted Clustering (VB, SHCJ, RK, XW), pp. 744–753.
ICML-2019-BregereGGS
Target Tracking for Contextual Bandits: Application to Demand Side Management (MB, PG, YG, GS), pp. 754–763.
ICML-2019-BridgesGFVH
Active Manifolds: A non-linear analogue to Active Subspaces (RAB, ADG, CF, MEV, CH), pp. 764–772.
ICML-2019-BrookesPL #adaptation #design #robust
Conditioning by adaptive sampling for robust design (DHB, HP, JL), pp. 773–782.
ICML-2019-BrownGNN #learning
Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations (DSB, WG, PN, SN), pp. 783–792.
ICML-2019-BrownLGS
Deep Counterfactual Regret Minimization (NB, AL, SG, TS), pp. 793–802.
ICML-2019-BrunetAAZ #bias #comprehension #word
Understanding the Origins of Bias in Word Embeddings (MEB, CAH, AA, RSZ), pp. 803–811.
ICML-2019-BrutzkusGE #latency #privacy
Low Latency Privacy Preserving Inference (AB, RGB, OE), pp. 812–821.
ICML-2019-BrutzkusG #modelling #problem #why
Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem (AB, AG), pp. 822–830.
ICML-2019-BubeckLPR #constraints
Adversarial examples from computational constraints (SB, YTL, EP, IPR), pp. 831–840.
ICML-2019-BuchnikCHM #self
Self-similar Epochs: Value in arrangement (EB, EC, AH, YM), pp. 841–850.
ICML-2019-BunneA0J #generative #learning #modelling
Learning Generative Models across Incomparable Spaces (CB, DAM, AK0, SJ), pp. 851–861.
ICML-2019-BurtRW #convergence #process
Rates of Convergence for Sparse Variational Gaussian Process Regression (DRB, CER, MvdW), pp. 862–871.
ICML-2019-ByrdL #learning #question #what
What is the Effect of Importance Weighting in Deep Learning? (JB, ZCL), pp. 872–881.
ICML-2019-CaiLS #analysis #normalisation
A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent (YC, QL, ZS), pp. 882–890.
ICML-2019-CanGZ #convergence #linear #probability
Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances (BC, MG, LZ), pp. 891–901.
ICML-2019-CanalMDR
Active Embedding Search via Noisy Paired Comparisons (GC, AKM, MAD, CJR), pp. 902–911.
ICML-2019-CaoS #learning #multi #problem
Dynamic Learning with Frequent New Product Launches: A Sequential Multinomial Logit Bandit Problem (JC, WS), pp. 912–920.
ICML-2019-CardosoAWX #game studies #nash
Competing Against Nash Equilibria in Adversarially Changing Zero-Sum Games (ARC, JDA, HW, HX), pp. 921–930.
ICML-2019-ChaiTOG #automation
Automated Model Selection with Bayesian Quadrature (HC, JFT, MAO, RG), pp. 931–940.
ICML-2019-ChandakTKJT #learning
Learning Action Representations for Reinforcement Learning (YC, GT, JK, SMJ, PST), pp. 941–950.
ICML-2019-ChangMG #metric #scheduling #using
Dynamic Measurement Scheduling for Event Forecasting using Deep RL (CHC, MM, AG), pp. 951–960.
ICML-2019-CharoenphakdeeL #learning #on the #symmetry
On Symmetric Losses for Learning from Corrupted Labels (NC, JL, MS), pp. 961–970.
ICML-2019-ChatterjiPB #kernel #learning #online
Online learning with kernel losses (NSC, AP, PLB), pp. 971–980.
ICML-2019-ChattopadhyayMS #network #perspective
Neural Network Attributions: A Causal Perspective (AC, PM, AS, VNB), pp. 981–990.
ICML-2019-ChaudhuriK #identification #multi #probability
PAC Identification of Many Good Arms in Stochastic Multi-Armed Bandits (ARC, SK), pp. 991–1000.
ICML-2019-Chen #analysis #bound #consistency #fault #kernel #nearest neighbour
Nearest Neighbor and Kernel Survival Analysis: Nonasymptotic Error Bounds and Strong Consistency Rates (GHC), pp. 1001–1010.
ICML-2019-ChenBBGGMO #markov #monte carlo
Stein Point Markov Chain Monte Carlo (WYC, AB, FXB, JG, MAG, LWM, CJO), pp. 1011–1021.
ICML-2019-ChenDS
Particle Flow Bayes' Rule (XC, HD, LS), pp. 1022–1031.
ICML-2019-ChenFLM #clustering
Proportionally Fair Clustering (XC, BF, LL, KM), pp. 1032–1041.
ICML-2019-ChenJ #learning
Information-Theoretic Considerations in Batch Reinforcement Learning (JC, NJ), pp. 1042–1051.
ICML-2019-Chen0LJQS #generative #learning #recommendation
Generative Adversarial User Model for Reinforcement Learning Based Recommendation System (XC, SL0, HL, SJ, YQ, LS), pp. 1052–1061.
ICML-2019-ChenLCZ #comprehension #network
Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels (PC, BL, GC, SZ), pp. 1062–1070.
ICML-2019-ChenTZB00 #approach #generative #network
A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization (YC, MT, CZ, BB, DH0, JP0), pp. 1071–1080.
ICML-2019-ChenWLW #adaptation
Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation (XC, SW, ML, JW0), pp. 1081–1090.
ICML-2019-ChenW0R #algorithm #graph #incremental #performance
Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications (PYC, LW, SL0, IR), pp. 1091–1101.
ICML-2019-ChenXHY #named #problem #scalability
Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number (ZC, YX, HH, TY), pp. 1102–1111.
ICML-2019-ChenYWLYLL #multi #scalability
Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching (ZC, ZY, XW, XL, XY, GL, LL), pp. 1112–1121.
ICML-2019-ChenZBH #robust
Robust Decision Trees Against Adversarial Examples (HC, HZ0, DSB, CJH), pp. 1122–1131.
ICML-2019-ChenZWMH #named
RaFM: Rank-Aware Factorization Machines (XC, YZ, JW, WM, JH), pp. 1132–1140.
ICML-2019-ChengVOCYB #learning
Control Regularization for Reduced Variance Reinforcement Learning (RC, AV, GO, SC, YY, JB), pp. 1141–1150.
ICML-2019-ChengYRB #optimisation #policy #predict
Predictor-Corrector Policy Optimization (CAC, XY, NDR, BB), pp. 1151–1161.
ICML-2019-ChiquetRM #network #re-engineering
Variational Inference for sparse network reconstruction from count data (JC, SR, MM), pp. 1162–1171.
ICML-2019-ChitraR #random
Random Walks on Hypergraphs with Edge-Dependent Vertex Weights (UC, BJR), pp. 1172–1181.
ICML-2019-ChoiTGWE
Neural Joint Source-Channel Coding (KC, KT, AG, TW, SE), pp. 1182–1192.
ICML-2019-ChoromanskaCKLR #online
Beyond Backprop: Online Alternating Minimization with Auxiliary Variables (AC, BC, SK, RL, MR, IR, PD, VG, BK, RT, DB0), pp. 1193–1202.
ICML-2019-ChoromanskiRCW #monte carlo #orthogonal
Unifying Orthogonal Monte Carlo Methods (KC, MR, WC, AW), pp. 1203–1212.
ICML-2019-ChuBG #functional #learning #probability
Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning (CC, JHB, PWG), pp. 1213–1222.
ICML-2019-ChuL #multi #named #summary
MeanSum: A Neural Model for Unsupervised Multi-Document Abstractive Summarization (EC, PJL), pp. 1223–1232.
ICML-2019-ChungL #detection
Weak Detection of Signal in the Spiked Wigner Model (HWC, JOL), pp. 1233–1241.
ICML-2019-CicaleseLM #clustering
New results on information theoretic clustering (FC, ESL, LM), pp. 1242–1251.
ICML-2019-CinelliKCPB #analysis #linear #modelling
Sensitivity Analysis of Linear Structural Causal Models (CC, DK, BC, JP, EB), pp. 1252–1261.
ICML-2019-ClarksonWW #reduction
Dimensionality Reduction for Tukey Regression (KLC, RW, DPW), pp. 1262–1271.
ICML-2019-ClemenconLB #on the #random
On Medians of (Randomized) Pairwise Means (SC, PL, PB), pp. 1272–1281.
ICML-2019-CobbeKHKS #learning
Quantifying Generalization in Reinforcement Learning (KC, OK, CH, TK, JS), pp. 1282–1289.
ICML-2019-CohenB #analysis #empirical #modelling #performance #sequence
Empirical Analysis of Beam Search Performance Degradation in Neural Sequence Models (EC, JCB), pp. 1290–1299.
ICML-2019-CohenKM #learning
Learning Linear-Quadratic Regulators Efficiently with only √T Regret (AC, TK, YM), pp. 1300–1309.
ICML-2019-CohenRK #random #robust
Certified Adversarial Robustness via Randomized Smoothing (JMC, ER, JZK), pp. 1310–1320.
ICML-2019-CohenWKW #network
Gauge Equivariant Convolutional Networks and the Icosahedral CNN (TC, MW, BK, MW), pp. 1321–1330.
ICML-2019-ColasOSFC #composition #learning #motivation #multi #named
CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning (CC, PYO, OS, PF, MC), pp. 1331–1340.
ICML-2019-CollobertHS
A fully differentiable beam search decoder (RC, AH, GS), pp. 1341–1350.
ICML-2019-CornishVBDD #dataset #scalability
Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets (RC, PV, ABC, GD, AD), pp. 1351–1360.
ICML-2019-Correa0B
Adjustment Criteria for Generalizing Experimental Findings (JDC, JT0, EB), pp. 1361–1369.
ICML-2019-CortesDGMY #feedback #graph #learning #online
Online Learning with Sleeping Experts and Feedback Graphs (CC, GD, CG, MM, SY), pp. 1370–1378.
ICML-2019-CortesDMZG #graph #learning
Active Learning with Disagreement Graphs (CC, GD, MM, NZ, CG), pp. 1379–1387.
ICML-2019-CotterGJLMNWZ #constraints #set
Shape Constraints for Set Functions (AC, MRG, HJ, EL, JM, TN, SW, TZ), pp. 1388–1396.
ICML-2019-CotterGJSSWWY #classification #constraints #metric
Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints (AC, MRG, HJ, NS, KS, SW, BEW, SY), pp. 1397–1405.
ICML-2019-CrankoMNOSW
Monge blunts Bayes: Hardness Results for Adversarial Training (ZC, AKM, RN, CSO, ZS, CJW), pp. 1406–1415.
ICML-2019-CrankoN #estimation
Boosted Density Estimation Remastered (ZC, RN), pp. 1416–1425.
ICML-2019-CrawfordKT #approximate
Submodular Cost Submodular Cover with an Approximate Oracle (VGC, AK, MTT), pp. 1426–1435.
ICML-2019-CreagerMJWSPZ #learning #representation
Flexibly Fair Representation Learning by Disentanglement (EC, DM, JHJ, MAW, KS, TP, RSZ), pp. 1436–1445.
ICML-2019-Cutkosky
Anytime Online-to-Batch, Optimism and Acceleration (AC), pp. 1446–1454.
ICML-2019-CutkoskyS #learning #online
Matrix-Free Preconditioning in Online Learning (AC, TS), pp. 1455–1464.
ICML-2019-CvitkovicK #learning #statistics
Minimal Achievable Sufficient Statistic Learning (MC, GK), pp. 1465–1474.
ICML-2019-CvitkovicSA #learning #source code
Open Vocabulary Learning on Source Code with a Graph-Structured Cache (MC, BS, AA), pp. 1475–1485.
ICML-2019-DadashiBTRS #learning
The Value Function Polytope in Reinforcement Learning (RD, MGB, AAT, NLR, DS), pp. 1486–1495.
ICML-2019-DaiYLJ #optimisation
Bayesian Optimization Meets Bayesian Optimal Stopping (ZD, HY, BKHL, PJ), pp. 1496–1506.
ICML-2019-Dann0WB #learning #policy #towards
Policy Certificates: Towards Accountable Reinforcement Learning (CD, LL0, WW, EB), pp. 1507–1516.
ICML-2019-DaoGERR #algorithm #learning #linear #performance #using
Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations (TD, AG, ME, AR, CR), pp. 1517–1527.
ICML-2019-DaoGRSSR #kernel
A Kernel Theory of Modern Data Augmentation (TD, AG, AR, VS, CDS, CR), pp. 1528–1537.
ICML-2019-DasGRBPRP #communication #multi #named
TarMAC: Targeted Multi-Agent Communication (AD, TG, JR, DB, DP, MR, JP), pp. 1538–1546.
ICML-2019-Dasgupta0PZ #black box #education
Teaching a black-box learner (SD, DH0, SP, XZ0), pp. 1547–1555.
ICML-2019-BiePC #network #probability
Stochastic Deep Networks (GdB, GP, MC), pp. 1556–1565.
ICML-2019-DeneviCGP #probability
Learning-to-Learn Stochastic Gradient Descent with Biased Regularization (GD, CC, RG, MP), pp. 1566–1575.
ICML-2019-DereliOG #algorithm #analysis #biology #kernel #learning #multi
A Multitask Multiple Kernel Learning Algorithm for Survival Analysis with Application to Cancer Biology (OD, CO, MG), pp. 1576–1585.
ICML-2019-DiaconuW #approach #learning
Learning to Convolve: A Generalized Weight-Tying Approach (ND, DEW), pp. 1586–1595.
ICML-2019-DiakonikolasKK0 #named #optimisation #probability #robust
Sever: A Robust Meta-Algorithm for Stochastic Optimization (ID, GK0, DK, JL0, JS, AS), pp. 1596–1606.
ICML-2019-DingDGHY #approximate #optimisation
Approximated Oracle Filter Pruning for Destructive CNN Width Optimization (XD, GD, YG, JH, CY), pp. 1607–1616.
ICML-2019-DingZDYRTV #component
Noisy Dual Principal Component Pursuit (TD, ZZ, TD, YY, DPR, MCT, RV), pp. 1617–1625.
ICML-2019-DoanMR #analysis #approximate #distributed #finite #learning #linear #multi
Finite-Time Analysis of Distributed TD(0) with Linear Function Approximation on Multi-Agent Reinforcement Learning (TTD, STM, JR), pp. 1626–1635.
ICML-2019-DoerrVTTD #learning
Trajectory-Based Off-Policy Deep Reinforcement Learning (AD, MV, MT, ST, CD), pp. 1636–1645.
ICML-2019-Dohmatob #robust #theorem
Generalized No Free Lunch Theorem for Adversarial Robustness (ED), pp. 1646–1654.
ICML-2019-DuH #linear #matter #network #optimisation
Width Provably Matters in Optimization for Deep Linear Neural Networks (SSD, WH), pp. 1655–1664.
ICML-2019-DuKJAD0 #performance
Provably efficient RL with Rich Observations via Latent State Decoding (SSD, AK, NJ, AA, MD, JL0), pp. 1665–1674.
ICML-2019-DuLL0Z #network
Gradient Descent Finds Global Minima of Deep Neural Networks (SSD, JDL, HL, LW0, XZ), pp. 1675–1685.
ICML-2019-DuL
Incorporating Grouping Information into Bayesian Decision Tree Ensembles (JD, ARL), pp. 1686–1695.
ICML-2019-DuN #learning
Task-Agnostic Dynamics Priors for Deep Reinforcement Learning (YD, KN), pp. 1696–1705.
ICML-2019-Duetting0NPR #learning
Optimal Auctions through Deep Learning (PD, ZF0, HN, DCP, SSR), pp. 1706–1715.
ICML-2019-DuklerLLM #generative #learning #modelling
Wasserstein of Wasserstein Loss for Learning Generative Models (YD, WL, ATL, GM), pp. 1716–1725.
ICML-2019-DunckerBBS #learning #modelling #probability
Learning interpretable continuous-time models of latent stochastic dynamical systems (LD, GB, JB, MS), pp. 1726–1734.
ICML-2019-DurkanN #energy
Autoregressive Energy Machines (CD, CN), pp. 1735–1744.
ICML-2019-DziedzicPKEF #network
Band-limited Training and Inference for Convolutional Neural Networks (AD, JP, SK, AJE, MJF), pp. 1745–1754.
ICML-2019-EdwardsSSI #policy
Imitating Latent Policies from Observation (ADE, HS, YS, CLIJ), pp. 1755–1763.
ICML-2019-EichnerKMST #probability
Semi-Cyclic Stochastic Gradient Descent (HE, TK, BM, NS, KT), pp. 1764–1773.
ICML-2019-ElfekiCRE #learning #named #process #using
GDPP: Learning Diverse Generations using Determinantal Point Processes (ME, CC, MR, ME), pp. 1774–1783.
ICML-2019-Elhamifar #algorithm #approximate
Sequential Facility Location: Approximate Submodularity and Greedy Algorithm (EE), pp. 1784–1793.
ICML-2019-EneV #convergence
Improved Convergence for ℓ₁ and ℓ∞ Regression via Iteratively Reweighted Least Squares (AE, AV), pp. 1794–1801.
ICML-2019-EngstromTTSM #robust
Exploring the Landscape of Spatial Robustness (LE, BT, DT, LS, AM), pp. 1802–1811.
ICML-2019-EstevesSLDM #3d #image
Cross-Domain 3D Equivariant Image Embeddings (CE, AS, ZL, KD, AM), pp. 1812–1822.
ICML-2019-EtmannLMS #on the #robust
On the Connection Between Adversarial Robustness and Saliency Map Interpretability (CE, SL, PM, CS), pp. 1823–1832.
ICML-2019-FahrbachMZ #adaptation #complexity #query
Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity (MF, VSM, MZ), pp. 1833–1842.
ICML-2019-FanZ #multi
Multi-Frequency Vector Diffusion Maps (YF, ZZ), pp. 1843–1852.
ICML-2019-FarinaKBS #predict
Stable-Predictive Optimistic Counterfactual Regret Minimization (GF, CK, NB, TS), pp. 1853–1862.
ICML-2019-FarinaKS
Regret Circuits: Composability of Regret Minimizers (GF, CK, TS), pp. 1863–1872.
ICML-2019-FatemiSSK #learning
Dead-ends and Secure Exploration in Reinforcement Learning (MF, SS, HvS, SEK), pp. 1873–1881.
ICML-2019-Feige #invariant #learning #multi #representation
Invariant-Equivariant Representation Learning for Multi-Class Data (IF), pp. 1882–1891.
ICML-2019-FeldmanFH #multi #reuse #testing
The advantages of multiple classes for reducing overfitting from test set reuse (VF, RF, MH), pp. 1892–1900.
ICML-2019-FeraudAL #distributed #multi
Decentralized Exploration in Multi-Armed Bandits (RF, RA, RL), pp. 1901–1909.
ICML-2019-FercoqANC #optimisation
Almost surely constrained convex optimization (OF, AA, IN, VC), pp. 1910–1919.
ICML-2019-FinnRKL #online
Online Meta-Learning (CF, AR, SMK, SL), pp. 1920–1930.
ICML-2019-FischerBDGZV #logic #named #network #query
DL2: Training and Querying Neural Networks with Logic (MF, MB, DDC, TG, CZ, MTV), pp. 1931–1941.
ICML-2019-FoersterSHBDWBB #learning #multi
Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning (JNF, HFS, EH, NB, ID, SW, MB, MB), pp. 1942–1951.
ICML-2019-FongLH #multimodal #parametricity #scalability
Scalable Nonparametric Sampling from Multimodal Posteriors with the Posterior Bootstrap (EF, SL, CCH), pp. 1952–1962.
ICML-2019-FrancP #learning #nondeterminism #on the #predict
On discriminative learning of prediction uncertainty (VF, DP), pp. 1963–1971.
ICML-2019-FranceschiNPH #graph #learning #network
Learning Discrete Structures for Graph Neural Networks (LF, MN, MP, XH), pp. 1972–1982.
ICML-2019-FreirichSMT #evaluation #multi #policy
Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN (DF, TS, RM, AT), pp. 1983–1992.
ICML-2019-FrerixB #approximate #effectiveness #matrix #orthogonal
Approximating Orthogonal Matrices with Effective Givens Factorization (TF, JB), pp. 1993–2001.
ICML-2019-FrognerP #flexibility #performance
Fast and Flexible Inference of Joint Distributions from their Marginals (CF, TAP), pp. 2002–2011.
ICML-2019-FrosstPH #nearest neighbour
Analyzing and Improving Representations with the Soft Nearest Neighbor Loss (NF, NP, GEH), pp. 2012–2020.
ICML-2019-FuKSL #algorithm
Diagnosing Bottlenecks in Deep Q-learning Algorithms (JF, AK, MS, SL), pp. 2021–2030.
ICML-2019-FuLTL #black box #generative #metric #named #network #optimisation #speech
MetricGAN: Generative Adversarial Networks based Black-box Metric Scores Optimization for Speech Enhancement (SWF, CFL, YT0, SDL), pp. 2031–2041.
ICML-2019-FujiiS #adaptation #approximate #policy
Beyond Adaptive Submodularity: Approximation Guarantees of Greedy Policy with Adaptive Submodularity Ratio (KF, SS), pp. 2042–2051.
ICML-2019-FujimotoMP #learning
Off-Policy Deep Reinforcement Learning without Exploration (SF, DM, DP), pp. 2052–2062.
ICML-2019-GamrianG #learning
Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation (SG, YG), pp. 2063–2072.
ICML-2019-GaneaGBS
Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities (OG, SG, GB, AS), pp. 2073–2082.
ICML-2019-GaoJ #graph
Graph U-Nets (HG, SJ), pp. 2083–2092.
ICML-2019-GaoJWWYZ #generative #learning
Deep Generative Learning via Variational Gradient Flow (YG, YJ, YW, YW0, CY, SZ), pp. 2093–2101.
ICML-2019-GaoL0O #theory and practice
Rate Distortion For Model Compression: From Theory To Practice (WG, YHL, CW0, SO), pp. 2102–2111.
ICML-2019-GaoPH
Demystifying Dropout (HG, JP, HH), pp. 2112–2121.
ICML-2019-GaoWH #data analysis #geometry #graph
Geometric Scattering for Graph Data Analysis (FG, GW, MJH), pp. 2122–2131.
ICML-2019-GaoZ #multi
Multi-Frequency Phase Synchronization (TG, ZZ), pp. 2132–2141.
ICML-2019-GazagnadouGS
Optimal Mini-Batch and Step Sizes for SAGA (NG, RMG, JS), pp. 2142–2150.
ICML-2019-GeifmanE #named #network
SelectiveNet: A Deep Neural Network with an Integrated Reject Option (YG, REY), pp. 2151–2159.
ICML-2019-GeistSP #formal method #markov #process
A Theory of Regularized Markov Decision Processes (MG, BS, OP), pp. 2160–2169.
ICML-2019-GeladaKBNB #learning #modelling #named #representation
DeepMDP: Learning Continuous Latent Space Models for Representation Learning (CG, SK, JB, ON, MGB), pp. 2170–2179.
ICML-2019-GengYKK #linear #modelling #visual notation
Partially Linear Additive Gaussian Graphical Models (SG, MY, MK, SK), pp. 2180–2190.
ICML-2019-GhadikolaeiGFS #big data #dataset #learning
Learning and Data Selection in Big Datasets (HSG, HGG, CF, MS), pp. 2191–2200.
ICML-2019-GhaffariLM #algorithm #clustering #network #parallel
Improved Parallel Algorithms for Density-Based Network Clustering (MG, SL, SM), pp. 2201–2210.
ICML-2019-GhaziPW #composition #learning #recursion #sketching
Recursive Sketches for Modular Deep Learning (BG, RP, JRW), pp. 2211–2220.
ICML-2019-GhorbaniJM #modelling #topic
An Instability in Variational Inference for Topic Models (BG, HJ, AM), pp. 2221–2231.
ICML-2019-GhorbaniKX #optimisation
An Investigation into Neural Net Optimization via Hessian Eigenvalue Density (BG, SK, YX), pp. 2232–2241.
ICML-2019-GhorbaniZ #machine learning
Data Shapley: Equitable Valuation of Data for Machine Learning (AG, JYZ), pp. 2242–2251.
ICML-2019-GilboaB0 #learning #performance #taxonomy
Efficient Dictionary Learning with Gradient Descent (DG, SB, JW0), pp. 2252–2259.
ICML-2019-GillenwaterKMV #performance #process
A Tree-Based Method for Fast Repeated Sampling of Determinantal Point Processes (JG, AK, ZM, SV), pp. 2260–2268.
ICML-2019-GillickREEB #learning #sequence
Learning to Groove with Inverse Sequence Transformations (JG, AR, JHE, DE, DB), pp. 2269–2279.
ICML-2019-GilmerFCC #fault
Adversarial Examples Are a Natural Consequence of Test Error in Noise (JG, NF, NC, EDC), pp. 2280–2289.
ICML-2019-GimenezZ #statistics
Discovering Conditionally Salient Features with Statistical Guarantees (JRG, JYZ), pp. 2290–2298.
ICML-2019-GoldfeldBGMNKP #data flow #network
Estimating Information Flow in Deep Neural Networks (ZG, EvdB, KHG, IM, NN, BK, YP), pp. 2299–2308.
ICML-2019-GolinskiWR #integration #monte carlo
Amortized Monte Carlo Integration (AG, FW, TR), pp. 2309–2318.
ICML-2019-GollapudiP #algorithm #online
Online Algorithms for Rent-Or-Buy with Expert Advice (SG, DP), pp. 2319–2327.
ICML-2019-GolovnevPS #learning
The information-theoretic value of unlabeled data in semi-supervised learning (AG, DP, BS), pp. 2328–2336.
ICML-2019-GongHLQWL #performance
Efficient Training of BERT by Progressively Stacking (LG, DH, ZL, TQ, LW0, TYL), pp. 2337–2346.
ICML-2019-Gong0L #optimisation
Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization (CG, JP0, QL0), pp. 2347–2356.
ICML-2019-GordalizaBGL #using
Obtaining Fairness using Optimal Transport Theory (PG, EdB, FG, JML), pp. 2357–2365.
ICML-2019-GottesmanLSBD #evaluation #modelling #parametricity
Combining parametric and nonparametric models for off-policy evaluation (OG, YL0, SS, EB, FDV), pp. 2366–2375.
ICML-2019-GoyalWEBPL #visual notation
Counterfactual Visual Explanations (YG, ZW, JE, DB, DP, SL), pp. 2376–2384.
ICML-2019-GrantBGLVC #adaptation
Adaptive Sensor Placement for Continuous Spaces (JAG, AB, RRG, DSL, SV, EMdC), pp. 2385–2393.
ICML-2019-Greaves-Tunnell #memory management #music #statistics
A Statistical Investigation of Long Memory in Language and Music (AGT, ZH), pp. 2394–2403.
ICML-2019-GreenbergNM #automation
Automatic Posterior Transformation for Likelihood-Free Inference (DSG, MN, JHM), pp. 2404–2414.
ICML-2019-GreenfeldGBYK #learning #multi
Learning to Optimize Multigrid PDE Solvers (DG, MG, RB, IY, RK), pp. 2415–2423.
ICML-2019-GreffKKWBZMBL #learning #multi #representation
Multi-Object Representation Learning with Iterative Variational Inference (KG, RLK, RK, NW, CB, DZ, LM, MB, AL), pp. 2424–2433.
ICML-2019-GroverZE #generative #graph #modelling #named
Graphite: Iterative Generative Modeling of Graphs (AG, AZ, SE), pp. 2434–2444.
ICML-2019-GuY #algorithm #modelling #multi #performance #ranking
Fast Algorithm for Generalized Multinomial Models with Ranking Data (JG, GY), pp. 2445–2453.
ICML-2019-GuanWZCH0 #comprehension #modelling #towards
Towards a Deep and Unified Understanding of Deep Neural Models in NLP (CG, XW, QZ, RC, DH, XX0), pp. 2454–2463.
ICML-2019-GuezMGKRWRSOEWS
An Investigation of Model-Free Planning (AG, MM, KG, RK, SR, TW, DR, AS, LO, TE, GW, DS, TPL), pp. 2464–2473.
ICML-2019-GultchinPBSK #word
Humor in Word Embeddings: Cockamamie Gobbledegook for Nincompoops (LG, GP, NB, NS, AK), pp. 2474–2483.
ICML-2019-GuoGYWW #black box
Simple Black-box Adversarial Attacks (CG, JRG, YY, AGW, KQW), pp. 2484–2493.
ICML-2019-0002LA #multi #network
Exploring interpretable LSTM neural networks over multi-variable data (TG0, TL, NAF), pp. 2494–2504.
ICML-2019-GuoSH #dependence #graph #learning #relational
Learning to Exploit Long-term Relational Dependencies in Knowledge Graphs (LG, ZS, WH0), pp. 2505–2514.
ICML-2019-GuralM #classification #embedded
Memory-Optimal Direct Convolutions for Maximizing Classification Accuracy in Embedded Applications (AG, BM), pp. 2515–2524.
ICML-2019-HaberLTR #network
IMEXnet: A Forward Stable Deep Neural Network (EH, KL, ET, LR), pp. 2525–2534.
ICML-2019-HacohenW #education #learning #network #on the #power of
On The Power of Curriculum Learning in Training Deep Networks (GH, DW), pp. 2535–2544.
ICML-2019-HaddadpourKMC #communication #distributed #optimisation
Trading Redundancy for Communication: Speeding up Distributed SGD for Non-convex Optimization (FH, MMK, MM, VRC), pp. 2545–2554.
ICML-2019-HafnerLFVHLD #learning
Learning Latent Dynamics for Planning from Pixels (DH, TPL, IF, RV, DH, HL, JD), pp. 2555–2565.
ICML-2019-HalperinEH
Neural Separation of Observed and Unobserved Distributions (TH, AE, YH), pp. 2566–2575.
ICML-2019-HanSDXWSLZ #game studies #learning #multi #video
Grid-Wise Control for Multi-Agent Reinforcement Learning in Video Game AI (LH, PS, YD, JX, QW, XS, HL, TZ), pp. 2576–2585.
ICML-2019-HanS #learning
Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning (SH, YS), pp. 2586–2595.
ICML-2019-HaninR #complexity #linear #network
Complexity of Linear Regions in Deep Networks (BH, DR), pp. 2596–2604.
ICML-2019-HannaNS #behaviour #evaluation #policy
Importance Sampling Policy Evaluation with an Estimated Behavior Policy (JH, SN, PS), pp. 2605–2613.
ICML-2019-HaoO #estimation
Doubly-Competitive Distribution Estimation (YH, AO), pp. 2614–2623.
ICML-2019-HaochenS #finite #random
Random Shuffling Beats SGD after Finite Epochs (JH, SS), pp. 2624–2633.
ICML-2019-HarshawFWK #algorithm #performance
Submodular Maximization beyond Non-negativity: Guarantees, Fast Algorithms, and Applications (CH, MF, JW, AK), pp. 2634–2643.
ICML-2019-HarutyunyanVHNP
Per-Decision Option Discounting (AH, PV, PH, AN, DP), pp. 2644–2652.
ICML-2019-HashemiGVT #information management #modelling #polynomial
Submodular Observation Selection and Information Gathering for Quadratic Models (AH, MG, HV, UT), pp. 2653–2662.
ICML-2019-HavivRB #comprehension #memory management #network
Understanding and Controlling Memory in Recurrent Neural Networks (DH, AR, OB), pp. 2663–2671.
ICML-2019-HayouDR #network #on the
On the Impact of the Activation function on Deep Neural Networks Training (SH, AD, JR), pp. 2672–2680.
ICML-2019-HazanKSS #performance
Provably Efficient Maximum Entropy Exploration (EH, SMK, KS, AVS), pp. 2681–2691.
ICML-2019-HeidariNG #algorithm #learning #on the #policy #social
On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning (HH, VN, KPG), pp. 2692–2701.
ICML-2019-HendrickxOS #graph #learning
Graph Resistance and Learning from Pairwise Comparisons (JMH, AO, VS), pp. 2702–2711.
ICML-2019-HendrycksLM #nondeterminism #robust #using
Using Pre-Training Can Improve Model Robustness and Uncertainty (DH, KL, MM), pp. 2712–2721.
ICML-2019-HoCSDA #architecture #design #generative #modelling
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design (JH, XC0, AS, YD, PA), pp. 2722–2730.
ICML-2019-HoLCSA #learning #performance #policy
Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules (DH, EL, XC0, IS, PA), pp. 2731–2741.
ICML-2019-HoangHLK #black box #multi
Collective Model Fusion for Multiple Black-Box Experts (QMH, TNH, BKHL, CK), pp. 2742–2750.
ICML-2019-HoferKND #learning #persistent #representation
Connectivity-Optimized Representation Learning via Persistent Homology (CDH, RK, MN, MD), pp. 2751–2760.
ICML-2019-HollandI #robust #using
Better generalization with less data using robust gradient descent (MJH, KI), pp. 2761–2770.
ICML-2019-HoogeboomBW #generative #normalisation
Emerging Convolutions for Generative Normalizing Flows (EH, RvdB, MW), pp. 2771–2780.
ICML-2019-HorvathR #optimisation
Nonconvex Variance Reduced Optimization with Arbitrary Sampling (SH, PR), pp. 2781–2789.
ICML-2019-HoulsbyGJMLGAG #learning
Parameter-Efficient Transfer Learning for NLP (NH, AG, SJ, BM, QdL, AG, MA, SG), pp. 2790–2799.
ICML-2019-HsiehLBK #linear
Stay With Me: Lifetime Maximization Through Heteroscedastic Linear Bandits With Reneging (PCH, XL0, AB, PRK), pp. 2800–2809.
ICML-2019-HsiehLC #generative #nash #network
Finding Mixed Nash Equilibria of Generative Adversarial Networks (YPH, CL, VC), pp. 2810–2819.
ICML-2019-HsiehNS #classification
Classification from Positive, Unlabeled and Biased Negative Data (YGH, GN, MS), pp. 2820–2829.
ICML-2019-HsuR #kernel
Bayesian Deconditional Kernel Mean Embeddings (KH, FR), pp. 2830–2838.
ICML-2019-HuangCH #multi #optimisation #performance #probability
Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization (FH, SC, HH), pp. 2839–2848.
ICML-2019-HuangDGZ #learning
Unsupervised Deep Learning by Neighbourhood Discovery (JH, QD0, SG, XZ), pp. 2849–2858.
ICML-2019-Huang0 #algorithm #community #correlation #detection
Detecting Overlapping and Correlated Communities without Pure Nodes: Identifiability and Algorithm (KH, XF0), pp. 2859–2868.
ICML-2019-HuangSDLC
Hierarchical Importance Weighted Autoencoders (CWH, KS, ED, AL, ACC), pp. 2869–2878.
ICML-2019-HuangV #classification
Stable and Fair Classification (LH, NKV), pp. 2879–2890.
ICML-2019-HuangZTMSGS #adaptation
Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment (CH, SZ, WT, MÁB0, SYS, CG, JMS), pp. 2891–2900.
ICML-2019-Huang0GG #modelling
Causal Discovery and Forecasting in Nonstationary Environments with State-Space Models (BH, KZ0, MG, CG), pp. 2901–2910.
ICML-2019-HuntBLH #policy #using
Composing Entropic Policies using Divergence Correction (JJH, AB, TPL, NH), pp. 2911–2920.
ICML-2019-HwangJY #classification #generative #named
HexaGAN: Generative Adversarial Nets for Real World Classification (UH, DJ, SY), pp. 2921–2930.
ICML-2019-IalongoWHR #approximate #modelling #process
Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models (ADI, MvdW, JH, CER), pp. 2931–2940.
ICML-2019-InnesL #learning #problem
Learning Structured Decision Problems with Unawareness (CI, AL), pp. 2941–2950.
ICML-2019-IpsenH #exclamation
Phase transition in PCA with missing data: Reduced signal-to-noise ratio, not sample size! (NBI, LKH), pp. 2951–2960.
ICML-2019-IqbalS #learning #multi
Actor-Attention-Critic for Multi-Agent Reinforcement Learning (SI, FS), pp. 2961–2970.
ICML-2019-IshidaNMS #learning #modelling
Complementary-Label Learning for Arbitrary Losses and Models (TI, GN, AKM, MS), pp. 2971–2980.
ICML-2019-JaberZB #equivalence #identification #markov
Causal Identification under Markov Equivalence: Completeness Results (AJ, JZ, EB), pp. 2981–2989.
ICML-2019-JacqGPP #learning
Learning from a Learner (AJ, MG, AP, OP), pp. 2990–2999.
ICML-2019-JagielskiKMORSU #learning
Differentially Private Fair Learning (MJ, MJK, JM, AO, AR0, SSM, JU), pp. 3000–3008.
ICML-2019-JainiSY #polynomial
Sum-of-Squares Polynomial Flow (PJ, KAS, YY), pp. 3009–3018.
ICML-2019-JangJ #clustering #performance #scalability #towards
DBSCAN++: Towards fast and scalable density clustering (JJ, HJ), pp. 3019–3029.
ICML-2019-JangLHS #learning #what
Learning What and Where to Transfer (YJ, HL, SJH, JS), pp. 3030–3039.
ICML-2019-JaquesLHGOSLF #learning #motivation #multi #social
Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning (NJ, AL, EH, ÇG, PAO, DS, JZL, NdF), pp. 3040–3049.
ICML-2019-JayRGST #internet #learning
A Deep Reinforcement Learning Perspective on Internet Congestion Control (NJ, NHR, BG, MS, AT), pp. 3050–3059.
ICML-2019-JeongKKN #graph #modelling #music #network #performance
Graph Neural Network for Music Score Data and Modeling Expressive Piano Performance (DJ, TK, YK, JN), pp. 3060–3070.
ICML-2019-JeongLK #network
Ladder Capsule Network (TJ, YL, HK), pp. 3071–3079.
ICML-2019-JeongS
Training CNNs with Selective Allocation of Channels (JJ, JS), pp. 3080–3090.
ICML-2019-JeongS19a #learning
Learning Discrete and Continuous Factors of Data via Alternating Disentanglement (YJ, HOS), pp. 3091–3099.
ICML-2019-JiWZL #algorithm #analysis #optimisation
Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization (KJ, ZW, YZ, YL), pp. 3100–3109.
ICML-2019-JiangL #learning #logic
Neural Logic Reinforcement Learning (ZJ, SL), pp. 3110–3119.
ICML-2019-JinnaiAHLK
Finding Options that Minimize Planning Time (YJ, DA, DEH, MLL, GDK), pp. 3120–3129.
ICML-2019-JinnaiPAK
Discovering Options for Exploration by Minimizing Cover Time (YJ, JWP, DA, GDK), pp. 3130–3139.
ICML-2019-JitkrittumSGRHS #kernel
Kernel Mean Matching for Content Addressability of GANs (WJ, PS, MWG, AR, JH, BS), pp. 3140–3151.
ICML-2019-JohnHS #difference #equation #named #off the shelf
GOODE: A Gaussian Off-The-Shelf Ordinary Differential Equation Solver (DJ, VH, MS), pp. 3152–3162.
ICML-2019-JunWWN #rank
Bilinear Bandits with Low-rank Structure (KSJ, RW, SW, RDN), pp. 3163–3172.
ICML-2019-KahngLNPP #statistics
Statistical Foundations of Virtual Democracy (AK, MKL, RN, ADP, CAP), pp. 3173–3182.
ICML-2019-Kajino #graph grammar #optimisation
Molecular Hypergraph Grammar with Its Application to Molecular Optimization (HK), pp. 3183–3191.
ICML-2019-KalimerisKS #modelling #robust
Robust Influence Maximization for Hyperparametric Models (DK, GK, YS), pp. 3192–3200.
ICML-2019-Kallus
Classifying Treatment Responders Under Causal Effect Monotonicity (NK), pp. 3201–3210.
ICML-2019-KalyanALB #modelling #sequence #set
Trainable Decoding of Sets of Sequences for Neural Sequence Models (AK, PA, SL, DB), pp. 3211–3221.
ICML-2019-KandasamyNZKSP #adaptation #design
Myopic Posterior Sampling for Adaptive Goal Oriented Design of Experiments (KK, WN, RZ, AK, JS, BP), pp. 3222–3232.
ICML-2019-KaplanMMS #concept #geometry #learning
Differentially Private Learning of Geometric Concepts (HK, YM, YM, US), pp. 3233–3241.
ICML-2019-KaplanisSC #learning #policy
Policy Consolidation for Continual Reinforcement Learning (CK, MS, CC), pp. 3242–3251.
ICML-2019-KarimireddyRSJ #fault #feedback
Error Feedback Fixes SignSGD and other Gradient Compression Schemes (SPK, QR, SUS, MJ), pp. 3252–3261.
ICML-2019-KasaiJM #adaptation #algorithm #matrix #probability
Riemannian adaptive stochastic gradient algorithms on matrix manifolds (HK, PJ, BM), pp. 3262–3271.
ICML-2019-KasparOMKM #image
Neural Inverse Knitting: From Images to Manufacturing Instructions (AK, THO, LM, PK, WM), pp. 3272–3281.
ICML-2019-KatharopoulosF #image #modelling
Processing Megapixel Images with Deep Attention-Sampling Models (AK, FF), pp. 3282–3291.
ICML-2019-KatiyarHC #estimation #modelling #robust #visual notation
Robust Estimation of Tree Structured Gaussian Graphical Models (AK, JH, CC), pp. 3292–3300.
ICML-2019-KayaHD #comprehension #network
Shallow-Deep Networks: Understanding and Mitigating Network Overthinking (YK, SH, TD), pp. 3301–3310.
ICML-2019-0001MZLK #adaptation #approximate #complexity #memory management #streaming
Submodular Streaming in All Its Glory: Tight Approximation, Minimum Memory and Low Adaptive Complexity (EK0, MM, MZ, SL, AK), pp. 3311–3320.
ICML-2019-KempkaKW #adaptation #algorithm #invariant #learning #linear #modelling #online
Adaptive Scale-Invariant Online Algorithms for Learning Linear Models (MK, WK, MKW), pp. 3321–3330.
ICML-2019-KenterWCCV #named #network #speech #synthesis
CHiVE: Varying Prosody in Speech Synthesis with a Linguistically Driven Dynamic Hierarchical Conditional Variational Network (TK, VW, CaC, RC, JV), pp. 3331–3340.
ICML-2019-KhadkaMNDTMLT #collaboration #learning
Collaborative Evolutionary Reinforcement Learning (SK, SM, TN, ZD, ET, SM, YL, KT), pp. 3341–3350.
ICML-2019-KhasanovaF #geometry #image #representation
Geometry Aware Convolutional Filters for Omnidirectional Images Representation (RK, PF), pp. 3351–3359.
ICML-2019-KimKJLS #named
EMI: Exploration with Mutual Information (HK, JK, YJ, SL, HOS), pp. 3360–3369.
ICML-2019-KimLSKY #generative
FloWaveNet: A Generative Flow for Raw Audio (SK, SgL, JS, JK, SY), pp. 3370–3378.
ICML-2019-KimNKKK #named
Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty (YK, WN, HK, JHK, GK), pp. 3379–3388.
ICML-2019-KimP #algorithm #multi
Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model (GSK, MCP), pp. 3389–3397.
ICML-2019-KimSRW #adaptation #convergence #kernel
Uniform Convergence Rate of the Kernel Density Estimator Adaptive to Intrinsic Volume Dimension (JK, JS, AR, LAW), pp. 3398–3407.
ICML-2019-KingmaAH #named #recursion
Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables (FHK, PA, JH), pp. 3408–3417.
ICML-2019-KipfLDZSGKB #composition #execution #learning #named
CompILE: Compositional Imitation Learning and Execution (TK, YL, HD, VFZ, ASG, EG, PK, PWB), pp. 3418–3428.
ICML-2019-KirschnerMHI0 #adaptation #optimisation
Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces (JK, MM, NH, RI, AK0), pp. 3429–3438.
ICML-2019-KleimanP #machine learning #metric #modelling #multi #named #performance
AUCμ: A Performance Metric for Multi-Class Machine Learning Models (RK, DP), pp. 3439–3447.
ICML-2019-KleindessnerAM #clustering #summary
Fair k-Center Clustering for Data Summarization (MK, PA, JM), pp. 3448–3457.
ICML-2019-KleindessnerSAM #clustering #constraints
Guarantees for Spectral Clustering with Fairness Constraints (MK, SS, PA, JM), pp. 3458–3467.
ICML-2019-KoLWDWL #named #network #robust
POPQORN: Quantifying Robustness of Recurrent Neural Networks (CYK, ZL, LW, LD, NW, DL), pp. 3468–3477.
ICML-2019-KoloskovaSJ #algorithm #communication #distributed #optimisation #probability
Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication (AK, SUS, MJ), pp. 3478–3487.
ICML-2019-KonstantinovL #learning #robust
Robust Learning from Untrusted Sources (NK, CL), pp. 3488–3498.
ICML-2019-KoolHW #probability #sequence
Stochastic Beams and Where To Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement (WK, HvH, MW), pp. 3499–3508.
ICML-2019-KoratanaKBZ #named #representation
LIT: Learned Intermediate Representation Training for Model Compression (AK, DK, PB, MZ), pp. 3509–3518.
ICML-2019-Kornblith0LH #network #revisited #similarity
Similarity of Neural Network Representations Revisited (SK, MN0, HL, GEH), pp. 3519–3529.
ICML-2019-KroshninTDDGU #approximate #complexity #on the
On the Complexity of Approximating Wasserstein Barycenters (AK, NT, DD, PED, AG, CAU), pp. 3530–3540.
ICML-2019-KulunchakovM #optimisation #probability #sequence
Estimate Sequences for Variance-Reduced Stochastic Composite Optimization (AK, JM), pp. 3541–3550.
ICML-2019-0001PRW #algorithm #matrix #performance
Faster Algorithms for Binary Matrix Factorization (RK0, RP, AR, DPW), pp. 3551–3559.
ICML-2019-KuninBGS #linear
Loss Landscapes of Regularized Linear Autoencoders (DK, JMB, AG, CS), pp. 3560–3569.
ICML-2019-KuoLZ0 #geometry #symmetry
Geometry and Symmetry in Short-and-Sparse Deconvolution (HWK, YL, YZ, JW0), pp. 3570–3580.
ICML-2019-KurachLZMG #normalisation #scalability
A Large-Scale Study on Regularization and Normalization in GANs (KK, ML, XZ, MM, SG), pp. 3581–3590.
ICML-2019-Kusner0LS
Making Decisions that Reduce Discriminatory Impacts (MJK, CR0, JRL, RS), pp. 3591–3600.
ICML-2019-KvetonSVWLG #multi
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits (BK, CS, SV, ZW, TL, MG), pp. 3601–3610.
ICML-2019-Labatie #network
Characterizing Well-Behaved vs. Pathological Deep Neural Networks (AL), pp. 3611–3621.
ICML-2019-LambBGSMBM #modelling #network
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations (AL, JB, AG, SS, IM, YB, MM), pp. 3622–3631.
ICML-2019-Lamprier
A Recurrent Neural Cascade-based Model for Continuous-Time Diffusion (SL), pp. 3632–3641.
ICML-2019-WonXL #learning
Projection onto Minkowski Sums with Application to Constrained Learning (JHW, JX, KL), pp. 3642–3651.
ICML-2019-LarocheTC #policy
Safe Policy Improvement with Baseline Bootstrapping (RL, PT, RTdC), pp. 3652–3661.
ICML-2019-LattanziS #algorithm
A Better k-means++ Algorithm via Local Search (SL, CS), pp. 3662–3671.
ICML-2019-LawLSZ #distance #learning
Lorentzian Distance Learning for Hyperbolic Representations (MTL, RL, JS, RSZ), pp. 3672–3681.
ICML-2019-LawrenceEC #dependence #learning #multi #named #parametricity
DP-GP-LVM: A Bayesian Non-Parametric Model for Learning Multivariate Dependency Structures (ARL, CHE, NDFC), pp. 3682–3691.
ICML-2019-X #bound #named #policy #predict #using
POLITEX: Regret Bounds for Policy Iteration using Expert Prediction, pp. 3692–3702.
ICML-2019-0002VY #constraints #learning #policy
Batch Policy Learning under Constraints (HML0, CV, YY), pp. 3703–3712.
ICML-2019-0002H #learning
Target-Based Temporal-Difference Learning (DL0, NH), pp. 3713–3722.
ICML-2019-LeeJAJ #approach #functional #game studies
Functional Transparency for Structured Data: a Game-Theoretic Approach (GHL, WJ, DAM, TSJ), pp. 3723–3733.
ICML-2019-LeeLK #graph #self
Self-Attention Graph Pooling (JL, IL, JK), pp. 3734–3743.
ICML-2019-LeeLKKCT #framework #invariant #network #set
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (JL, YL, JK, ARK, SC, YWT), pp. 3744–3753.
ICML-2019-LeeW #algorithm #first-order #performance #problem
First-Order Algorithms Converge Faster than $O(1/k)$ on Convex Problems (CPL, SW), pp. 3754–3762.
ICML-2019-LeeYLLLS #classification #generative #robust
Robust Inference via Generative Classifiers for Handling Noisy Labels (KL, SY, KL, HL, BL, JS), pp. 3763–3772.
ICML-2019-LeiHKT #nearest neighbour #sublinear
Sublinear Time Nearest Neighbor Search over Generalized Weighted Space (YL, QH, MSK, AKHT), pp. 3773–3781.
ICML-2019-LerasleSML #estimation
MONK Outlier-Robust Mean Embedding Estimation by Median-of-Means (ML, ZS, TM, GL), pp. 3782–3793.
ICML-2019-CasadoM #constraints #network #orthogonal
Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group (MLC, DMR), pp. 3794–3803.
ICML-2019-LiBS #classification #generative #question #robust
Are Generative Classifiers More Robust to Adversarial Attacks? (YL, JB, YS), pp. 3804–3814.
ICML-2019-LiCW #algorithm #classification #kernel #linear #quantum #sublinear
Sublinear quantum algorithms for training linear and kernel-based classifiers (TL, SC, XW), pp. 3815–3824.
ICML-2019-LiDMMHH #learning #named #network
LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning (HYL, WD, XM, CM, FH, BGH), pp. 3825–3834.
ICML-2019-LiGDVK #graph #learning #network #similarity
Graph Matching Networks for Learning the Similarity of Graph Structured Objects (YL, CG, TD, OV, PK), pp. 3835–3845.
ICML-2019-LiKBS
Area Attention (YL0, LK, SB, SS), pp. 3846–3855.
ICML-2019-LiLS #learning #online #rank
Online Learning to Rank with Features (SL, TL, CS), pp. 3856–3865.
ICML-2019-LiLWZG #black box #learning #named #network
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks (YL, LL, LW, TZ, BG), pp. 3866–3876.
ICML-2019-LiMC #visual notation
Bayesian Joint Spike-and-Slab Graphical Lasso (ZRL, THM, SJC), pp. 3877–3885.
ICML-2019-LiRC #correlation #crowdsourcing
Exploiting Worker Correlation for Label Aggregation in Crowdsourcing (YL0, BIPR, TC), pp. 3886–3895.
ICML-2019-LiSK #learning #physics
Adversarial camera stickers: A physical camera-based attack on deep learning systems (JL0, FRS, JZK), pp. 3896–3904.
ICML-2019-LiTOS #analysis #fourier #random #towards
Towards a Unified Analysis of Random Fourier Features (ZL, JFT, DO, DS), pp. 3905–3914.
ICML-2019-LiYZH #network
Feature-Critic Networks for Heterogeneous Domain Generalization (YL, YY, WZ, TMH), pp. 3915–3924.
ICML-2019-LiZWSX #framework #learning
Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting (XL, YZ, TW, RS, CX), pp. 3925–3934.
ICML-2019-LiZT #higher-order
Alternating Minimizations Converge to Second-Order Optimal Solutions (QL, ZZ, GT), pp. 3935–3943.
ICML-2019-LiakopoulosDPSM #constraints #online #optimisation
Cautious Regret Minimization: Online Optimization with Long-Term Budget Constraints (NL, AD, GSP, TS, PM), pp. 3944–3952.
ICML-2019-LichtenbergS
Regularization in directable environments with application to Tetris (JML, ÖS), pp. 3953–3962.
ICML-2019-LikhosherstovMC #modelling
Inference and Sampling of $K_{33}$-free Ising Models (VL, YM, MC), pp. 3963–3972.
ICML-2019-LimA #kernel #learning #markov #process #robust
Kernel-Based Reinforcement Learning in Robust Markov Decision Processes (SHL, AA), pp. 3973–3981.
ICML-2019-LinHJ #algorithm #analysis #on the #performance
On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms (TL, NH, MIJ), pp. 3982–3991.
ICML-2019-LinKS #approximate #performance
Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations (WL, MEK, MWS), pp. 3992–4002.
ICML-2019-LiuFY
Acceleration of SVRG and Katyusha X by Inexact Preconditioning (YL0, FF, WY), pp. 4003–4012.
ICML-2019-LiuLWJ #adaptation #approach #classification
Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers (HL, ML, JW0, MIJ), pp. 4013–4022.
ICML-2019-LiuRTJM #probability
Rao-Blackwellized Stochastic Gradients for Discrete Distributions (RL, JR, NT, MIJ, JDM), pp. 4023–4031.
ICML-2019-LiuS #learning #multi
Sparse Extreme Multi-label Learning with Oracle Property (WL, XS0), pp. 4032–4041.
ICML-2019-LiuS19a #probability
Data Poisoning Attacks on Stochastic Bandits (FL0, NBS), pp. 4042–4050.
ICML-2019-LiuSH #learning
The Implicit Fairness Criterion of Unconstrained Learning (LTL, MS, MH), pp. 4051–4060.
ICML-2019-LiuSX #learning #performance
Taming MAML: Efficient unbiased meta-reinforcement learning (HL, RS, CX), pp. 4061–4071.
ICML-2019-LiuTC #bound #on the
On Certifying Non-Uniform Bounds against Adversarial Attacks (CL, RT, VC), pp. 4072–4081.
ICML-2019-LiuZCZ0 #comprehension
Understanding and Accelerating Particle-Based Variational Inference (CL0, JZ, PC, RZ, JZ0), pp. 4082–4092.
ICML-2019-LiuZ0 #comprehension
Understanding MCMC Dynamics as Flows on the Wasserstein Space (CL0, JZ, JZ0), pp. 4093–4103.
ICML-2019-LiutkusSMDS #generative #modelling #parametricity
Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions (AL, US, SM, AD, FRS), pp. 4104–4113.
ICML-2019-LocatelloBLRGSB #learning
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (FL, SB, ML, GR, SG, BS, OB), pp. 4114–4124.
ICML-2019-LondonS
Bayesian Counterfactual Risk Minimization (BL, TS), pp. 4125–4133.
ICML-2019-LuHW #convergence #higher-order #named #optimisation
PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization (SL, MH, ZW), pp. 4134–4143.
ICML-2019-LuMT0
Neurally-Guided Structure Inference (SL, JM, JBT, JW0), pp. 4144–4153.
ICML-2019-LuWHZ #algorithm
Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards (SL, GW, YH, LZ0), pp. 4154–4163.
ICML-2019-LuYFZ0 #generative #modelling #named
CoT: Cooperative Training for Generative Modeling of Discrete Data (SL, LY, SF, YZ, WZ0), pp. 4164–4172.
ICML-2019-LucibelloSL #approximate #estimation #overview
Generalized Approximate Survey Propagation for High-Dimensional Estimation (CL, LS, YML), pp. 4173–4182.
ICML-2019-LucicTRZBG #generative #image
High-Fidelity Image Generation With Fewer Labels (ML, MT, MR, XZ, OB, SG), pp. 4183–4192.
ICML-2019-LuiseSPC #predict #rank
Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction (GL, DS, MP, CC), pp. 4193–4202.
ICML-2019-PingPSZRW #learning #normalisation #representation
Differentiable Dynamic Normalization for Learning Deep Representation (LP, ZP, WS, RZ, JR, LW), pp. 4203–4211.
ICML-2019-Ma0KW0 #graph #network
Disentangled Graph Convolutional Networks (JM, PC0, KK, XW0, WZ0), pp. 4212–4221.
ICML-2019-MaLH #process
Variational Implicit Processes (CM, YL, JMHL), pp. 4222–4233.
ICML-2019-MaTPHNZ #named #performance
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE (CM, ST, KP, JMHL, SN, CZ), pp. 4234–4243.
ICML-2019-MagnussonAJV #scalability
Bayesian leave-one-out cross-validation for large data (MM, MRA, JJ, AV), pp. 4244–4253.
ICML-2019-MahabadiIGR #algorithm #composition
Composable Core-sets for Determinant Maximization: A Simple Near-Optimal Algorithm (SM, PI, SOG, AR), pp. 4254–4263.
ICML-2019-Maheswaranathan #random
Guided evolutionary strategies: augmenting random search with surrogate gradients (NM, LM, GT, DC, JSD), pp. 4264–4273.
ICML-2019-MahloujifarMM #learning #multi
Data Poisoning Attacks in Multi-Party Learning (SM, MM, AM), pp. 4274–4283.
ICML-2019-MahoneyM #modelling #network
Traditional and Heavy Tailed Self Regularization in Neural Network Models (MWM, CM), pp. 4284–4293.
ICML-2019-MaiJ
Curvature-Exploiting Acceleration of Elastic Net Computations (VVM, MJ0), pp. 4294–4303.
ICML-2019-MakkuvaVKO #algorithm #consistency #performance
Breaking the gridlock in Mixture-of-Experts: Consistent and Efficient Algorithms (AVM, PV, SK, SO), pp. 4304–4313.
ICML-2019-MalikKSNSE #learning #modelling
Calibrated Model-Based Deep Reinforcement Learning (AM, VK, JS, DN, HS, SE), pp. 4314–4323.
ICML-2019-MannGGHJLS #learning #recommendation
Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems (TAM, SG, AG, HH, RJ, BL, PS), pp. 4324–4332.
ICML-2019-MannelliKUZ #algorithm #modelling
Passed & Spurious: Descent Algorithms and Local Minima in Spiked Matrix-Tensor Models (SSM, FK, PU, LZ), pp. 4333–4342.
ICML-2019-MaoFRAFW #estimation #graph #order #probability
A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs (JM, JNF, TR, MAS, GF, SW), pp. 4343–4351.
ICML-2019-MarafiotiPHM #generative #synthesis
Adversarial Generation of Time-Frequency Features with application in audio synthesis (AM, NP, NH, PM), pp. 4352–4362.
ICML-2019-MaronFSL #invariant #network #on the
On the Universality of Invariant Networks (HM, EF, NS, YL), pp. 4363–4371.
ICML-2019-MartensCY #modelling #process
Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models (KM, KRC, CY), pp. 4372–4381.
ICML-2019-MaryCK #learning
Fairness-Aware Learning for Continuous Attributes and Treatments (JM, CC, NEK), pp. 4382–4391.
ICML-2019-MathiasenLG
Optimal Minimal Margin Maximization with Boosting (AM, KGL, AG), pp. 4392–4401.
ICML-2019-MathieuRST
Disentangling Disentanglement in Variational Autoencoders (EM, TR, NS0, YWT), pp. 4402–4412.
ICML-2019-MatteiF #generative #modelling #named #semistructured data #set
MIWAE: Deep Generative Modelling and Imputation of Incomplete Data Sets (PAM, JF), pp. 4413–4423.
ICML-2019-MavrinYKWY #learning #performance
Distributional Reinforcement Learning for Efficient Exploration (BM, HY, LK, KW, YY), pp. 4424–4434.
ICML-2019-McKennaSM #difference #estimation #modelling #privacy
Graphical-model based estimation and inference for differential privacy (RM, DS, GM), pp. 4435–4444.
ICML-2019-RoederGPDM #performance
Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems (GR, PKG, AP, ND, EM), pp. 4445–4455.
ICML-2019-MehrotraCV #online #towards
Toward Controlling Discrimination in Online Ad Auctions (LEC, AM, NKV), pp. 4456–4465.
ICML-2019-MehtaCR #graph #network #probability
Stochastic Blockmodels meet Graph Neural Networks (NM, LC, PR), pp. 4466–4474.
ICML-2019-MeiQE
Imputing Missing Events in Continuous-Time Event Streams (HM, GQ, JE), pp. 4475–4485.
ICML-2019-MellerFAG #fault #network
Same, Same But Different: Recovering Neural Network Quantization Error Through Weight Factorization (EM, AF, UA, MG), pp. 4486–4495.
ICML-2019-MemoliSW
The Wasserstein Transform (FM, ZTS, ZW), pp. 4496–4504.
ICML-2019-MendisRAC #estimation #named #network #performance #throughput #using
Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks (CM, AR, SPA, MC), pp. 4505–4515.
ICML-2019-MenschBP #geometry #learning
Geometric Losses for Distributional Learning (AM, MB, GP), pp. 4516–4525.
ICML-2019-MercadoT0 #clustering #graph #matrix
Spectral Clustering of Signed Graphs via Matrix Power Means (PM0, FT, MH0), pp. 4526–4536.
ICML-2019-MetelT #optimisation #probability
Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization (MRM, AT), pp. 4537–4545.
ICML-2019-MetelliGR #configuration management #learning
Reinforcement Learning in Configurable Continuous Environments (AMM, EG, MR), pp. 4546–4555.
ICML-2019-MetzMNFS #comprehension
Understanding and correcting pathologies in the training of learned optimizers (LM, NM, JN, CDF, JSD), pp. 4556–4565.
ICML-2019-MeyerH #classification #kernel #performance #statistics
Optimality Implies Kernel Sum Classifiers are Statistically Efficient (RAM, JH), pp. 4566–4574.
ICML-2019-MianjyA #on the
On Dropout and Nuclear Norm Regularization (PM, RA), pp. 4575–4584.
ICML-2019-MillerOCM #modelling
Discriminative Regularization for Latent Variable Models with Applications to Electrocardiography (ACM, ZO, JPC, SM), pp. 4585–4594.
ICML-2019-MirshaniRS #functional #privacy
Formal Privacy for Functional Data with Gaussian Perturbations (AM, MR, ABS), pp. 4595–4604.
ICML-2019-MishneCC #learning
Co-manifold learning with missing data (GM, ECC, RRC), pp. 4605–4614.
ICML-2019-MohriSS #learning
Agnostic Federated Learning (MM, GS, ATS), pp. 4615–4625.
ICML-2019-MollenhoffC #generative #metric #modelling
Flat Metric Minimization with Applications in Generative Modeling (TM, DC), pp. 4626–4635.
ICML-2019-MoonAS #black box #combinator #optimisation #performance
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization (SM, GA, HOS), pp. 4636–4645.
ICML-2019-MostafaW #network #parametricity #performance
Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization (HM, XW), pp. 4646–4655.
ICML-2019-MuehlebachJ
A Dynamical Systems Perspective on Nesterov Acceleration (MM, MIJ), pp. 4656–4662.
ICML-2019-Murphy0R0 #graph #relational
Relational Pooling for Graph Representations (RLM, BS0, VAR, BR0), pp. 4663–4673.
ICML-2019-NabiMS #learning #policy
Learning Optimal Fair Policies (RN, DM, IS), pp. 4674–4682.
ICML-2019-NacsonGLSS #modelling
Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models (MSN, SG, JDL, NS, DS), pp. 4683–4692.
ICML-2019-NaganoY0K #learning
A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning (YN, SY, YF0, MK), pp. 4693–4702.
ICML-2019-Nagaraj0N
SGD without Replacement: Sharper Rates for General Smooth Convex Functions (DN, PJ0, PN), pp. 4703–4711.
ICML-2019-NalisnickHS
Dropout as a Structured Shrinkage Prior (ETN, JMHL, PS), pp. 4712–4722.
ICML-2019-NalisnickMTGL #hybrid #modelling
Hybrid Models with Deep and Invertible Features (ETN, AM, YWT, DG, BL), pp. 4723–4732.
ICML-2019-NamKMPSF #classification #learning #multi #permutation
Learning Context-dependent Label Permutations for Multi-label Classification (JN, YBK, ELM, SP, RS, JF), pp. 4733–4742.
ICML-2019-NayakMSRC #network
Zero-Shot Knowledge Distillation in Deep Networks (GKN, KRM, VS, VBR, AC), pp. 4743–4751.
ICML-2019-NayebiMP #embedded #framework #optimisation
A Framework for Bayesian Optimization in Embedded Subspaces (AN, AM, MP), pp. 4752–4761.
ICML-2019-NayerNV #matrix #metric #rank
Phaseless PCA: Low-Rank Matrix Recovery from Column-wise Phaseless Measurements (SN, PN, NV), pp. 4762–4770.
ICML-2019-NdiayeLFST #complexity #grid
Safe Grid Search with Optimal Complexity (EN, TL, OF, JS, IT), pp. 4771–4780.
ICML-2019-NedelecKP #learning
Learning to bid in revenue-maximizing auctions (TN, NEK, VP), pp. 4781–4789.
ICML-2019-Nguyen #learning #on the #set
On Connected Sublevel Sets in Deep Learning (QN), pp. 4790–4799.
ICML-2019-NguyenLKB #detection #multi #predict
Anomaly Detection With Multiple-Hypotheses Predictions (DTN, ZL, MK, TB), pp. 4800–4809.
ICML-2019-NguyenSR #analysis #monte carlo #optimisation
Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization (THN, US, GR), pp. 4810–4819.
ICML-2019-NirwanB #invariant
Rotation Invariant Householder Parameterization for Bayesian PCA (RSN, NB), pp. 4820–4828.
ICML-2019-NockW #integer
Lossless or Quantized Boosting with Integer Arithmetic (RN, RCW), pp. 4829–4838.
ICML-2019-NoklandE #fault #network
Training Neural Networks with Local Error Signals (AN, LHE), pp. 4839–4850.
ICML-2019-NovatiK #experience
Remember and Forget for Experience Replay (GN, PK), pp. 4851–4860.
ICML-2019-NyeHTS #learning #sketching
Learning to Infer Program Sketches (MIN, LBH, JBT, ASL), pp. 4861–4870.
ICML-2019-ObermeyerBJPCRG #graph
Tensor Variable Elimination for Plated Factor Graphs (FO, EB, MJ, NP, JC, AMR, NDG), pp. 4871–4880.
ICML-2019-OberstS #evaluation #modelling
Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models (MO, DAS), pp. 4881–4890.
ICML-2019-OchsM
Model Function Based Conditional Gradient Method with Armijo-like Line Search (PO, YM), pp. 4891–4900.
ICML-2019-OdenaOAG #debugging #fuzzing #named #network
TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing (AO, CO, DA, IJG), pp. 4901–4911.
ICML-2019-OglicG #kernel #learning #scalability
Scalable Learning in Reproducing Kernel Krein Spaces (DO, TG0), pp. 4912–4921.
ICML-2019-OonoS #approximate #estimation #network #parametricity
Approximation and non-parametric estimation of ResNet-type convolutional neural networks (KO, TS), pp. 4922–4931.
ICML-2019-OprescuSW #orthogonal #random
Orthogonal Random Forest for Causal Inference (MO, VS, ZSW), pp. 4932–4941.
ICML-2019-OsamaZS
Inferring Heterogeneous Causal Effects in Presence of Spatial Confounding (MO, DZ, TBS), pp. 4942–4950.
ICML-2019-OymakS #learning #question
Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path? (SO, MS), pp. 4951–4960.
ICML-2019-PanageasPW #algorithm #convergence #distributed #higher-order #multi #optimisation
Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always (IP, GP, XW), pp. 4961–4969.
ICML-2019-PangXDCZ #robust
Improving Adversarial Robustness via Promoting Ensemble Diversity (TP, KX, CD, NC, JZ0), pp. 4970–4979.
ICML-2019-PanousisCT #contest #network #parametricity
Nonparametric Bayesian Deep Networks with Local Competition (KPP, SC, ST), pp. 4980–4988.
ICML-2019-PapiniMLR #multi #optimisation #policy
Optimistic Policy Optimization via Multiple Importance Sampling (MP, AMM, LL, MR), pp. 4989–4999.
ICML-2019-PappasH #generative
Deep Residual Output Layers for Neural Language Generation (NP0, JH), pp. 5000–5011.
ICML-2019-Papyan #metric
Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians (VP), pp. 5012–5021.
ICML-2019-Parizi0ASF
Generalized Majorization-Minimization (SNP, KH0, RA, SS, PFF), pp. 5022–5031.
ICML-2019-ParkKK
Variational Laplace Autoencoders (YSP, CDK, GK), pp. 5032–5041.
ICML-2019-ParkSLS #empirical #network #probability
The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study (DSP, JSD, QVL, SLS), pp. 5042–5051.
ICML-2019-ParkYYS #approximate
Spectral Approximate Inference (SP, EY, SYY, JS), pp. 5052–5061.
ICML-2019-PathakG0 #self
Self-Supervised Exploration via Disagreement (DP, DG, AG0), pp. 5062–5071.
ICML-2019-PatyC #robust
Subspace Robust Wasserstein Distances (FPP, MC), pp. 5072–5081.
ICML-2019-PaulOW #learning #optimisation #policy #robust
Fingerprint Policy Optimisation for Robust Reinforcement Learning (SP, MAO, SW), pp. 5082–5091.
ICML-2019-0001HLZZ #clustering #multi #named #parametricity
COMIC: Multi-view Clustering Without Parameter Selection (XP0, ZH, JL, HZ, JTZ), pp. 5092–5101.
ICML-2019-PengHSS #learning
Domain Agnostic Learning with Disentangled Representations (XP, ZH, XS, KS), pp. 5102–5112.
ICML-2019-PengWCH #collaboration #network
Collaborative Channel Pruning for Deep Networks (HP, JW, SC, JH), pp. 5113–5122.
ICML-2019-PerraultPV #nondeterminism #performance
Exploiting structure of uncertainty for efficient matroid semi-bandits (PP, VP, MV), pp. 5123–5132.
ICML-2019-PetersonB0GR #predict
Cognitive model priors for predicting human decisions (JCP, DB, DR0, TLG, SJR), pp. 5133–5141.
ICML-2019-PhuongL #comprehension #towards
Towards Understanding Knowledge Distillation (MP, CL), pp. 5142–5151.
ICML-2019-PiergiovanniR
Temporal Gaussian Mixture Layer for Videos (AJP, MSR), pp. 5152–5161.
ICML-2019-PolianskiiP #approach #bound #classification #geometry #integration #monte carlo
Voronoi Boundary Classification: A High-Dimensional Geometric Approach via Weighted Monte Carlo Integration (VP, FTP), pp. 5162–5170.
ICML-2019-PooleOOAT #bound #on the
On Variational Bounds of Mutual Information (BP, SO, AvdO, AA, GT), pp. 5171–5180.
ICML-2019-PurohitGR #nondeterminism
Hiring Under Uncertainty (MP, SG, MR), pp. 5181–5189.
ICML-2019-QianQR
SAGA with Arbitrary Sampling (XQ, ZQ, PR), pp. 5190–5199.
ICML-2019-QianRGSLS #analysis
SGD with Arbitrary Sampling: General Analysis and Improved Rates (XQ, PR, RMG, AS, NL, ES), pp. 5200–5209.
ICML-2019-QianZCYH #named
AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss (KQ, YZ, SC, XY, MHJ), pp. 5210–5219.
ICML-2019-QiaoAZX #fault tolerance #machine learning
Fault Tolerance in Iterative-Convergent Machine Learning (AQ, BA, BZ, EPX), pp. 5220–5230.
ICML-2019-QinCCGR #automation #recognition #robust #speech
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition (YQ, NC, GWC, IJG, CR), pp. 5231–5240.
ICML-2019-QuBT #graph #markov #named #network
GMNN: Graph Markov Neural Networks (MQ, YB, JT0), pp. 5241–5250.
ICML-2019-QuMX #learning
Nonlinear Distributional Gradient Temporal-Difference Learning (CQ, SM, HX), pp. 5251–5260.
ICML-2019-RadanovicDPS #learning #markov #process
Learning to Collaborate in Markov Decision Processes (GR, RD, DCP, AS), pp. 5261–5270.
ICML-2019-RaeBL
Meta-Learning Neural Bloom Filters (JWR, SB, TPL), pp. 5271–5280.
ICML-2019-RaghuBSOKMK #nondeterminism #predict
Direct Uncertainty Prediction for Medical Second Opinions (MR, KB, RS, ZO, RDK, SM, JMK), pp. 5281–5290.
ICML-2019-RaghunathanCJ #game studies #optimisation
Game Theoretic Optimization via Gradient-based Nikaido-Isoda Function (AUR, AC, DKJ), pp. 5291–5300.
ICML-2019-RahamanBADLHBC #bias #network #on the
On the Spectral Bias of Neural Networks (NR, AB, DA, FD, ML, FAH, YB, ACC), pp. 5301–5310.
ICML-2019-RahmanJG #compilation #network
Look Ma, No Latent Variables: Accurate Cutset Networks via Compilation (TR, SJ, VG), pp. 5311–5320.
ICML-2019-RajputFCLP #question
Does Data Augmentation Lead to Positive Margin? (SR, ZF, ZBC, PLL, DSP), pp. 5321–5330.
ICML-2019-RakellyZFLQ #learning #performance #probability
Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables (KR, AZ, CF, SL, DQ), pp. 5331–5340.
ICML-2019-RakotomamonjyGS
Screening rules for Lasso with non-convex Sparse Regularizers (AR, GG, JS), pp. 5341–5350.
ICML-2019-RamamurthyVM #bound #data analysis
Topological Data Analysis of Decision Boundaries with Application to Model Selection (KNR, KRV, KM), pp. 5351–5360.
ICML-2019-RatzlaffL #generative #named #network
HyperGAN: A Generative Model for Diverse, Performant Neural Networks (NR, FL), pp. 5361–5369.
ICML-2019-Ravi #modelling #performance #using
Efficient On-Device Models using Neural Projections (SR), pp. 5370–5379.
ICML-2019-Raziperchikolaei #coordination #estimation #parametricity
A Block Coordinate Descent Proximal Method for Simultaneous Filtering and Parameter Estimation (RR, HSB), pp. 5380–5388.
ICML-2019-RechtRSS #classification #question
Do ImageNet Classifiers Generalize to ImageNet? (BR, RR, LS, VS), pp. 5389–5400.
ICML-2019-ReeveK #classification #performance #robust #symmetry
Fast Rates for a kNN Classifier Robust to Unknown Asymmetric Label Noise (HWJR, AK), pp. 5401–5409.
ICML-2019-RenTQZZL #automation #recognition #speech
Almost Unsupervised Text to Speech and Automatic Speech Recognition (YR, XT, TQ, SZ, ZZ, TYL), pp. 5410–5419.
ICML-2019-RenZE #adaptation #reduction
Adaptive Antithetic Sampling for Variance Reduction (HR, SZ, SE), pp. 5420–5428.
ICML-2019-ReslerM #learning #online
Adversarial Online Learning with noise (AR, YM), pp. 5429–5437.
ICML-2019-RezaeiG #polynomial #process
A Polynomial Time MCMC Method for Sampling from Continuous Determinantal Point Processes (AR, SOG), pp. 5438–5447.
ICML-2019-RieckBB #classification #graph #persistent
A Persistent Weisfeiler-Lehman Procedure for Graph Classification (BR, CB, KMB), pp. 5448–5458.
ICML-2019-RollandKISC #learning #performance #probability #testing
Efficient learning of smooth probability functions from Bernoulli tests with guarantees (PR, AK, AI, AS, VC), pp. 5459–5467.
ICML-2019-Romoff0TOPB
Separable value functions across time-scales (JR, PH0, AT, YO, JP, EB), pp. 5468–5477.
ICML-2019-RosenbergM #markov #online #optimisation #process
Online Convex Optimization in Adversarial Markov Decision Processes (AR0, YM), pp. 5478–5486.
ICML-2019-RossiMF #modelling
Good Initializations of Variational Bayes for Deep Models (SR, PM, MF), pp. 5487–5497.
ICML-2019-RothKH #detection #statistics
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples (KR, YK, TH), pp. 5498–5507.
ICML-2019-RotskoffJBV
Neuron birth-death dynamics accelerates gradient descent and converges asymptotically (GMR, SJ, JB, EVE), pp. 5508–5517.
ICML-2019-RouletDSH #algorithm #complexity
Iterative Linearized Control: Stable Algorithms and Complexity Guarantees (VR, DD, SSS, ZH), pp. 5518–5527.
ICML-2019-RowlandDKMBD #learning #statistics
Statistics and Samples in Distributional Reinforcement Learning (MR, RD, SK, RM, MGB, WD), pp. 5528–5536.
ICML-2019-RuizT
A Contrastive Divergence for Combining Variational Inference and MCMC (FJRR, MKT), pp. 5537–5545.
ICML-2019-RyuLWCWY
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers (EKR, JL0, SW, XC, ZW, WY), pp. 5546–5557.
ICML-2019-SablayrollesDSO #black box
White-box vs Black-box: Bayes Optimal Strategies for Membership Inference (AS, MD, CS, YO, HJ), pp. 5558–5567.
ICML-2019-SafaviB #graph #multi
Tractable n-Metrics for Multiple Graphs (SS, JB), pp. 5568–5578.
ICML-2019-SajedS #algorithm
An Optimal Private Stochastic-MAB Algorithm based on Optimal Private Stopping Rule (TS, OS), pp. 5579–5588.
ICML-2019-SalimbeniDHD #process
Deep Gaussian Processes with Importance-Weighted Variational Inference (HS, VD, JH, MPD), pp. 5589–5598.
ICML-2019-SantiagoS #multi #optimisation
Multivariate Submodular Optimization (RS, FBS), pp. 5599–5609.
ICML-2019-SarkarR #finite #identification #linear
Near optimal finite time identification of arbitrary linear dynamical systems (TS, AR), pp. 5610–5618.
ICML-2019-SatoILT #adaptation #classification #co-evolution
Breaking Inter-Layer Co-Adaptation by Classifier Anonymization (IS, KI, GL, MT), pp. 5619–5627.
ICML-2019-SaunshiPAKK #analysis #learning #representation
A Theoretical Analysis of Contrastive Unsupervised Representation Learning (NS, OP, SA, MK, HK), pp. 5628–5637.
ICML-2019-ScheinWSZW #modelling
Locally Private Bayesian Inference for Count Models (AS, ZSW, AS, MZ, HMW), pp. 5638–5648.
ICML-2019-SchroeterSM #learning #locality
Weakly-Supervised Temporal Localization via Occurrence Count Learning (JS, KAS, ADM), pp. 5649–5659.
ICML-2019-SeshadriPU
Discovering Context Effects from Raw Choice Data (AS, AP, JU), pp. 5660–5669.
ICML-2019-ShahGAD #bias #learning #on the
On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference (RS, NG, PA, ADD), pp. 5670–5679.
ICML-2019-ShaniEM #learning #revisited
Exploration Conscious Reinforcement Learning Revisited (LS, YE, SM), pp. 5680–5689.
ICML-2019-SharanTBV #performance #rank
Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data (VS, KST, PB, GV), pp. 5690–5700.
ICML-2019-ShenHCD #independence #network #testing
Conditional Independence in Testing Bayesian Networks (YS, HH, AC, AD), pp. 5701–5709.
ICML-2019-ShenLL #learning
Learning to Clear the Market (WS, SL, RPL), pp. 5710–5718.
ICML-2019-ShenOAR #modelling
Mixture Models for Diverse Machine Translation: Tricks of the Trade (TS, MO, MA, MR), pp. 5719–5728.
ICML-2019-ShenRHQM #policy
Hessian Aided Policy Gradient (ZS, AR, HH, HQ, CM), pp. 5729–5738.
ICML-2019-ShenS #learning
Learning with Bad Training Data via Iterative Trimmed Loss Minimization (YS, SS), pp. 5739–5748.
ICML-2019-ShestopaloffD #monte carlo
Replica Conditional Sequential Monte Carlo (AS, AD), pp. 5749–5757.
ICML-2019-ShiK0 #modelling #network #scalability
Scalable Training of Inference Networks for Gaussian-Process Models (JS, MEK, JZ0), pp. 5758–5768.
ICML-2019-Shi0 #learning #multi #performance
Fast Direct Search in an Optimally Compressed Continuous Target Space for Efficient Multi-Label Active Learning (WS, QY0), pp. 5769–5778.
ICML-2019-ShyamJG #modelling
Model-Based Active Exploration (PS, WJ, FG), pp. 5779–5788.
ICML-2019-SiminelakisRBCL #evaluation #kernel
Rehashing Kernel Evaluation in High Dimensions (PS, KR, PB, MC, PL), pp. 5789–5798.
ICML-2019-SimonWR #generative #modelling #precise
Revisiting precision recall definition for generative modeling (LS, RW, JR), pp. 5799–5808.
ICML-2019-Simon-GabrielOB #first-order #network
First-Order Adversarial Vulnerability of Neural Networks and Input Dimension (CJSG, YO, LB, BS, DLP), pp. 5809–5817.
ICML-2019-SimonovFGP #complexity
Refined Complexity of PCA with Outliers (KS, FVF, PAG, FP), pp. 5818–5826.
ICML-2019-SimsekliSG #analysis #network #probability
A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks (US, LS, MG), pp. 5827–5837.
ICML-2019-SinghTJGB #generative #network #parametricity
Non-Parametric Priors For Generative Adversarial Networks (RS, PKT, SJ, RG, MWB), pp. 5838–5847.
ICML-2019-SinglaWFF #approximate #comprehension #higher-order #learning
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation (SS0, EW, SF, SF), pp. 5848–5856.
ICML-2019-SlimCAV #framework #kernel #named
kernelPSI: a Post-Selection Inference Framework for Nonlinear Variable Selection (LS, CC0, CAA, JPV), pp. 5857–5865.
ICML-2019-SmithFRM #geometry #named
GEOMetrics: Exploiting Geometric Structure for Graph-Encoded Objects (EJS, SF, AR, DM), pp. 5866–5876.
ICML-2019-SoLL
The Evolved Transformer (DRS, QVL, CL), pp. 5877–5886.
ICML-2019-SonKKHY #learning #multi #named
QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning (KS, DK, WJK, DH, YY), pp. 5887–5896.
ICML-2019-SongDKF
Distribution calibration for regression (HS, TD, MK, PAF), pp. 5897–5906.
ICML-2019-SongK0 #learning #named #robust
SELFIE: Refurbishing Unclean Samples for Robust Deep Learning (HS, MK, JGL0), pp. 5907–5915.
ICML-2019-SongPC #perspective
Revisiting the Softmax Bellman Operator: New Benefits and New Perspective (ZS, RP, LC), pp. 5916–5925.
ICML-2019-SongTQLL #generative #named #sequence
MASS: Masked Sequence to Sequence Pre-training for Language Generation (KS, XT, TQ, JL, TYL), pp. 5926–5936.
ICML-2019-SotoLF #3d #distributed #matrix #multi #polynomial
Dual Entangled Polynomial Code: Three-Dimensional Coding for Distributed Matrix Multiplication (PS, JL, XF), pp. 5937–5945.
ICML-2019-SpringKMS #sketching
Compressing Gradient Optimizers via Count-Sketches (RS, AK, VM, AS), pp. 5946–5955.
ICML-2019-StaibRKKS #adaptation
Escaping Saddle Points with Adaptive Gradient Methods (MS, SJR, SK, SK, SS), pp. 5956–5965.
ICML-2019-StelznerPK #modelling #performance #probability
Faster Attend-Infer-Repeat with Tractable Probabilistic Models (KS, RP, KK), pp. 5966–5975.
ICML-2019-SternCKU #flexibility #generative #sequence
Insertion Transformer: Flexible Sequence Generation via Insertion Operations (MS, WC, JK, JU), pp. 5976–5985.
ICML-2019-Stickland0 #adaptation #learning #multi #performance
BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (ACS, IM0), pp. 5986–5995.
ICML-2019-Streeter #learning #linear
Learning Optimal Linear Regularizers (MS), pp. 5996–6004.
ICML-2019-SuWSJ #adaptation #evaluation #learning #named #policy
CAB: Continuous Adaptive Blending for Policy Evaluation and Learning (YS, LW, MS, TJ), pp. 6005–6014.
ICML-2019-SuW #distance #learning #metric #sequence
Learning Distance for Sequences by Learning a Ground Metric (BS, YW), pp. 6015–6025.
ICML-2019-SunBD0M #memory management
Contextual Memory Trees (WS, AB, HDI, JL0, PM), pp. 6026–6035.
ICML-2019-0002VBB #learning #performance
Provably Efficient Imitation Learning from Observation Alone (WS0, AV, BB, DB), pp. 6036–6045.
ICML-2019-SundinSSVSK #learning
Active Learning for Decision-Making from Imbalanced Observational Data (IS, PS, ES, AV, SS, SK), pp. 6046–6055.
ICML-2019-SuterMSB #robust #validation
Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness (RS, ÐM, BS, SB), pp. 6056–6065.
ICML-2019-0008TO #graph
Hyperbolic Disk Embeddings for Directed Acyclic Graphs (RS0, RT, SO), pp. 6066–6075.
ICML-2019-TaghvaeiM #probability
Accelerated Flow for Probability Distributions (AT, PGM), pp. 6076–6085.
ICML-2019-TaiBV #network
Equivariant Transformer Networks (KST, PB, GV), pp. 6086–6095.
ICML-2019-TallecBO #robust
Making Deep Q-learning methods robust to time discretization (CT, LB, YO), pp. 6096–6104.
ICML-2019-TanL #named #network #scalability
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (MT, QVL), pp. 6105–6114.
ICML-2019-TanP
Hierarchical Decompositional Mixtures of Variational Autoencoders (PLT, RP), pp. 6115–6124.
ICML-2019-Tang #modelling #ranking
Mallows ranking models: maximum likelihood estimate and regeneration (WT), pp. 6125–6134.
ICML-2019-TangLJR #correlation
Correlated Variational Auto-Encoders (DT, DL, TJ, NR), pp. 6135–6144.
ICML-2019-TangR #predict
The Variational Predictive Natural Gradient (DT, RR), pp. 6145–6154.
ICML-2019-TangYLZL #named #parallel #probability
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-pass Error-Compensated Compression (HT, CY, XL, TZ, JL0), pp. 6155–6165.
ICML-2019-TannoAACN #adaptation
Adaptive Neural Trees (RT, KA, DCA, AC, AVN), pp. 6166–6175.
ICML-2019-TaoDCBCLZBC #perspective
Variational Annealing of GANs: A Langevin Perspective (CT, SD, LC, KB, JC, CL0, RZ, GVB, LC), pp. 6176–6185.
ICML-2019-TavaresBMSR #declarative
Predicate Exchange: Inference with Declarative Knowledge (ZT, JB, EM, ASL, RR), pp. 6186–6195.
ICML-2019-TennenholtzM #natural language
The Natural Language of Actions (GT, SM), pp. 6196–6205.
ICML-2019-TeradaY #kernel #normalisation
Kernel Normalized Cut: a Theoretical Revisit (YT, MY), pp. 6206–6214.
ICML-2019-TesslerEM #learning #robust
Action Robust Reinforcement Learning and Applications in Continuous Control (CT, YE, SM), pp. 6215–6224.
ICML-2019-ThomasL
Concentration Inequalities for Conditional Value at Risk (PST, EGLM), pp. 6225–6233.
ICML-2019-ThulasidasanBBC #learning #using
Combating Label Noise in Deep Learning using Abstention (ST, TB, JAB, GC, JMY), pp. 6234–6243.
ICML-2019-TianMGSCPZ #analysis
ELF OpenGo: an analysis and open reimplementation of AlphaZero (YT, JM, QG, SS, ZC, JP, LZ), pp. 6244–6253.
ICML-2019-TiomokoCBG #estimation #matrix #metric #random #scalability
Random Matrix Improved Covariance Estimation for a Large Class of Metrics (MT, RC, FB, GG), pp. 6254–6263.
ICML-2019-TirinzoniSR #multi #policy
Transfer of Samples in Policy Search via Multiple Importance Sampling (AT, MS, MR), pp. 6264–6274.
ICML-2019-VayerCTCF #graph
Optimal Transport for structured data with application on graphs (TV, NC, RT, LC, RF), pp. 6275–6284.
ICML-2019-TongC #multi
Discovering Latent Covariance Structures for Multiple Time Series (AT, JC), pp. 6285–6294.
ICML-2019-TranDRC #generative #learning
Bayesian Generative Active Deep Learning (TT, TTD, IDR0, GC), pp. 6295–6304.
ICML-2019-TranKSK #named #network #using
DeepNose: Using artificial neural networks to represent the space of odorants (NBT, DRK, SS, AAK), pp. 6305–6314.
ICML-2019-TrippeHAB #approximate #named #rank #using
LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations (BLT, JHH, RA, TB), pp. 6315–6324.
ICML-2019-TrouleauEGKT #learning #process
Learning Hawkes Processes Under Synchronization Noise (WT, JE, MG, NK, PT), pp. 6325–6334.
ICML-2019-TsakirisP
Homomorphic Sensing (MCT, LP), pp. 6335–6344.
ICML-2019-TurnerHFSY #generative #network
Metropolis-Hastings Generative Adversarial Networks (RDT, JH, EF, YS, JY), pp. 6345–6353.
ICML-2019-TzengW #detection #distributed #graph
Distributed, Egocentric Representations of Graphs for Detecting Critical Structures (RCT, SHW), pp. 6354–6362.
ICML-2019-Upadhyay #algorithm #sublinear
Sublinear Space Private Algorithms Under the Sliding Window Model (JU), pp. 6363–6372.
ICML-2019-UstunLP #classification
Fairness without Harm: Decoupled Classifiers with Preference Guarantees (BU, YL0, DCP), pp. 6373–6382.
ICML-2019-UurtioBR #analysis #canonical #correlation #kernel #scalability
Large-Scale Sparse Kernel Canonical Correlation Analysis (VU, SB, JR), pp. 6383–6391.
ICML-2019-DijkNNP #convergence
Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD (MvD, LMN, PHN, DTP), pp. 6392–6400.
ICML-2019-NiekerkJER #learning
Composing Value Functions in Reinforcement Learning (BvN, SJ, ACE, BR), pp. 6401–6409.
ICML-2019-VargasBH #comparison #difference #semantics
Model Comparison for Semantic Grouping (FV, KB, NH), pp. 6410–6417.
ICML-2019-VarmaSHRR #dependence #learning #modelling
Learning Dependency Structures for Weak Supervision Models (PV, FS, AH, AR, CR), pp. 6418–6427.
ICML-2019-VedantamDLRBP #modelling #probability #visual notation
Probabilistic Neural Symbolic Models for Interpretable Visual Question Answering (RV, KD, SL, MR, DB, DP), pp. 6428–6437.
ICML-2019-VermaLBNMLB
Manifold Mixup: Better Representations by Interpolating Hidden States (VV, AL, CB, AN, IM, DLP, YB), pp. 6438–6447.
ICML-2019-VinayakKVK #estimation #learning #parametricity
Maximum Likelihood Estimation for Learning Populations of Parameters (RKV, WK, GV, SMK), pp. 6448–6457.
ICML-2019-VladimirovaVMA #comprehension #network
Understanding Priors in Bayesian Neural Networks at the Unit Level (MV, JV, PM, JA), pp. 6458–6467.
ICML-2019-VlassisBDJ #design #evaluation #on the
On the Design of Estimators for Bandit Off-Policy Evaluation (NV, AB, MD, TJ), pp. 6468–6476.
ICML-2019-VorobevUGS #learning #ranking
Learning to select for a predefined ranking (AV, AU, GG, PS), pp. 6477–6486.
ICML-2019-WagstaffFEPO #on the #representation #set
On the Limitations of Representing Functions on Sets (EW, FF, ME, IP, MAO), pp. 6487–6494.
ICML-2019-WalkerG #graph #process
Graph Convolutional Gaussian Processes (IW, BG), pp. 6495–6504.
ICML-2019-Wang #low cost
Gaining Free or Low-Cost Interpretability with Interpretable Partial Substitute (TW), pp. 6505–6514.
ICML-2019-Wang0XZ #network
Convolutional Poisson Gamma Belief Network (CW, BC0, SX, MZ), pp. 6515–6525.
ICML-2019-WangC0 #empirical
Differentially Private Empirical Risk Minimization with Non-convex Loss Functions (DW, CC, JX0), pp. 6526–6535.
ICML-2019-WangCAD #estimation #learning #policy #random
Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation (RW, CC, PVA, YD), pp. 6536–6544.
ICML-2019-WangDWK #learning #logic #named #reasoning #satisfiability #using
SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver (PWW, PLD, BW, JZK), pp. 6545–6554.
ICML-2019-WangG0 #modelling
Improving Neural Language Modeling via Adversarial Training (DW, CG, QL0), pp. 6555–6565.
ICML-2019-WangGFZ #named
EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis (CW, RBG, SF, GZ), pp. 6566–6575.
ICML-2019-Wang0 #learning #modelling
Nonlinear Stein Variational Gradient Descent for Learning Diversified Mixture Models (DW, QL0), pp. 6576–6585.
ICML-2019-WangM0YZG #convergence #on the #robust
On the Convergence and Robustness of Adversarial Training (YW0, XM, JB0, JY, BZ, QG), pp. 6586–6595.
ICML-2019-WangN #network
State-Regularized Recurrent Neural Networks (CW, MN), pp. 6596–6606.
ICML-2019-WangSMGFJ
Deep Factors for Forecasting (YW, AS, DCM, JG, DF, TJ), pp. 6607–6617.
ICML-2019-WangUC
Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions (HW0, BU, FPC), pp. 6618–6627.
ICML-2019-Wang019a #difference #linear #on the #privacy
On Sparse Linear Regression in the Local Differential Privacy Model (DW, JX0), pp. 6628–6637.
ICML-2019-WangZ0Q #learning #random #recommendation #robust
Doubly Robust Joint Learning for Recommendation on Data Missing Not at Random (XW, RZ0, YS0, JQ0), pp. 6638–6647.
ICML-2019-WangZXS #learning #on the
On the Generalization Gap in Reparameterizable Reinforcement Learning (HW, SZ, CX, RS), pp. 6648–6658.
ICML-2019-WangZB #bias #matter #network
Bias Also Matters: Bias Attribution for Deep Neural Network Explanation (SW, TZ, JAB), pp. 6659–6667.
ICML-2019-WangZB19a #network
Jumpout: Improved Dropout for Deep Neural Networks with ReLUs (SW, TZ, JAB), pp. 6668–6676.
ICML-2019-WardWB #convergence
AdaGrad stepsizes: sharp convergence over nonconvex landscapes (RW, XW, LB), pp. 6677–6686.
ICML-2019-WeiDGG #linear #modelling
Generalized Linear Rule Models (DW, SD, TG, OG), pp. 6687–6696.
ICML-2019-WeiYW #generative #modelling #on the #statistics
On the statistical rate of nonlinear recovery in generative models with heavy-tailed data (XW, ZY, ZW), pp. 6697–6706.
ICML-2019-WeiszGS #algorithm #approximate #named
CapsAndRuns: An Improved Method for Approximately Optimal Algorithm Configuration (GW, AG, CS), pp. 6707–6715.
ICML-2019-WelleckBDC #generative
Non-Monotonic Sequential Text Generation (SW, KB, HDI, KC), pp. 6716–6726.
ICML-2019-WengCNSBOD #approach #named #network #probability #robust #verification
PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach (LW, PYC, LMN, MSS, AB, IVO, LD), pp. 6727–6736.
ICML-2019-LiSSG #exponential #kernel #learning #product line
Learning deep kernels for exponential family densities (WL, DJS, HS, AG), pp. 6737–6746.
ICML-2019-WestphalB #testing
Improving Model Selection by Employing the Test Data (MW, WB), pp. 6747–6756.
ICML-2019-WhitehillR #automation #classification
Automatic Classifiers as Scientific Instruments: One Step Further Away from Ground-Truth (JW, AR), pp. 6757–6765.
ICML-2019-WildnerK #markov #process
Moment-Based Variational Inference for Markov Jump Processes (CW, HK), pp. 6766–6775.
ICML-2019-WilkinsonARSS #analysis #probability
End-to-End Probabilistic Inference for Nonstationary Audio Analysis (WJW, MRA, JDR, DS, AS), pp. 6776–6785.
ICML-2019-WilliamsonM #metric
Fairness risk measures (RCW, AKM), pp. 6786–6797.
ICML-2019-WiqvistMPF #approximate #architecture #learning #network #statistics #summary
Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation (SW, PAM, UP, JF), pp. 6798–6807.
ICML-2019-WongSK
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations (EW, FRS, JZK), pp. 6808–6817.
ICML-2019-WuCBTS #learning
Imitation Learning from Imperfect Demonstration (YHW, NC, HB, VT, MS), pp. 6818–6827.
ICML-2019-WuDSYHSRK #learning #matrix #metric
Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling (SW, AD, SS, FXY, DNHR, DS, AR, SK), pp. 6828–6839.
ICML-2019-WuLZ #multi #optimisation #reuse
Heterogeneous Model Reuse via Optimizing Multiparty Multiclass Margin (XZW, SL, ZHZ), pp. 6840–6849.
ICML-2019-WuRL
Deep Compressed Sensing (YW, MR, TPL), pp. 6850–6860.
ICML-2019-WuSZFYW #graph #network
Simplifying Graph Convolutional Networks (FW, AHSJ, TZ, CF, TY, KQW), pp. 6861–6871.
ICML-2019-WuWKL #adaptation #symmetry
Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment (YW, EW, DK, ZCL), pp. 6872–6881.
ICML-2019-XieCJZZ #on the #performance #scalability
On Scalable and Efficient Computation of Large Scale Optimal Transport (YX, MC, HJ, TZ, HZ), pp. 6882–6892.
ICML-2019-XieKG #distributed #fault tolerance #named #probability
Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance (CX, SK, IG), pp. 6893–6901.
ICML-2019-XieWLZL
Differentiable Linearized ADMM (XX, JW, GL, ZZ, ZL), pp. 6902–6911.
ICML-2019-XingNL #approximate
Calibrated Approximate Bayesian Inference (HX, GN, JL), pp. 6912–6920.
ICML-2019-XuL #clustering
Power k-Means Clustering (JX, KL), pp. 6921–6931.
ICML-2019-XuLZC #graph #learning
Gromov-Wasserstein Learning for Graph Matching and Node Embedding (HX, DL, HZ, LC), pp. 6932–6941.
ICML-2019-XuQLJY #convergence #optimisation #probability
Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence (YX, QQ, QL, RJ, TY), pp. 6942–6951.
ICML-2019-XuRDLF #learning
Learning a Prior over Intent via Meta-Inverse Reinforcement Learning (KX, ER, ADD, SL, CF), pp. 6952–6962.
ICML-2019-XuSS
Variational Russian Roulette for Deep Bayesian Nonparametrics (KX, AS, CAS), pp. 6963–6972.
ICML-2019-YadavKMM #clustering #exponential
Supervised Hierarchical Clustering with Exponential Linkage (NY, AK, NM, AM), pp. 6973–6983.
ICML-2019-YangD #learning #proving #theorem
Learning to Prove Theorems via Interacting with Proof Assistants (KY, JD), pp. 6984–6994.
ICML-2019-YangW #parametricity #using
Sample-Optimal Parametric Q-Learning Using Linearly Additive Features (LY, MW), pp. 6995–7004.
ICML-2019-YangWLCXS0X #named #network #performance
LegoNet: Efficient Convolutional Neural Networks with Lego Filters (ZY, YW, CL, HC, CX, BS, CX0, CX0), pp. 7005–7014.
ICML-2019-YangZKBWS #precise #probability
SWALP : Stochastic Weight Averaging in Low Precision Training (GY, TZ, PK, JB, AGW, CDS), pp. 7015–7024.
ICML-2019-YangZXK #effectiveness #estimation #matrix #named #robust #towards
ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation (YY, GZ, ZX, DK), pp. 7025–7034.
ICML-2019-YaoK0 #performance
Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations (QY, JTYK, BH0), pp. 7035–7044.
ICML-2019-YaoWHL
Hierarchically Structured Meta-learning (HY, YW, JH, ZL), pp. 7045–7054.
ICML-2019-0002WF #clustering #complexity #kernel #query
Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel k-means Clustering (TY0, DPW, MF), pp. 7055–7063.
ICML-2019-YeS #comprehension #geometry
Understanding Geometry of Encoder-Decoder CNNs (JCY, WKS), pp. 7064–7073.
ICML-2019-YinCRB #distributed #learning
Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning (DY, YC0, KR, PLB), pp. 7074–7084.
ICML-2019-YinRB #complexity #robust
Rademacher Complexity for Adversarially Robust Generalization (DY, KR, PLB), pp. 7085–7094.
ICML-2019-YinYZ #category theory #named
ARSM: Augment-REINFORCE-Swap-Merge Estimator for Gradient Backpropagation Through Categorical Variables (MY, YY, MZ), pp. 7095–7104.
ICML-2019-YingKCR0H #architecture #named #towards
NAS-Bench-101: Towards Reproducible Neural Architecture Search (CY, AK, EC, ER, KM0, FH), pp. 7105–7114.
ICML-2019-YoonSM #adaptation #learning #named #network
TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning (SWY, JS, JM), pp. 7115–7123.
ICML-2019-YouWLJ #adaptation #towards
Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation (KY, XW, ML, MIJ), pp. 7124–7133.
ICML-2019-YouYL #graph #network
Position-aware Graph Neural Networks (JY, RY, JL), pp. 7134–7143.
ICML-2019-YoungBN #generative #learning #modelling #synthesis
Learning Neurosymbolic Generative Models via Program Synthesis (HY, OB, MN), pp. 7144–7153.
ICML-2019-YuCGY #graph #learning #named #network
DAG-GNN: DAG Structure Learning with Graph Neural Networks (YY, JC, TG, MY), pp. 7154–7163.
ICML-2019-Yu0YNTS #how #question
How does Disagreement Help Generalization against Label Corruption? (XY, BH0, JY, GN, IWT, MS), pp. 7164–7173.
ICML-2019-YuJ #communication #complexity #on the #optimisation #parallel #probability
On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization (HY, RJ), pp. 7174–7183.
ICML-2019-YuJY #analysis #communication #distributed #linear #on the #optimisation #performance
On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization (HY, RJ, SY), pp. 7184–7193.
ICML-2019-YuSE #learning #multi
Multi-Agent Adversarial Inverse Reinforcement Learning (LY, JS, SE), pp. 7194–7201.
ICML-2019-YuTRKSAZL #distributed #learning #network
Distributed Learning over Unreliable Networks (CY, HT, CR, SK, AS, DA, CZ, JL0), pp. 7202–7212.
ICML-2019-YuanL #adaptation #analysis #component #online
Online Adaptive Principal Component Analysis and Its Extensions (JY, AGL), pp. 7213–7221.
ICML-2019-YuanLX #composition #generative #infinity #modelling #representation
Generative Modeling of Infinite Occluded Objects for Compositional Scene Representation (JY, BL, XX), pp. 7222–7231.
ICML-2019-YuanZLS #difference #modelling
Differential Inclusions for Modeling Nonsmooth ADMM Variants: A Continuous Limit Theory (HY, YZ, CJL, QS), pp. 7232–7241.
ICML-2019-YunZYLA #analysis #learning #optimisation #statistics
Trimming the l₁ Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning (JY, PZ, EY, ACL, AYA), pp. 7242–7251.
ICML-2019-YurochkinAGGHK #learning #network #parametricity
Bayesian Nonparametric Federated Learning of Neural Networks (MY, MA, SG, KHG, TNH, YK), pp. 7252–7261.
ICML-2019-YurochkinGSN #geometry
Dirichlet Simplex Nest and Geometric Inference (MY, AG, YS, XN), pp. 7262–7271.
ICML-2019-YurtseverFC #framework
A Conditional-Gradient-Based Augmented Lagrangian Framework (AY, OF, VC), pp. 7272–7281.
ICML-2019-YurtseverSC #difference #probability
Conditional Gradient Methods via Stochastic Path-Integrated Differential Estimator (AY, SS, VC), pp. 7282–7291.
ICML-2019-ZablockiBSPG #learning #recognition
Context-Aware Zero-Shot Learning for Object Recognition (EZ, PB, LS, BP, PG), pp. 7292–7303.
ICML-2019-ZanetteB #bound #learning #problem #using
Tighter Problem-Dependent Regret Bounds in Reinforcement Learning without Domain Knowledge using Value Function Bounds (AZ, EB), pp. 7304–7312.
ICML-2019-ZengLLY #convergence #coordination #learning
Global Convergence of Block Coordinate Descent in Deep Learning (JZ, TTKL, SL, YY0), pp. 7313–7323.
ICML-2019-Zhang #invariant #network
Making Convolutional Networks Shift-Invariant Again (RZ), pp. 7324–7334.
ICML-2019-ZhangAD0N #feedback #robust
Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback (CZ, AA, HDI, JL0, SN), pp. 7335–7344.
ICML-2019-ZhangCC
When Samples Are Strategically Selected (HZ, YC0, VC), pp. 7345–7353.
ICML-2019-ZhangGMO #generative #network #self
Self-Attention Generative Adversarial Networks (HZ0, IJG, DNM, AO), pp. 7354–7363.
ICML-2019-ZhangHK #design #distributed #graph #named #network
Circuit-GNN: Graph Neural Networks for Distributed Circuit Design (GZ, HH, DK), pp. 7364–7373.
ICML-2019-ZhangHY #learning #named #performance #recognition #visual notation
LatentGNN: Learning Efficient Non-local Relations for Visual Recognition (SZ, XH, SY), pp. 7374–7383.
ICML-2019-ZhangJHHL #clustering #collaboration
Neural Collaborative Subspace Clustering (TZ, PJ, MH, WbH, HL), pp. 7384–7393.
ICML-2019-ZhangL #incremental #kernel #learning #online #random #sketching
Incremental Randomized Sketching for Online Kernel Learning (XZ, SL), pp. 7394–7403.
ICML-2019-0002LLJ #adaptation #algorithm
Bridging Theory and Algorithm for Domain Adaptation (YZ0, TL, ML, MIJ), pp. 7404–7413.
ICML-2019-ZhangLZ #adaptation
Adaptive Regret of Convex and Smooth Functions (LZ0, TYL, ZHZ), pp. 7414–7423.
ICML-2019-ZhangP #correlation #modelling #random
Random Function Priors for Correlation Modeling (AZ, JWP), pp. 7424–7433.
ICML-2019-ZhangS #learning #network
Co-Representation Network for Generalized Zero-Shot Learning (FZ, GS), pp. 7434–7443.
ICML-2019-ZhangVSA0L #learning #modelling #named
SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning (MZ, SV, LS, PA, MJJ0, SL), pp. 7444–7453.
ICML-2019-ZhangX #incremental #random
A Composite Randomized Incremental Gradient Method (JZ, LX), pp. 7454–7462.
ICML-2019-ZhangY #estimation #modelling #multi #performance
Fast and Stable Maximum Likelihood Estimation for Incomplete Multinomial Models (CZ, GY), pp. 7463–7471.
ICML-2019-ZhangYJXGJ #robust #trade-off
Theoretically Principled Trade-off between Robustness and Accuracy (HZ, YY, JJ, EPX, LEG, MIJ), pp. 7472–7482.
ICML-2019-ZhangYT #learning #novel #policy
Learning Novel Policies For Tasks (YZ, WY, GT), pp. 7483–7492.
ICML-2019-ZhangZLWZ #algorithm #matrix #orthogonal
Greedy Orthogonal Pivoting Algorithm for Non-Negative Matrix Factorization (KZ, SZ, JL, JW, JZ), pp. 7493–7501.
ICML-2019-ZhangZ #network
Interpreting Adversarially Trained Convolutional Neural Networks (TZ, ZZ), pp. 7502–7511.
ICML-2019-ZhangZT #adaptation #monte carlo #multi #testing
Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits (MJZ, JZ, DT), pp. 7512–7522.
ICML-2019-0002CZG #adaptation #invariant #learning #on the
On Learning Invariant Representations for Domain Adaptation (HZ0, RTdC, KZ0, GJG), pp. 7523–7532.
ICML-2019-ZhaoFNG
Metric-Optimized Example Weights (SZ, MMF, HN, MRG), pp. 7533–7542.
ICML-2019-ZhaoHDSZ #network #using
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting (RZ, YH, JD, CDS, ZZ), pp. 7543–7552.
ICML-2019-ZhaoST #learning #multi
Maximum Entropy-Regularized Multi-Goal Reinforcement Learning (RZ, XS0, VT), pp. 7553–7562.
ICML-2019-Zhou0Y #optimisation #probability
Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization (BZ, FC0, YY), pp. 7563–7573.
ICML-2019-ZhouG #bound #optimisation
Lower Bounds for Smooth Nonconvex Finite-Sum Optimization (DZ, QG), pp. 7574–7583.
ICML-2019-ZhouLSYW00Z #generative
Lipschitz Generative Adversarial Nets (ZZ, JL, YS, LY, HW, WZ0, YY0, ZZ), pp. 7584–7593.
ICML-2019-ZhouLLLZZ #comprehension #network #towards
Toward Understanding the Importance of Noise in Training Neural Networks (MZ, TL, YL, DL, EZ, TZ), pp. 7594–7602.
ICML-2019-ZhouYWP #approach #architecture #named
BayesNAS: A Bayesian Approach for Neural Architecture Search (HZ, MY, JW0, WP), pp. 7603–7613.
ICML-2019-ZhuHLTSG
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets (CZ, WRH, HL, GT, CS, TG), pp. 7614–7623.
ICML-2019-ZhuSLHB #fault tolerance #graph #learning
Improved Dynamic Graph Learning through Fault-Tolerant Sparsification (CJZ, SS, KyL, SH, JB), pp. 7624–7633.
ICML-2019-ZhuW #difference #privacy
Poisson Subsampled Rényi Differential Privacy (YZ0, YXW), pp. 7634–7642.
ICML-2019-ZhuWS #classification #learning
Learning Classifiers for Target Domain with Limited or No Labels (PZ, HW, VS), pp. 7643–7653.
ICML-2019-ZhuWYWM #behaviour #probability
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects (ZZ, JW, BY, LW, JM), pp. 7654–7663.
ICML-2019-ZhuangCO #learning #online #optimisation #probability
Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization (ZZ, AC, FO), pp. 7664–7672.
ICML-2019-ZieglerR #normalisation #sequence
Latent Normalizing Flows for Discrete Sequences (ZMZ, AMR), pp. 7673–7682.
ICML-2019-ZimmertLW #probability
Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously (JZ, HL, CYW), pp. 7683–7692.
ICML-2019-ZintgrafSKHW #adaptation #performance
Fast Context Adaptation via Meta-Learning (LMZ, KS, VK, KH, SW), pp. 7693–7702.
ICML-2019-ZrnicH #adaptation #data analysis
Natural Analysts in Adaptive Data Analysis (TZ, MH), pp. 7703–7711.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.