Proceedings of the 35th International Conference on Machine Learning

Jennifer G. Dy, Andreas Krause 0001
Proceedings of the 35th International Conference on Machine Learning
ICML, 2018.

@proceedings{ICML-2018,
	editor        = "Jennifer G. Dy and Andreas Krause 0001",
	ee            = "http://proceedings.mlr.press/v80/",
	publisher     = "{PMLR}",
	series        = "{Proceedings of Machine Learning Research}",
	title         = "{Proceedings of the 35th International Conference on Machine Learning}",
	volume        = 80,
	year          = 2018,
}
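The record above is a standard `@proceedings` BibTeX entry. As a minimal sketch of how its fields could be extracted with only the Python standard library — assuming, as in the record above, one `field = value` pair per line — one might write (a dedicated BibTeX parser would be the robust choice):

```python
import re

# The @proceedings record from this page, reproduced verbatim.
ENTRY = '''
@proceedings{ICML-2018,
    editor        = "Jennifer G. Dy and Andreas Krause 0001",
    ee            = "http://proceedings.mlr.press/v80/",
    publisher     = "{PMLR}",
    series        = "{Proceedings of Machine Learning Research}",
    title         = "{Proceedings of the 35th International Conference on Machine Learning}",
    volume        = 80,
    year          = 2018,
}
'''

def parse_fields(entry: str) -> dict:
    """Collect field = value pairs, stripping trailing commas, quotes, and braces.

    Assumes the simple one-field-per-line layout used in the entry above;
    it is not a general BibTeX parser.
    """
    fields = {}
    for key, value in re.findall(r'(\w+)\s*=\s*(.+?),?\s*$', entry, re.MULTILINE):
        fields[key] = value.strip().rstrip(',').strip('"').strip('{}')
    return fields

fields = parse_fields(ENTRY)
print(fields['publisher'])  # prints PMLR
print(fields['year'])       # prints 2018
```

Note that all values come back as strings (`volume` is `'80'`, not `80`); a consumer that needs typed fields would convert them explicitly.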

Contents (621 items)

ICML-2018-AbeilleL #bound #linear #polynomial #problem
Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems (MA, AL), pp. 1–9.
ICML-2018-AbelALL #abstraction #learning
State Abstractions for Lifelong Reinforcement Learning (DA, DA, LL, MLL), pp. 10–19.
ICML-2018-AbelJGKL #learning #policy
Policy and Value Transfer in Lifelong Reinforcement Learning (DA, YJ, SYG, GDK, MLL), pp. 20–29.
ICML-2018-AcharyaKSZ #named
INSPECTRE: Privately Estimating the Unseen (JA, GK0, ZS, HZ), pp. 30–39.
ICML-2018-AchlioptasDMG #3d #generative #learning #modelling
Learning Representations and Generative Models for 3D Point Clouds (PA, OD, IM, LJG), pp. 40–49.
ICML-2018-AdelGW #generative #modelling
Discovering Interpretable Representations for Both Deep Generative and Discriminative Models (TA, ZG, AW), pp. 50–59.
ICML-2018-AgarwalBD0W #approach #classification #reduction
A Reductions Approach to Fair Classification (AA, AB, MD, JL0, HMW), pp. 60–69.
ICML-2018-AgarwalPA #ranking
Accelerated Spectral Ranking (AA, PP, SA0), pp. 70–79.
ICML-2018-AghazadehSLDSB #feature model #named #scalability #sketching #using
MISSION: Ultra Large-Scale Feature Selection using Count-Sketches (AA, RS, DL, GD, AS, RGB), pp. 80–88.
ICML-2018-AgrawalUB #graph #modelling #scalability
Minimal I-MAP MCMC for Scalable Structure Discovery in Causal DAG Models (RA, CU, TB), pp. 89–98.
ICML-2018-0001ZM #distributed
Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy (SA0, MZ, VSM), pp. 99–108.
ICML-2018-AhnCWS #approximate
Bucket Renormalization for Approximate Inference (SA, MC, AW, JS), pp. 109–118.
ICML-2018-AinsworthFLF #analysis #named
oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis (SKA, NJF, AKCL, EBF), pp. 119–128.
ICML-2018-AlaaS #algorithm #design #guidelines
Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design (AMA, MvdS), pp. 129–138.
ICML-2018-AlaaS18a #automation #kernel #learning #modelling #named #optimisation
AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning (AMA, MvdS), pp. 139–148.
ICML-2018-Alabdulmohsin #empirical #optimisation #scalability
Information Theoretic Guarantees for Empirical Risk Minimization with Applications to Model Selection and Large-Scale Optimization (IMA), pp. 149–158.
ICML-2018-AlemiPFDS0
Fixing a Broken ELBO (AAA, BP, IF, JVD, RAS, KM0), pp. 159–168.
ICML-2018-AliakbarpourDR #equivalence #testing
Differentially Private Identity and Equivalence Testing of Discrete Distributions (MA, ID, RR), pp. 169–178.
ICML-2018-Allen-Zhu #optimisation #probability
Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization (ZAZ), pp. 179–185.
ICML-2018-Allen-ZhuBL #bound #first-order
Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits (ZAZ, SB, YL), pp. 186–194.
ICML-2018-AlmahairiRSBC #learning
Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data (AA, SR, AS, PB, ACC), pp. 195–204.
ICML-2018-AmitM
Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory (RA, RM), pp. 205–214.
ICML-2018-AmodioK #biology #named
MAGAN: Aligning Biological Manifolds (MA, SK), pp. 215–223.
ICML-2018-AndoniLSZZ #linear
Subspace Embedding and Linear Regression with Orlicz Norm (AA, CL0, YS0, PZ, RZ), pp. 224–233.
ICML-2018-ArenzZN #performance #policy #using
Efficient Gradient-Free Variational Inference using Policy Search (OA, MZ, GN), pp. 234–243.
ICML-2018-AroraCH #network #on the #optimisation
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization (SA, NC, EH), pp. 244–253.
ICML-2018-Arora0NZ #approach #bound
Stronger Generalization Bounds for Deep Nets via a Compression Approach (SA, RG0, BN, YZ), pp. 254–263.
ICML-2018-AsadiML #learning #modelling
Lipschitz Continuity in Model-based Reinforcement Learning (KA, DM, MLL), pp. 264–273.
ICML-2018-AthalyeC0 #obfuscation #security
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (AA, NC, DAW0), pp. 274–283.
ICML-2018-AthalyeEIK #robust
Synthesizing Robust Adversarial Examples (AA, LE, AI, KK), pp. 284–293.
ICML-2018-AwasthiV #clustering
Clustering Semi-Random Mixtures of Gaussians (PA, AV), pp. 294–303.
ICML-2018-BacciuEM #approach #generative #graph #markov
Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing (DB, FE, AM), pp. 304–313.
ICML-2018-BaiB
Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions (WB, JAB), pp. 314–323.
ICML-2018-Baity-JesiSGSAC #network
Comparing Dynamics: Deep Neural Networks versus Glassy Systems (MBJ, LS, MG, SS, GBA, CC, YL, MW, GB), pp. 324–333.
ICML-2018-BajajGHHL #clustering #named #using
SMAC: Simultaneous Mapping and Clustering Using Spectral Decompositions (CB, TG, ZH, QH, ZL), pp. 334–343.
ICML-2018-BajgarKK #architecture #performance
A Boo(n) for Evaluating Architecture Performance (OB, RK, JK), pp. 344–352.
ICML-2018-BalcanDSV #branch #learning
Learning to Branch (MFB, TD, TS, EV), pp. 353–362.
ICML-2018-BalduzziRMFTG #game studies
The Mechanics of n-Player Differentiable Games (DB, SR, JM, JNF, KT, TG), pp. 363–372.
ICML-2018-BalestrieroCGB #learning
Spline Filters For End-to-End Deep Learning (RB, RC, HG, RGB), pp. 373–382.
ICML-2018-BalestrieroB #network
A Spline Theory of Deep Networks (RB, RGB), pp. 383–392.
ICML-2018-BalkanskiS #adaptation #approximate
Approximation Guarantees for Adaptive Sampling (EB, YS), pp. 393–402.
ICML-2018-BalleW #difference #privacy
Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising (BB, YXW), pp. 403–412.
ICML-2018-BallesH #probability
Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients (LB, PH), pp. 413–422.
ICML-2018-BalogTS #database #kernel
Differentially Private Database Release via Kernel Mean Embeddings (MB, IOT, BS), pp. 423–431.
ICML-2018-BamlerM #modelling #optimisation #symmetry
Improving Optimization in Models With Continuous Symmetry Breaking (RB, SM), pp. 432–441.
ICML-2018-BangS #generative #network #using
Improved Training of Generative Adversarial Networks using Representative Features (DB, HS), pp. 442–451.
ICML-2018-BansalAB #agile #design #using
Using Inherent Structures to design Lean 2-layer RBMs (AB, AA, CB), pp. 452–460.
ICML-2018-BaoNS #classification #similarity
Classification from Pairwise Similarity and Unlabeled Data (HB, GN, MS), pp. 461–470.
ICML-2018-BaptistaP #combinator #optimisation
Bayesian Optimization of Combinatorial Structures (RB, MP), pp. 471–480.
ICML-2018-BaqueRFF #optimisation
Geodesic Convolutional Shape Optimization (PB, ER, FF, PF), pp. 481–490.
ICML-2018-BargiacchiVRNH #coordination #graph #learning #multi #problem
Learning to Coordinate with Coordination Graphs in Repeated Single-Stage Multi-Agent Decision Problems (EB, TV, DMR, AN, HvH), pp. 491–499.
ICML-2018-BarmanBG #testing
Testing Sparsity over Known and Unknown Bases (SB, AB0, SG), pp. 500–509.
ICML-2018-BarretoBQSSHMZM #learning #policy #using
Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement (AB, DB, JQ, TS, DS, MH, DJM, AZ, RM), pp. 510–519.
ICML-2018-BartlettHL #linear
Gradient descent with identity initialization efficiently learns positive definite linear transformations (PLB, DPH, PML), pp. 520–529.
ICML-2018-BelghaziBROBHC #estimation
Mutual Information Neural Estimation (MIB, AB, SR, SO, YB, RDH, ACC), pp. 530–539.
ICML-2018-BelkinMM #kernel #learning
To Understand Deep Learning We Need to Understand Kernel Learning (MB, SM, SM), pp. 540–548.
ICML-2018-BenderKZVL #architecture #comprehension
Understanding and Simplifying One-Shot Architecture Search (GB, PJK, BZ, VV, QVL), pp. 549–558.
ICML-2018-BernsteinWAA #named #optimisation #problem
SIGNSGD: Compressed Optimisation for Non-Convex Problems (JB, YXW, KA, AA), pp. 559–568.
ICML-2018-BhaskaraW #clustering #distributed
Distributed Clustering via LSH Based Data Partitioning (AB, MW), pp. 569–578.
ICML-2018-BinkowskiMD #network
Autoregressive Convolutional Neural Networks for Asynchronous Time Series (MB, GM, PD), pp. 579–588.
ICML-2018-BlancR #adaptation #kernel
Adaptive Sampled Softmax with Kernel Based Sampling (GB, SR), pp. 589–598.
ICML-2018-BojanowskiJLS #generative #network #optimisation
Optimizing the Latent Space of Generative Networks (PB, AJ, DLP, AS), pp. 599–608.
ICML-2018-BojchevskiSZG #generative #graph #named #random
NetGAN: Generating Graphs via Random Walks (AB, OS, DZ, SG), pp. 609–618.
ICML-2018-BollapragadaMNS #machine learning
A Progressive Batching L-BFGS Method for Machine Learning (RB, DM, JN, HJMS, PTPT), pp. 619–628.
ICML-2018-BonakdarpourCBL #predict
Prediction Rule Reshaping (MB, SC, RFB, JL), pp. 629–637.
ICML-2018-BoracchiCCM #data type #detection #multi #named
QuantTree: Histograms for Change Detection in Multivariate Data Streams (GB, DC, CC, DM), pp. 638–647.
ICML-2018-BravermanCKLWY #data type #matrix #multi #performance
Matrix Norms in Data Streams: Faster, Multi-Pass and Row-Order (VB, SRC, RK, YL0, DPW, LFY), pp. 648–657.
ICML-2018-BrukhimG #modelling #predict
Predict and Constrain: Modeling Cardinality in Deep Structured Prediction (NB, AG), pp. 658–666.
ICML-2018-BuchholzWM #monte carlo
Quasi-Monte Carlo Variational Inference (AB, FW, SM), pp. 667–676.
ICML-2018-CaiYZHY #architecture #network #performance
Path-Level Network Transformation for Efficient Architecture Search (HC, JY, WZ0, SH, YY0), pp. 677–686.
ICML-2018-CalandrielloKLV #graph #learning #scalability
Improved Large-Scale Graph Learning through Ridge Spectral Sparsification (DC, IK, AL, MV), pp. 687–696.
ICML-2018-CampbellB
Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent (TC, TB), pp. 697–705.
ICML-2018-CaoGWSHT #coordination #learning
Adversarial Learning with Local Coordinate Coding (JC, YG, QW, CS, JH, MT), pp. 706–714.
ICML-2018-CelisKS0KV #summary
Fair and Diverse DPP-Based Data Summarization (LEC, VK, DS, AD0, TK, NKV), pp. 715–724.
ICML-2018-CeylanG #estimation #modelling
Conditional Noise-Contrastive Estimation of Unnormalised Models (CC, MUG), pp. 725–733.
ICML-2018-ChapfuwaTLPGCH #modelling
Adversarial Time-to-Event Modeling (PC, CT, CL, CP, BG, LC, RH), pp. 734–743.
ICML-2018-CharlesP #algorithm #learning
Stability and Generalization of Learning Algorithms that Converge to Global Optima (ZBC, DSP), pp. 744–753.
ICML-2018-Chatterjee #learning
Learning and Memorization (SC), pp. 754–762.
ICML-2018-ChatterjiFMBJ #formal method #monte carlo #on the #probability #reduction
On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo (NSC, NF, YAM, PLB, MIJ), pp. 763–772.
ICML-2018-ChatziafratisNC #clustering #constraints
Hierarchical Clustering with Structural Constraints (VC, RN, MC), pp. 773–782.
ICML-2018-ChePLJL #generative #modelling #multi
Hierarchical Deep Generative Models for Multi-Rate Multivariate Time Series (ZC, SP, MGL, BJ, YL0), pp. 783–792.
ICML-2018-ChenBLR #adaptation #multi #named #network #normalisation
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks (ZC0, VB, CYL, AR), pp. 793–802.
ICML-2018-0003FK #constraints #question
Weakly Submodular Maximization Beyond Cardinality Constraints: Does Randomization Help Greedy? (LC0, MF, AK), pp. 803–812.
ICML-2018-ChenHHK #online #optimisation #probability
Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity (LC0, CH, HH, AK), pp. 813–822.
ICML-2018-ChenLCWPC #estimation #performance
Continuous-Time Flows for Efficient Inference and Density Estimation (CC, CL, LC, WW, YP, LC), pp. 823–832.
ICML-2018-ChenLW #learning #scalability #using
Scalable Bilinear Learning Using State and Action Features (YC, LL0, MW), pp. 833–842.
ICML-2018-ChenMGBO
Stein Points (WYC, LWM, JG, FXB, CJO), pp. 843–852.
ICML-2018-ChenMS #learning
Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations (TC0, MRM, YS), pp. 853–862.
ICML-2018-ChenMRA #generative #named
PixelSNAIL: An Improved Autoregressive Generative Model (XC0, NM, MR, PA), pp. 863–871.
ICML-2018-ChenPS #network
Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks (MC, JP, SSS), pp. 872–881.
ICML-2018-ChenSWJ #learning
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation (JC, LS, MJW, MIJ), pp. 882–891.
ICML-2018-ChenTZHC #bound
Variational Inference and Model Selection with Generalized Evidence Bounds (LC, CT, RZ, RH, LC), pp. 892–901.
ICML-2018-ChenWCP #distributed #named
DRACO: Byzantine-resilient Distributed Training via Redundant Gradients (LC, HW, ZBC, DSP), pp. 902–911.
ICML-2018-ChenXCY #adaptation #named #probability
SADAGRAD: Strongly Adaptive Stochastic Gradient Methods (ZC, YX, EC, TY), pp. 912–920.
ICML-2018-ChenXW0G #estimation #matrix #optimisation #precise
Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization (JC, PX0, LW, JM0, QG), pp. 921–930.
ICML-2018-ChenXG #learning #multi
End-to-End Learning for the Deep Multivariate Probit Model (DC, YX, CPG), pp. 931–940.
ICML-2018-ChenZS #graph #network #probability #reduction
Stochastic Training of Graph Convolutional Networks with Variance Reduction (JC, JZ0, LS), pp. 941–949.
ICML-2018-ChengDH #learning #rank
Extreme Learning to Rank via Low Rank Assumption (MC, ID, CJH), pp. 950–959.
ICML-2018-Chierichetti0T #learning #multi
Learning a Mixture of Two Multinomial Logits (FC, RK0, AT), pp. 960–968.
ICML-2018-ChoromanskiRSTW #architecture #evolution #optimisation #policy #scalability
Structured Evolution with Compact Architectures for Scalable Policy Optimization (KC, MR, VS, RET, AW), pp. 969–977.
ICML-2018-ChowNG #consistency #learning
Path Consistency Learning in Tsallis Entropy Regularized MDPs (YC, ON, MG), pp. 978–987.
ICML-2018-ChowdhuryYD #framework #sketching
An Iterative, Sketching-based Framework for Ridge Regression (AC, JY, PD), pp. 988–997.
ICML-2018-ClaiciCS #probability
Stochastic Wasserstein Barycenters (SC, EC, JS), pp. 998–1007.
ICML-2018-Co-ReyesLGEAL #learning #self
Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings (JDCR, YL, AG0, BE, PA, SL), pp. 1008–1017.
ICML-2018-CohenDO #on the
On Acceleration with Noise-Corrupted Gradients (MC0, JD, LO), pp. 1018–1027.
ICML-2018-CohenHKLMT #linear #online #polynomial
Online Linear Quadratic Control (AC, AH, TK, NL, YM, KT), pp. 1028–1037.
ICML-2018-ColasSO #algorithm #learning #named
GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms (CC, OS, PYO), pp. 1038–1047.
ICML-2018-CormodeDW #distributed #streaming #summary
Leveraging Well-Conditioned Bases: Streaming and Distributed Summaries in Minkowski p-Norms (GC, CD, DPW), pp. 1048–1056.
ICML-2018-CorneilGB #learning #performance
Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation (DSC, WG, JB), pp. 1057–1066.
ICML-2018-CortesDGMY #learning #online
Online Learning with Abstention (CC, GD, CG, MM, SY), pp. 1067–1075.
ICML-2018-CotterFYGB
Constrained Interacting Submodular Groupings (AC, MMF, SY, MRG, JAB), pp. 1076–1085.
ICML-2018-CremerLD
Inference Suboptimality in Variational Autoencoders (CC, XL, DD), pp. 1086–1094.
ICML-2018-CzarneckiJJHTHO #education #learning
Mix & Match Agent Curricula for Reinforcement Learning (WMC, SMJ, MJ, LH, YWT, NH, SO, RP), pp. 1095–1103.
ICML-2018-DabneyOSM #learning #network
Implicit Quantile Networks for Distributional Reinforcement Learning (WD, GO, DS, RM), pp. 1104–1113.
ICML-2018-DaiKDSS #algorithm #graph #learning
Learning Steady-States of Iterative Algorithms over Graphs (HD, ZK, BD, AJS, LS), pp. 1114–1122.
ICML-2018-DaiLTHWZS #graph
Adversarial Attack on Graph Structured Data (HD, HL, TT0, XH, LW, JZ0, LS), pp. 1123–1132.
ICML-2018-DaiS0XHLCS #approximate #convergence #learning #named
SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation (BD, AS, LL0, LX, NH, ZL0, JC, LS), pp. 1133–1142.
ICML-2018-DaiZGW #network #using
Compressing Neural Networks using the Variational Information Bottleneck (BD, CZ, BG, DPW), pp. 1143–1152.
ICML-2018-DamaskinosMGPT #machine learning
Asynchronous Byzantine Machine Learning (the case of SGD) (GD, EMEM, RG, RP, MT), pp. 1153–1162.
ICML-2018-DaneshmandKLH #probability
Escaping Saddles with Stochastic Gradients (HD, JMK, AL, TH), pp. 1163–1172.
ICML-2018-SaCW #modelling #scalability #visual notation
Minibatch Gibbs Sampling on Large Graphical Models (CDS, VC, WW), pp. 1173–1181.
ICML-2018-DentonF #generative #probability #video
Stochastic Video Generation with a Learned Prior (ED, RF), pp. 1182–1191.
ICML-2018-DepewegHDU #composition #learning #nondeterminism #performance
Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning (SD, JMHL, FDV, SU), pp. 1192–1201.
ICML-2018-DeshpandeMST #adaptation #linear #modelling
Accurate Inference for Adaptive Linear Models (YD, LWM, VS, MT), pp. 1202–1211.
ICML-2018-DezfouliBN #network
Variational Network Inference: Strong and Stable with Concrete Support (AD, EVB, RN), pp. 1212–1221.
ICML-2018-DharGE #generative #modelling #using
Modeling Sparse Deviations for Compressed Sensing using Generative Models (MD, AG, SE), pp. 1222–1231.
ICML-2018-DiakonikolasO #coordination #random
Alternating Randomized Block Coordinate Descent (JD, LO), pp. 1232–1240.
ICML-2018-DibangoyeB #distributed #learning
Learning to Act in Decentralized Partially Observable MDPs (JSD, OB), pp. 1241–1250.
ICML-2018-DiengRAB #named #network
Noisin: Unbiased Regularization for Recurrent Neural Networks (ABD, RR, JA, DMB), pp. 1251–1260.
ICML-2018-DietterichTC #learning
Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning (TGD, GT, ZC), pp. 1261–1269.
ICML-2018-DimakopoulouR #concurrent #coordination #learning
Coordinated Exploration in Concurrent Reinforcement Learning (MD, BVR), pp. 1270–1278.
ICML-2018-DoerrDSNSTT #modelling #probability
Probabilistic Recurrent State-Space Models (AD, CD, MS, DNT, SS, MT, ST), pp. 1279–1288.
ICML-2018-DoikovR #polynomial #random
Randomized Block Cubic Newton Method (ND, PR), pp. 1289–1297.
ICML-2018-DouikH #clustering #graph #matrix #optimisation #probability #rank
Low-Rank Riemannian Optimization on Positive Semidefinite Stochastic Matrices with Applications to Graph Clustering (AD, BH), pp. 1298–1307.
ICML-2018-DraxlerVSH #energy #network
Essentially No Barriers in Neural Network Energy Landscape (FD, KV, MS, FAH), pp. 1308–1317.
ICML-2018-Drutsa #algorithm #consistency
Weakly Consistent Optimal Pricing Algorithms in Repeated Posted-Price Auctions with Strategic Buyer (AD), pp. 1318–1327.
ICML-2018-DuL #network #on the #polynomial #power of
On the Power of Over-parametrization in Neural Networks with Quadratic Activation (SSD, JDL), pp. 1328–1337.
ICML-2018-DuLTSP
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima (SSD, JDL, YT, AS, BP), pp. 1338–1347.
ICML-2018-DubeyAPGE #game studies #video
Investigating Human Priors for Playing Video Games (RD, PA, DP, TG, AAE), pp. 1348–1356.
ICML-2018-DunnerLGBHJ #algorithm #distributed #higher-order #trust
A Distributed Second-Order Algorithm You Can Trust (CD, AL, MG, AB, TH, MJ), pp. 1357–1365.
ICML-2018-DvurechenskyGK #algorithm #complexity
Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm (PED, AG, AK), pp. 1366–1375.
ICML-2018-Dziugaite0 #bound
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors (GKD, DMR0), pp. 1376–1385.
ICML-2018-EfroniDSM #approach #learning
Beyond the One-Step Greedy Approach in Reinforcement Learning (YE, GD, BS, SM), pp. 1386–1395.
ICML-2018-EsfandiariLM #algorithm #composition #parallel #streaming
Parallel and Streaming Algorithms for K-Core Decomposition (HE, SL, VSM), pp. 1396–1405.
ICML-2018-EspeholtSMSMWDF #architecture #distributed #named #scalability
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures (LE, HS, RM, KS, VM, TW, YD, VF, TH, ID, SL, KK), pp. 1406–1415.
ICML-2018-EvansN #process #scalability
Scalable Gaussian Processes with Grid-Structured Eigenfunctions (GP-GRIEF) (TWE, PBN), pp. 1416–1425.
ICML-2018-FalahatgarJOPR #learning #ranking
The Limits of Maxing, Ranking, and Preference Learning (MF, AJ, AO, VP, VR), pp. 1426–1435.
ICML-2018-FalknerKH #named #optimisation #performance #robust #scalability
BOHB: Robust and Efficient Hyperparameter Optimization at Scale (SF, AK, FH), pp. 1436–1445.
ICML-2018-FarajtabarCG #evaluation #robust
More Robust Doubly Robust Off-policy Evaluation (MF, YC, MG), pp. 1446–1455.
ICML-2018-FathonyBZZ #consistency #performance
Efficient and Consistent Adversarial Bipartite Matching (RF, SB, XZ, BDZ), pp. 1456–1465.
ICML-2018-Fazel0KM #convergence #linear #policy #polynomial
Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator (MF, RG0, SMK, MM), pp. 1466–1475.
ICML-2018-FazelniaP #named
CRVI: Convex Relaxation for Variational Inference (GF, JWP), pp. 1476–1484.
ICML-2018-FellowsCW #fourier #policy
Fourier Policy Gradients (MF, KC, SW), pp. 1485–1494.
ICML-2018-FengWCS #learning #multi #network #parametricity #using
Nonparametric variable importance using an augmented neural network with multi-task learning (JF, BDW, MC, NS), pp. 1495–1504.
ICML-2018-FilstroffLF #matrix
Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization (LF, AL, CF), pp. 1505–1513.
ICML-2018-FlorensaHGA #automation #generative #learning
Automatic Goal Generation for Reinforcement Learning Agents (CF, DH, XG, PA), pp. 1514–1523.
ICML-2018-FoersterFARXW #infinity #monte carlo #named
DiCE: The Infinitely Differentiable Monte Carlo Estimator (JNF, GF, MAS, TR, EPX, SW), pp. 1524–1533.
ICML-2018-FosterADLS
Practical Contextual Bandits with Regression Oracles (DJF, AA, MD, HL, RES), pp. 1534–1543.
ICML-2018-FraccaroRZPEV #generative #memory management #modelling
Generative Temporal Models with Spatial Memory for Partially Observed Environments (MF, DJR, YZ, AP, SMAE, FV), pp. 1544–1553.
ICML-2018-FrancaRV
ADMM and Accelerated ADMM as Continuous Dynamical Systems (GF, DPR, RV), pp. 1554–1562.
ICML-2018-FranceschiFSGP #optimisation #programming
Bilevel Programming for Hyperparameter Optimization and Meta-Learning (LF, PF, SS, RG, MP), pp. 1563–1572.
ICML-2018-FruitPLO #learning #performance
Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning (RF, MP, AL, RO), pp. 1573–1581.
ICML-2018-FujimotoHM #approximate #fault
Addressing Function Approximation Error in Actor-Critic Methods (SF, HvH, DM), pp. 1582–1591.
ICML-2018-FujitaM #policy
Clipped Action Policy Gradient (YF0, SiM), pp. 1592–1601.
ICML-2018-FurlanelloLTIA #network
Born-Again Neural Networks (TF, ZCL, MT, LI, AA), pp. 1602–1611.
ICML-2018-Gaboardi0 #testing
Local Private Hypothesis Testing: Chi-Square Tests (MG, RR0), pp. 1612–1621.
ICML-2018-GanapathiramanS #induction #modelling #parametricity
Inductive Two-layer Modeling with Parametric Bregman Transfer (VG, ZS, XZ, YY), pp. 1622–1631.
ICML-2018-GaneaBH #learning
Hyperbolic Entailment Cones for Learning Hierarchical Embeddings (OEG, GB, TH), pp. 1632–1641.
ICML-2018-GanianKOS #algorithm #matrix #problem
Parameterized Algorithms for the Matrix Completion Problem (RG, IAK, SO, SS), pp. 1642–1651.
ICML-2018-GaninKBEV #image #learning #source code #using
Synthesizing Programs for Images using Reinforced Adversarial Learning (YG, TK, IB, SMAE, OV), pp. 1652–1661.
ICML-2018-GaoCL #named #network #optimisation
Spotlight: Optimizing Device Placement for Training Deep Neural Networks (YG, LC0, BL), pp. 1662–1670.
ICML-2018-GaoW #learning #network #parallel
Parallel Bayesian Network Structure Learning (TG, DW), pp. 1671–1680.
ICML-2018-GarciaCEd #learning #predict
Structured Output Learning with Abstention: Application to Accurate Opinion Prediction (AG0, CC, SE, FdB), pp. 1681–1689.
ICML-2018-GarneloRMRSSTRE #process
Conditional Neural Processes (MG, DR, CM, TR, DS, MS, YWT, DJR, SMAE), pp. 1690–1699.
ICML-2018-GengKPP #modelling #visual notation
Temporal Poisson Square Root Graphical Models (SG, ZK, PLP, DP), pp. 1700–1709.
ICML-2018-Georgogiannis #fault #learning #taxonomy
The Generalization Error of Dictionary Learning with Moreau Envelopes (AG), pp. 1710–1718.
ICML-2018-GhassamiSKB #design #empirical #learning
Budgeted Experiment Design for Causal Structure Learning (AG, SS, NK, EB), pp. 1719–1728.
ICML-2018-GhodsLGS #linear #retrieval
Linear Spectral Estimators and an Application to Phase Retrieval (RG, ASL, TG, CS), pp. 1729–1738.
ICML-2018-GhoshYD #learning #network
Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors (SG, JY, FDV), pp. 1739–1748.
ICML-2018-GhoshalH #learning #modelling #polynomial #predict
Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time (AG, JH), pp. 1749–1757.
ICML-2018-GibsonG #modelling #robust #scalability
Robust and Scalable Models of Microbiome Dynamics (TEG, GKG), pp. 1758–1767.
ICML-2018-GilraG #learning #network
Non-Linear Motor Control by Local Learning in Spiking Neural Networks (AG, WG), pp. 1768–1777.
ICML-2018-GoelKM #learning
Learning One Convolutional Layer with Overlapping Patches (SG, ARK, RM), pp. 1778–1786.
ICML-2018-GreydanusKDF #comprehension #visualisation
Visualizing and Understanding Atari Agents (SG, AK, JD, AF), pp. 1787–1796.
ICML-2018-GroverAGBE #learning #multi #policy
Learning Policy Representations in Multiagent Systems (AG, MAS, JKG, YB, HE), pp. 1797–1806.
ICML-2018-GuHDH #algorithm #memory management #performance #probability
Faster Derivative-Free Stochastic Algorithm for Shared Memory Machines (BG, ZH, CD, HH), pp. 1807–1816.
ICML-2018-GuezWASVWMS #learning
Learning to Search with MCTSnets (AG, TW, IA, KS, OV, DW, RM, DS), pp. 1817–1826.
ICML-2018-GunasekarLSS #bias #geometry #optimisation
Characterizing Implicit Bias in Terms of Optimization Geometry (SG, JDL, DS, NS), pp. 1827–1836.
ICML-2018-0001KS #named #optimisation #probability
Shampoo: Preconditioned Stochastic Tensor Optimization (VG0, TK, YS), pp. 1837–1845.
ICML-2018-HaarnojaHAL #learning #policy
Latent Space Policies for Hierarchical Reinforcement Learning (TH, KH, PA, SL), pp. 1846–1855.
ICML-2018-HaarnojaZAL #learning #probability
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor (TH, AZ, PA, SL), pp. 1856–1865.
ICML-2018-HaghiriGL #random
Comparison-Based Random Forests (SH, DG, UvL), pp. 1866–1875.
ICML-2018-HammN #learning #optimisation #performance
K-Beam Minimax: Efficient Optimization for Deep Adversarial Learning (JH, YKN), pp. 1876–1884.
ICML-2018-HanHZ #classification #estimation #multi #problem #scalability
Candidates vs. Noises Estimation for Large Multi-Class Classification Problem (LH, YH, TZ), pp. 1885–1894.
ICML-2018-HanL
Stein Variational Gradient Descent Without Gradient (JH0, QL0), pp. 1895–1903.
ICML-2018-YeZ0Z #modelling #semantics
Rectify Heterogeneous Models with Semantic Mapping (HJY, DCZ, YJ0, ZHZ), pp. 1904–1913.
ICML-2018-HartfordGLR #interactive #modelling #set
Deep Models of Interactions Across Sets (JSH, DRG, KLB, SR), pp. 1914–1923.
ICML-2018-HashemiSSALCKR #data access #learning #memory management
Learning Memory Access Patterns (MH, KS, JAS, GA, HL, JC, CK, PR), pp. 1924–1933.
ICML-2018-HashimotoSNL
Fairness Without Demographics in Repeated Loss Minimization (TBH, MS, HN, PL), pp. 1934–1943.
ICML-2018-Hebert-JohnsonK #multi #named
Multicalibration: Calibration for the (Computationally-Identifiable) Masses (ÚHJ, MPK, OR, GNR), pp. 1944–1953.
ICML-2018-HefnyM0SG #network #policy #predict
Recurrent Predictive State Policy Networks (AH, ZM, WS0, SSS, GJG), pp. 1954–1963.
ICML-2018-HeinonenYMIL #learning #modelling #process
Learning unknown ODE models with Gaussian processes (MH, CY, HM, JI, HL), pp. 1964–1973.
ICML-2018-HelfrichWY #network #orthogonal
Orthogonal Recurrent Neural Networks with Scaled Cayley Transform (KH, DW, QY0), pp. 1974–1983.
ICML-2018-HoPW #performance #robust
Fast Bellman Updates for Robust MDPs (CPH, MP, WW), pp. 1984–1993.
ICML-2018-HoffmanTPZISED #adaptation #named
CyCADA: Cycle-Consistent Adversarial Domain Adaptation (JH, ET, TP, JYZ, PI, KS, AAE, TD), pp. 1994–2003.
ICML-2018-HoltzenBM #abstraction #composition #probability #source code
Sound Abstraction and Decomposition of Probabilistic Programs (SH, GVdB, TDM), pp. 2004–2013.
ICML-2018-HongRL #algorithm #distributed #higher-order #network #optimisation
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks (MH, MR, JDL), pp. 2014–2023.
ICML-2018-HronMG
Variational Bayesian dropout: pitfalls and fixes (JH, AGdGM, ZG), pp. 2024–2033.
ICML-2018-HuNSS #classification #learning #question #robust
Does Distributionally Robust Supervised Learning Give Robust Classifiers? (WH, GN, IS, MS), pp. 2034–2042.
ICML-2018-HuWL #analysis #probability #reduction #source code #using
Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs (BH, SW, LL), pp. 2043–2052.
ICML-2018-Huang #matrix #sketching
Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices (ZH), pp. 2053–2062.
ICML-2018-HuangA0S #learning #using
Learning Deep ResNet Blocks Sequentially using Boosting Theory (FH, JTA, JL0, RES), pp. 2063–2072.
ICML-2018-Huang0S #learning #markov #modelling #topic
Learning Hidden Markov Models from Pairwise Co-occurrences with Application to Topic Modeling (KH, XF0, NDS), pp. 2073–2082.
ICML-2018-HuangKLC
Neural Autoregressive Flows (CWH, DK, AL, ACC), pp. 2083–2092.
ICML-2018-Huntsman #estimation
Topological Mixture Estimation (SH), pp. 2093–2102.
ICML-2018-HuoGYH #convergence #parallel
Decoupled Parallel Backpropagation with Convergence Guarantee (ZH, BG, QY, HH), pp. 2103–2111.
ICML-2018-IcarteKVM #composition #learning #specification #using
Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning (RTI, TQK, RAV, SAM), pp. 2112–2121.
ICML-2018-IglZLWW #learning
Deep Variational Reinforcement Learning for POMDPs (MI, LMZ, TAL, FW, SW), pp. 2122–2131.
ICML-2018-IlseTW #learning #multi
Attention-based Deep Multiple Instance Learning (MI, JMT, MW), pp. 2132–2141.
ICML-2018-IlyasEAL #black box #query
Black-box Adversarial Attacks with Limited Queries and Information (AI, LE, AA, JL), pp. 2142–2151.
ICML-2018-ImamuraSS #analysis #clustering #crowdsourcing #fault
Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model (HI, IS, MS), pp. 2152–2161.
ICML-2018-ImaniW #performance
Improving Regression Performance with Distributional Losses (EI, MW), pp. 2162–2171.
ICML-2018-InouyeR
Deep Density Destructors (DII, PR), pp. 2172–2180.
ICML-2018-ItoYF #estimation #optimisation #predict
Unbiased Objective Estimation in Predictive Optimization (SI, AY, RF), pp. 2181–2190.
ICML-2018-IvanovB
Anonymous Walk Embeddings (SI, EB), pp. 2191–2200.
ICML-2018-JaffeWCKN #approach #learning #modelling
Learning Binary Latent Variable Models: A Tensor Eigenpair Approach (AJ, RW, SC, YK, BN), pp. 2201–2210.
ICML-2018-JainJ #optimisation
Firing Bandits: Optimizing Crowdfunding (LJ, KGJ), pp. 2211–2219.
ICML-2018-0002TT #matrix #revisited
Differentially Private Matrix Completion Revisited (PJ0, ODT, AT), pp. 2220–2229.
ICML-2018-JangKS #predict #video
Video Prediction with Appearance and Motion Conditions (YJ, GK, YS), pp. 2230–2239.
ICML-2018-JankowiakO
Pathwise Derivatives Beyond the Reparameterization Trick (MJ, FO), pp. 2240–2249.
ICML-2018-JanzingS #detection #linear #modelling #multi
Detecting non-causal artifacts in multivariate linear regression models (DJ, BS), pp. 2250–2258.
ICML-2018-JawanpuriaM #framework #learning #matrix #rank
A Unified Framework for Structured Low-rank Matrix Learning (PJ, BM), pp. 2259–2268.
ICML-2018-JeongS #learning #performance
Efficient end-to-end learning for quantizable representations (YJ, HOS), pp. 2269–2278.
ICML-2018-JiaLQA #network
Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks (ZJ, SL, CRQ, AA), pp. 2279–2288.
ICML-2018-JiangEL #learning
Feedback-Based Tree Search for Reinforcement Learning (DRJ, EE, HL), pp. 2289–2298.
ICML-2018-JiangJK
Quickshift++: Provably Good Initializations for Sample-Based Mean Shift (HJ, JJ, SK), pp. 2299–2308.
ICML-2018-JiangZLLF #data-driven #education #learning #named #network
MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels (LJ0, ZZ, TL, LJL, LFF0), pp. 2309–2318.
ICML-2018-JiaoV #higher-order #kernel #permutation
The Weighted Kendall and High-order Kernels for Permutations (YJ, JPV), pp. 2319–2327.
ICML-2018-JinBJ #generative #graph
Junction Tree Variational Autoencoder for Molecular Graph Generation (WJ, RB, TSJ), pp. 2328–2337.
ICML-2018-JinKL #network #testing
Network Global Testing by Counting Graphlets (JJ, ZTK, SL), pp. 2338–2346.
ICML-2018-JinKL18a #learning
Regret Minimization for Partially Observable Deep Reinforcement Learning (PHJ, KK, SL), pp. 2347–2356.
ICML-2018-JinYXYJFY #named #network #performance
WSNet: Compact and Efficient Networks Through Weight Sampling (XJ, YY, NX0, JY, NJ, JF, SY), pp. 2357–2366.
ICML-2018-JohnH #fourier #process #scalability #using
Large-Scale Cox Process Inference using Variational Fourier Features (STJ, JH), pp. 2367–2375.
ICML-2018-Johnson0 #functional #generative #learning #modelling
Composite Functional Gradient Learning of Generative Adversarial Models (RJ, TZ0), pp. 2376–2384.
ICML-2018-JoseCF
Kronecker Recurrent Units (CJ, MC, FF), pp. 2385–2394.
ICML-2018-KaiserBRVPUS #modelling #performance #sequence #using
Fast Decoding in Sequence Models Using Discrete Latent Variables (LK, SB, AR, AV, NP, JU, NS), pp. 2395–2404.
ICML-2018-KajiharaKYF #estimation #kernel #recursion
Kernel Recursive ABC: Point Estimation with Intractable Likelihood (TK, MK, KY, KF), pp. 2405–2414.
ICML-2018-KalchbrennerESN #performance #synthesis
Efficient Neural Audio Synthesis (NK, EE, KS, SN, NC, EL, FS, AvdO, SD, KK), pp. 2415–2424.
ICML-2018-KalimerisSSW #learning #using
Learning Diffusion using Hyperparameters (DK, YS, KS, UW), pp. 2425–2433.
ICML-2018-KallummilK #orthogonal #statistics
Signal and Noise Statistics Oblivious Orthogonal Matching Pursuit (SK, SK), pp. 2434–2443.
ICML-2018-KallusZ #machine learning
Residual Unfairness in Fair Machine Learning from Prejudiced Data (NK, AZ), pp. 2444–2453.
ICML-2018-KalyanLKB #learning #multi
Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations (AK, SL, AK, DB), pp. 2454–2463.
ICML-2018-KamnitsasCFWTRG #clustering #learning
Semi-Supervised Learning via Compact Latent Space Clustering (KK, DCC, LLF, IW, RT, DR, BG, AC, AVN), pp. 2464–2473.
ICML-2018-KangJF #optimisation #policy
Policy Optimization with Demonstrations (BK, ZJ, JF), pp. 2474–2483.
ICML-2018-KangP #random
Improving Sign Random Projections With Additional Information (KK, WWP), pp. 2484–2492.
ICML-2018-KangarshahiHSC #framework #game studies
Let's be Honest: An Optimal No-Regret Framework for Zero-Sum Games (EAK, YPH, MFS, VC), pp. 2493–2501.
ICML-2018-KaplanisSC #learning
Continual Reinforcement Learning with Complex Synapses (CK, MS, CC), pp. 2502–2511.
ICML-2018-KarmonZG #locality #named
LaVAN: Localized and Visible Adversarial Noise (DK, DZ, YG), pp. 2512–2520.
ICML-2018-KasaiSM #algorithm #analysis #convergence #probability #recursion
Riemannian Stochastic Recursive Gradient Algorithm with Retraction and Vector Transport and Its Convergence Analysis (HK, HS, BM), pp. 2521–2529.
ICML-2018-KatharopoulosF #learning
Not All Samples Are Created Equal: Deep Learning with Importance Sampling (AK, FF), pp. 2530–2539.
ICML-2018-Katz-SamuelsS #identification
Feasible Arm Identification (JKS, CS), pp. 2540–2548.
ICML-2018-0001ZK #constraints #privacy #scalability #summary
Scalable Deletion-Robust Submodular Maximization: Data Summarization with Privacy and Fairness Constraints (EK0, MZ, AK), pp. 2549–2558.
ICML-2018-KeZSLTBPCP #sequence
Focused Hierarchical RNNs for Conditional Sequence Processing (NRK, KZ, AS, ZL, AT, YB, JP, LC, CJP), pp. 2559–2568.
ICML-2018-KearnsNRW #learning
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness (MJK, SN, AR0, ZSW), pp. 2569–2577.
ICML-2018-KeivaniS #nearest neighbour #using
Improved nearest neighbor search using auxiliary information and priority functions (OK, KS), pp. 2578–2586.
ICML-2018-KennamerKIS #classification #learning #named
ContextNet: Deep learning for Star Galaxy Classification (NK, DK, ATI, FJSL), pp. 2587–2595.
ICML-2018-KerdreuxPd
Frank-Wolfe with Subsampling Oracle (TK, FP, Ad), pp. 2596–2605.
ICML-2018-KhamaruW #convergence #optimisation #problem
Convergence guarantees for a class of non-convex and non-smooth optimization problems (KK, MJW), pp. 2606–2615.
ICML-2018-KhanNTLGS #learning #performance #scalability
Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam (MEK, DN, VT, WL, YG, AS), pp. 2616–2625.
ICML-2018-KhrulkovO #generative #geometry #network
Geometry Score: A Method For Comparing Generative Adversarial Networks (VK, IVO), pp. 2626–2634.
ICML-2018-KilbertusGKVGW
Blind Justice: Fairness with Encrypted Sensitive Attributes (NK, AG, MJK, MV, KPG, AW), pp. 2635–2644.
ICML-2018-Kim #markov #modelling #process
Markov Modulated Gaussian Cox Processes for Semi-Stationary Intensity Modeling of Events Data (MK), pp. 2645–2653.
ICML-2018-KimM
Disentangling by Factorising (HK, AM), pp. 2654–2663.
ICML-2018-KimW #approximate #bound #predict #self #string
Self-Bounded Prediction Suffix Tree via Approximate String Matching (DK0, CJW), pp. 2664–2672.
ICML-2018-KimWGCWVS #concept #testing
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) (BK, MW, JG, CJC, JW, FBV, RS), pp. 2673–2682.
ICML-2018-KimWMSR
Semi-Amortized Variational Autoencoders (YK, SW, ACM, DAS, AMR), pp. 2683–2692.
ICML-2018-KipfFWWZ #relational
Neural Relational Inference for Interacting Systems (TNK, EF, KCW, MW, RSZ), pp. 2693–2702.
ICML-2018-KleinbergLY #question
An Alternative View: When Does SGD Escape Local Minima? (RK, YL, YY), pp. 2703–2712.
ICML-2018-KleindessnerA #crowdsourcing
Crowdsourcing with Arbitrary Adversaries (MK, PA), pp. 2713–2722.
ICML-2018-KnoblauchD #detection #online
Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection (JK, TD), pp. 2723–2732.
ICML-2018-KolarijaniEK #exponential #framework #hybrid #performance
Fast Gradient-Based Methods with Exponential Rate: A Hybrid Control Framework (ASK, PME, TK), pp. 2733–2741.
ICML-2018-KomiyamaTHS #constraints #optimisation
Nonconvex Optimization for Regression with Fairness Constraints (JK, AT, JH, HS), pp. 2742–2751.
ICML-2018-KondorT #network #on the
On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups (RK, ST), pp. 2752–2760.
ICML-2018-Koriche #combinator #compilation #game studies #predict
Compiling Combinatorial Prediction Games (FK), pp. 2761–2770.
ICML-2018-KrauseK0R #evaluation #modelling #sequence
Dynamic Evaluation of Neural Sequence Models (BK, EK, IM0, SR), pp. 2771–2780.
ICML-2018-KrishnamurthyWS
Semiparametric Contextual Bandits (AK, ZSW, VS), pp. 2781–2790.
ICML-2018-KuhnleSCT #integer #performance
Fast Maximization of Non-Submodular, Monotonic Functions on the Integer Lattice (AK, JDS, VGC, MTT), pp. 2791–2800.
ICML-2018-KuleshovFE #learning #nondeterminism #using
Accurate Uncertainties for Deep Learning Using Calibrated Regression (VK, NF, SE), pp. 2801–2809.
ICML-2018-KumarSJ #kernel #metric #network
Trainable Calibration Measures For Neural Networks From Kernel Mean Embeddings (AK, SS, UJ), pp. 2810–2819.
ICML-2018-KuzborskijL #probability
Data-Dependent Stability of Stochastic Gradient Descent (IK, CHL), pp. 2820–2829.
ICML-2018-LiGD #bias #induction #learning #network
Explicit Inductive Bias for Transfer Learning with Convolutional Networks (XL0, YG, FD), pp. 2830–2839.
ICML-2018-LiangSLS #classification #comprehension #network
Understanding the Loss Surface of Neural Networks for Binary Classification (SL, RS, YL, RS), pp. 2840–2849.
ICML-2018-LucasTOV #symmetry
Mixed batches and symmetric discriminators for GAN training (TL, CT, YO, JV), pp. 2850–2859.
ICML-2018-LaberMP #approximate
Binary Partitions with Approximate Minimum Impurity (ESL, MM, FdAMP), pp. 2860–2868.
ICML-2018-LacroixUO #canonical #composition #knowledge base
Canonical Tensor Decomposition for Knowledge Base Completion (TL, NU, GO), pp. 2869–2878.
ICML-2018-LakeB #composition #network
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks (BML, MB), pp. 2879–2888.
ICML-2018-LanCS #analysis #estimation #framework
An Estimation and Analysis Framework for the Rasch Model (ASL, MC, CS), pp. 2889–2897.
ICML-2018-LangeKA #bound #clustering #correlation #performance
Partial Optimality and Fast Lower Bounds for Weighted Correlation Clustering (JHL, AK, BA), pp. 2898–2907.
ICML-2018-LaurentB #linear #network
Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global (TL0, JvB), pp. 2908–2913.
ICML-2018-LaurentB18a #multi #network
The Multilinear Structure of ReLU Networks (TL0, JvB), pp. 2914–2922.
ICML-2018-0001JADYD #learning
Hierarchical Imitation and Reinforcement Learning (HML0, NJ, AA, MD, YY, HDI), pp. 2923–2932.
ICML-2018-LeeC #metric
Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace (YL, SC), pp. 2933–2942.
ICML-2018-LeeKCL #case study #game studies #learning
Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling (KL, SAK, JC, SWL), pp. 2943–2952.
ICML-2018-LeePCXS #network
Gated Path Planning Networks (LL, EP, DSC, EPX, RS), pp. 2953–2961.
ICML-2018-LeeYH #learning #multi #symmetry
Deep Asymmetric Multi-task Feature Learning (HL, EY, SJH), pp. 2962–2970.
ICML-2018-LehtinenMHLKAA #image #learning #named
Noise2Noise: Learning Image Restoration without Clean Data (JL, JM, JH, SL, TK, MA, TA), pp. 2971–2980.
ICML-2018-LevinRMP #graph
Out-of-sample extension of graph adjacency spectral embedding (KL, FRK, MWM, CEP), pp. 2981–2990.
ICML-2018-LiH #approach #learning #network
An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks (QL, SH), pp. 2991–3000.
ICML-2018-LiHT0QWL #robust #towards
Towards Binary-Valued Gates for Robust LSTM Training (ZL, DH, FT, WC0, TQ, LW0, TYL), pp. 3001–3010.
ICML-2018-0001MPS #approximate #first-order #on the
On the Limitations of First-Order Approximation in GAN Dynamics (JL0, AM, JP, LS), pp. 3011–3019.
ICML-2018-LiM #clustering
Submodular Hypergraphs: p-Laplacians, Cheeger Inequalities and Spectral Clustering (PL0, OM), pp. 3020–3029.
ICML-2018-LiS
The Well-Tempered Lasso (YL, YS), pp. 3030–3038.
ICML-2018-LiWZ #estimation #markov
Estimation of Markov Chain via Rank-constrained Likelihood (XL, MW, AZ), pp. 3039–3048.
ICML-2018-LianZZL #distributed #parallel #probability
Asynchronous Decentralized Parallel Stochastic Gradient Descent (XL, WZ0, CZ, JL0), pp. 3049–3058.
ICML-2018-LiangLNMFGGJS #abstraction #distributed #learning #named
RLlib: Abstractions for Distributed Reinforcement Learning (EL, RL, RN, PM, RF, KG, JG, MIJ, IS), pp. 3059–3068.
ICML-2018-LiaoC #on the #random
On the Spectrum of Random Features Maps of High Dimensional Data (ZL, RC), pp. 3069–3077.
ICML-2018-LiaoC18a #approach #learning #matrix #random
The Dynamics of Learning: A Random Matrix Approach (ZL, RC), pp. 3078–3087.
ICML-2018-LiaoXFZYPUZ
Reviving and Improving Recurrent Back-Propagation (RL, YX, EF, LZ, KY, XP, RU, RSZ), pp. 3088–3097.
ICML-2018-LinC #distributed #learning #multi #probability
Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods (JL, VC), pp. 3098–3107.
ICML-2018-LinC18a #algorithm #sketching
Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces (JL, VC), pp. 3108–3117.
ICML-2018-LinMY #optimisation
Level-Set Methods for Finite-Sum Constrained Convex Optimization (QL, RM, TY), pp. 3118–3127.
ICML-2018-LiptonWS #black box #detection #predict
Detecting and Correcting for Label Shift with Black Box Predictors (ZCL, YXW, AJS), pp. 3128–3136.
ICML-2018-LiuCWO #process #robust #scalability
Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression (HL, JC, YW, YSO), pp. 3137–3146.
ICML-2018-LiuDLLRS #black box #education #towards
Towards Black-box Iterative Machine Teaching (WL, BD, XL, ZL0, JMR, LS), pp. 3147–3155.
ICML-2018-LiuDRSH #machine learning
Delayed Impact of Fair Machine Learning (LTL, SD, ER, MS, MH), pp. 3156–3164.
ICML-2018-LiuGS #distance
A Two-Step Computation of the Exact GAN Wasserstein Distance (HL, XG, DS), pp. 3165–3174.
ICML-2018-LiuGDFH #detection
Open Category Detection with PAC Guarantees (SL, RG, TGD, AF, DH), pp. 3175–3184.
ICML-2018-LiuH #performance #probability #reduction
Fast Variance Reduction Method with Stochastic Batch Size (XL, CJH), pp. 3185–3194.
ICML-2018-LiuZCWY #performance #probability
Fast Stochastic AUC Maximization with O(1/n)-Convergence Rate (ML, XZ, ZC, XW, TY), pp. 3195–3203.
ICML-2018-LocatelloRKRSSJ #coordination #on the
On Matching Pursuit and Coordinate Descent (FL, AR, SPK, GR, BS, SUS, MJ), pp. 3204–3213.
ICML-2018-LongLMD #learning #named
PDE-Net: Learning PDEs from Data (ZL, YL, XM, BD0), pp. 3214–3222.
ICML-2018-LopesWM #algorithm #estimation #fault #random
Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap (MEL, SW, MWM), pp. 3223–3232.
ICML-2018-LorenziF #modelling #probability
Constraining the Dynamics of Deep Probabilistic Models (ML, MF), pp. 3233–3242.
ICML-2018-LoukasV #approximate #graph #scalability
Spectrally Approximating Large Graphs with Smaller Graphs (AL, PV), pp. 3243–3252.
ICML-2018-LuCLLW #combinator #statistics #trade-off
The Edge Density Barrier: Computational-Statistical Tradeoffs in Combinatorial Inference (HL, YC, JL, HL0, ZW), pp. 3253–3262.
ICML-2018-LuFM #coordination
Accelerating Greedy Coordinate Descent Methods (HL, RMF, VSM), pp. 3263–3272.
ICML-2018-LuGDL #optimisation
Structured Variationally Auto-encoded Optimization (XL, JG, ZD, NDL), pp. 3273–3281.
ICML-2018-LuZLD #architecture #difference #equation #finite #network
Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations (YL, AZ, QL, BD0), pp. 3282–3291.
ICML-2018-LuoSZLZW #learning
End-to-end Active Object Tracking via Reinforcement Learning (WL, PS, FZ, WL0, TZ0, YW), pp. 3292–3301.
ICML-2018-LykourisV
Competitive Caching with Machine Learned Advice (TL, SV), pp. 3302–3311.
ICML-2018-Lyu0YZ0 #automation #design #multi #optimisation
Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design (WL, FY0, CY, DZ, XZ0), pp. 3312–3320.
ICML-2018-MassiasSG #named #performance
Celer: a Fast Solver for the Lasso with Dual Extrapolation (MM, JS, AG), pp. 3321–3330.
ICML-2018-MaBB #comprehension #effectiveness #learning #power of
The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning (SM, RB, MB), pp. 3331–3340.
ICML-2018-MaOSS #matrix
Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers (YM, AO, CS, VS), pp. 3341–3350.
ICML-2018-MaWCC #estimation #matrix #retrieval #statistics
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval and Matrix Completion (CM, KW, YC, YC0), pp. 3351–3360.
ICML-2018-MaWHZEXWB #learning
Dimensionality-Driven Learning with Noisy Labels (XM, YW0, MEH, SZ0, SME, STX, SNRW, JB0), pp. 3361–3370.
ICML-2018-MaXM #approximate #message passing #optimisation
Approximate message passing for amplitude based optimization (JM0, JX, AM), pp. 3371–3380.
ICML-2018-MadrasCPZ #learning
Learning Adversarially Fair and Transferable Representations (DM, EC, TP, RSZ), pp. 3381–3390.
ICML-2018-MalikPFHRD #learning #performance
An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning (DM, MP, JFF, DHM, SJR, ADD), pp. 3391–3399.
ICML-2018-MarinoYM
Iterative Amortized Inference (JM, YY, SM), pp. 3400–3409.
ICML-2018-MarinovMA #analysis #component #streaming
Streaming Principal Component Analysis in Noisy Settings (TVM, PM, RA), pp. 3410–3419.
ICML-2018-MartinLV #approximate #clustering #network #performance
Fast Approximate Spectral Clustering for Dynamic Networks (LM, AL, PV), pp. 3420–3429.
ICML-2018-MazharRFH #clustering #detection
Bayesian Model Selection for Change Point Detection and Clustering (OM, CRR, CF, MRH), pp. 3430–3439.
ICML-2018-McLeodRO #optimisation #performance
Optimization, Fast and Slow: Optimally Switching between Local and Bayesian Optimization (MM, SJR, MAO), pp. 3440–3449.
ICML-2018-MehrabiTY #approximate #bound #network #power of
Bounds on the Approximation Power of Feedforward Neural Networks (MM, AT, MIY), pp. 3450–3458.
ICML-2018-MenschB #predict #programming
Differentiable Dynamic Programming for Structured Prediction and Attention (AM, MB), pp. 3459–3468.
ICML-2018-Mesaoudi-PaulHB #ranking #sorting
Ranking Distributions based on Noisy Sorting (AEMP, EH, RBF), pp. 3469–3477.
ICML-2018-MeschederGN #question
Which Training Methods for GANs do actually Converge? (LMM, AG, SN), pp. 3478–3487.
ICML-2018-MetelliMR #configuration management #markov #process
Configurable Markov Decision Processes (AMM, MM, MR), pp. 3488–3497.
ICML-2018-MetzlerSVB #flexibility #named #network #retrieval #robust
prDeep: Robust Phase Retrieval with a Flexible Deep Network (CAM, PS, AV, RGB), pp. 3498–3507.
ICML-2018-MeyersonM #learning #multi #pseudo
Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing - and Back (EM, RM), pp. 3508–3517.
ICML-2018-MhamdiGR #distributed #learning
The Hidden Vulnerability of Distributed Learning in Byzantium (EMEM, RG, SR), pp. 3518–3527.
ICML-2018-MianjyA #probability
Stochastic PCA with 𝓁2 and 𝓁1 Regularization (PM, RA), pp. 3528–3536.
ICML-2018-MianjyAV #bias #on the
On the Implicit Bias of Dropout (PM, RA, RV), pp. 3537–3545.
ICML-2018-MichaelisBE #segmentation
One-Shot Segmentation in Clutter (CM, MB, ASE), pp. 3546–3555.
ICML-2018-MiconiSC #network
Differentiable plasticity: training plastic neural networks with backpropagation (TM, KOS, JC), pp. 3556–3565.
ICML-2018-MirmanDDGV
Training Neural Machines with Trace-Based Supervision (MM, DD, PD, TG, MTV), pp. 3566–3574.
ICML-2018-MirmanGV #abstract interpretation #network #robust
Differentiable Abstract Interpretation for Provably Robust Neural Networks (MM, TG, MTV), pp. 3575–3583.
ICML-2018-MishchenkoIMA #algorithm #distributed #learning
A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning (KM, FI, JM, MRA), pp. 3584–3592.
ICML-2018-Mitrovic0ZK #approach #scalability #summary
Data Summarization at Scale: A Two-Stage Submodular Approach (MM, EK0, MZ, AK), pp. 3593–3602.
ICML-2018-Moens #adaptation
The Hierarchical Adaptive Forgetting Variational Filter (VM), pp. 3603–3612.
ICML-2018-MokhtariHK #distributed
Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings (AM, HH, AK), pp. 3613–3622.
ICML-2018-MoreauOV #coordination #distributed #named
DICOD: Distributed Convolutional Coordinate Descent for Convolutional Sparse Coding (TM, LO, NV), pp. 3623–3631.
ICML-2018-MorvanV #algorithm #higher-order #interactive #modelling #named #set
WHInter: A Working set algorithm for High-dimensional sparse second order Interaction models (MLM, JPV), pp. 3632–3641.
ICML-2018-MouZGW #bound
Dropout Training, Data-dependent Regularization, and Generalization Bounds (WM, YZ, JG, LW0), pp. 3642–3650.
ICML-2018-MullerMI #kernel #matrix
Kernelized Synaptic Weight Matrices (LKM, JNPM, GI), pp. 3651–3660.
ICML-2018-MunkhdalaiYMT #adaptation #agile
Rapid Adaptation with Conditionally Shifted Neurons (TM, XY, SM, AT), pp. 3661–3670.
ICML-2018-MussmannL #fault #nondeterminism #on the #performance
On the Relationship between Data Efficiency and Error for Uncertainty Sampling (SM, PL), pp. 3671–3679.
ICML-2018-NachmaniPTW
Fitting New Speakers Based on a Short Untranscribed Sample (EN, AP, YT, LW), pp. 3680–3688.
ICML-2018-Nachum0TS #learning #policy
Smoothed Action Value Functions for Learning Gaussian Policies (ON, MN0, GT, DS), pp. 3689–3697.
ICML-2018-NarayanamurthyV #robust
Nearly Optimal Robust Subspace Tracking (PN, NV), pp. 3698–3706.
ICML-2018-NatoleYL #algorithm #probability
Stochastic Proximal Algorithms for AUC Maximization (MN, YY, SL), pp. 3707–3716.
ICML-2018-NeelR #adaptation #bias #difference #privacy
Mitigating Bias in Adaptive Data Gathering via Differential Privacy (SN, AR0), pp. 3717–3726.
ICML-2018-Nguyen0 #optimisation
Optimization Landscape and Expressivity of Deep CNNs (QN0, MH0), pp. 3727–3736.
ICML-2018-NguyenM0 #network
Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions (QN0, MCM, MH0), pp. 3737–3746.
ICML-2018-NguyenNDRST #bound #convergence #exclamation
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption (LMN, PHN, MvD, PR, KS, MT), pp. 3747–3755.
ICML-2018-NguyenRF #framework #performance #robust #testing
Active Testing: An Efficient and Robust Framework for Estimating Accuracy (PXN, DR, CCF), pp. 3756–3765.
ICML-2018-NguyenSH #learning #on the
On Learning Sparsely Used Dictionaries from Incomplete Samples (TVN, AS, CH), pp. 3766–3775.
ICML-2018-NickelK #geometry #learning
Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry (MN, DK), pp. 3776–3785.
ICML-2018-NickischSG #process
State Space Gaussian Processes with Non-Gaussian Likelihood (HN, AS, AG), pp. 3786–3795.
ICML-2018-NiculaeMBC #named
SparseMAP: Differentiable Sparse Structured Inference (VN, AFTM, MB, CC), pp. 3796–3805.
ICML-2018-NieZP #behaviour #visualisation
A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations (WN, YZ, AP), pp. 3806–3815.
ICML-2018-NitandaS #functional #network
Functional Gradient Boosting based on Residual Network Perception (AN, TS), pp. 3816–3825.
ICML-2018-Norouzi-FardTMZ #approximate #data type
Beyond 1/2-Approximation for Submodular Maximization on Massive Data Streams (ANF, JT, SM, AZ, AM, OS), pp. 3826–3835.
ICML-2018-ODonoghueOMM #equation #nondeterminism
The Uncertainty Bellman Equation and Exploration (BO, IO, RM, VM), pp. 3836–3845.
ICML-2018-OdenaBOBORG #generative #performance #question
Is Generator Conditioning Causally Related to GAN Performance? (AO, JB, CO, TBB, CO, CR, IJG), pp. 3846–3855.
ICML-2018-OglicG #kernel #learning
Learning in Reproducing Kernel Krein Spaces (DO, TG0), pp. 3856–3864.
ICML-2018-OhGW #kernel #optimisation
BOCK : Bayesian Optimization with Cylindrical Kernels (CO, EG, MW), pp. 3865–3874.
ICML-2018-OhGSL #learning #self
Self-Imitation Learning (JO, YG, SS, HL), pp. 3875–3884.
ICML-2018-OkunoHS #framework #learning #multi #network #probability
A probabilistic framework for multi-view feature learning with many-to-many associations via neural networks (AO, TH, HS), pp. 3885–3894.
ICML-2018-OlivaDZPSXS #network
Transformation Autoregressive Networks (JBO, AD, MZ, BP, RS, EPX, JS), pp. 3895–3904.
ICML-2018-OlofssonDM #data-driven #design
Design of Experiments for Model Discrimination Hybridising Analytical and Data-Driven Approaches (SO, MPD, RM), pp. 3905–3914.
ICML-2018-OordLBSVKDLCSCG #parallel #performance #speech #synthesis
Parallel WaveNet: Fast High-Fidelity Speech Synthesis (AvdO, YL, IB, KS, OV, KK, GvdD, EL, LCC, FS, NC, DG, SN, SD, EE, NK, HZ, AG, HK, TW, DB, DH), pp. 3915–3923.
ICML-2018-OsamaZS #learning #locality #modelling #streaming
Learning Localized Spatio-Temporal Models From Streaming Data (MO, DZ, TBS), pp. 3924–3932.
ICML-2018-OstrovskiDM #generative #modelling #network
Autoregressive Quantile Networks for Generative Modeling (GO, WD, RM), pp. 3933–3942.
ICML-2018-OstrovskiiH #adaptation #algorithm #first-order #performance
Efficient First-Order Algorithms for Adaptive Signal Denoising (DO, ZH), pp. 3943–3952.
ICML-2018-OttAGR #nondeterminism
Analyzing Uncertainty in Neural Machine Translation (MO, MA, DG, MR), pp. 3953–3962.
ICML-2018-Oymak #learning #network
Learning Compact Neural Networks with Regularization (SO), pp. 3963–3972.
ICML-2018-PaassenGMH #adaptation #distance #edit distance #learning
Tree Edit Distance Learning via Adaptive Symbol Embeddings (BP, CG, AM, BH), pp. 3973–3982.
ICML-2018-PanFWNGN #difference #equation #learning
Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control (YP, AmF, MW, SN, PG, DN), pp. 3983–3992.
ICML-2018-PanS #learning #predict
Learning to Speed Up Structured Output Prediction (XP, VS), pp. 3993–4002.
ICML-2018-PanZD #analysis #learning
Theoretical Analysis of Image-to-Image Translation with Adversarial Learning (XP, MZ, DD), pp. 4003–4012.
ICML-2018-PangDZ #analysis #linear #network
Max-Mahalanobis Linear Discriminant Analysis Networks (TP, CD, JZ0), pp. 4013–4022.
ICML-2018-PapiniBCPR #policy #probability
Stochastic Variance-Reduced Policy Gradient (MP, DB, GC, MP, MR), pp. 4023–4032.
ICML-2018-ParascandoloKRS #independence #learning
Learning Independent Causal Mechanisms (GP, NK, MRC, BS), pp. 4033–4041.
ICML-2018-PardoTLK #learning
Time Limits in Reinforcement Learning (FP, AT, VL, PK), pp. 4042–4051.
ICML-2018-ParmarVUKSKT #image
Image Transformer (NP, AV, JU, LK, NS, AK, DT), pp. 4052–4061.
ICML-2018-ParmasR0D #flexibility #modelling #named #policy #robust
PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos (PP, CER, JP0, KD), pp. 4062–4071.
ICML-2018-PearceBZN #approach #learning #predict
High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach (TP, AB, MZ, AN), pp. 4072–4081.
ICML-2018-PedregosaG #adaptation
Adaptive Three Operator Splitting (FP, GG), pp. 4082–4091.
ICML-2018-PhamGZLD #architecture #parametricity #performance
Efficient Neural Architecture Search via Parameter Sharing (HP, MYG, BZ, QVL, JD), pp. 4092–4101.
ICML-2018-Pike-Burke0SG #feedback
Bandits with Delayed, Aggregated Anonymous Feedback (CPB, SA0, CS, SG), pp. 4102–4110.
ICML-2018-PleissGWW #predict #process
Constant-Time Predictive Distributions for Gaussian Processes (GP, JRG, KQW, AGW), pp. 4111–4120.
ICML-2018-PoonLS #convergence
Local Convergence Properties of SAGA/Prox-SVRG and Acceleration (CP, JL, CBS), pp. 4121–4129.
ICML-2018-Pouliot #equivalence #multi #performance #statistics
Equivalence of Multicategory SVM and Simplex Cone SVM: Fast Computations and Statistical Theory (GP), pp. 4130–4137.
ICML-2018-PretoriusKK #learning #linear
Learning Dynamics of Linear Denoising Autoencoders (AP, SK, HK), pp. 4138–4147.
ICML-2018-PuDGWWZHC #generative #learning #multi #named
JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets (YP, SD, ZG, WW, GW0, YZ, RH, LC), pp. 4148–4157.
ICML-2018-PuMSK #synthesis
Selecting Representative Examples for Program Synthesis (YP, ZM, ASL, LPK), pp. 4158–4167.
ICML-2018-QiJZ #earley #parsing #predict #sequence
Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction (SQ, BJ, SCZ), pp. 4168–4176.
ICML-2018-Qiao #collaboration #question
Do Outliers Ruin Collaboration? (MQ), pp. 4177–4184.
ICML-2018-QiaoZ0WY #image #network #recognition #scalability
Gradually Updated Neural Networks for Large-Scale Image Recognition (SQ, ZZ, WS0, BW0, ALY), pp. 4185–4194.
ICML-2018-QiuCCS #named #network
DCFNet: Deep Neural Network with Decomposed Convolutional Filters (QQ, XC, ARC, GS), pp. 4195–4204.
ICML-2018-QuLX
Non-convex Conditional Gradient Sliding (CQ, YL, HX), pp. 4205–4214.
ICML-2018-RabinowitzPSZEB
Machine Theory of Mind (NCR, FP, HFS, CZ, SMAE, MB), pp. 4215–4224.
ICML-2018-RaeDDL #learning #parametricity #performance
Fast Parametric Learning with Activation Memorization (JWR, CD, PD, TPL), pp. 4225–4234.
ICML-2018-RaghuIAKLK #game studies #learning #question
Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games? (MR, AI, JA, RK, QVL, JMK), pp. 4235–4243.
ICML-2018-RaguetL #algorithm #graph
Cut-Pursuit Algorithm for Regularizing Nonsmooth Functionals with Graph Total Variation (HR, LL), pp. 4244–4253.
ICML-2018-RaileanuDSF #learning #modelling #multi #using
Modeling Others using Oneself in Multi-Agent Reinforcement Learning (RR, ED, AS, RF), pp. 4254–4263.
ICML-2018-RainforthCYW #monte carlo #on the
On Nesting Monte Carlo Estimators (TR, RC, HY, AW), pp. 4264–4273.
ICML-2018-RainforthKLMIWT #bound
Tighter Variational Bounds are Not Necessarily Better (TR, ARK, TAL, CJM, MI, FW, YWT), pp. 4274–4282.
ICML-2018-RamdasZWJ #adaptation #algorithm #named #online
SAFFRON: an Adaptive Algorithm for Online Control of the False Discovery Rate (AR, TZ, MJW, MIJ), pp. 4283–4291.
ICML-2018-RashidSWFFW #learning #multi #named
QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning (TR, MS, CSdW, GF, JNF, SW), pp. 4292–4301.
ICML-2018-RavivTDT #graph
Gradient Coding from Cyclic MDS Codes and Expander Graphs (NR, RT, AD, IT), pp. 4302–4310.
ICML-2018-RavuriMRV #generative #learning #modelling
Learning Implicit Generative Models with the Method of Learned Moments (SVR, SM, MR, OV), pp. 4311–4320.
ICML-2018-ReagenGAMRWB #encoding #named #network
Weightless: Lossy weight encoding for deep neural network compression (BR, UG, BA, MM, AMR, GYW, DB0), pp. 4321–4330.
ICML-2018-RenZYU #learning #robust
Learning to Reweight Examples for Robust Deep Learning (MR, WZ, BY, RU), pp. 4331–4340.
ICML-2018-RiedmillerHLNDW #game studies #learning
Learning by Playing - Solving Sparse Reward Tasks from Scratch (MAR, RH, TL, MN, JD, TVdW, VM, NH, JTS), pp. 4341–4350.
ICML-2018-RitterWKJBPB
Been There, Done That: Meta-Learning with Episodic Recall (SR, JXW, ZKN, SMJ, CB, RP, MB), pp. 4351–4360.
ICML-2018-RobertsERHE #learning #music
A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music (AR, JHE, CR, CH, DE), pp. 4361–4370.
ICML-2018-RosenfeldBGS #combinator #learning
Learning to Optimize Combinatorial Functions (NR, EB, AG, YS), pp. 4371–4380.
ICML-2018-RuMGO #optimisation #performance
Fast Information-theoretic Bayesian Optimisation (BXR, MM, DG, MAO), pp. 4381–4389.
ICML-2018-RuffGDSVBMK #classification
Deep One-Class Classification (LR, NG, LD, SAS, RAV, AB, EM, MK), pp. 4390–4399.
ICML-2018-RuizTDB #category theory #probability #scalability
Augment and Reduce: Stochastic Inference for Large Categorical Distributions (FJRR, MKT, ABD, DMB), pp. 4400–4409.
ICML-2018-RukatHY #composition #probability
Probabilistic Boolean Tensor Decomposition (TR, CCH, CY), pp. 4410–4419.
ICML-2018-RyderGMP #black box #difference #equation #probability
Black-Box Variational Inference for Stochastic Differential Equations (TR, AG, ASM, DP), pp. 4420–4429.
ICML-2018-SafranS #network
Spurious Local Minima are Common in Two-Layer ReLU Neural Networks (IS, OS), pp. 4430–4438.
ICML-2018-SahooLM #equation #learning
Learning Equations for Extrapolation and Control (SSS, CHL, GM), pp. 4439–4447.
ICML-2018-SajjadiPMS #network
Tempered Adversarial Networks (MSMS, GP, AM, BS), pp. 4448–4456.
ICML-2018-SalaSGR #representation #trade-off
Representation Tradeoffs for Hyperbolic Embeddings (FS, CDS, AG, CR), pp. 4457–4466.
ICML-2018-Sanchez-Gonzalez #graph #network #physics
Graph Networks as Learnable Physics Engines for Inference and Control (ASG, NH, JTS, JM, MAR, RH, PWB), pp. 4467–4476.
ICML-2018-SantoroHBML #network #reasoning
Measuring abstract reasoning in neural networks (AS, FH, DGTB, ASM, TPL), pp. 4477–4486.
ICML-2018-SanturkarSM #classification
A Classification-Based Study of Covariate Shift in GAN Distributions (SS, LS, AM), pp. 4487–4496.
ICML-2018-SanyalKGK #as a service #named #predict
TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service (AS, MJK, AG, VK), pp. 4497–4506.
ICML-2018-Scarlett #bound #optimisation
Tight Regret Bounds for Bayesian Optimization in One Dimension (JS), pp. 4507–4515.
ICML-2018-SchmitJ #learning
Learning with Abandonment (SS, RJ), pp. 4516–4524.
ICML-2018-SchwabKMMSK #learning #multi
Not to Cry Wolf: Distantly Supervised Multitask Learning in Critical Care (PS, EK, CM, DJM, CS, WK), pp. 4525–4534.
ICML-2018-Schwarz0LGTPH #framework #learning #scalability
Progress & Compress: A scalable framework for continual learning (JS, WC0, JL, AGB, YWT, RP, RH), pp. 4535–4544.
ICML-2018-SenKS #black box #multi #optimisation
Multi-Fidelity Black-Box Optimization with Hierarchical Partitions (RS, KK, SS), pp. 4545–4554.
ICML-2018-SerraSMK
Overcoming Catastrophic Forgetting with Hard Attention to the Task (JS, DS, MM, AK), pp. 4555–4564.
ICML-2018-SerraTR #bound #linear #network
Bounding and Counting Linear Regions of Deep Neural Networks (TS, CT, SR), pp. 4565–4573.
ICML-2018-SewardUBJH #first-order #generative #network
First Order Generative Adversarial Networks (CS, TU, UB, NJ, SH), pp. 4574–4583.
ICML-2018-SharchilevUSR
Finding Influential Training Samples for Gradient Boosted Decision Trees (BS, YU, PS, MdR), pp. 4584–4592.
ICML-2018-SharmaNK #clique #problem #random #using
Solving Partial Assignment Problems using Random Clique Complexes (CS, DN, MK), pp. 4593–4602.
ICML-2018-ShazeerS #adaptation #learning #memory management #named #sublinear
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (NS, MS), pp. 4603–4611.
ICML-2018-Sheffet #testing
Locally Private Hypothesis Testing (OS), pp. 4612–4621.
ICML-2018-SheldonWS #automation #difference #integer #learning #modelling
Learning in Integer Latent Variable Models with Nested Automatic Differentiation (DS, KW, DS), pp. 4622–4630.
ICML-2018-ShenMZZQ #communication #convergence #distributed #learning #performance #probability #towards
Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication (ZS, AM, TZ, PZ, HQ), pp. 4631–4640.
ICML-2018-ShenSWLZ #algorithm #framework #hybrid #metric
An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method (LS, PS, YW, WL0, TZ0), pp. 4641–4650.
ICML-2018-ShiS0 #approach #estimation
A Spectral Approach to Gradient Estimation for Implicit Distributions (JS, SS, JZ0), pp. 4651–4660.
ICML-2018-ShiarlisWSWP #composition #learning #named
TACO: Learning Task Decomposition via Temporal Alignment for Control (KS, MW, SS, SW, IP), pp. 4661–4670.
ICML-2018-SibliniMK #clustering #learning #multi #performance #random
CRAFTML, an Efficient Clustering-based Random Forest for Extreme Multi-label Learning (WS, FM, PK), pp. 4671–4680.
ICML-2018-SimsekliYNCR #optimisation #probability
Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization (US, CY, THN, ATC, GR), pp. 4681–4690.
ICML-2018-Sinha #clustering #matrix #random #using
K-means clustering using random matrix sparsification (KS), pp. 4691–4699.
ICML-2018-Skerry-RyanBXWS #speech #synthesis #towards
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron (RJSR, EB, YX, YW, DS, JS, RJW, RC, RAS), pp. 4700–4709.
ICML-2018-SmithHP #learning #policy
An Inference-Based Policy Gradient Method for Learning Options (MS, HvH, JP), pp. 4710–4719.
ICML-2018-SongSE #higher-order
Accelerating Natural Gradient with Higher-Order Invariance (YS, JS, SE), pp. 4720–4729.
ICML-2018-SrinivasF #information management
Knowledge Transfer with Jacobian Matching (SS, FF), pp. 4730–4738.
ICML-2018-SrinivasJALF #learning #network
Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control (AS, AJ, PA, SL, CF), pp. 4739–4748.
ICML-2018-SroujiZS #learning
Structured Control Nets for Deep Reinforcement Learning (MS, JZ, RS), pp. 4749–4758.
ICML-2018-Streeter #algorithm #approximate #modelling #predict
Approximation Algorithms for Cascading Prediction Models (MS), pp. 4759–4767.
ICML-2018-SuW #learning
Learning Low-Dimensional Temporal Representations (BS, YW), pp. 4768–4777.
ICML-2018-SuganumaOO #image #search-based #standard
Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search (MS, MO, TO), pp. 4778–4787.
ICML-2018-SuiZBY #optimisation #process
Stagewise Safe Bayesian Optimization with Gaussian Processes (YS, VZ, JWB, YY), pp. 4788–4796.
ICML-2018-SunNSL #synthesis
Neural Program Synthesis from Diverse Demonstration Videos (SHS, HN, SS, JJL), pp. 4797–4806.
ICML-2018-SunP #approximate #scalability
Scalable Approximate Bayesian Inference for Particle Tracking Data (RS, LP), pp. 4807–4816.
ICML-2018-SunTLZ #adaptation #optimisation #visual notation
Graphical Nonconvex Optimization via an Adaptive Convex Relaxation (QS, KMT, HL0, TZ0), pp. 4817–4824.
ICML-2018-SunYDB #matrix #network
Convolutional Imputation of Matrix Networks (QS, MY, DLD, SPB), pp. 4825–4834.
ICML-2018-SunZWZLG #composition #kernel #learning #process
Differentiable Compositional Kernel Learning for Gaussian Processes (SS, GZ, CW, WZ, JL, RBG), pp. 4835–4844.
ICML-2018-Talvitie #learning
Learning the Reward Function for a Misspecified Model (ET), pp. 4845–4854.
ICML-2018-TangLYZL #distributed #named
D2: Decentralized Training over Decentralized Data (HT, XL, MY0, CZ, JL0), pp. 4855–4863.
ICML-2018-TaniaiM
Neural Inverse Rendering for General Reflectance Photometric Stereo (TT, TM), pp. 4864–4873.
ICML-2018-TanseyWBR #black box
Black Box FDR (WT, YW, DMB, RR), pp. 4874–4883.
ICML-2018-TaoBZ #dependence #identification #linear
Best Arm Identification in Linear Bandits with Linear Dimension Dependency (CT, SAB, YZ0), pp. 4884–4893.
ICML-2018-TaoCHFC #generative #network
Chi-square Generative Adversarial Network (CT, LC, RH, JF, LC), pp. 4894–4903.
ICML-2018-TaylorSL #automation #convergence #first-order
Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees (AT, BVS, LL), pp. 4904–4913.
ICML-2018-TeyeAS #estimation #network #nondeterminism #normalisation
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks (MT, HA, KS0), pp. 4914–4923.
ICML-2018-ThomasDB #learning
Decoupling Gradient-Like Learning Rules from Representations (PST, CD, EB), pp. 4924–4932.
ICML-2018-TianZZ #learning #named
CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions (KT, TZ, JZ), pp. 4933–4942.
ICML-2018-TirinzoniSPR #learning
Importance Weighted Transfer of Samples in Reinforcement Learning (AT, AS, MP, MR), pp. 4943–4952.
ICML-2018-TongYAV #multi
Adversarial Regression with Multiple Learners (LT, SY, SA, YV), pp. 4953–4961.
ICML-2018-TouatiBPV #approximate #convergence
Convergent TREE BACKUP and RETRACE with Function Approximation (AT, PLB, DP, PV), pp. 4962–4971.
ICML-2018-TrinhDLL #dependence #learning
Learning Longer-term Dependencies in RNNs with Auxiliary Losses (THT, AMD, TL, QVL), pp. 4972–4981.
ICML-2018-TsakirisV #analysis #clustering
Theoretical Analysis of Sparse Subspace Clustering with Missing Entries (MCT, RV), pp. 4982–4991.
ICML-2018-TschannenKA #learning #multi #named
StrassenNets: Deep Learning with a Multiplication Budget (MT, AK, AA), pp. 4992–5001.
ICML-2018-TsuchidaRG
Invariance of Weight Distributions in Rectified MLPs (RT, FRK, MG), pp. 5002–5011.
ICML-2018-TuR #difference #learning #linear #polynomial
Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator (ST, BR), pp. 5012–5021.
ICML-2018-TuckerBGTGL #learning
The Mirage of Action-Dependent Baselines in Reinforcement Learning (GT, SB, SG, RET, ZG, SL), pp. 5022–5031.
ICML-2018-UesatoOKO
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks (JU, BO, PK, AvdO), pp. 5032–5041.
ICML-2018-VahdatMBKA
DVAE++: Discrete Variational Autoencoders with Overlapping Transformations (AV, WGM, ZB, AK, EA), pp. 5042–5051.
ICML-2018-VermaMSKC #learning
Programmatically Interpretable Reinforcement Learning (AV, VM, RS, PK, SC), pp. 5052–5061.
ICML-2018-VogelBC #learning #optimisation #probability #similarity
A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization (RV, AB, SC), pp. 5062–5071.
ICML-2018-WeiZHY #learning
Transfer Learning via Learning to Transfer (YW, YZ, JH, QY), pp. 5072–5081.
ICML-2018-WagnerGKM #data type #learning
Semi-Supervised Learning on Data Streams via Temporal Label Propagation (TW, SG, SPK, NM), pp. 5082–5091.
ICML-2018-WalderK #programming #similarity
Neural Dynamic Programming for Musical Self Similarity (CJW, DK0), pp. 5092–5100.
ICML-2018-WangC #combinator
Thompson Sampling for Combinatorial Semi-Bandits (SW, WC), pp. 5101–5109.
ICML-2018-WangGLWY #learning #predict #towards
PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning (YW, ZG, ML, JW0, PSY), pp. 5110–5119.
ICML-2018-WangJC #nearest neighbour #robust
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples (YW, SJ, KC), pp. 5120–5129.
ICML-2018-WangK #learning #multi
Competitive Multi-agent Inverse Reinforcement Learning with Sub-optimal Demonstrations (XW, DK), pp. 5130–5138.
ICML-2018-WangLS #matrix #multi
Coded Sparse Matrix Multiplication (SW, JL, NBS), pp. 5139–5147.
ICML-2018-WangSQ #learning #modelling #multi #performance #scalability #visual notation
A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models (BW, AS, YQ), pp. 5148–5157.
ICML-2018-WangSL #streaming
Provable Variable Selection for Streaming Features (JW0, JS0, PL0), pp. 5158–5166.
ICML-2018-WangSZRBSXJRS #modelling #speech #synthesis
Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis (YW, DS, YZ, RJSR, EB, JS, YX, YJ, FR, RAS), pp. 5167–5176.
ICML-2018-WangVLGGZ #network
Adversarial Distillation of Bayesian Neural Network Posteriors (KCW, PV, JL, LG, RBG, RSZ), pp. 5177–5186.
ICML-2018-WangWY #multi
Minimax Concave Penalized Multi-Armed Bandit Model with High-Dimensional Covariates (XW, MMW, TY), pp. 5187–5195.
ICML-2018-WangYKN #online #taxonomy
Online Convolutional Sparse Coding with Sample-Dependent Dictionary (YW, QY, JTYK, LMN), pp. 5196–5205.
ICML-2018-WangZ0 #message passing #modelling #visual notation
Stein Variational Message Passing for Continuous Graphical Models (DW, ZZ, QL0), pp. 5206–5214.
ICML-2018-WangZLMM #approximate #parametricity #performance
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions (SW, WZ, HL, AM, VSM), pp. 5215–5224.
ICML-2018-WehrmannCB #classification #multi #network
Hierarchical Multi-Label Classification Networks (JW, RC, RCB), pp. 5225–5234.
ICML-2018-WeinshallCA #education #learning #network
Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks (DW, GC, DA), pp. 5235–5243.
ICML-2018-WeissGY #automaton #network #query #using
Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples (GW, YG, EY), pp. 5244–5253.
ICML-2018-WeiszGS #algorithm #approximate #bound #named
LEAPSANDBOUNDS: A Method for Approximately Optimal Algorithm Configuration (GW, AG, CS), pp. 5254–5262.
ICML-2018-WenHSZCL #network #predict #recognition
Deep Predictive Coding Network for Object Recognition (HW, KH, JS, YZ, EC, ZL), pp. 5263–5272.
ICML-2018-WengZCSHDBD #network #performance #robust #towards
Towards Fast Computation of Certified Robustness for ReLU Networks (TWW, HZ0, HC, ZS, CJH, LD, DSB, ISD), pp. 5273–5282.
ICML-2018-WongK
Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope (EW, JZK), pp. 5283–5292.
ICML-2018-WuCN #estimation
Local Density Estimation in High Dimensions (XW, MC, VN), pp. 5293–5301.
ICML-2018-WuGL #adaptation #trade-off
Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits (HW, XG, XL0), pp. 5302–5310.
ICML-2018-WuHS #approach #collaboration #named #ranking
SQL-Rank: A Listwise Approach to Collaborative Ranking (LW, CJH, JS), pp. 5311–5320.
ICML-2018-Wu0H0 #distributed #fault #optimisation #scalability
Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization (JW, WH0, JH, TZ0), pp. 5321–5329.
ICML-2018-0001JCCJ #robust #using
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training (XW0, UJ, JC, LC, SJ), pp. 5330–5338.
ICML-2018-WuSHDR #algorithm #probability #programming #semantics
Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms (YW, SS, NH, SD, SJR), pp. 5339–5348.
ICML-2018-WuW
Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization (HW, MDW), pp. 5349–5358.
ICML-2018-WuWWWVL #clustering #parametricity
Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions (JW, YW, ZW, ZW, AV, YL), pp. 5359–5368.
ICML-2018-XiBG #multi
Bayesian Quadrature for Multiple Related Integrals (XX, FXB, MAG), pp. 5369–5378.
ICML-2018-XiaTTQYL #learning
Model-Level Dual Learning (YX, XT, FT, TQ, NY, TYL), pp. 5379–5388.
ICML-2018-XiaoBSSP #how #network
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks (LX, YB, JSD, SSS, JP), pp. 5389–5398.
ICML-2018-XieWZX #analysis #distance #learning #metric
Orthogonality-Promoting Distance Metric Learning: Convex Relaxation and Theoretical Analysis (PX, WW, YZ, EPX), pp. 5399–5408.
ICML-2018-XieZZX
Nonoverlap-Promoting Variable Selection (PX, HZ, YZ, EPX), pp. 5409–5418.
ICML-2018-XieZCC #adaptation #learning #semantics
Learning Semantic Representations for Unsupervised Domain Adaptation (SX, ZZ, LC0, CC), pp. 5419–5428.
ICML-2018-Xu #convergence #estimation
Rates of Convergence of Spectral Methods for Graphon Estimation (JX), pp. 5429–5438.
ICML-2018-XuCZ #learning #process
Learning Registered Point Processes from Idiosyncratic Observations (HX, LC, HZ), pp. 5439–5448.
ICML-2018-XuLTSKJ #graph #learning #network #representation
Representation Learning on Graphs with Jumping Knowledge Networks (KX, CL, YT, TS, KiK, SJ), pp. 5449–5458.
ICML-2018-XuLZP #learning
Learning to Explore via Meta-Policy Gradient (TX, QL0, LZ, JP0), pp. 5459–5468.
ICML-2018-XuMBSD #parametricity
Nonparametric Regression with Comparisons: Escaping the Curse of Dimensionality with Ordinal Information (YX, HM, SB, AS, AD), pp. 5469–5478.
ICML-2018-XuSC #divide and conquer #kernel
Optimal Tuning for Divide-and-conquer Kernel Ridge Regression with Massive Data (GX, ZS, GC), pp. 5479–5487.
ICML-2018-XuWG #probability
Continuous and Discrete-time Accelerated Stochastic Mirror Descent for Strongly Convex Functions (PX0, TW0, QG), pp. 5488–5497.
ICML-2018-XuZFLB #learning #semantics
A Semantic Loss Function for Deep Learning with Symbolic Knowledge (JX, ZZ, TF, YL, GVdB), pp. 5498–5507.
ICML-2018-YabeHSIKFK
Causal Bandits with Propagating Inference (AY, DH, HS, SI, NK, TF, KiK), pp. 5508–5516.
ICML-2018-YanCJ #learning
Active Learning with Logged Data (SY, KC, TJ), pp. 5517–5526.
ICML-2018-YanKZR #classification #metric
Binary Classification with Karmic, Threshold-Quasi-Concave Metrics (BY, OK, KZ, PR), pp. 5527–5536.
ICML-2018-YangKU #equivalence #graph #learning
Characterizing and Learning Equivalence Classes of Causal DAGs under Interventions (KDY, AK, CU), pp. 5537–5546.
ICML-2018-YangK #modelling #network #process #relational
Dependent Relational Gamma Process Models for Longitudinal Networks (SY, HK), pp. 5547–5556.
ICML-2018-YangLRN #testing
Goodness-of-fit Testing for Discrete Distributions via Stein Discrepancy (JY, QL, VAR, JN), pp. 5557–5566.
ICML-2018-YangLLZZW #learning #multi
Mean Field Multi-Agent Reinforcement Learning (YY, RL, ML, MZ, WZ0, JW0), pp. 5567–5576.
ICML-2018-YaoVSG
Yes, but Did It Work?: Evaluating Variational Inference (YY, AV, DS, AG), pp. 5577–5586.
ICML-2018-YaratsL #generative
Hierarchical Text Generation and Planning for Strategic Dialogue (DY, ML), pp. 5587–5595.
ICML-2018-YaroslavtsevV #algorithm #clustering #parallel
Massively Parallel Algorithms and Hardness for Single-Linkage Clustering under 𝓁p Distances (GY, AV), pp. 5596–5605.
ICML-2018-YeA #performance
Communication-Computation Efficient Gradient Coding (MY, EA), pp. 5606–5615.
ICML-2018-YeS #approach #network
Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach (MY, YS), pp. 5616–5625.
ICML-2018-YenKYHKR #composition #learning #performance #scalability
Loss Decomposition for Fast Learning in Large Output Spaces (IEHY, SK, FXY, DNHR, SK, PR), pp. 5626–5635.
ICML-2018-YinCRB #distributed #learning #statistics #towards
Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates (DY, YC0, KR, PLB), pp. 5636–5645.
ICML-2018-YinZ
Semi-Implicit Variational Inference (MY, MZ), pp. 5646–5655.
ICML-2018-LiM18a
Disentangled Sequential Autoencoder (YL, SM), pp. 5656–5665.
ICML-2018-YonaR #approximate #learning
Probably Approximately Metric-Fair Learning (GY, GNR), pp. 5666–5674.
ICML-2018-YoonJS #generative #named #using
GAIN: Missing Data Imputation using Generative Adversarial Nets (JY, JJ, MvdS), pp. 5675–5684.
ICML-2018-YoonJS18a #dataset #generative #modelling #multi #named #network #predict #using
RadialGAN: Leveraging multiple datasets to improve target-specific predictive models using Generative Adversarial Networks (JY, JJ, MvdS), pp. 5685–5693.
ICML-2018-YouYRHL #generative #graph #modelling #named
GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models (JY, RY, XR, WLH, JL), pp. 5694–5703.
ICML-2018-YuanST #algorithm #clustering #performance
An Efficient Semismooth Newton Based Algorithm for Convex Clustering (YY, DS, KCT), pp. 5704–5712.
ICML-2018-YurtseverFLC #framework #programming
A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming (AY, OF, FL, VC), pp. 5713–5722.
ICML-2018-ZadikMS #machine learning #orthogonal
Orthogonal Machine Learning: Power and Limitations (IZ, LWM, VS), pp. 5723–5731.
ICML-2018-ZanetteB #bound #identification #learning #problem
Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs (AZ, EB), pp. 5732–5740.
ICML-2018-ZhangCLC #optimisation #policy
Policy Optimization as Wasserstein Gradient Flows (RZ, CC, CL, LC), pp. 5741–5750.
ICML-2018-ZhangDG #induction #matrix #multi #performance
Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow (XZ, SSD, QG), pp. 5751–5760.
ICML-2018-ZhangFS #estimation #matrix #scalability
Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion (RYZ, SF, SS), pp. 5761–5770.
ICML-2018-ZhangFL #performance
High Performance Zero-Memory Overhead Direct Convolutions (JZ, FF, TML), pp. 5771–5780.
ICML-2018-ZhangHMLZ
Safe Element Screening for Submodular Function Minimization (WZ, BH, LM0, WL0, TZ0), pp. 5781–5790.
ICML-2018-ZhangKL #algorithm #distributed #privacy
Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms (XZ, MMK, ML), pp. 5791–5800.
ICML-2018-ZhangLD #network #performance
Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization (JZ, QL, ISD), pp. 5801–5809.
ICML-2018-ZhangLSD #dependence #fourier #learning
Learning Long Term Dependencies via Fourier Recurrent Units (JZ, YL, ZS, ISD), pp. 5810–5818.
ICML-2018-ZhangNL #geometry #network
Tropical Geometry of Deep Neural Networks (LZ, GN, LHL), pp. 5819–5827.
ICML-2018-ZhangP #parametricity
Deep Bayesian Nonparametric Tracking (AZ, JWP), pp. 5828–5836.
ICML-2018-ZhangSLSF #composition
Composable Planning with Attributes (AZ, SS, AL, AS, RF), pp. 5837–5846.
ICML-2018-ZhangSDG
Noisy Natural Gradient as Variational Inference (GZ, SS, DD, RBG), pp. 5847–5856.
ICML-2018-ZhangWYG #analysis #matrix #rank
A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery (XZ, LW, YY, QG), pp. 5857–5866.
ICML-2018-ZhangYL0B #distributed #learning #multi
Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents (KZ, ZY, HL0, TZ0, TB), pp. 5867–5876.
ICML-2018-ZhangYJZ #adaptation
Dynamic Regret of Strongly Adaptive Methods (LZ0, TY, RJ, ZHZ), pp. 5877–5886.
ICML-2018-ZhaoDBZ #learning #topic #word
Inter and Intra Topic Structure Learning with Word Embeddings (HZ, LD, WLB, MZ), pp. 5887–5896.
ICML-2018-ZhaoKZRL
Adversarially Regularized Autoencoders (JJZ, YK, KZ, AMR, YL), pp. 5897–5906.
ICML-2018-Zhao0FYW #estimation #feature model #learning
MSplit LBI: Realizing Feature Selection and Dense Estimation Simultaneously in Few-shot and Zero-shot Learning (BZ, XS0, YF, YY0, YW), pp. 5907–5916.
ICML-2018-ZhaoX #modelling #random
Composite Marginal Likelihood Methods for Random Utility Models (ZZ, LX), pp. 5917–5926.
ICML-2018-0004K #finite #infinity #lightweight #optimisation #probability
Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data (SZ0, JTYK), pp. 5927–5935.
ICML-2018-ZhengPF #approach #robust
A Robust Approach to Sequential Information Theoretic Planning (SZ, JP, JWFI), pp. 5936–5944.
ICML-2018-ZhitnikovMM #behaviour #statistics
Revealing Common Statistical Behaviors in Heterogeneous Populations (AZ, RM, TM), pp. 5945–5954.
ICML-2018-ZhouF #comprehension #optimisation #performance
Understanding Generalization and Optimization Performance of Deep CNNs (PZ, JF), pp. 5955–5964.
ICML-2018-ZhouMBGYLF #bound #distributed #how #optimisation #question
Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go? (ZZ, PM, NB, PWG, YY, LJL, LFF0), pp. 5965–5974.
ICML-2018-ZhouSC #algorithm #convergence #performance #probability
A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates (KZ, FS, JC), pp. 5975–5984.
ICML-2018-ZhouXG #polynomial #probability
Stochastic Variance-Reduced Cubic Regularized Newton Method (DZ, PX0, QG), pp. 5985–5994.
ICML-2018-ZhouZZ #algorithm #performance
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors (YZ, JZ0, JZ), pp. 5995–6003.
ICML-2018-ZhuL #communication #constraints #distributed #parametricity
Distributed Nonparametric Regression under Communication Constraints (YZ, JL), pp. 6004–6012.
ICML-2018-ZhuoLSZCZ #message passing
Message Passing Stein Variational Gradient Descent (JZ, CL0, JS, JZ0, NC, BZ0), pp. 6013–6022.
ICML-2018-ZouXG #monte carlo #probability
Stochastic Variance-Reduced Hamilton Monte Carlo Methods (DZ, PX0, QG), pp. 6023–6032.
ICML-2018-WichersVEL #predict #video
Hierarchical Long-term Video Prediction without Supervision (NW, RV, DE, HL), pp. 6033–6041.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.