Proceedings of the 34th International Conference on Machine Learning

Edited by Doina Precup and Yee Whye Teh.
ICML, 2017.

@proceedings{ICML-2017,
	editor        = "Doina Precup and Yee Whye Teh",
	ee            = "http://proceedings.mlr.press/v70/",
	publisher     = "{PMLR}",
	series        = "{Proceedings of Machine Learning Research}",
	title         = "{Proceedings of the 34th International Conference on Machine Learning}",
	volume        = 70,
	year          = 2017,
}
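To use this record in a LaTeX document, save it to a bibliography file and cite it by its key, `ICML-2017`. A minimal sketch (the filename `icml2017.bib` is a hypothetical choice, not part of the record):

```latex
% Minimal usage sketch: cite the proceedings volume by its BibTeX key.
% Assumes the @proceedings entry above is saved as icml2017.bib (hypothetical name).
\documentclass{article}
\begin{document}
As collected in the ICML 2017 proceedings~\cite{ICML-2017}.
\bibliographystyle{plain}
\bibliography{icml2017}
\end{document}
```

Individual papers from the volume would normally get their own `@inproceedings` entries (for example under their DBLP keys such as `ICML-2017-ArjovskyCB`) rather than citing the volume record directly.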

Contents (434 items)

ICML-2017-AchabBGMM #multi
Uncovering Causality from Multivariate Hawkes Integrated Cumulants (MA, EB, SG, IM, JFM), pp. 1–10.
ICML-2017-AcharyaDOS #approach #symmetry
A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions (JA, HD, AO, ATS), pp. 11–21.
ICML-2017-AchiamHTA #optimisation #policy
Constrained Policy Optimization (JA, DH, AT, PA), pp. 22–31.
ICML-2017-AgarwalS #difference #learning #online #privacy
The Price of Differential Privacy for Online Learning (NA, KS), pp. 32–40.
ICML-2017-AkrourS0N #optimisation
Local Bayesian Optimization of Motor Skills (RA, DS, JP0, GN), pp. 41–50.
ICML-2017-AksoylarOS #detection
Connected Subgraph Detection with Mirror Descent on SDPs (CA, LO, VS), pp. 51–59.
ICML-2017-AlaaHS #learning #process
Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis (AMA, SH, MvdS), pp. 60–69.
ICML-2017-AliWK #performance #programming
A Semismooth Newton Method for Fast, Generic Convex Programming (AA, EW, JZK), pp. 70–79.
ICML-2017-AllamanisCKS #learning #semantics
Learning Continuous Semantic Representations of Symbolic Expressions (MA, PC, PK, CAS), pp. 80–88.
ICML-2017-Allen-Zhu #named #optimisation #parametricity #performance #probability
Natasha: Faster Non-Convex Stochastic Optimization via Strongly Non-Convex Parameter (ZAZ), pp. 89–97.
ICML-2017-Allen-ZhuL #performance
Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition (ZAZ, YL), pp. 98–106.
ICML-2017-Allen-ZhuL17a #approximate #component #matrix #performance
Faster Principal Component Regression and Stable Matrix Chebyshev Approximation (ZAZ, YL), pp. 107–115.
ICML-2017-Allen-ZhuL17b #learning #online #performance
Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU (ZAZ, YL), pp. 116–125.
ICML-2017-Allen-ZhuLSW #design
Near-Optimal Design of Experiments via Regret Minimization (ZAZ, YL, AS, YW), pp. 126–135.
ICML-2017-AmosK #named #network #optimisation
OptNet: Differentiable Optimization as a Layer in Neural Networks (BA, JZK), pp. 136–145.
ICML-2017-AmosXK #network
Input Convex Neural Networks (BA, LX, JZK), pp. 146–155.
ICML-2017-AndersonG #algorithm #approximate #online #performance #rank
An Efficient, Sparsity-Preserving, Online Algorithm for Low-Rank Approximation (DGA, MG0), pp. 156–165.
ICML-2017-AndreasKL #composition #learning #multi #policy #sketching
Modular Multitask Reinforcement Learning with Policy Sketches (JA, DK, SL), pp. 166–175.
ICML-2017-AnschelBS #learning #named #reduction
Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning (OA, NB, NS), pp. 176–185.
ICML-2017-AppelP #empirical #framework #multi
A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency (RA, PP), pp. 186–194.
ICML-2017-ArikCCDGKLMNRSS #realtime
Deep Voice: Real-time Neural Text-to-Speech (SÖA, MC, AC, GFD, AG, YK, XL, JM, AYN, JR, SS, MS), pp. 195–204.
ICML-2017-ArjevaniS #complexity #higher-order #problem
Oracle Complexity of Second-Order Methods for Finite-Sum Problems (YA, OS), pp. 205–213.
ICML-2017-ArjovskyCB #generative #network
Wasserstein Generative Adversarial Networks (MA, SC, LB), pp. 214–223.
ICML-2017-Arora0LMZ #equilibrium #generative
Generalization and Equilibrium in Generative Adversarial Nets (GANs) (SA, RG0, YL, TM, YZ), pp. 224–232.
ICML-2017-ArpitJBKBKMFCBL #network
A Closer Look at Memorization in Deep Networks (DA, SJ, NB, DK, EB, MSK, TM, AF, ACC, YB, SLJ), pp. 233–242.
ICML-2017-AsadiL #learning
An Alternative Softmax Operator for Reinforcement Learning (KA, MLL), pp. 243–252.
ICML-2017-AvronKMMVZ #approximate #bound #fourier #kernel #random #statistics
Random Fourier Features for Kernel Ridge Regression: Approximation Bounds and Statistical Guarantees (HA, MK, CM, CM, AV, AZ), pp. 253–262.
ICML-2017-AzarOM #bound #learning
Minimax Regret Bounds for Reinforcement Learning (MGA, IO, RM), pp. 263–272.
ICML-2017-BachHRR #generative #learning #modelling
Learning the Structure of Generative Models without Labeled Data (SHB, BDH, AR, CR), pp. 273–282.
ICML-2017-BachemLH0 #bound #clustering
Uniform Deviation Bounds for k-Means Clustering (OB, ML, SHH, AK0), pp. 283–291.
ICML-2017-BachemL0 #constant #distributed
Distributed and Provably Good Seedings for k-Means in Constant Rounds (OB, ML, AK0), pp. 292–300.
ICML-2017-BachmanST #algorithm #learning
Learning Algorithms for Active Learning (PB, AS, AT), pp. 301–310.
ICML-2017-BackursT #algorithm #clique #performance
Improving Viterbi is Hard: Better Runtimes Imply Faster Clique Algorithms (AB, CT), pp. 311–321.
ICML-2017-BalcanDLMZ #clustering
Differentially Private Clustering in High-Dimensional Euclidean Spaces (MFB, TD, YL, WM, HZ0), pp. 322–331.
ICML-2017-Balduzzi
Strongly-Typed Agents are Guaranteed to Interact Safely (DB), pp. 332–341.
ICML-2017-BalduzziFLLMM #problem #question #what
The Shattered Gradients Problem: If resnets are the answer, then what is the question? (DB, MF, LL, JPL, KWDM, BM), pp. 342–350.
ICML-2017-BalduzziMB #approximate #convergence #network
Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks (DB, BM, TBY), pp. 351–360.
ICML-2017-BalleM #finite #learning #policy
Spectral Learning from a Single Trajectory under Finite-State Policies (BB, OAM), pp. 361–370.
ICML-2017-BalogTGW
Lost Relatives of the Gumbel Trick (MB, NT, ZG, AW), pp. 371–379.
ICML-2017-BamlerM #word
Dynamic Word Embeddings (RB, SM), pp. 380–389.
ICML-2017-BaramACM #learning
End-to-End Differentiable Adversarial Imitation Learning (NB, OA, IC, SM), pp. 390–399.
ICML-2017-BarmannPS #learning #online #optimisation
Emulating the Expert: Inverse Optimization through Online Learning (AB, SP, OS), pp. 400–410.
ICML-2017-BeckhamP #classification #probability
Unimodal Probability Distributions for Deep Ordinal Classification (CB, CJP), pp. 411–419.
ICML-2017-BegonJG
Globally Induced Forest: A Prepruning Compression Scheme (JMB, AJ, PG), pp. 420–428.
ICML-2017-BelangerYM #energy #learning #network #predict
End-to-End Learning for Structured Prediction Energy Networks (DB, BY, AM), pp. 429–439.
ICML-2017-BelilovskyKVB #learning #modelling #visual notation
Learning to Discover Sparse Graphical Models (EB, KK, GV, MBB), pp. 440–448.
ICML-2017-BellemareDM #learning
A Distributional Perspective on Reinforcement Learning (MGB, WD, RM), pp. 449–458.
ICML-2017-BelloZVL #learning
Neural Optimizer Search with Reinforcement Learning (IB, BZ, VV, QVL), pp. 459–468.
ICML-2017-BergmannJV #learning
Learning Texture Manifolds with the Periodic Spatial GAN (UB, NJ, RV), pp. 469–477.
ICML-2017-BernsteinMSSHM #learning #modelling #using #visual notation
Differentially Private Learning of Undirected Graphical Models Using Collective Graphical Models (GB, RM, TS, DS, MH, GM), pp. 478–487.
ICML-2017-BeygelzimerOZ #learning #multi #online #performance
Efficient Online Bandit Multiclass Learning with Õ(√T) Regret (AB, FO, CZ), pp. 488–497.
ICML-2017-BianB0T
Guarantees for Greedy Maximization of Non-submodular Functions with Applications (AAB, JMB, AK0, ST), pp. 498–507.
ICML-2017-BogunovicMSC #approach #clustering #robust
Robust Submodular Maximization: A Non-Uniform Partitioning Approach (IB, SM, JS, VC), pp. 508–516.
ICML-2017-BojanowskiJ #learning #predict
Unsupervised Learning by Predicting Noise (PB, AJ), pp. 517–526.
ICML-2017-BolukbasiWDS #adaptation #network #performance
Adaptive Neural Networks for Efficient Inference (TB, JW0, OD, VS), pp. 527–536.
ICML-2017-BoraJPD #generative #modelling #using
Compressed Sensing using Generative Models (AB, AJ, EP, AGD), pp. 537–546.
ICML-2017-BosnjakRNR #interpreter #programming
Programming with a Differentiable Forth Interpreter (MB, TR, JN, SR0), pp. 547–556.
ICML-2017-BotevRB #learning #optimisation
Practical Gauss-Newton Optimisation for Deep Learning (AB, HR, DB), pp. 557–565.
ICML-2017-BraunPZ #algorithm
Lazifying Conditional Gradient Algorithms (GB, SP, DZ), pp. 566–575.
ICML-2017-BravermanFLSY #clustering #data type
Clustering High Dimensional Dynamic Data Streams (VB, GF, HL, CS, LFY), pp. 576–585.
ICML-2017-BriolOCCG #kernel #on the #problem
On the Sampling Problem for Kernel Quadrature (FXB, CJO, JC, WYC, MAG), pp. 586–595.
ICML-2017-BrownS #convergence #game studies #performance
Reduced Space and Faster Convergence in Imperfect-Information Games via Pruning (NB, TS), pp. 596–604.
ICML-2017-BrutzkusG
Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs (AB, AG), pp. 605–614.
ICML-2017-BuddenMSCS #multi
Deep Tensor Convolution on Multicores (DMB, AM, SS, SRC, NS), pp. 615–624.
ICML-2017-Busa-FeketeSWM #multi #optimisation
Multi-objective Bandits: Optimizing the Generalized Gini Index (RBF, BS, PW, SM), pp. 625–634.
ICML-2017-CaiDK #performance #testing
Priv'IT: Private and Sample Efficient Identity Testing (BC, CD, GK0), pp. 635–644.
ICML-2017-CalandrielloLV #adaptation #higher-order #kernel #online #optimisation #sketching
Second-Order Kernel Online Convex Optimization with Adaptive Sketching (DC, AL, MV), pp. 645–653.
ICML-2017-CarmonDHS #quote
“Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions (YC, JCD, OH, AS), pp. 654–663.
ICML-2017-CarriereCO #diagrams #kernel #persistent #slicing
Sliced Wasserstein Kernel for Persistence Diagrams (MC, MC, SO), pp. 664–673.
ICML-2017-ChangCCCSD #clustering #multi #nondeterminism
Multiple Clustering Views from Multiple Uncertain Experts (YC, JC, MHC, PJC, EKS, JGD), pp. 674–683.
ICML-2017-ChaudhryXG #assessment #nondeterminism
Uncertainty Assessment and False Discovery Rate Control in High-Dimensional Granger Causal Inference (AC, PX0, QG), pp. 684–693.
ICML-2017-Chaudhuri0N
Active Heteroscedastic Regression (KC, PJ0, NN), pp. 694–702.
ICML-2017-ChebotarHZSSL #learning #modelling
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning (YC, KH, MZ, GSS, SS, SL), pp. 703–711.
ICML-2017-ChenB #estimation #modelling #robust
Robust Structured Estimation with Single-Index Models (SC, AB), pp. 712–721.
ICML-2017-ChenC0Z #adaptation #identification #multi
Adaptive Multiple-Arm Identification (JC, XC0, QZ0, YZ0), pp. 722–730.
ICML-2017-ChenF
Dueling Bandits with Weak Regret (BC, PIF), pp. 731–739.
ICML-2017-ChenGWWYY #np-hard #optimisation
Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions (YC, DG, MW, ZW, YY, HY), pp. 740–747.
ICML-2017-ChenHCDLBF #learning
Learning to Learn without Gradient Descent by Gradient Descent (YC, MWH, SGC, MD, TPL, MB, NdF), pp. 748–756.
ICML-2017-ChenKB #equation #identification #linear #modelling #testing #using
Identification and Model Testing in Linear Structural Equation Models using Auxiliary Variables (BC, DK, EB), pp. 757–766.
ICML-2017-ChenLK #estimation #matrix #performance #towards
Toward Efficient and Accurate Covariance Matrix Estimation on Compressed Data (XC, MRL, IK), pp. 767–776.
ICML-2017-ChenYLZ #online #optimisation #performance #scalability
Online Partial Least Square Optimization: Dropping Convexity for Better Efficiency and Scalability (ZC, LFY, CJL, TZ), pp. 777–786.
ICML-2017-ChenZLHH #learning
Learning to Aggregate Ordinal Labels by Maximizing Separating Width (GC, SZ, DL, HH0, PAH), pp. 787–796.
ICML-2017-CherapanamjeriG #matrix #robust
Nearly Optimal Robust Matrix Completion (YC, KG, PJ0), pp. 797–805.
ICML-2017-ChierichettiG0L #algorithm #approximate #rank
Algorithms for lₚ Low-Rank Approximation (FC, SG, RK0, SL, RP, DPW), pp. 806–814.
ICML-2017-ChoB #named #network
MEC: Memory-efficient Convolution for Deep Neural Network (MC, DB), pp. 815–824.
ICML-2017-ChoiD #on the
On Relaxing Determinism in Arithmetic Circuits (AC, AD), pp. 825–833.
ICML-2017-ChouMS #learning #policy #probability #using
Improving Stochastic Policy Gradients in Continuous Control with Deep Reinforcement Learning using the Beta Distribution (PWC, DM, SAS), pp. 834–843.
ICML-2017-ChowdhuryG #kernel #multi #on the
On Kernelized Multi-armed Bandits (SRC, AG), pp. 844–853.
ICML-2017-CisseBGDU #network #robust
Parseval Networks: Improving Robustness to Adversarial Examples (MC, PB, EG, YND, NU), pp. 854–863.
ICML-2017-CongCLZ #adaptation #probability #topic
Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC (YC, BC0, HL, MZ), pp. 864–873.
ICML-2017-CortesGKMY #adaptation #learning #named #network
AdaNet: Adaptive Structural Learning of Artificial Neural Networks (CC, XG, VK, MM, SY), pp. 874–883.
ICML-2017-CutajarBMF #process #random
Random Feature Expansions for Deep Gaussian Processes (KC, EVB, PM, MF), pp. 884–893.
ICML-2017-CuturiB #named
Soft-DTW: a Differentiable Loss Function for Time-Series (MC, MB), pp. 894–903.
ICML-2017-CzarneckiSJOVK #comprehension #interface
Understanding Synthetic Gradients and Decoupled Neural Interfaces (WMC, GS, MJ, SO, OV, KK), pp. 904–912.
ICML-2017-DaiGKHS #generative #probability
Stochastic Generative Hashing (BD, RG, SK, NH, LS), pp. 913–922.
ICML-2017-DaumeKLM
Logarithmic Time One-Against-Some (HDI, NK, JL0, PM), pp. 923–932.
ICML-2017-DauphinFAG #modelling #network
Language Modeling with Gated Convolutional Networks (YND, AF, MA, DG), pp. 933–941.
ICML-2017-DawsonHM #infinity #markov
An Infinite Hidden Markov Model With Similarity-Biased Transitions (CRD, CH, CTM), pp. 942–950.
ICML-2017-DaxbergerL #distributed #optimisation #process
Distributed Batch Gaussian Process Optimization (EAD, BKHL), pp. 951–960.
ICML-2017-DembczynskiKKN #analysis #classification #consistency #revisited
Consistency Analysis for Binary Classification Revisited (KD, WK, OK, NN), pp. 961–969.
ICML-2017-DempseyMSDGMR #named #predict
iSurvive: An Interpretable, Event-time Prediction Model for mHealth (WHD, AM, CKS, MLD, DHG, SAM, JMR), pp. 970–979.
ICML-2017-DengKLR #generative
Image-to-Markup Generation with Coarse-to-Fine Attention (YD, AK, JL, AMR), pp. 980–989.
ICML-2017-DevlinUBSMK #learning #named
RobustFill: Neural Program Learning under Noisy I/O (JD, JU, SB, RS, ArM, PK), pp. 990–998.
ICML-2017-DiakonikolasKK0 #robust
Being Robust (in High Dimensions) Can Be Practical (ID, GK0, DMK, JL0, AM, AS), pp. 999–1008.
ICML-2017-DinhBZM #monte carlo #probability
Probabilistic Path Hamiltonian Monte Carlo (VD, AB, CZ, FAMI), pp. 1009–1018.
ICML-2017-DinhPBB
Sharp Minima Can Generalize For Deep Nets (LD, RP, SB, YB), pp. 1019–1028.
ICML-2017-Domke #bound
A Divergence Bound for Hybrids of MCMC and Variational Inference and an Application to Langevin Dynamics and SGVI (JD), pp. 1029–1038.
ICML-2017-DonahueLM
Dance Dance Convolution (CD, ZCL, JJM), pp. 1039–1048.
ICML-2017-DuCLXZ #evaluation #policy #probability #reduction
Stochastic Variance Reduction Methods for Policy Evaluation (SSD, JC, LL0, LX, DZ), pp. 1049–1058.
ICML-2017-EcksteinGK #generative #using
Rule-Enhanced Penalized Regression by Column Generation using Rectangular Maximum Agreement (JE, NG, AK), pp. 1059–1067.
ICML-2017-EngelRRDNES #synthesis
Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders (JHE, CR, AR, SD, MN0, DE, KS), pp. 1068–1077.
ICML-2017-FahandarHC #ranking #statistics
Statistical Inference for Incomplete Ranking Data: The Case of Rank-Dependent Coarsening (MAF, EH, IC), pp. 1078–1087.
ICML-2017-FalahatgarOPS #ranking
Maximum Selection and Ranking under Noisy Comparisons (MF, AO, VP, ATS), pp. 1088–1096.
ICML-2017-FarajtabarYYXTK #process
Fake News Mitigation via Point Process Based Intervention (MF, JY, XY, HX, RT, EBK, SL0, LS, HZ), pp. 1097–1106.
ICML-2017-FarinaKS #behaviour #game studies
Regret Minimization in Behaviorally-Constrained Zero-Sum Games (GF, CK, TS), pp. 1107–1116.
ICML-2017-FeldmanOR #graph #network #summary
Coresets for Vector Summarization with Applications to Network Graphs (DF, SO, DR), pp. 1117–1125.
ICML-2017-FinnAL #adaptation #network #performance
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (CF, PA, SL), pp. 1126–1135.
ICML-2017-FoersterGSCS #architecture #network
Input Switched Affine Networks: An RNN Architecture Designed for Interpretability (JNF, JG, JSD, JC, DS), pp. 1136–1145.
ICML-2017-FoersterNFATKW #experience #learning #multi
Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning (JNF, NN, GF, TA, PHST, PK, SW), pp. 1146–1155.
ICML-2017-ForneyPB #online
Counterfactual Data-Fusion for Online Reinforcement Learners (AF, JP, EB), pp. 1156–1164.
ICML-2017-FranceschiDFP #optimisation
Forward and Reverse Gradient-Based Hyperparameter Optimization (LF, MD, PF, MP), pp. 1165–1173.
ICML-2017-FutomaHH #classification #detection #learning #multi #process
Learning to Detect Sepsis with a Multitask Gaussian Process RNN Classifier (JF, SH, KAH), pp. 1174–1182.
ICML-2017-GalIG #image #learning
Deep Bayesian Active Learning with Image Data (YG, RI, ZG), pp. 1183–1192.
ICML-2017-GaoFC #learning #network
Local-to-Global Bayesian Network Structure Learning (TG, KPF, MC), pp. 1193–1202.
ICML-2017-GarberSS #algorithm #analysis #component #distributed #probability
Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis (DG, OS, NS), pp. 1203–1212.
ICML-2017-GauntBKT #library #source code
Differentiable Programs with Neural Libraries (ALG, MB, NK, DT), pp. 1213–1222.
ICML-2017-GautierBV #performance
Zonotope Hit-and-run for Efficient Sampling from Projection DPPs (GG, RB, MV), pp. 1223–1232.
ICML-2017-0001JZ #analysis #geometry #problem #rank
No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis (RG0, CJ, YZ), pp. 1233–1242.
ICML-2017-GehringAGYD #learning #sequence
Convolutional Sequence to Sequence Learning (JG, MA, DG, DY, YND), pp. 1243–1252.
ICML-2017-GentileLKKZE #clustering #on the
On Context-Dependent Clustering of Bandits (CG, SL, PK, AK, GZ, EE), pp. 1253–1262.
ICML-2017-GilmerSRVD #message passing #quantum
Neural Message Passing for Quantum Chemistry (JG, SSS, PFR, OV, GED), pp. 1263–1272.
ICML-2017-GoldsteinS #retrieval
Convex Phase Retrieval without Lifting via PhaseMax (TG, CS), pp. 1273–1281.
ICML-2017-GonzalezDDL #optimisation
Preferential Bayesian Optimization (JG, ZD, ACD, NDL), pp. 1282–1291.
ICML-2017-GorhamM #kernel #quality
Measuring Sample Quality with Kernels (JG, LWM), pp. 1292–1301.
ICML-2017-GraveJCGJ #approximate #performance
Efficient softmax approximation for GPUs (EG, AJ, MC, DG, HJ), pp. 1302–1310.
ICML-2017-GravesBMMK #automation #education #learning #network
Automated Curriculum Learning for Neural Networks (AG, MGB, JM, RM, KK), pp. 1311–1320.
ICML-2017-GuoPSW #network #on the
On Calibration of Modern Neural Networks (CG, GP, YS0, KQW), pp. 1321–1330.
ICML-2017-GuptaSGSPKGUV0 #named
ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices (CG, ASS, AG, HVS, BP, AK, SG, RU, MV, PJ0), pp. 1331–1340.
ICML-2017-GygliNA #network
Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs (MG, MN0, AA), pp. 1341–1351.
ICML-2017-HaarnojaTAL #energy #learning #policy
Reinforcement Learning with Deep Energy-Based Policies (TH, HT, PA, SL), pp. 1352–1361.
ICML-2017-HadjeresPN #generative #named
DeepBach: a Steerable Model for Bach Chorales Generation (GH, FP, FN), pp. 1362–1371.
ICML-2017-HallakM #consistency #evaluation #online
Consistent On-Line Off-Policy Evaluation (AH, SM), pp. 1372–1383.
ICML-2017-HanKPS #performance #process
Faster Greedy MAP Inference for Determinantal Point Processes (IH, PK, KP, JS), pp. 1384–1393.
ICML-2017-HannaTSN #behaviour #evaluation #policy
Data-Efficient Policy Evaluation Through Behavior Policy Search (JPH, PST, PS, SN), pp. 1394–1403.
ICML-2017-HarandiSH #geometry #learning #metric #reduction
Joint Dimensionality Reduction and Metric Learning: A Geometric Take (MTH, MS, RIH), pp. 1404–1413.
ICML-2017-HartfordLLT #approach #flexibility #predict
Deep IV: A Flexible Approach for Counterfactual Prediction (JSH, GL, KLB, MT), pp. 1414–1423.
ICML-2017-HassidimS #algorithm #probability #robust
Robust Guarantees of Stochastic Greedy Algorithms (AH, YS), pp. 1424–1432.
ICML-2017-HazanSZ #game studies #performance
Efficient Regret Minimization in Non-Convex Games (EH, KS, CZ), pp. 1433–1441.
ICML-2017-HeLMWSYR #kernel
Kernelized Support Tensor Machines (LH0, CTL, GM, SW, LS, PSY, ABR), pp. 1442–1451.
ICML-2017-HeckelR #collaboration #complexity #online
The Sample Complexity of Online One-Class Collaborative Filtering (RH, KR), pp. 1452–1460.
ICML-2017-HenriquesV #performance
Warped Convolutions: Efficient Invariance to Spatial Transformations (JFH, AV), pp. 1461–1469.
ICML-2017-Hernandez-Lobato #distributed #parallel #scalability
Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space (JMHL, JR, EOPK, AAG), pp. 1470–1479.
ICML-2017-HigginsPRMBPBBL #learning #named
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning (IH, AP, AAR, LM, CB, AP, MB, CB, AL), pp. 1480–1490.
ICML-2017-HirayamaHK #named
SPLICE: Fully Tractable Hierarchical Extension of ICA with Pooling (JH, AH, MK), pp. 1491–1500.
ICML-2017-HoNYBHP #clustering #multi
Multilevel Clustering via Wasserstein Means (NH, XN, MY, HHB, VH, DQP), pp. 1501–1509.
ICML-2017-Hoffman #learning #markov #modelling #monte carlo
Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo (MDH), pp. 1510–1519.
ICML-2017-HonerNBMG #detection #robust #trust
Minimizing Trust Leaks for Robust Sybil Detection (JH, SN, AB0, KRM, NG), pp. 1520–1528.
ICML-2017-HongHZ #algorithm #distributed #learning #named #network #optimisation #performance
Prox-PDA: The Proximal Primal-Dual Algorithm for Fast Distributed Nonconvex Optimization and Learning Over Networks (MH, DH, MMZ), pp. 1529–1538.
ICML-2017-HornakovaLA #analysis #graph #multi #optimisation
Analysis and Optimization of Graph Decompositions by Lifted Multicuts (AH, JHL, BA), pp. 1539–1548.
ICML-2017-HuL
Dissipativity Theory for Nesterov's Accelerated Method (BH, LL), pp. 1549–1557.
ICML-2017-HuMTMS #learning #self
Learning Discrete Representations via Information Maximizing Self-Augmented Training (WH, TM, ST, EM, MS), pp. 1558–1567.
ICML-2017-HuQ #memory management #network
State-Frequency Memory Recurrent Neural Networks (HH, GJQ), pp. 1568–1577.
ICML-2017-HuRC #generative #modelling #relational
Deep Generative Models for Relational Data with Side Information (CH, PR, LC), pp. 1578–1586.
ICML-2017-HuYLSX #generative #towards
Toward Controlled Generation of Text (ZH, ZY, XL, RS, EPX), pp. 1587–1596.
ICML-2017-ImaizumiH #composition
Tensor Decomposition with Smoothness (MI, KH), pp. 1597–1606.
ICML-2017-IngrahamM #modelling
Variational Inference for Sparse and Undirected Models (JI, DSM), pp. 1607–1616.
ICML-2017-JabbariJKMR #learning
Fairness in Reinforcement Learning (SJ, MJ, MJK, JM, AR0), pp. 1617–1626.
ICML-2017-JaderbergCOVGSK #interface #using
Decoupled Neural Interfaces using Synthetic Gradients (MJ, WMC, SO, OV, AG, DS, KK), pp. 1627–1635.
ICML-2017-JainMR #generative #learning #modelling #multi #scalability
Scalable Generative Models for Multi-label Learning with Missing Labels (VJ, NM, PR), pp. 1636–1644.
ICML-2017-JaquesGBHTE #generative #modelling #sequence
Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control (NJ, SG, DB, JMHL, RET, DE), pp. 1645–1654.
ICML-2017-JenattonAGS #dependence #optimisation
Bayesian Optimization with Tree-structured Dependencies (RJ, CA, JG, MWS), pp. 1655–1664.
ICML-2017-JerniteCS #classification #estimation #learning
Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation (YJ, AC, DAS), pp. 1665–1674.
ICML-2017-JiHS #generative #image #parametricity
From Patches to Images: A Nonparametric Generative Model (GJ0, MCH, EBS), pp. 1675–1683.
ICML-2017-Jiang #estimation #set
Density Level Set Estimation on Manifolds with DBSCAN (HJ), pp. 1684–1693.
ICML-2017-Jiang17a #convergence #estimation #kernel
Uniform Convergence Rates for Kernel Density Estimation (HJ), pp. 1694–1703.
ICML-2017-JiangKALS #process #rank
Contextual Decision Processes with low Bellman rank are PAC-Learnable (NJ, AK, AA, JL0, RES), pp. 1704–1713.
ICML-2017-JiangMCSMG #performance
Efficient Nonmyopic Active Search (SJ, GM, GC, AS, BM, RG), pp. 1714–1723.
ICML-2017-Jin0NKJ #how
How to Escape Saddle Points Efficiently (CJ, RG0, PN, SMK, MIJ), pp. 1724–1732.
ICML-2017-JingSDPSLTS #network #performance
Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs (LJ, YS, TD, JP, SAS, YL, MT, MS), pp. 1733–1741.
ICML-2017-Jitkrittum0G #adaptation #independence #kernel
An Adaptive Test of Independence with Analytic Kernel Embeddings (WJ, ZS0, AG), pp. 1742–1751.
ICML-2017-JohnsonG #coordination #named
StingyCD: Safely Avoiding Wasteful Updates in Coordinate Descent (TBJ, CG), pp. 1752–1760.
ICML-2017-KakizakiFS
Differentially Private Chi-squared Test by Unit Circle Mechanism (KK, KF, JS), pp. 1761–1770.
ICML-2017-KalchbrennerOSD #network #video
Video Pixel Networks (NK, AvdO, KS, ID, OV, AG, KK), pp. 1771–1779.
ICML-2017-KaleKLP #adaptation #feature model #linear #online #performance
Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP (SK, ZSK, TL, DP), pp. 1780–1788.
ICML-2017-Kallus #clustering #personalisation #recursion #using
Recursive Partitioning for Personalization using Observational Data (NK), pp. 1789–1798.
ICML-2017-KandasamyDSP #approximate #multi #optimisation
Multi-fidelity Bayesian Optimisation with Continuous Approximations (KK, GD, JGS, BP), pp. 1799–1808.
ICML-2017-KanskySMELLDSPG #generative #network #physics
Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics (KK, TS, DAM, ME, MLG, XL, ND, SS, DSP, DG), pp. 1809–1818.
ICML-2017-KattOA #learning #monte carlo
Learning in POMDPs with Monte Carlo Tree Search (SK, FAO, CA), pp. 1819–1827.
ICML-2017-KearnsRW
Meritocratic Fairness for Cross-Population Selection (MJK, AR0, ZSW), pp. 1828–1836.
ICML-2017-KhannaEDGN #approximate #on the #optimisation #rank
On Approximation Guarantees for Greedy Low Rank Optimization (RK, ERE, AGD, JG, SNN), pp. 1837–1846.
ICML-2017-KhasanovaF #graph #invariant #learning #representation
Graph-based Isometry Invariant Representation Learning (RK, PF), pp. 1847–1856.
ICML-2017-KimCKLK #generative #learning #network
Learning to Discover Cross-Domain Relations with Generative Adversarial Networks (TK, MC, HK, JKL, JK), pp. 1857–1865.
ICML-2017-KimPKH #learning #named #network #parallel #parametricity #reduction #semantics
SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization (JK, YP, GK, SJH), pp. 1866–1874.
ICML-2017-KocaogluDV #graph #learning
Cost-Optimal Learning of Causal Graphs (MK, AD, SV), pp. 1875–1884.
ICML-2017-KohL #black box #comprehension #predict
Understanding Black-box Predictions via Influence Functions (PWK, PL), pp. 1885–1894.
ICML-2017-KohlerL #optimisation #polynomial
Sub-sampled Cubic Regularization for Non-convex Optimization (JMK, AL), pp. 1895–1904.
ICML-2017-KolesnikovL #image #modelling
PixelCNN Models with Auxiliary Variables for Natural Image Modeling (AK, CHL), pp. 1905–1914.
ICML-2017-KrishnamurthyAH #classification #learning
Active Learning for Cost-Sensitive Classification (AK, AA, TKH, HDI, JL0), pp. 1915–1924.
ICML-2017-KucukelbirWB #modelling
Evaluating Bayesian Models with Posterior Dispersion Indices (AK, YW, DMB), pp. 1925–1934.
ICML-2017-KumarGV #internet #machine learning #ram
Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things (AK, SG, MV), pp. 1935–1944.
ICML-2017-KusnerPH
Grammar Variational Autoencoder (MJK, BP, JMHL), pp. 1945–1954.
ICML-2017-LaclauRMBB #clustering
Co-clustering through Optimal Transport (CL, IR, BM, YB, VB), pp. 1955–1964.
ICML-2017-LanPZZ #lazy evaluation #probability
Conditional Accelerated Lazy Stochastic Gradient Descent (GL, SP, YZ, DZ), pp. 1965–1974.
ICML-2017-LattanziV #clustering #consistency
Consistent k-Clustering (SL, SV), pp. 1975–1984.
ICML-2017-LawUZ #clustering #learning
Deep Spectral Clustering Learning (MTL, RU, RSZ), pp. 1985–1994.
ICML-2017-LeY0L #coordination #learning #multi
Coordinated Multi-Agent Imitation Learning (HML0, YY, PC0, PL), pp. 1995–2003.
ICML-2017-LeeHGJC #graph #random
Bayesian inference on random simple graphs with power law degree distributions (JL, CH, ZG, LFJ, SC), pp. 2004–2013.
ICML-2017-LeeHPS #learning #multi
Confident Multiple Choice Learning (KL, CH, KP, JS), pp. 2014–2023.
ICML-2017-LeiJBJ #architecture #graph #kernel #sequence
Deriving Neural Architectures from Sequence and Graph Kernels (TL0, WJ, RB, TSJ), pp. 2024–2033.
ICML-2017-LeiYWDR #coordination #empirical
Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization (QL, IEHY, CYW, ISD, PR), pp. 2034–2042.
ICML-2017-LevyW #learning #source code
Learning to Align the Source Code to the Compiled Object Code (DL, LW), pp. 2043–2051.
ICML-2017-LiG #network
Dropout Inference in Bayesian Neural Networks with Alpha-divergences (YL, YG), pp. 2052–2061.
ICML-2017-LiL #correlation #matrix
Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations (YL, YL), pp. 2062–2070.
ICML-2017-LiLZ #algorithm #linear
Provably Optimal Algorithms for Generalized Linear Contextual Bandits (LL0, YL, DZ), pp. 2071–2080.
ICML-2017-LiM #nearest neighbour #performance
Fast k-Nearest Neighbour Search via Prioritized DCI (KL, JM), pp. 2081–2090.
ICML-2017-LiM17a #robust
Forest-type Regression with General Losses and Robust Forest (AHL, AM), pp. 2091–2100.
ICML-2017-LiTE #adaptation #algorithm #equation #probability
Stochastic Modified Equations and Adaptive Stochastic Gradient Algorithms (QL, CT, WE), pp. 2101–2110.
ICML-2017-LiZLV #analysis #convergence #optimisation
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization (QL, YZ, YL, PKV), pp. 2111–2119.
ICML-2017-LindgrenDK
Exact MAP Inference by Avoiding Fractional Vertices (EML, AGD, ARK), pp. 2120–2129.
ICML-2017-LiporB #clustering
Leveraging Union of Subspace Structure to Improve Constrained Clustering (JL, LB), pp. 2130–2139.
ICML-2017-LiuB #exponential #product line
Zero-Inflated Exponential Family Embeddings (LPL, DMB), pp. 2140–2148.
ICML-2017-LiuDHTYSRS #education
Iterative Machine Teaching (WL, BD, AH, CT, CY0, LBS, JMR, LS), pp. 2149–2158.
ICML-2017-LiuLNT #algorithm #complexity
Algorithmic Stability and Hypothesis Complexity (TL, GL, GN, DT), pp. 2159–2167.
ICML-2017-LiuWY #multi
Analogical Inference for Multi-relational Embeddings (HL, YW, YY), pp. 2168–2178.
ICML-2017-LiuYWLM
Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to Non-smooth Concave Maximization (BL0, XTY, LW, QL0, DNM), pp. 2179–2187.
ICML-2017-LiuZLS #automation #composition #named #sequence
Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence Labelling (HL, ZZ, XL, SS), pp. 2188–2197.
ICML-2017-LivniCG #infinity #kernel #learning #network
Learning Infinite Layer Networks Without the Kernel Trick (RL, DC, AG), pp. 2198–2207.
ICML-2017-LongZ0J #adaptation #learning #network
Deep Transfer Learning with Joint Adaptation Networks (ML, HZ, JW0, MIJ), pp. 2208–2217.
ICML-2017-LouizosW #multi #network #normalisation
Multiplicative Normalizing Flows for Variational Bayesian Neural Networks (CL, MW), pp. 2218–2227.
ICML-2017-Loukas #how #matrix #question
How Close Are the Eigenvectors of the Sample and Actual Covariance Matrices? (AL), pp. 2228–2237.
ICML-2017-Luo #architecture #learning #network
Learning Deep Architectures via Generalized Whitened Neural Networks (PL0), pp. 2238–2246.
ICML-2017-LvJL #learning
Learning Gradient Descent: Better Generalization and Longer Horizons (KL, SJ, JL), pp. 2247–2255.
ICML-2017-Lyu #approximate #kernel
Spherical Structured Feature Maps for Kernel Approximation (YL), pp. 2256–2264.
ICML-2017-MaFF #markov #modelling #probability
Stochastic Gradient MCMC Methods for Hidden Markov Models (YAM, NJF, EBF), pp. 2265–2274.
ICML-2017-MaMXLD #self
Self-Paced Co-training (FM, DM, QX, ZL, XD), pp. 2275–2284.
ICML-2017-MacGlashanHLPWR #feedback #interactive #learning
Interactive Learning from Policy-Dependent Human Feedback (JM, MKH, RTL, BP, GW, DLR, MET, MLL), pp. 2285–2294.
ICML-2017-MachadoBB #framework #learning
A Laplacian Framework for Option Discovery in Reinforcement Learning (MCM, MGB, MHB), pp. 2295–2304.
ICML-2017-MairBB
Frame-based Data Factorizations (SM, AB, UB), pp. 2305–2313.
ICML-2017-MalherbeV #optimisation
Global optimization of Lipschitz functions (CM, NV), pp. 2314–2323.
ICML-2017-MaoSC #matrix #on the #symmetry
On Mixed Memberships and Symmetric Nonnegative Matrix Factorizations (XM0, PS, DC), pp. 2324–2333.
ICML-2017-MasegosaNLRSM #data type #modelling
Bayesian Models of Data Streams with Hierarchical Power Priors (ARM, TDN, HL, DRL, AS, ALM), pp. 2334–2343.
ICML-2017-MaystreG #approach #effectiveness #exclamation #learning
Just Sort It! A Simple and Effective Approach to Active Preference Learning (LM, MG), pp. 2344–2353.
ICML-2017-MaystreG17a #identification #named #network
ChoiceRank: Identifying Preferences from Node Traffic in Networks (LM, MG), pp. 2354–2362.
ICML-2017-McGillP #how #network
Deciding How to Decide: Dynamic Routing in Artificial Neural Networks (MM, PP), pp. 2363–2372.
ICML-2017-McNamaraB #bound
Risk Bounds for Transferring Representations With and Without Fine-Tuning (DM, MFB), pp. 2373–2381.
ICML-2017-MeiCGH #matrix
Nonnegative Matrix Factorization for Time Series Recovery From a Few Temporal Aggregates (JM, YdC, YG, GH), pp. 2382–2390.
ICML-2017-MeschederNG #generative #network
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks (LMM, SN, AG), pp. 2391–2400.
ICML-2017-MhammediHRB #network #orthogonal #performance #using
Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections (ZM, ADH, AR, JB0), pp. 2401–2409.
ICML-2017-MiaoGB #topic
Discovering Discrete Latent Topics with Neural Variational Inference (YM, EG, PB), pp. 2410–2419.
ICML-2017-MillerFA #approximate
Variational Boosting: Iteratively Refining Posterior Approximations (ACM, NJF, RPA), pp. 2420–2429.
ICML-2017-MirhoseiniPLSLZ #learning #optimisation
Device Placement Optimization with Reinforcement Learning (AM, HP, QVL, BS, RL0, YZ, NK, MN0, SB, JD), pp. 2430–2439.
ICML-2017-MirrokniLVW #approximate #bound
Tight Bounds for Approximate Carathéodory and Beyond (VSM, RPL, AV, SCwW), pp. 2440–2448.
ICML-2017-MirzasoleimanK0 #summary
Deletion-Robust Submodular Maximization: Data Summarization with “the Right to be Forgotten” (BM, AK, AK0), pp. 2449–2458.
ICML-2017-MishraAM #modelling #predict
Prediction and Control with Temporal Segment Models (NM, PA, IM), pp. 2459–2468.
ICML-2017-MitliagkasM #quality
Improving Gibbs Sampler Scan Quality with DoGS (IM, LWM), pp. 2469–2477.
ICML-2017-MitrovicB0K #summary
Differentially Private Submodular Maximization: Data Summarization in Disguise (MM, MB, AK0, AK), pp. 2478–2487.
ICML-2017-MohajerSE #learning #rank
Active Learning for Top-K Rank Aggregation from Noisy Comparisons (SM, CS, AE), pp. 2488–2497.
ICML-2017-MolchanovAV #network
Variational Dropout Sparsifies Deep Neural Networks (DM, AA, DPV), pp. 2498–2507.
ICML-2017-MollaysaSK #modelling #using
Regularising Non-linear Models Using Feature Side-information (AM, PS, AK), pp. 2508–2517.
ICML-2017-MouLLJ #distributed #execution #natural language #query #symbolic computation
Coupling Distributed and Symbolic Execution for Natural Language Queries (LM, ZL, HL0, ZJ), pp. 2518–2526.
ICML-2017-MrouehSG #named
McGan: Mean and Covariance Feature Matching GAN (YM, TS, VG), pp. 2527–2535.
ICML-2017-MuellerGJ #combinator #sequence
Sequence to Better Sequence: Continuous Revision of Combinatorial Structures (JM, DKG, TSJ), pp. 2536–2544.
ICML-2017-MukkamalaH #bound
Variants of RMSProp and Adagrad with Logarithmic Regret Bounds (MCM, MH0), pp. 2545–2553.
ICML-2017-MunkhdalaiY #network
Meta Networks (TM, HY0), pp. 2554–2563.
ICML-2017-NagamineM #case study #comprehension #multi #recognition #representation #speech
Understanding the Representation and Computation of Multilayer Perceptrons: A Case Study in Speech Recognition (TN, NM), pp. 2564–2573.
ICML-2017-NamkoongSYD #adaptation #optimisation
Adaptive Sampling Probabilities for Non-Smooth Optimization (HN, AS, SY, JCD), pp. 2574–2583.
ICML-2017-NeilLDL #network
Delta Networks for Optimized Recurrent Network Computation (DN, JL, TD, SCL), pp. 2584–2593.
ICML-2017-NeiswangerX
Post-Inference Prior Swapping (WN, EPX), pp. 2594–2602.
ICML-2017-NguyenH #network
The Loss Surface of Deep and Wide Neural Networks (QN0, MH0), pp. 2603–2612.
ICML-2017-NguyenLST #machine learning #named #novel #probability #problem #recursion #using
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient (LMN, JL, KS, MT), pp. 2613–2621.
ICML-2017-NiQWC #clustering #modelling #persistent #visual notation
Composing Tree Graphical Models with Persistent Homology Features for Clustering Mixed-Type Data (XN, NQ, YW, CC0), pp. 2622–2631.
ICML-2017-OchiaiWHH #multi #recognition #speech
Multichannel End-to-end Speech Recognition (TO, SW, TH, JRH), pp. 2632–2641.
ICML-2017-OdenaOS #classification #image #synthesis
Conditional Image Synthesis with Auxiliary Classifier GANs (AO, CO, JS), pp. 2642–2651.
ICML-2017-OglicG #kernel
Nyström Method with Kernel K-means++ Samples as Landmarks (DO, TG0), pp. 2652–2660.
ICML-2017-OhSLK #learning #multi
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning (JO, SPS, HL, PK), pp. 2661–2670.
ICML-2017-OlivaPS #statistics
The Statistical Recurrent Unit (JBO, BP, JGS), pp. 2671–2680.
ICML-2017-OmidshafieiPAHV #distributed #learning #multi
Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability (SO, JP, CA, JPH, JV), pp. 2681–2690.
ICML-2017-OngieWNB #algebra #matrix #modelling #rank
Algebraic Variety Models for High-Rank Matrix Completion (GO, RW, RDN, LB), pp. 2691–2700.
ICML-2017-OsbandR #learning #question #why
Why is Posterior Sampling Better than Optimism for Reinforcement Learning? (IO, BVR), pp. 2701–2710.
ICML-2017-OsogamiKS #bidirectional #learning #modelling
Bidirectional Learning for Time-series Models with Hidden Units (TO, HK, TS), pp. 2711–2720.
ICML-2017-OstrovskiBOM #modelling
Count-Based Exploration with Neural Density Models (GO, MGB, AvdO, RM), pp. 2721–2730.
ICML-2017-PadSCTU #learning #taxonomy
Dictionary Learning Based on Sparse Distribution Tomography (PP, FS, LEC, PT, MU), pp. 2731–2740.
ICML-2017-PakmanGCP #probability
Stochastic Bouncy Particle Sampler (AP, DG, DEC, LP), pp. 2741–2750.
ICML-2017-PallaKG #process
A Birth-Death Process for Feature Allocation (KP, DAK, ZG), pp. 2751–2759.
ICML-2017-PanYTB #nondeterminism #predict #process
Prediction under Uncertainty in Sparse Spectrum Gaussian Processes with Applications to Filtering and Control (YP, XY, EAT, BB), pp. 2760–2768.
ICML-2017-PanahiDJB #algorithm #clustering #convergence #incremental #probability
Clustering by Sum of Norms: Stochastic Incremental Algorithm, Convergence and Cluster Recovery (AP, DPD, FDJ, CB), pp. 2769–2777.
ICML-2017-PathakAED #predict #self
Curiosity-driven Exploration by Self-supervised Prediction (DP, PA, AAE, TD), pp. 2778–2787.
ICML-2017-PengZZQ #distributed #process
Asynchronous Distributed Variational Gaussian Process for Regression (HP, SZ, XZ, YQ), pp. 2788–2797.
ICML-2017-PenningtonB #geometry #matrix #network #random
Geometry of Neural Network Loss Surfaces via Random Matrix Theory (JP, YB), pp. 2798–2806.
ICML-2017-PentinaL #learning #multi
Multi-task Learning with Labeled and Unlabeled Tasks (AP, CHL), pp. 2807–2816.
ICML-2017-PintoDSG #learning #robust
Robust Adversarial Reinforcement Learning (LP, JD, RS, AG0), pp. 2817–2826.
ICML-2017-PritzelUSBVHWB
Neural Episodic Control (AP, BU, SS, APB, OV, DH, DW, CB), pp. 2827–2836.
ICML-2017-RaffelLLWE #linear #online
Online and Linear-Time Attention by Enforcing Monotonic Alignments (CR, MTL, PJL, RJW, DE), pp. 2837–2846.
ICML-2017-RaghuPKGS #network #on the #power of
On the Expressive Power of Deep Neural Networks (MR, BP, JMK, SG, JSD), pp. 2847–2854.
ICML-2017-RaghunathanVZ #multi
Estimating the unseen from multiple populations (AR, GV, JZ), pp. 2855–2863.
ICML-2017-RahmaniA #performance #robust
Coherence Pursuit: Fast, Simple, and Robust Subspace Recovery (MR, GKA), pp. 2864–2873.
ICML-2017-RahmaniA17a #approach #clustering #problem
Innovation Pursuit: A New Approach to the Subspace Clustering Problem (MR, GKA), pp. 2874–2882.
ICML-2017-RanaL0NV #optimisation #process
High Dimensional Bayesian Optimization with Elastic Gaussian Process (SR, CL0, SG0, VN0, SV), pp. 2883–2891.
ICML-2017-RavanbakhshSP
Equivariance Through Parameter-Sharing (SR, JGS, BP), pp. 2892–2901.
ICML-2017-RealMSSSTLK #classification #evolution #image #scalability
Large-Scale Evolution of Image Classifiers (ER, SM, AS, SS, YLS, JT, QVL, AK), pp. 2902–2911.
ICML-2017-ReedOKCWCBF #estimation #parallel
Parallel Multiscale Autoregressive Density Estimation (SER, AvdO, NK, SGC, ZW0, YC, DB, NdF), pp. 2912–2921.
ICML-2017-RippelB #adaptation #image #realtime
Real-Time Adaptive Image Compression (OR, LDB), pp. 2922–2930.
ICML-2017-RiquelmeGL #estimation #learning #linear #modelling
Active Learning for Accurate Estimation of Linear Models (CR, MG, AL), pp. 2931–2939.
ICML-2017-RitterBSB #bias #case study #network
Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study (SR, DGTB, AS, MMB), pp. 2940–2949.
ICML-2017-RubinsteinA #difference #privacy #random
Pain-Free Random Differential Privacy with Sensitivity Sampling (BIPR, FA), pp. 2950–2959.
ICML-2017-Ruggieri
Enumerating Distinct Decision Trees (SR), pp. 2960–2968.
ICML-2017-RukatHTY #matrix
Bayesian Boolean Matrix Factorisation (TR, CCH, MKT, CY), pp. 2969–2978.
ICML-2017-SafranS #approximate #network #trade-off
Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks (IS, OS), pp. 2979–2987.
ICML-2017-SaitoUH #adaptation #symmetry
Asymmetric Tri-training for Unsupervised Domain Adaptation (KS, YU, TH), pp. 2988–2997.
ICML-2017-SakaiPNS #classification
Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data (TS, MCdP, GN, MS), pp. 2998–3006.
ICML-2017-SakrKS #network #precise
Analytical Guarantees on Numerical Precision of Deep Neural Networks (CS, YK0, NRS), pp. 3007–3016.
ICML-2017-SaxeER #composition #multi
Hierarchy Through Composition with Multitask LMDPs (AMS, ACE, BR), pp. 3017–3026.
ICML-2017-ScamanBBLM #algorithm #distributed #network #optimisation
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks (KS, FRB, SB, YTL, LM), pp. 3027–3036.
ICML-2017-SchlegelPCW #adaptation #kernel #online #using
Adapting Kernel Representations Online Using Submodular Maximization (MS, YP, JC, MW), pp. 3037–3046.
ICML-2017-SelsamLD #machine learning
Developing Bug-Free Machine Learning Systems With Formal Mathematics (DS, PL, DLD), pp. 3047–3056.
ICML-2017-SenSDS #identification #online
Identifying Best Interventions through Online Importance Sampling (RS, KS, AGD, SS), pp. 3057–3066.
ICML-2017-Shalev-ShwartzS #learning
Failures of Gradient-Based Deep Learning (SSS, OS, SS), pp. 3067–3075.
ICML-2017-ShalitJS #algorithm #bound
Estimating individual treatment effect: generalization bounds and algorithms (US, FDJ, DAS), pp. 3076–3085.
ICML-2017-ShamirS #feedback #learning #online #permutation
Online Learning with Local Permutations and Delayed Feedback (OS, LS), pp. 3086–3094.
ICML-2017-SharanV #algorithm #composition
Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use (VS, GV), pp. 3095–3104.
ICML-2017-Sheffet
Differentially Private Ordinary Least Squares (OS), pp. 3105–3114.
ICML-2017-ShenL #complexity #on the
On the Iteration Complexity of Support Recovery via Hard Thresholding Pursuit (JS0, PL0), pp. 3115–3124.
ICML-2017-ShenLYM #algorithm #multi #named #optimisation
GSOS: Gauss-Seidel Operator Splitting Algorithm for Multi-Term Nonsmooth Convex Composite Optimization (LS, WL0, GY, SM), pp. 3125–3134.
ICML-2017-ShiKFHL #framework #platform
World of Bits: An Open-Domain Platform for Web-Based Agents (TS, AK, LF, JH, PL), pp. 3135–3144.
ICML-2017-ShrikumarGK #difference #learning
Learning Important Features Through Propagating Activation Differences (AS, PG, AK), pp. 3145–3153.
ICML-2017-Shrivastava #performance
Optimal Densification for Fast and Accurate Minwise Hashing (AS), pp. 3154–3163.
ICML-2017-ShuBG #estimation
Bottleneck Conditional Density Estimation (RS, HHB, MG), pp. 3164–3172.
ICML-2017-ShyamGD
Attentive Recurrent Comparators (PS, SG, AD), pp. 3173–3181.
ICML-2017-SiZKMDH
Gradient Boosted Decision Trees for High Dimensional Sparse Output (SS, HZ0, SSK, DM, ISD, CJH), pp. 3182–3190.
ICML-2017-SilverHHSGHDRRB #learning #predict
The Predictron: End-To-End Learning and Planning (DS, HvH, MH, TS, AG, TH, GDA, DPR, NCR, AB, TD), pp. 3191–3199.
ICML-2017-Simsekli #difference #equation #markov #monte carlo #probability
Fractional Langevin Monte Carlo: Exploring Lévy Driven Stochastic Differential Equations for Markov Chain Monte Carlo (US), pp. 3200–3209.
ICML-2017-0005P #estimation
Nonparanormal Information Estimation (SS0, BP), pp. 3210–3219.
ICML-2017-SivakumarB
High-Dimensional Structured Quantile Regression (VS, AB), pp. 3220–3229.
ICML-2017-StaibJ #robust
Robust Budget Allocation via Continuous Submodular Functions (MS, SJ), pp. 3230–3240.
ICML-2017-StanZ0K #probability
Probabilistic Submodular Maximization in Sub-Linear Time (SS, MZ, AK0, AK), pp. 3241–3250.
ICML-2017-StichRJ #approximate #coordination
Approximate Steepest Coordinate Descent (SUS, AR, MJ), pp. 3251–3259.
ICML-2017-SuggalaYR #modelling #visual notation
Ordinal Graphical Models: A Tale of Two Approaches (ASS, EY, PR), pp. 3260–3269.
ICML-2017-SugiyamaNT #statistics
Tensor Balancing on Statistical Manifold (MS, HN, KT), pp. 3270–3279.
ICML-2017-SunDK #algorithm
Safety-Aware Algorithms for Adversarial Contextual Bandit (WS, DD, AK), pp. 3280–3288.
ICML-2017-0001N #composition #learning #modelling #scalability
Relative Fisher Information and Natural Gradient for Learning Large Modular Models (KS0, FN), pp. 3289–3298.
ICML-2017-SunRMW #learning #named
meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting (XS0, XR, SM, HW), pp. 3299–3308.
ICML-2017-SunVGBB #learning #predict
Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction (WS0, AV, GJG, BB, JAB), pp. 3309–3318.
ICML-2017-SundararajanTY #axiom #network
Axiomatic Attribution for Deep Networks (MS, AT, QY), pp. 3319–3328.
ICML-2017-SureshYKM #communication #distributed #estimation
Distributed Mean Estimation with Limited Communication (ATS, FXY, SK, HBM), pp. 3329–3337.
ICML-2017-SuzumuraNUTT #higher-order #interactive #modelling
Selective Inference for Sparse High-Order Interaction Models (SS, KN, YU, KT, IT), pp. 3338–3347.
ICML-2017-TaiebTH #probability
Coherent Probabilistic Forecasts for Hierarchical Time Series (SBT, JWT, RJH), pp. 3348–3357.
ICML-2017-TanM #learning #modelling
Partitioned Tensor Factorizations for Learning Mixed Membership Models (ZT, SM0), pp. 3358–3367.
ICML-2017-TandonLDK #distributed #learning
Gradient Coding: Avoiding Stragglers in Distributed Learning (RT, QL, AGD, NK), pp. 3368–3376.
ICML-2017-TangGD #scalability #sketching
Gradient Projection Iterative Sketch for Large-Scale Constrained Least-Squares (JT, MG, MED), pp. 3377–3386.
ICML-2017-Telgarsky #network
Neural Networks and Rational Functions (MT), pp. 3387–3393.
ICML-2017-ThiLNT #classification #probability #problem
Stochastic DCA for the Large-sum of Non-convex Functions Problem and its Application to Group Variable Selection in Classification (HALT, HML, PDN, BT), pp. 3394–3403.
ICML-2017-Tian #analysis #convergence #network
An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis (YT), pp. 3404–3413.
ICML-2017-TokuiS
Evaluating the Variance of Likelihood-Ratio Gradient Estimators (ST, IS), pp. 3414–3423.
ICML-2017-TompsonSSP #network #simulation
Accelerating Eulerian Fluid Simulation With Convolutional Networks (JT, KS, PS, KP), pp. 3424–3433.
ICML-2017-TosattoPDR
Boosted Fitted Q-Iteration (ST, MP, CD, MR), pp. 3434–3443.
ICML-2017-ToshD #learning
Diameter-Based Active Learning (CT, SD), pp. 3444–3452.
ICML-2017-TripuraneniRGT #monte carlo
Magnetic Hamiltonian Monte Carlo (NT, MR, ZG, RET), pp. 3453–3461.
ICML-2017-TrivediDWS #graph #named #reasoning
Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs (RT, HD, YW0, LS), pp. 3462–3471.
ICML-2017-TsakirisV #clustering #component
Hyperplane Clustering via Dual Principal Component Pursuit (MCT, RV), pp. 3472–3481.
ICML-2017-TuVWGJR #locality
Breaking Locality Accelerates Block Gauss-Seidel (ST, SV, ACW, AG, MIJ, BR), pp. 3482–3491.
ICML-2017-UbaruM #classification #multi #testing
Multilabel Classification with Group Testing and Codes (SU, AM), pp. 3492–3501.
ICML-2017-UmlauftH #learning #probability
Learning Stable Stochastic Nonlinear Dynamical Systems (JU, SH), pp. 3502–3510.
ICML-2017-UrschelBMR #learning #process
Learning Determinantal Point Processes with Moments and Cycles (JU, VEB, AM, PR), pp. 3511–3520.
ICML-2017-ValeraG #automation #dataset #statistics
Automatic Discovery of the Statistical Types of Variables in a Dataset (IV, ZG), pp. 3521–3529.
ICML-2017-VaswaniKWGLS #independence #learning #online
Model-Independent Online Learning for Influence Maximization (SV, BK, ZW, MG, LVSL, MS), pp. 3530–3539.
ICML-2017-VezhnevetsOSHJS #learning #network
FeUdal Networks for Hierarchical Reinforcement Learning (ASV, SO, TS, NH, MJ, DS, KK), pp. 3540–3549.
ICML-2017-Villacampa-Calvo #classification #multi #process #scalability #using
Scalable Multi-Class Gaussian Process Classification using Expectation Propagation (CVC, DHL), pp. 3550–3559.
ICML-2017-VillegasYZSLL #learning #predict
Learning to Generate Long-term Future via Hierarchical Prediction (RV, JY, YZ, SS, XL, HL), pp. 3560–3569.
ICML-2017-VorontsovTKP #dependence #learning #network #on the #orthogonal
On orthogonality and learning recurrent networks with long term dependencies (EV, CT, SK, CP), pp. 3570–3578.
ICML-2017-WalderB #estimation #performance #process
Fast Bayesian Intensity Estimation for the Permanental Process (CJW, ANB), pp. 3579–3588.
ICML-2017-WangAD #adaptation #evaluation
Optimal and Adaptive Off-policy Evaluation in Contextual Bandits (YXW, AA, MD), pp. 3589–3597.
ICML-2017-WangFHMR #capacity #locality
Capacity Releasing Diffusion for Speed and Locality (DW0, KF, MH, MWM, SR), pp. 3598–3607.
ICML-2017-WangGM #optimisation #sketching #statistics
Sketched Ridge Regression: Optimization Perspective, Statistical Perspective, and Model Averaging (SW, AG, MWM), pp. 3608–3616.
ICML-2017-WangG #estimation #robust #visual notation
Robust Gaussian Graphical Model Estimation with Arbitrary Corruption (LW, QG), pp. 3617–3626.
ICML-2017-WangJ #optimisation #performance
Max-value Entropy Search for Efficient Bayesian Optimization (ZW, SJ), pp. 3627–3635.
ICML-2017-WangKS0 #distributed #learning #performance
Efficient Distributed Learning with Sparsity (JW, MK, NS, TZ0), pp. 3636–3645.
ICML-2017-WangKB #modelling #probability #robust
Robust Probabilistic Modeling with Bayesian Data Reweighting (YW, AK, DMB), pp. 3646–3655.
ICML-2017-WangLJK #kernel #learning #optimisation
Batched High-dimensional Bayesian Optimization via Structural Kernel Learning (ZW, CL, SJ, PK), pp. 3656–3664.
ICML-2017-WangL #composition
Tensor Decomposition via Simultaneous Power Iteration (PAW, CJL), pp. 3665–3673.
ICML-2017-WangWHMZD #modelling #sequence
Sequence Modeling via Segmentations (CW, YW, PSH, AM, DZ, LD0), pp. 3674–3683.
ICML-2017-WangWTS #policy #process
Variational Policy for Guiding Point Processes (YW0, GW, EAT, LS), pp. 3684–3693.
ICML-2017-WangX #algorithm #first-order
Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms (JW, LX), pp. 3694–3702.
ICML-2017-WangX0T
Beyond Filters: Compact Feature Map for Portable Deep Model (YW, CX0, CX0, DT), pp. 3703–3711.
ICML-2017-WangZG #framework #matrix #rank
A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery (LW, XZ, QG), pp. 3712–3721.
ICML-2017-WeiSKOG #multi #process #similarity
Source-Target Similarity Modelings for Multi-Source Transfer Gaussian Process Regression (PW, RS, YK, YSO, CKG), pp. 3722–3731.
ICML-2017-WenMBY #modelling
Latent Intention Dialogue Models (THW, YM, PB, SJY), pp. 3732–3741.
ICML-2017-White #learning #specification
Unifying Task Specification in Reinforcement Learning (MW), pp. 3742–3750.
ICML-2017-WichrowskaMHCDF #scalability
Learned Optimizers that Scale and Generalize (OW, NM, MWH, SGC, MD, NdF, JSD), pp. 3751–3760.
ICML-2017-WinnerSS #integer #modelling
Exact Inference for Integer Latent-Variable Models (KW, DS, DS), pp. 3761–3770.
ICML-2017-WrigleyLY
Tensor Belief Propagation (AW, WSL, NY), pp. 3771–3779.
ICML-2017-WuZ #metric #multi #performance
A Unified View of Multi-Label Performance Measures (XZW, ZHZ), pp. 3780–3788.
ICML-2017-XiaQCBYL #learning
Dual Supervised Learning (YX, TQ, WC0, JB0, NY, TYL), pp. 3789–3798.
ICML-2017-XieDZKYZX #constraints #learning #modelling
Learning Latent Space Models with Angular Constraints (PX, YD, YZ, AK, YY, JZ, EPX), pp. 3799–3810.
ICML-2017-XieSX
Uncorrelation and Evenness: a New Diversity-Promoting Regularizer (PX, AS, EPX), pp. 3811–3820.
ICML-2017-XuLY #convergence #optimisation #performance #probability
Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence (YX, QL, TY), pp. 3821–3830.
ICML-2017-XuLZ #learning #process #sequence
Learning Hawkes Processes from Short Doubly-Censored Event Sequences (HX, DL, HZ), pp. 3831–3840.
ICML-2017-0002TLFYG #adaptation #distributed #optimisation
Adaptive Consensus ADMM for Distributed Optimization (ZX0, GT, HL0, MATF, XY, TG), pp. 3841–3850.
ICML-2017-YangBL #estimation #modelling
High-dimensional Non-Gaussian Single Index Models via Thresholded Score Function Estimation (ZY, KB, HL0), pp. 3851–3860.
ICML-2017-YangFSH #clustering #learning #towards
Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering (BY, XF0, NDS, MH), pp. 3861–3870.
ICML-2017-YangGKSFL #on the #set
On The Projection Operator to A Three-view Cardinality Constrained Set (HY, SG, CK, DS, RF, JL0), pp. 3871–3880.
ICML-2017-YangHSB #modelling #using
Improved Variational Autoencoders for Text Modeling using Dilated Convolutions (ZY, ZH, RS, TBK), pp. 3881–3890.
ICML-2017-YangKT #classification #network #video
Tensor-Train Recurrent Neural Networks for Video Classification (YY, DK, VT), pp. 3891–3900.
ICML-2017-YangL0 #optimisation
A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates (TY, QL, LZ0), pp. 3901–3910.
ICML-2017-YangL #modelling #statistics
Sparse + Group-Sparse Dirty Models: Statistical Guarantees without Unreasonable Conditions and a Case for Non-Convexity (EY, ACL), pp. 3911–3920.
ICML-2017-YangRS #scalability
Scalable Bayesian Rule Lists (HY, CR, MS), pp. 3921–3930.
ICML-2017-YeLZ #approximate #convergence
Approximate Newton Methods and Their Local Convergence (HY, LL, ZZ), pp. 3931–3939.
ICML-2017-YeWL
A Simulated Annealing Based Inexact Oracle for Wasserstein Loss Minimization (JY, JZW, JL0), pp. 3940–3948.
ICML-2017-YenLCSLR
Latent Feature Lasso (IEHY, WCL, SEC, ASS, SDL, PR), pp. 3949–3957.
ICML-2017-YoonH #network
Combined Group and Exclusive Sparsity for Deep Neural Networks (JY, SJH), pp. 3958–3966.
ICML-2017-ZaheerAS #clustering #modelling #sequence
Latent LSTM Allocation: Joint Clustering and Non-Linear Dynamic Modeling of Sequence Data (MZ, AA, AJS), pp. 3967–3976.
ICML-2017-ZaheerKAMS #performance
Canopy Fast Sampling with Cover Trees (MZ, SK, AA, JMFM, AJS), pp. 3977–3986.
ICML-2017-ZenkePG #learning
Continual Learning Through Synaptic Intelligence (FZ, BP, SG), pp. 3987–3995.
ICML-2017-ZhangCGHC #probability
Stochastic Gradient Monomial Gamma Sampler (YZ, CC, ZG, RH, LC), pp. 3996–4005.
ICML-2017-ZhangGFCHSC #generative
Adversarial Feature Matching for Text Generation (YZ, ZG, KF, ZC, RH, DS, LC), pp. 4006–4015.
ICML-2017-ZhangHLYCHW #reduction #scalability
Scaling Up Sparse Support Vector Machines by Simultaneous Feature and Sample Reduction (WZ, BH, WL0, JY, DC, XH0, JW0), pp. 4016–4025.
ICML-2017-ZhangHTC #learning
Re-revisiting Learning on Hypergraphs: Confidence Interval and Subgradient Method (CZ, SH, ZGT, THHC), pp. 4026–4034.
ICML-2017-Zhang0KALZ #learning #linear #modelling #named #precise
ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning (HZ, JL0, KK, DA, JL0, CZ), pp. 4035–4043.
ICML-2017-ZhangLW #network
Convexified Convolutional Neural Networks (YZ0, PL, MJW), pp. 4044–4053.
ICML-2017-ZhangZZHZ #distributed #learning #network #online
Projection-free Distributed Online Learning in Networks (WZ0, PZ, WZ0, SCHH, TZ), pp. 4054–4062.
ICML-2017-ZhangZ #multi
Multi-Class Optimal Margin Distribution Machine (TZ, ZHZ), pp. 4063–4071.
ICML-2017-ZhaoDB #relational
Leveraging Node Attributes for Incomplete Relational Data (HZ, LD, WLB), pp. 4072–4081.
ICML-2017-ZhaoLW0TY #matrix #network #rank
Theoretical Properties for Neural Networks with Weight Matrices of Low Displacement Rank (LZ, SL, YW, ZL0, JT0, BY0), pp. 4082–4090.
ICML-2017-ZhaoSE #generative #learning #modelling
Learning Hierarchical Features from Deep Generative Models (SZ, JS, SE), pp. 4091–4099.
ICML-2017-ZhaoYKJB #architecture #learning
Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture (MZ, SY, DK, TSJ, MTB), pp. 4100–4109.
ICML-2017-0004K #learning
Follow the Moving Leader in Deep Learning (SZ0, JTK), pp. 4110–4119.
ICML-2017-ZhengMWCYML #probability
Asynchronous Stochastic Gradient Descent with Delay Compensation (SZ, QM, TW, WC0, NY, ZM, TYL), pp. 4120–4129.
ICML-2017-0007MW #effectiveness #learning
Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible (KZ0, WM, LW0), pp. 4130–4139.
ICML-2017-ZhongS0BD #network
Recovery Guarantees for One-hidden-layer Neural Networks (KZ, ZS, PJ0, PLB, ISD), pp. 4140–4149.
ICML-2017-ZhouGG #adaptation #probability
Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values (CZ, WG, DG), pp. 4150–4159.
ICML-2017-ZhouLZ #equilibrium #game studies #identification #nash #random
Identify the Nash Equilibrium in Static Games with Random Payoffs (YZ, JL, JZ0), pp. 4160–4169.
ICML-2017-ZhouZIJWS #dataset #multi #testing
When can Multi-Site Datasets be Pooled for Regression? Hypothesis Tests, l₂-consistency and Neuroscience Applications (HHZ, YZ, VKI, SCJ, GW, VS), pp. 4170–4179.
ICML-2017-ZhuWZG #algorithm #probability
High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm (RZ, LW, CZ, QG), pp. 4180–4188.
ICML-2017-ZillySKS #network
Recurrent Highway Networks (JGZ, RKS, JK, JS), pp. 4189–4198.
ICML-2017-ZoghiTGKSW #learning #modelling #online #probability #rank
Online Learning to Rank in Stochastic Click Models (MZ, TT, MG, BK, CS, ZW), pp. 4199–4208.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.