Maria-Florina Balcan, Kilian Q. Weinberger (Eds.)
Proceedings of the 33rd International Conference on Machine Learning (ICML 2016).
Contents (322 items)
- ICML-2016-ShahZ #crowdsourcing #self
- No Oops, You Won't Do It Again: Mechanisms for Self-correction in Crowdsourcing (NBS, DZ), pp. 1–10.
- ICML-2016-ShahBGW #modelling #statistics #transitive
- Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues (NBS, SB, AG, MJW), pp. 11–20.
- ICML-2016-Weller #modelling #visual notation
- Uprooting and Rerooting Graphical Models (AW), pp. 21–29.
- ICML-2016-ShahamCDJNCK #approach #learning
- A Deep Learning Approach to Unsupervised Ensemble Learning (US, XC, OD, AJ, BN, JTC, YK), pp. 30–39.
- ICML-2016-YangCS #graph #learning
- Revisiting Semi-Supervised Learning with Graph Embeddings (ZY, WWC, RS), pp. 40–48.
- ICML-2016-FinnLA #learning #optimisation #policy
- Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization (CF, SL, PA), pp. 49–58.
- ICML-2016-XieZX #learning #modelling
- Diversity-Promoting Bayesian Learning of Latent Variable Models (PX, JZ0, EPX), pp. 59–68.
- ICML-2016-KandasamyY #approximate #parametricity
- Additive Approximations in High Dimensional Nonparametric Regression via the SALSA (KK, YY), pp. 69–78.
- ICML-2016-LeeLO #probability #process
- Hawkes Processes with Stochastic Excitations (YL, KWL, CSO), pp. 79–88.
- ICML-2016-KhetanO #data-driven #performance #rank
- Data-driven Rank Breaking for Efficient Rank Aggregation (AK, SO), pp. 89–98.
- ICML-2016-BuloPK
- Dropout distillation (SRB, LP, PK), pp. 99–107.
- ICML-2016-FantiKORV
- Metadata-conscious anonymous messaging (GCF, PK, SO, KR, PV), pp. 108–116.
- ICML-2016-LiuZO #education #linear
- The Teaching Dimension of Linear Learners (JL, XZ0, HO), pp. 117–126.
- ICML-2016-CaragiannisPS
- Truthful Univariate Estimators (IC, ADP, NS0), pp. 127–135.
- ICML-2016-ArpitZNG #question #representation #why
- Why Regularized Auto-Encoders learn Sparse Representation? (DA, YZ, HQN0, VG), pp. 136–144.
- ICML-2016-NockCBN
- k-variates++: more pluses in the k-means++ (RN, RC, RB, FN), pp. 145–154.
- ICML-2016-RosenskiSS #approach #multi
- Multi-Player Bandits - a Musical Chairs Approach (JR, OS, LS), pp. 155–163.
- ICML-2016-SteegG
- The Information Sieve (GVS, AG), pp. 164–172.
- ICML-2016-AmodeiABCCCCCCD #recognition #speech
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin (DA, SA, RA, JB, EB, CC, JC, BC, JC, MC, AC, GD, EE, JHE, LF, CF, AYH, BJ, TH, PL, XL, LL, SN, AYN, SO, RP, SQ, JR, SS, DS, SS, CW0, YW, ZW, BX, YX, DY, JZ, ZZ), pp. 173–182.
- ICML-2016-ZhangGR #consistency #feature model #on the
- On the Consistency of Feature Selection With Lasso for Non-linear Targets (YZ, WG, SR), pp. 183–191.
- ICML-2016-Metzen #multi #optimisation
- Minimum Regret Search for Single- and Multi-Task Optimization (JHM), pp. 192–200.
- ICML-2016-Gilad-BachrachD #named #network #throughput
- CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy (RGB, ND, KL, KEL, MN, JW), pp. 201–210.
- ICML-2016-VladymyrovC #problem #scalability
- The Variational Nystrom method for large-scale spectral problems (MV, MÁCP), pp. 211–220.
- ICML-2016-LiOW #multi #network
- Multi-Bias Non-linear Activation in Deep Neural Networks (HL, WO, XW0), pp. 221–229.
- ICML-2016-LeeYH #learning #multi #symmetry
- Asymmetric Multi-task Learning based on Task Relatedness and Confidence (GL, EY, SJH), pp. 230–238.
- ICML-2016-Fan #estimation #fault #performance #robust
- Accurate Robust and Efficient Error Estimation for Decision Trees (LF), pp. 239–247.
- ICML-2016-Shamir #algorithm #convergence #performance #probability
- Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and Convexity (OS), pp. 248–256.
- ICML-2016-Shamir16a #convergence #probability
- Convergence of Stochastic Gradient Descent for PCA (OS), pp. 257–265.
- ICML-2016-LanGBS #education #named
- Dealbreaker: A Nonlinear Latent Variable Model for Educational Data (ASL, TG, RGB, CS), pp. 266–275.
- ICML-2016-LiuLJ #kernel #testing
- A Kernelized Stein Discrepancy for Goodness-of-fit Tests (QL, JDL, MIJ), pp. 276–284.
- ICML-2016-XueEBGS #fourier
- Variable Elimination in the Fourier Domain (YX, SE, RLB, CPG, BS), pp. 285–294.
- ICML-2016-LiCLYSC #approximate #matrix #rank
- Low-Rank Matrix Approximation with Stability (DL, CC0, QL, JY, LS, SMC), pp. 295–303.
- ICML-2016-MenonO #estimation
- Linking losses for density ratio and class-probability estimation (AKM, CSO), pp. 304–313.
- ICML-2016-ReddiHSPS #optimisation #probability #reduction
- Stochastic Variance Reduction for Nonconvex Optimization (SJR, AH, SS, BP, AJS), pp. 314–323.
- ICML-2016-RanganathTB #modelling
- Hierarchical Variational Models (RR, DT, DMB), pp. 324–333.
- ICML-2016-AdamsSTPKM #data type #random #smarttech
- Hierarchical Span-Based Conditional Random Fields for Labeling and Segmenting Events in Wearable Sensor Data Streams (RJA, NS, ET, AP, SK0, BMM), pp. 334–343.
- ICML-2016-ChoromanskaCBJK
- Binary embeddings with structured hashed projections (AC, KC, MB, TJ, SK, YL), pp. 344–353.
- ICML-2016-MandtHB #algorithm #analysis #probability
- A Variational Analysis of Stochastic Gradient Algorithms (SM, MDH, DMB), pp. 354–363.
- ICML-2016-Gopal #adaptation
- Adaptive Sampling for SGD by Exploiting Side Information (SG), pp. 364–372.
- ICML-2016-YuL #learning #multi #performance
- Learning from Multiway Data: Simple and Efficient Tensor Regression (RY, YL0), pp. 373–381.
- ICML-2016-HoangHL #distributed #framework #modelling #parallel #process
- A Distributed Variational Inference Framework for Unifying Parallel Sparse Gaussian Process Regression Models (TNH, QMH, BKHL), pp. 382–391.
- ICML-2016-ZhangYJXZ #feedback #linear #online #optimisation #probability
- Online Stochastic Linear Optimization under One-bit Feedback (LZ0, TY, RJ, YX, ZHZ), pp. 392–401.
- ICML-2016-JenattonHA #adaptation #algorithm #constraints #online #optimisation
- Adaptive Algorithms for Online Convex Optimization with Long-term Constraints (RJ, JCH, CA), pp. 402–411.
- ICML-2016-SinglaTK #elicitation #learning
- Actively Learning Hemimetrics with Applications to Eliciting User Preferences (AS, ST, AK0), pp. 412–420.
- ICML-2016-ZarembaMJF #algorithm #learning
- Learning Simple Algorithms from Examples (WZ, TM, AJ, RF), pp. 421–429.
- ICML-2016-LererGF #learning #physics
- Learning Physical Intuition of Block Towers by Example (AL, SG, RF), pp. 430–438.
- ICML-2016-LiuSSF #learning #markov #network
- Structure Learning of Partitioned Markov Networks (SL0, TS, MS, KF), pp. 439–448.
- ICML-2016-YangZJY #learning #online
- Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient (TY, LZ0, RJ, JY), pp. 449–457.
- ICML-2016-PodosinnikovaBL #modelling #multi
- Beyond CCA: Moment Matching for Multi-View Models (AP, FRB, SLJ), pp. 458–467.
- ICML-2016-UbaruS #matrix #performance #rank #scalability
- Fast methods for estimating the Numerical rank of large matrices (SU, YS), pp. 468–477.
- ICML-2016-XieGF #analysis #clustering
- Unsupervised Deep Embedding for Clustering Analysis (JX, RBG, AF), pp. 478–487.
- ICML-2016-Kasiviswanathan #empirical #learning #performance
- Efficient Private Empirical Risk Minimization for High-dimensional Learning (SPK, HJ), pp. 488–497.
- ICML-2016-VojnovicY #estimation #modelling #parametricity
- Parameter Estimation for Generalized Thurstone Choice Models (MV, SYY), pp. 498–506.
- ICML-2016-LiuWYY #network
- Large-Margin Softmax Loss for Convolutional Neural Networks (WL, YW, ZY, MY0), pp. 507–516.
- ICML-2016-CouilletWAS #approach #matrix #network #random
- A Random Matrix Approach to Echo-State Neural Networks (RC, GW, HTA, HS), pp. 517–525.
- ICML-2016-JohnsonZ #categorisation #using
- Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings (RJ, TZ0), pp. 526–534.
- ICML-2016-OkOSY #classification #crowdsourcing
- Optimality of Belief Propagation for Crowdsourced Classification (JO, SO, JS, YY), pp. 535–544.
- ICML-2016-VinogradskaBNRS #modelling #process
- Stability of Controllers for Gaussian Process Forward Models (JV, BB, DNT, AR, HS, JP0), pp. 545–554.
- ICML-2016-HammCB #learning #multi
- Learning privately from multiparty data (JH, YC, MB), pp. 555–563.
- ICML-2016-WeiWRC #morphism #network
- Network Morphism (TW, CW, YR, CWC), pp. 564–572.
- ICML-2016-GrosseM #approximate #matrix
- A Kronecker-factored approximate Fisher matrix for convolution layers (RBG, JM), pp. 573–582.
- ICML-2016-RaviIJS #design #linear #modelling
- Experimental Design on a Budget for Sparse Linear Models and Applications (SNR, VKI, SCJ, VS), pp. 583–592.
- ICML-2016-OsokinALDL #optimisation
- Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs (AO, JBA, IL, PKD, SLJ), pp. 593–602.
- ICML-2016-GaoLZ #crowdsourcing
- Exact Exponent in Optimal Rates for Crowdsourcing (CG, YL, DZ), pp. 603–611.
- ICML-2016-ZhangLL #classification #image #network #scalability
- Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification (YZ, KL, HL), pp. 612–621.
- ICML-2016-ShenLX #clustering #online #rank #taxonomy
- Online Low-Rank Subspace Clustering by Basis Dictionary Pursuit (JS0, PL0, HX), pp. 622–631.
- ICML-2016-Curtis #algorithm #optimisation #probability #self
- A Self-Correcting Variable-Metric Algorithm for Stochastic Optimization (FC), pp. 632–641.
- ICML-2016-SimsekliBCR #monte carlo #probability
- Stochastic Quasi-Newton Langevin Monte Carlo (US, RB, ATC, GR), pp. 642–651.
- ICML-2016-JiangL #evaluation #learning #robust
- Doubly Robust Off-policy Value Evaluation for Reinforcement Learning (NJ, LL0), pp. 652–661.
- ICML-2016-QuXO #algorithm #analysis #optimisation #performance #probability
- Fast Rate Analysis of Some Stochastic Optimization Algorithms (CQ, HX, CJO), pp. 662–670.
- ICML-2016-LiM #nearest neighbour #performance
- Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing (KL, JM), pp. 671–679.
- ICML-2016-LeKYC #learning #online #predict #sequence
- Smooth Imitation Learning for Online Sequence Prediction (HML0, AK, YY, PC0), pp. 680–688.
- ICML-2016-ChenKST #community #graph #locality
- Community Recovery in Graphs with Locality (YC0, GMK, CS, DT), pp. 689–698.
- ICML-2016-ZhuH #optimisation #performance #reduction
- Variance Reduction for Faster Non-Convex Optimization (ZAZ, EH), pp. 699–707.
- ICML-2016-PatriniNNC #learning #robust
- Loss factorization, weakly supervised learning and label noise robustness (GP, FN, RN, MC), pp. 708–717.
- ICML-2016-WangMCBPRGUA #analysis #matrix #network
- Analysis of Deep Neural Networks with Extended Data Jacobian Matrix (SW, ArM, RC, JAB, MP, MR, KG, GU, ÖA), pp. 718–726.
- ICML-2016-ImaizumiH #parametricity
- Doubly Decomposing Nonparametric Tensor Regression (MI, KH), pp. 727–736.
- ICML-2016-Pedregosa #approximate #optimisation
- Hyperparameter optimization with approximate gradient (FP), pp. 737–746.
- ICML-2016-Shalev-Shwartz
- SDCA without Duality, Regularization, and Individual Convexity (SSS), pp. 747–754.
- ICML-2016-AnavaM #sequence
- Heteroscedastic Sequences: Beyond Gaussianity (OA, SM), pp. 755–763.
- ICML-2016-ZhengTDZ #approach #collaboration
- A Neural Autoregressive Approach to Collaborative Filtering (YZ, BT, WD, HZ), pp. 764–773.
- ICML-2016-SafranS #network #on the #quality
- On the Quality of the Initial Basin in Overspecified Neural Networks (IS, OS), pp. 774–782.
- ICML-2016-DunnerFTJ
- Primal-Dual Rates and Certificates (CD, SF, MT, MJ), pp. 783–792.
- ICML-2016-Shalev-ShwartzW #how #why
- Minimizing the Maximal Loss: How and Why (SSS, YW), pp. 793–801.
- ICML-2016-Pimentel-Alarcon #clustering #requirements
- The Information-Theoretic Requirements of Subspace Clustering with Missing Data (DLPA, RDN), pp. 802–810.
- ICML-2016-CohenHK #feedback #graph #learning #online
- Online Learning with Feedback Graphs Without the Graphs (AC, TH, TK), pp. 811–819.
- ICML-2016-GlaudeP #automaton #learning #probability
- PAC learning of Probabilistic Automaton based on the Method of Moments (HG, OP), pp. 820–829.
- ICML-2016-MelnykB #modelling
- Estimating Structured Vector Autoregressive Models (IM, AB), pp. 830–839.
- ICML-2016-Tosh #strict
- Mixing Rates for the Alternating Gibbs Sampler over Restricted Boltzmann Machines and Friends (CT), pp. 840–849.
- ICML-2016-BlondelIFU #algorithm #network #performance #polynomial
- Polynomial Networks and Factorization Machines: New Insights and Efficient Training Algorithms (MB, MI, AF, NU), pp. 850–858.
- ICML-2016-GermainHLM #adaptation
- A New PAC-Bayesian Perspective on Domain Adaptation (PG, AH, FL, EM), pp. 859–868.
- ICML-2016-PuleoM #bound #clustering #correlation #fault
- Correlation Clustering and Biclustering with Locally Bounded Errors (GJP, OM), pp. 869–877.
- ICML-2016-DavidS #algorithm #bound #performance #problem
- PAC Lower Bounds and Efficient Algorithms for The Max K-Armed Bandit Problem (YD, NS), pp. 878–887.
- ICML-2016-ElhoseinyEBE #analysis #categorisation #comparative #estimation #modelling #multi
- A Comparative Analysis and Study of Multiview CNN Models for Joint Object Categorization and Pose Estimation (ME, TEG, AB, AME), pp. 888–897.
- ICML-2016-CarrGL #energy #named #optimisation
- BASC: Applying Bayesian Optimization to the Search for Global Minima on Potential Energy Surfaces (SC, RG, CL), pp. 898–907.
- ICML-2016-ArjevaniS #algorithm #complexity #first-order #on the #optimisation
- On the Iteration Complexity of Oblivious First-Order Optimization Algorithms (YA, OS), pp. 908–916.
- ICML-2016-LiZALH #learning #optimisation #probability
- Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning (XL, TZ, RA, HL0, JDH), pp. 917–925.
- ICML-2016-Wipf #analysis #estimation #rank
- Analysis of Variational Bayesian Factorizations for Sparse and Low-Rank Estimation (DPW), pp. 926–935.
- ICML-2016-NewlingF #bound #performance
- Fast k-means with accurate bounds (JN, FF), pp. 936–944.
- ICML-2016-RavanbakhshPG #matrix #message passing
- Boolean Matrix Factorization and Noisy Completion via Message Passing (SR, BP, RG), pp. 945–954.
- ICML-2016-CohenS #network
- Convolutional Rectifier Networks as Generalized Tensor Decompositions (NC, AS), pp. 955–963.
- ICML-2016-TuBSSR #equation #linear #matrix #rank
- Low-rank Solutions of Linear Matrix Equations via Procrustes Flow (ST, RB, MS, MS, BR), pp. 964–973.
- ICML-2016-JunN #multi #using
- Anytime Exploration for Multi-armed Bandits using Confidence Information (KSJ, RDN), pp. 974–982.
- ICML-2016-BelangerM #energy #network #predict
- Structured Prediction Energy Networks (DB, AM), pp. 983–992.
- ICML-2016-ZhangLJ #network #polynomial
- L1-regularized Neural Networks are Improperly Learnable in Polynomial Time (YZ0, JDL, MIJ), pp. 993–1001.
- ICML-2016-TremblayPGV #clustering
- Compressive Spectral Clustering (NT, GP, RG, PV), pp. 1002–1011.
- ICML-2016-KasaiM #approach #rank
- Low-rank tensor completion: a Riemannian manifold preconditioning approach (HK, BM), pp. 1012–1021.
- ICML-2016-ZhangCL #retrieval
- Provable Non-convex Phase Retrieval with Outliers: Median Truncated Wirtinger Flow (HZ, YC, YL), pp. 1022–1031.
- ICML-2016-DEramoRN #approximate
- Estimating Maximum Expected Value through Gaussian Approximation (CD, MR, AN), pp. 1032–1040.
- ICML-2016-OswalCRRN #learning #network #similarity
- Representational Similarity Learning with Application to Brain Networks (UO, CRC, MALR, TTR, RDN), pp. 1041–1049.
- ICML-2016-GalG #approximate #learning #nondeterminism #representation
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (YG, ZG), pp. 1050–1059.
- ICML-2016-ReedAYLSL #generative #image #synthesis
- Generative Adversarial Text to Image Synthesis (SER, ZA, XY, LL, BS, HL), pp. 1060–1069.
- ICML-2016-PrabhakaranACP #process
- Dirichlet Process Mixture Model for Correcting Technical Variation in Single-Cell Gene Expression Data (SP, EA, AC, DP), pp. 1070–1079.
- ICML-2016-ZhuY
- Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives (ZAZ, YY), pp. 1080–1089.
- ICML-2016-BhowmikGK #parametricity
- Sparse Parameter Recovery from Aggregated Data (AB, JG, OK), pp. 1090–1099.
- ICML-2016-ZhaiCLZ #detection #energy #modelling
- Deep Structured Energy Based Models for Anomaly Detection (SZ, YC, WL, ZZ), pp. 1100–1109.
- ICML-2016-ZhuQRY #coordination #performance #using
- Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling (ZAZ, ZQ, PR, YY), pp. 1110–1119.
- ICML-2016-ArjovskySB #evolution #network
- Unitary Evolution Recurrent Neural Networks (MA, AS, YB), pp. 1120–1128.
- ICML-2016-ZhangP #feature model #markov #modelling
- Markov Latent Feature Models (AZ, JWP), pp. 1129–1137.
- ICML-2016-WangWP #probability
- The Knowledge Gradient for Sequential Decision Making with Stochastic Binary Feedbacks (YW, CW, WBP), pp. 1138–1147.
- ICML-2016-AsterisKKP #algorithm
- A Simple and Provable Algorithm for Sparse Diagonal CCA (MA, AK, OK, RAP), pp. 1148–1157.
- ICML-2016-LiuWS #constraints #convergence #linear #optimisation #orthogonal #polynomial
- Quadratic Optimization with Orthogonality Constraints: Explicit Łojasiewicz Exponent and Linear Convergence of Line-Search Methods (HL, WW, AMCS), pp. 1158–1167.
- ICML-2016-ArpitZKG #network #normalisation #parametricity
- Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks (DA, YZ, BUK, VG), pp. 1168–1176.
- ICML-2016-LiZZ #learning #memory management
- Learning to Generate with Memory (CL, JZ0, BZ0), pp. 1177–1186.
- ICML-2016-FernandoG #classification #learning #video
- Learning End-to-end Video Classification with Rank-Pooling (BF, SG), pp. 1187–1196.
- ICML-2016-SunVBB #learning #predict
- Learning to Filter with Predictive State Inference Machines (WS0, AV, BB, JAB), pp. 1197–1205.
- ICML-2016-RahmaniA #approach #composition #learning #matrix #performance
- A Subspace Learning Approach for High Dimensional Matrix Decomposition with Efficient Column/Row Sampling (MR, GKA), pp. 1206–1214.
- ICML-2016-KatariyaKSW #learning #multi #rank
- DCM Bandits: Learning to Rank with Multiple Clicks (SK, BK, CS, ZW), pp. 1215–1224.
- ICML-2016-HardtRS #performance #probability
- Train faster, generalize better: Stability of stochastic gradient descent (MH, BR, YS), pp. 1225–1234.
- ICML-2016-KomiyamaHN #algorithm #bound #performance #problem
- Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm (JK, JH, HN), pp. 1235–1244.
- ICML-2016-LiWZC #combinator
- Contextual Combinatorial Cascading Bandits (SL0, BW, SZ, WC), pp. 1245–1253.
- ICML-2016-WuSLS
- Conservative Bandits (YW, RS, TL, CS), pp. 1254–1262.
- ICML-2016-HazanL #optimisation #probability
- Variance-Reduced and Projection-Free Stochastic Optimization (EH, HL), pp. 1263–1271.
- ICML-2016-SongGC #learning #network #sequence
- Factored Temporal Sigmoid Belief Networks for Sequence Learning (JS, ZG, LC), pp. 1272–1281.
- ICML-2016-XuXCY #assessment #crowdsourcing #quality #ranking #statistics
- False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking (QX, JX, XC, YY0), pp. 1282–1291.
- ICML-2016-BalduzziG #network
- Strongly-Typed Recurrent Neural Networks (DB, MG), pp. 1292–1300.
- ICML-2016-KordaSL #clustering #distributed #linear #network
- Distributed Clustering of Linear Bandits in Peer to Peer Networks (NK, BS, SL), pp. 1301–1309.
- ICML-2016-ZhaoAGA #network
- Collapsed Variational Inference for Sum-Product Networks (HZ0, TA, GJG, BA), pp. 1310–1318.
- ICML-2016-KhandelwalLNS #analysis #monte carlo #on the
- On the Analysis of Complex Backup Strategies in Monte Carlo Tree Search (PK, EL, SN, PS), pp. 1319–1328.
- ICML-2016-DuanCHSA #benchmark #learning #metric
- Benchmarking Deep Reinforcement Learning for Continuous Control (YD, XC0, RH, JS, PA), pp. 1329–1338.
- ICML-2016-DingLHL #clustering #distributed
- K-Means Clustering with Distributed Dimensions (HD, YL, LH, JL0), pp. 1339–1348.
- ICML-2016-UlyanovLVL #image #network #synthesis
- Texture Networks: Feed-forward Synthesis of Textures and Stylized Images (DU, VL, AV, VSL), pp. 1349–1357.
- ICML-2016-MirzasoleimanBK #performance #personalisation #summary
- Fast Constrained Submodular Maximization: Personalized Data Summarization (BM, AB, AK), pp. 1358–1367.
- ICML-2016-WangGL #on the #statistics
- On the Statistical Limits of Convex Relaxations (ZW, QG, HL0), pp. 1368–1377.
- ICML-2016-KumarIOIBGZPS #memory management #natural language #network
- Ask Me Anything: Dynamic Memory Networks for Natural Language Processing (AK, OI, PO, MI, JB0, IG, VZ, RP, RS), pp. 1378–1387.
- ICML-2016-ColinBSC #distributed #optimisation
- Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions (IC, AB, JS, SC), pp. 1388–1396.
- ICML-2016-GonenOS #sketching #using
- Solving Ridge Regression using Sketched Preconditioned SVRG (AG, FO, SSS), pp. 1397–1405.
- ICML-2016-AJFMS #cumulative #learning #predict
- Cumulative Prospect Theory Meets Reinforcement Learning: Prediction and Control (PLA, CJ, MCF0, SIM, CS), pp. 1406–1415.
- ICML-2016-PlataniosDM #approach
- Estimating Accuracy from Unlabeled Data: A Bayesian Approach (EAP, AD, TMM), pp. 1416–1425.
- ICML-2016-BhattacharyaGKP #matrix
- Non-negative Matrix Factorization under Heavy Noise (CB, NG, RK, JP), pp. 1426–1434.
- ICML-2016-JasinskaDBPKH #probability #using
- Extreme F-measure Maximization using Sparse Probability Estimates (KJ, KD, RBF, KP, TK, EH), pp. 1435–1444.
- ICML-2016-MaaloeSSW #generative #modelling
- Auxiliary Deep Generative Models (LM, CKS, SKS, OW), pp. 1445–1453.
- ICML-2016-CanevetJF #empirical #scalability
- Importance Sampling Tree for Large-scale Empirical Expectation (OC, CJ, FF), pp. 1454–1462.
- ICML-2016-DaneshmandLH #adaptation #learning
- Starting Small - Learning with Adaptive Sample Sizes (HD, AL, TH), pp. 1463–1471.
- ICML-2016-BuiHHLT #approximate #process #using
- Deep Gaussian Processes for Regression using Approximate Expectation Propagation (TDB, DHL, JMHL, YL, RET), pp. 1472–1481.
- ICML-2016-MitrovicST #approximate #kernel #named
- DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression (JM, DS, YWT), pp. 1482–1491.
- ICML-2016-Hernandez-Lobato #multi #optimisation #predict
- Predictive Entropy Search for Multi-objective Bayesian Optimization (DHL, JMHL, AS, RPA), pp. 1492–1501.
- ICML-2016-GeZ #analysis #component
- Rich Component Analysis (RG0, JZ), pp. 1502–1510.
- ICML-2016-Hernandez-Lobato16a #black box
- Black-Box Alpha Divergence Minimization (JMHL, YL, MR, TDB, DHL, RET), pp. 1511–1520.
- ICML-2016-RezendeMDGW #generative #modelling
- One-Shot Generalization in Deep Generative Models (DJR, SM, ID, KG, DW), pp. 1521–1529.
- ICML-2016-NatarajanKRD #classification #multi
- Optimal Classification with Multivariate Losses (NN, OK, PR, ISD), pp. 1530–1538.
- ICML-2016-MalherbeCV #approach #optimisation #ranking
- A ranking approach to global optimization (CM, EC, NV), pp. 1539–1547.
- ICML-2016-WangSDNSX #algorithm #coordination #distributed #parallel
- Parallel and Distributed Block-Coordinate Frank-Wolfe Algorithms (YXW, VS, WD0, WN, SS, EPX), pp. 1548–1557.
- ICML-2016-LarsenSLW #encoding #metric #similarity #using
- Autoencoding beyond pixels using a learned similarity metric (ABLL, SKS, HL, OW), pp. 1558–1566.
- ICML-2016-SaRO #agile #bias
- Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling (CDS, CR, KO), pp. 1567–1576.
- ICML-2016-ShibagakiKHT #modelling
- Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling (AS, MK, KH, IT), pp. 1577–1586.
- ICML-2016-DegenneP #algorithm #multi #probability
- Anytime optimal algorithms in stochastic multi-armed bandits (RD, VP), pp. 1587–1595.
- ICML-2016-HoilesS #bound #design #education #evaluation #recommendation
- Bounded Off-Policy Evaluation with Missing Data for Course Recommendation and Curriculum Design (WH, MvdS), pp. 1596–1604.
- ICML-2016-PandeyD #metric #on the #random #representation
- On collapsed representation of hierarchical Completely Random Measures (GP0, AD), pp. 1605–1613.
- ICML-2016-MartinsA #classification #multi
- From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification (AFTM, RFA), pp. 1614–1623.
- ICML-2016-BubeckL #black box #optimisation
- Black-box Optimization with a Politician (SB, YTL), pp. 1624–1631.
- ICML-2016-KanagawaSKST #parametricity #process
- Gaussian process nonparametric tensor estimator and its minimax optimality (HK, TS, HK, NS, YT), pp. 1632–1641.
- ICML-2016-MedinaY #algorithm #linear
- No-Regret Algorithms for Heavy-Tailed Linear Bandits (AMM, SY), pp. 1642–1650.
- ICML-2016-BonillaSR
- Extended and Unscented Kitchen Sinks (EVB, DMS, AR0), pp. 1651–1659.
- ICML-2016-XuZCL #matrix #optimisation #probability
- Matrix Eigen-decomposition via Doubly Stochastic Riemannian Optimization (ZX, PZ, JC, XL0), pp. 1660–1669.
- ICML-2016-SchnabelSSCJ #evaluation #learning #recommendation
- Recommendations as Treatments: Debiasing Learning and Evaluation (TS, AS, AS, NC, TJ), pp. 1670–1679.
- ICML-2016-YoonAHS #named #predict
- ForecastICU: A Prognostic Decision Support System for Timely Prediction of Intensive Care Unit Admission (JY, AMA, SH, MvdS), pp. 1680–1689.
- ICML-2016-LocatelliGC #algorithm #problem
- An optimal algorithm for the Thresholding Bandit Problem (AL, MG, AC), pp. 1690–1698.
- ICML-2016-NiuRFH #parametricity #performance #using
- Fast Parameter Inference in Nonlinear Dynamical Systems using Iterative Gradient Matching (MN, SR, MF, DH), pp. 1699–1707.
- ICML-2016-LouizosW #learning #matrix #performance
- Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors (CL, MW), pp. 1708–1716.
- ICML-2016-XuFZ #learning #process
- Learning Granger Causality for Hawkes Processes (HX, MF, HZ), pp. 1717–1726.
- ICML-2016-MiaoYB
- Neural Variational Inference for Text Processing (YM, LY, PB), pp. 1727–1736.
- ICML-2016-MenschMTV #learning #matrix #taxonomy
- Dictionary Learning for Massive Matrix Factorization (AM, JM, BT, GV), pp. 1737–1746.
- ICML-2016-OordKK #network
- Pixel Recurrent Neural Networks (AvdO, NK, KK), pp. 1747–1756.
- ICML-2016-SimsekAK #problem #why
- Why Most Decisions Are Easy in Tetris - And Perhaps in Other Sequential Decision Problems, As Well (ÖS, SA, AK), pp. 1757–1765.
- ICML-2016-LiSJ #matrix
- Gaussian quadrature for matrix inverse forms with applications (CL, SS, SJ), pp. 1766–1775.
- ICML-2016-MeshiMWS #predict
- Train and Test Tightness of LP Relaxations in Structured Prediction (OM, MM, AW, DAS), pp. 1776–1785.
- ICML-2016-AroraMM #learning #multi #optimisation #probability #representation #using
- Stochastic Optimization for Multiview Representation Learning using Partial Least Squares (RA, PM, TVM), pp. 1786–1794.
- ICML-2016-BasbugE
- Hierarchical Compound Poisson Factorization (MEB, BEE), pp. 1795–1803.
- ICML-2016-HeB #learning #modelling
- Opponent Modeling in Deep Reinforcement Learning (HH0, JLBG), pp. 1804–1813.
- ICML-2016-WangDL #linear #modelling
- No penalty no tears: Least squares in high-dimensional linear models (XW0, DBD, CL), pp. 1814–1822.
- ICML-2016-QuRTF #empirical #named #probability
- SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization (ZQ, PR, MT, OF), pp. 1823–1832.
- ICML-2016-HazanLS #on the #optimisation #probability #problem
- On Graduated Optimization for Stochastic Non-Convex Problems (EH, KYL, SSS), pp. 1833–1841.
- ICML-2016-SantoroBBWL #network
- Meta-Learning with Memory-Augmented Neural Networks (AS, SB, MB, DW, TPL), pp. 1842–1850.
- ICML-2016-DaiB #multi
- The knockoff filter for FDR control in group-sparse and multitask regression (RD, RB), pp. 1851–1859.
- ICML-2016-PerolatPGSP #approximate #game studies #markov #policy
- Softened Approximate Policy Iteration for Markov Games (JP, BP, MG, BS, OP), pp. 1860–1868.
- ICML-2016-GowerGR #probability
- Stochastic Block BFGS: Squeezing More Curvature out of Data (RMG, DG, PR), pp. 1869–1878.
- ICML-2016-BaiRWS #classification #difference #geometry #learning
- Differential Geometric Regularization for Supervised Learning of Classifiers (QB, SR, ZW, SS), pp. 1879–1888.
- ICML-2016-DielemanFK #network #symmetry
- Exploiting Cyclic Symmetry in Convolutional Neural Networks (SD, JDF, KK), pp. 1889–1898.
- ICML-2016-ZahavyBM #black box #comprehension
- Graying the black box: Understanding DQNs (TZ, NBZ, SM), pp. 1899–1908.
- ICML-2016-FriesenD #learning #modelling #theorem
- The Sum-Product Theorem: A Foundation for Learning Tractable Models (ALF, PMD), pp. 1909–1918.
- ICML-2016-ShahG #correlation #learning
- Pareto Frontier Learning with Expensive Correlated Objectives (AS, ZG), pp. 1919–1927.
- ICML-2016-MnihBMGLHSK #learning
- Asynchronous Methods for Deep Reinforcement Learning (VM, APB, MM, AG, TPL, TH, DS, KK), pp. 1928–1937.
- ICML-2016-VeldtGM
- A Simple and Strongly-Local Flow-Based Method for Cut Improvement (NV, DFG, MWM), pp. 1938–1947.
- ICML-2016-SuLCC #learning #modelling #statistics #visual notation
- Nonlinear Statistical Learning with Truncated Gaussian Graphical Models (QS, XL, CC, LC), pp. 1948–1957.
- ICML-2016-KawakitaT #learning
- Barron and Cover's Theory in Supervised Learning and its Application to Lasso (MK, JT), pp. 1958–1966.
- ICML-2016-MichaeliWL #analysis #canonical #correlation #parametricity
- Nonparametric Canonical Correlation Analysis (TM, WW, KL), pp. 1967–1976.
- ICML-2016-RakhlinS #named #performance
- BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits (AR, KS), pp. 1977–1985.
- ICML-2016-DanihelkaWUKG #memory management
- Associative Long Short-Term Memory (ID, GW, BU, NK, AG), pp. 1986–1994.
- ICML-2016-WangSHHLF #architecture #learning #network
- Dueling Network Architectures for Deep Reinforcement Learning (ZW0, TS, MH, HvH, ML, NdF), pp. 1995–2003.
- ICML-2016-KusanoHF #data analysis #kernel #persistent
- Persistence weighted Gaussian kernel for topological data analysis (GK, YH, KF), pp. 2004–2013.
- ICML-2016-NiepertAK #graph #learning #network
- Learning Convolutional Neural Networks for Graphs (MN, MA, KK), pp. 2014–2023.
- ICML-2016-DiamosSCCCEEHS #persistent
- Persistent RNNs: Stashing Recurrent Weights On-Chip (GD, SS, BC, MC, AC, EE, JHE, AYH, SS), pp. 2024–2033.
- ICML-2016-HenaffSL #network #orthogonal
- Recurrent Orthogonal Networks and Long-Memory Tasks (MH, AS, YL), pp. 2034–2042.
- ICML-2016-BauerSP #multi
- The Arrow of Time in Multivariate Time Series (SB, BS, JP), pp. 2043–2051.
- ICML-2016-RamaswamyST #estimation #kernel
- Mixture Proportion Estimation via Kernel Embeddings of Distributions (HGR, CS, AT), pp. 2052–2060.
- ICML-2016-LiJS #kernel #performance
- Fast DPP Sampling for Nystrom with Application to Kernel Methods (CL, SJ, SS), pp. 2061–2070.
- ICML-2016-TrouillonWRGB #predict
- Complex Embeddings for Simple Link Prediction (TT, JW, SR0, ÉG, GB), pp. 2071–2080.
- ICML-2016-VikramD #clustering #interactive
- Interactive Bayesian Hierarchical Clustering (SV, SD), pp. 2081–2090.
- ICML-2016-AllamanisPS #network #source code #summary
- A Convolutional Attention Network for Extreme Summarization of Source Code (MA, HP, CAS), pp. 2091–2100.
- ICML-2016-KapralovPW #how #matrix #multi
- How to Fake Multiply by a Gaussian Matrix (MK, VKP, DPW), pp. 2101–2110.
- ICML-2016-RogersVLG #independence #testing
- Differentially Private Chi-Squared Hypothesis Testing: Goodness of Fit and Independence Testing (MG, HWL, RMR, SPV), pp. 2111–2120.
- ICML-2016-ErraqabiVCM
- Pliable Rejection Sampling (AE, MV, AC, OAM), pp. 2121–2129.
- ICML-2016-BalleGP #evaluation #policy
- Differentially Private Policy Evaluation (BB, MG, DP), pp. 2130–2138.
- ICML-2016-ThomasB #evaluation #learning #policy
- Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning (PST, EB), pp. 2139–2148.
- ICML-2016-WiatowskiTSGB #architecture #feature model
- Discrete Deep Feature Extraction: A Theory and New Architectures (TW, MT, AS, PG, HB), pp. 2149–2158.
- ICML-2016-SyrgkanisKS #algorithm #learning #performance
- Efficient Algorithms for Adversarial Contextual Learning (VS, AK, RES), pp. 2159–2168.
- ICML-2016-SongSZU #network
- Training Deep Neural Networks via Direct Loss Minimization (YS, AGS, RSZ, RU), pp. 2169–2177.
- ICML-2016-HwangS #sequence
- Sequence to Sequence Training of CTC-RNNs with Partial Windowing (KH, WS), pp. 2178–2187.
- ICML-2016-MnihR #monte carlo
- Variational Inference for Monte Carlo Objectives (AM, DJR), pp. 2188–2196.
- ICML-2016-DalalGM #grid
- Hierarchical Decision Making In Electricity Grid Management (GD, EG, SM), pp. 2197–2206.
- ICML-2016-BalkanskiMKS #combinator #learning
- Learning Sparse Combinatorial Representations via Two-stage Submodular Maximization (EB, BM, AK0, YS), pp. 2207–2216.
- ICML-2016-ShangSAL #comprehension #linear #network
- Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units (WS, KS, DA, HL), pp. 2217–2225.
- ICML-2016-WangXDS #process
- Isotonic Hawkes Processes (YW0, BX0, ND, LS), pp. 2226–2234.
- ICML-2016-LiuY #learning #multi
- Cross-Graph Learning of Multi-Relational Associations (HL, YY), pp. 2235–2243.
- ICML-2016-PanRAG #process
- Markov-modulated Marked Poisson Processes for Check-in Data (JP, VAR, PKA, AEG), pp. 2244–2253.
- ICML-2016-AchimSE #analysis #constraints #fourier
- Beyond Parity Constraints: Fourier Analysis of Hash Functions for Inference (TA, AS, SE), pp. 2254–2262.
- ICML-2016-Papakonstantinou #learning #on the
- On the Power and Limits of Distance-Based Learning (PAP, JX0, GY), pp. 2263–2271.
- ICML-2016-YenLZRD #approach #multi #sequence
- A Convex Atomic-Norm Approach to Multiple Sequence Alignment and Motif Discovery (IEHY, XL, JZ, PR, ISD), pp. 2272–2280.
- ICML-2016-FazayeliB #estimation
- Generalized Direct Change Estimation in Ising Model Structure (FF, AB), pp. 2281–2290.
- ICML-2016-ChiangHD #analysis #component #robust
- Robust Principal Component Analysis with Side Information (KYC, CJH, ISD), pp. 2291–2299.
- ICML-2016-GuiHG #estimation #matrix #performance #rank #towards
- Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation (HG, JH0, QG), pp. 2300–2309.
- ICML-2016-SangnierGR #detection #proximity #reliability #representation #using
- Early and Reliable Event Detection Using Proximity Space Representation (MS, JG, AR), pp. 2310–2319.
- ICML-2016-LibertyLS #machine learning
- Stratified Sampling Meets Machine Learning (EL, KJL, KS), pp. 2320–2329.
- ICML-2016-GuanRW #learning #markov #multi #performance #process #recognition #using
- Efficient Multi-Instance Learning for Activity Recognition from Time Series Data Using an Auto-Regressive Hidden Markov Model (XG, RR, WKW), pp. 2330–2339.
- ICML-2016-LinCR #multi
- Generalization Properties and Implicit Regularization for Multiple Passes SGM (JL, RC, LR), pp. 2340–2348.
- ICML-2016-FrostigMMS #analysis #component
- Principal Component Projection Without Principal Component Analysis (RF, CM, CM, AS), pp. 2349–2357.
- ICML-2016-LiLR #approximate #rank
- Recovery guarantee of weighted low-rank approximation via alternating minimization (YL, YL, AR), pp. 2358–2367.
- ICML-2016-PezeshkiFBCB #architecture #network
- Deconstructing the Ladder Network Architecture (MP, LF, PB, ACC, YB), pp. 2368–2376.
- ICML-2016-OsbandRW #random
- Generalization and Exploration via Randomized Value Functions (IO, BVR, ZW), pp. 2377–2386.
- ICML-2016-KantchelianTJ #classification
- Evasion and Hardening of Tree Ensemble Classifiers (AK, JDT, ADJ), pp. 2387–2396.
- ICML-2016-XiongMS #memory management #network #visual notation
- Dynamic Memory Networks for Visual and Textual Question Answering (CX, SM, RS), pp. 2397–2406.
- ICML-2016-RavanbakhshOFPH #matter #parametricity
- Estimating Cosmological Parameters from the Dark Matter Distribution (SR, JBO, SF, LP, SH, JGS, BP), pp. 2407–2416.
- ICML-2016-HashimotoGJ #generative #learning
- Learning Population-Level Diffusions with Generative RNNs (TBH, DKG, TSJ), pp. 2417–2426.
- ICML-2016-PanS #network
- Expressiveness of Rectifier Networks (XP, VS), pp. 2427–2435.
- ICML-2016-KairouzBR #estimation #privacy
- Discrete Distribution Estimation under Local Privacy (PK, KB, DR), pp. 2436–2444.
- ICML-2016-InouyeRD #dependence #exponential #modelling #multi #product line #visual notation
- Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies (DII, PR, ISD), pp. 2445–2453.
- ICML-2016-LimW #approach #permutation #problem
- A Box-Constrained Approach for Hard Permutation Problems (CHL, SW), pp. 2454–2463.
- ICML-2016-ZadehHS #geometry #learning #metric
- Geometric Mean Metric Learning (PZ, RH, SS), pp. 2464–2471.
- ICML-2016-YangWLEZ #estimation #parametricity
- Sparse Nonlinear Regression: Parameter Estimation under Nonconvexity (ZY, ZW, HL0, YCE, TZ0), pp. 2472–2481.
- ICML-2016-LiWPA #classification #multi
- Conditional Bernoulli Mixtures for Multi-label Classification (CL, BW, VP, JAA), pp. 2482–2491.
- ICML-2016-ChenG #multi #problem #scalability
- Scalable Discrete Sampling as a Multi-Armed Bandit Problem (YC, ZG), pp. 2492–2501.
- ICML-2016-ChoromanskiS #kernel #sublinear
- Recycling Randomness with Structure for Sublinear time Kernel Expansions (KC, VS), pp. 2502–2510.
- ICML-2016-BornscheinSFB #bidirectional
- Bidirectional Helmholtz Machines (JB, SS, AF, YB), pp. 2511–2519.
- ICML-2016-AbernethyH #optimisation #performance
- Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier (JDA, EH), pp. 2520–2528.
- ICML-2016-CutajarOCF #kernel #matrix
- Preconditioning Kernel Matrices (KC, MAO, JPC, MF), pp. 2529–2538.
- ICML-2016-AltschulerBFMRZ #algorithm #bound #distributed #set
- Greedy Column Subset Selection: New Bounds and Distributed Algorithms (JA, AB, GF, VSM, AR, MZ), pp. 2539–2548.
- ICML-2016-AlmahairiBCZLC #capacity #network
- Dynamic Capacity Networks (AA, NB, TC, YZ, HL, ACC), pp. 2549–2558.
- ICML-2016-HeidariMSVY
- Pricing a Low-regret Seller (HH, MM, US, SV, SY), pp. 2559–2567.
- ICML-2016-RaghunathanFDL #estimation #linear
- Estimation from Indirect Supervision with Linear Moments (AR, RF, JCD, PL), pp. 2568–2577.
- ICML-2016-BotteschBK #approximate
- Speeding up k-means by approximating Euclidean distances via block vectors (TB, TB, MK), pp. 2578–2586.
- ICML-2016-MussmannE #learning
- Learning and Inference via Maximum Inner Product Search (SM, SE), pp. 2587–2596.
- ICML-2016-RodomanovK #finite #optimisation
- A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums (AR, DK), pp. 2597–2605.
- ICML-2016-ChwialkowskiSG #kernel
- A Kernel Test of Goodness of Fit (KC, HS, AG), pp. 2606–2615.
- ICML-2016-RainforthNLPMDW #markov #monte carlo
- Interacting Particle Markov Chain Monte Carlo (TR, CAN, FL, BP, JWvdM, AD, FDW), pp. 2616–2625.
- ICML-2016-GarberHJKMNS #performance
- Faster Eigenvector Computation via Shift-and-Invert Preconditioning (DG, EH, CJ, SMK, CM, PN, AS), pp. 2626–2634.
- ICML-2016-XieLZW #formal method #generative
- A Theory of Generative ConvNet (JX, YL0, SCZ, YNW), pp. 2635–2644.
- ICML-2016-YaoK #learning #performance #product line
- Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity (QY, JTK), pp. 2645–2654.
- ICML-2016-SiHD #approximate #performance #using
- Computationally Efficient Nyström Approximation using Fast Transforms (SS, CJH, ISD), pp. 2655–2663.
- ICML-2016-PeyreCS #distance #kernel #matrix
- Gromov-Wasserstein Averaging of Kernel and Distance Matrices (GP, MC, JS), pp. 2664–2672.
- ICML-2016-RoychowdhuryKP #monte carlo #robust #using
- Robust Monte Carlo Sampling using Riemannian Nosé-Poincaré Hamiltonian Dynamics (AR, BK, SP0), pp. 2673–2681.
- ICML-2016-SaeediHJA #infinity #performance
- The Segmented iHMM: A Simple, Efficient Hierarchical Infinite HMM (AS, MDH, MJJ0, RPA), pp. 2682–2691.
- ICML-2016-UstinovskiyFGS #learning
- Meta-Gradient Boosted Decision Tree Model for Weight and Target Learning (YU, VF, GG, PS), pp. 2692–2701.
- ICML-2016-DaiDS #modelling
- Discriminative Embeddings of Latent Variable Models for Structured Data (HD, BD, LS), pp. 2702–2711.
- ICML-2016-GuhaMRS #detection #random #robust
- Robust Random Cut Forest Based Anomaly Detection on Streams (SG, NM, GR, OS), pp. 2712–2721.
- ICML-2016-TaylorBXSPG #approach #network #scalability
- Training Neural Networks Without Gradients: A Scalable ADMM Approach (GT, RB, ZX0, BS, ABP, TG), pp. 2722–2731.
- ICML-2016-ChenQ #category theory #clustering
- Clustering High Dimensional Categorical Data via Topographical Features (CC0, NQ), pp. 2732–2740.
- ICML-2016-GeJKNS #algorithm #analysis #canonical #correlation #performance #scalability
- Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis (RG0, CJ, SMK, PN, AS), pp. 2741–2750.
- ICML-2016-BaiIWB #algorithm #optimisation
- Algorithms for Optimizing the Ratio of Submodular Functions (WB, RKI, KW, JAB), pp. 2751–2759.
- ICML-2016-HoGE #learning #optimisation #policy
- Model-Free Imitation Learning with Policy Optimization (JH, JKG, SE), pp. 2760–2769.
- ICML-2016-CisseAB #architecture #named
- ADIOS: Architectures Deep In Output Space (MC, MAS, SB), pp. 2770–2779.
- ICML-2016-GaoKOV #axiom #capacity #dependence
- Conditional Dependence via Shannon Capacity: Axioms, Estimators and Applications (WG, SK, SO, PV), pp. 2780–2789.
- ICML-2016-OhCSL #memory management
- Control of Memory, Active Perception, and Action in Minecraft (JO, VC, SPS, HL), pp. 2790–2799.
- ICML-2016-SuhZA #classification #complexity
- The Label Complexity of Mixed-Initiative Classifier Training (JS, XZ0, SA), pp. 2800–2809.
- ICML-2016-ScheinZBW #composition #learning
- Bayesian Poisson Tucker Decomposition for Learning the Structure of International Relations (AS, MZ, DMB, HMW), pp. 2810–2819.
- ICML-2016-ColomboV #composition #matrix
- Tensor Decomposition via Joint Matrix Schur Decomposition (NC, NV), pp. 2820–2828.
- ICML-2016-GuLSL #modelling
- Continuous Deep Q-Learning with Model-based Acceleration (SG, TPL, IS, SL), pp. 2829–2838.
- ICML-2016-GongZLTGS #adaptation #component
- Domain Adaptation with Conditional Transferable Components (MG, KZ0, TL, DT, CG, BS), pp. 2839–2848.
- ICML-2016-LinTA #fixpoint #network
- Fixed Point Quantization of Deep Convolutional Networks (DDL, SST, VSA), pp. 2849–2858.
- ICML-2016-AroraGKMM #algorithm #modelling #topic
- Provable Algorithms for Inference in Topic Models (SA, RG0, FK, TM, AM), pp. 2859–2867.
- ICML-2016-WangWK #performance #programming
- Epigraph projections for fast general convex programming (PWW, MW, JZK), pp. 2868–2877.
- ICML-2016-AcharyaDLS #algorithm #performance
- Fast Algorithms for Segmented Regression (JA, ID, JL0, LS), pp. 2878–2886.
- ICML-2016-ThomasSDB
- Energetic Natural Gradient Descent (PST, BCdS, CD, EB), pp. 2887–2895.
- ICML-2016-CarlsonSPP
- Partition Functions from Rao-Blackwellized Tempered Sampling (DEC, PS, AP, LP), pp. 2896–2905.
- ICML-2016-ZhaoPX #learning #modelling
- Learning Mixtures of Plackett-Luce Models (ZZ, PP, LX), pp. 2906–2914.
- ICML-2016-AbelHL #abstraction #approximate #behaviour
- Near Optimal Behavior via Approximate State Abstraction (DA, DEH, MLL), pp. 2915–2923.
- ICML-2016-LeiF #order #power of #testing
- Power of Ordered Hypothesis Testing (LL, WF), pp. 2924–2932.
- ICML-2016-BielikRV #named #probability
- PHOG: Probabilistic Model for Code (PB, VR, MTV), pp. 2933–2942.
- ICML-2016-GyorgyS #matrix
- Shifting Regret, Mirror Descent, and Matrices (AG, CS), pp. 2943–2951.
- ICML-2016-LuketinaRBG #scalability
- Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters (JL, TR, MB, KG), pp. 2952–2960.
- ICML-2016-AkrourNAA #learning #optimisation
- Model-Free Trajectory Optimization for Reinforcement Learning (RA, GN, HA, AA), pp. 2961–2970.
- ICML-2016-JiaoKS #distance
- Controlling the distance to a Kemeny consensus without computing it (YJ, AK, ES), pp. 2971–2980.
- ICML-2016-LucicBZK #scalability
- Horizontally Scalable Submodular Maximization (ML, OB, MZ, AK0), pp. 2981–2989.
- ICML-2016-CohenW #network
- Group Equivariant Convolutional Networks (TC, MW), pp. 2990–2999.
- ICML-2016-PiatkowskiM #probability
- Stochastic Discrete Clenshaw-Curtis Quadrature (NP, KM), pp. 3000–3009.
- ICML-2016-RiemerVCHHK #multi
- Correcting Forecasts with Multifactor Neural Attention (MR, AV, FdPC, FFTHI, RH0, EK), pp. 3010–3019.
- ICML-2016-JohanssonSS #learning
- Learning Representations for Counterfactual Inference (FDJ, US, DAS), pp. 3020–3029.
- ICML-2016-HwangTC #automation #modelling #multi #parametricity #relational
- Automatic Construction of Nonparametric Relational Regression Models for Multiple Time Series (YH, AT, JC), pp. 3030–3039.
- ICML-2016-PaigeW #modelling #monte carlo #network #visual notation
- Inference Networks for Sequential Monte Carlo in Graphical Models (BP, FDW), pp. 3040–3049.
- ICML-2016-Bloem-ReddyC #slicing
- Slice Sampling on Hamiltonian Trajectories (BBR, JC), pp. 3050–3058.
- ICML-2016-GulcehreMDB
- Noisy Activation Functions (ÇG, MM, MD, YB), pp. 3059–3068.
- ICML-2016-YenHRZD #approach #classification #multi
- PD-Sparse: A Primal and Dual Sparse Approach to Extreme Multiclass and Multilabel Classification (IEHY, XH, PR, KZ, ISD), pp. 3069–3077.