Travelled to:
1 × Australia
1 × Finland
14 × USA
2 × Canada
3 × China
3 × France
Collaborated with:
S.Ji J.Chen J.Liu L.Yuan J.Wang J.Zhou W.Fan L.Sun S.Xiang P.Gong Q.Li J.Liu Z.Wang P.Wonka R.Janardan Q.Sun I.Davidson Z.Zhao S.Panchanathan S.Yang T.Xiong R.Chattopadhyay H.Liu Z.Lu C.Zhang R.Jin B.Ceran L.Tang T.Wang X.He V.A.Narayan F.Wang X.Shen H.Park Y.Wang P.M.Thompson J.Li K.Chen T.Wu E.Reiman B.Lin N.Ramakrishnan R.Fujimaki Y.Liu T.Yang X.Tong L.Yang B.Shen S.Kumar J.Yang J.Hu Y.Zhu S.Yu R.Patel S.Chakraborty V.N.Balasubramanian A.R.Sankar J.Huang Y.Hu D.Zhang Z.Xu M.R.Lyu I.King J.Wang Y.Chang C.Kuo X.Wang P.B.Walker O.T.Carmichael P.Chakraborty S.R.Mekaru J.S.Brownstein L.Zhao F.Chen C.Lu M.Lai H.Davulcu Q.Li X.Zhang J.Sun Y.Lai Y.Li Z.Zhou M.Wu H.Xiong V.Kumar M.S.Amin B.Yan C.Martell V.Markman A.Bhasin W.Zhang R.Li T.Zeng S.Huang A.Fleisher J.Bi V.Cherkassky C.Kambhamettu P.Chen D.Kadetotad Z.Xu A.Mohanty S.B.K.Vrudhula J.Seo Y.Cao S.Yu M.Bae G.E.Alexander
Talks about:
learn (25) multi (13) analysi (11) featur (11) spars (9) task (9) data (9) discrimin (8) algorithm (8) effici (8)

Person: Jieping Ye

DBLP: Ye:Jieping

Contributed to:

DATE 2015
ICML 2015
KDD 2015
ICML c1 2014
ICML c2 2014
KDD 2014
ICML c1 2013
ICML c2 2013
ICML c3 2013
KDD 2013
KDD 2012
KDD 2011
KDD 2010
ICML 2009
KDD 2009
ICML 2008
KDD 2008
ICML 2007
KDD 2007
CIKM 2006
ICML 2006
KDD 2006
ICML 2004
KDD 2004
KDD 20042004

Wrote 71 papers:

DATE-2015-ChenKXMLYVSCY #algorithm #array #learning
Technology-design co-optimization of resistive cross-point array for accelerating learning algorithms on chip (PYC, DK, ZX, AM, BL, JY, SBKV, JsS, YC, SY), pp. 854–859.
ICML-2015-GongY #analysis #convergence #memory management
A Modified Orthant-Wise Limited Memory Quasi-Newton Method with Convergence Analysis (PG, JY), pp. 276–284.
ICML-2015-WangY #learning #matrix #multi
Safe Screening for Multi-Task Feature Learning with Multiple Data Matrices (JW, JY), pp. 1747–1756.
KDD-2015-ChakrabortyBSPY #classification #framework #learning #named #novel
BatchRank: A Novel Batch Mode Active Learning Framework for Hierarchical Classification (SC, VNB, ARS, SP, JY), pp. 99–108.
KDD-2015-KuoWWCYD #graph #multi #segmentation
Unified and Contrasting Cuts in Multiple Graphs: Application to Medical Imaging Segmentation (CTK, XW, PBW, OTC, JY, ID), pp. 617–626.
KDD-2015-SunAYMMBY #classification #learning
Transfer Learning for Bilingual Content Classification (QS, MSA, BY, CM, VM, AB, JY), pp. 2147–2156.
KDD-2015-WangCMBYR #predict
Dynamic Poisson Autoregression for Influenza-Like-Illness Case Count Prediction (ZW, PC, SRM, JSB, JY, NR), pp. 1285–1294.
KDD-2015-YangSJWDY #learning #visual notation
Structural Graphical Lasso for Learning Mouse Brain Connectivity (SY, QS, SJ, PW, ID, JY), pp. 1385–1394.
KDD-2015-ZhangLZSKYJ #analysis #biology #image #learning #modelling #multi
Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis (WZ, RL, TZ, QS, SK, JY, SJ), pp. 1475–1484.
KDD-2015-ZhaoSYCLR #learning #multi
Multi-Task Learning for Spatio-Temporal Event Forecasting (LZ, QS, JY, FC, CTL, NR), pp. 1503–1512.
ICML-c1-2014-LiuYF #algorithm #constraints
Forward-Backward Greedy Algorithms for General Convex Smooth Functions over A Cardinality Constraint (JL, JY, RF), pp. 503–511.
ICML-c2-2014-LinYHY #distance #learning
Geodesic Distance Function Learning via Heat Flow on Vector Fields (BL, JY, XH, JY), pp. 145–153.
ICML-c2-2014-LiuZWY
Safe Screening with Variational Inequalities and Its Application to Lasso (JL, ZZ, JW, JY), pp. 289–297.
ICML-c2-2014-WangLLFDY #matrix
Rank-One Matrix Pursuit for Matrix Completion (ZW, MJL, ZL, WF, HD, JY), pp. 91–99.
ICML-c2-2014-WangLYFWY #algorithm #modelling #parallel #scalability
A Highly Scalable Parallel Algorithm for Isotropic Total Variation Models (JW, QL, SY, WF, PW, JY), pp. 235–243.
ICML-c2-2014-WangWY #reduction #scalability
Scaling SVM and Least Absolute Deviations via Exact Data Reduction (JW, PW, JY), pp. 523–531.
KDD-2014-GongZFY #learning #multi #performance
Efficient multi-task feature learning with calibration (PG, JZ, WF, JY), pp. 761–770.
KDD-2014-LiuWY #algorithm #performance
An efficient algorithm for weak hierarchical lasso (YL, JW, JY), pp. 283–292.
KDD-2014-XiangYY
Simultaneous feature and feature group selection through hard thresholding (SX, TY, JY), pp. 532–541.
KDD-2014-ZhouWHY #data-driven #metaprogramming
From micro to macro: data driven phenotyping by densification of longitudinal electronic medical records (JZ, FW, JH, JY), pp. 135–144.
ICML-c1-2013-XiangTY #feature model #optimisation #performance
Efficient Sparse Group Feature Selection via Nonconvex Optimization (SX, XT, JY), pp. 284–292.
ICML-c2-2013-GongZLHY #algorithm #optimisation #problem
A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems (PG, CZ, ZL, JH, JY), pp. 37–45.
ICML-c3-2013-0002YY #linear
Guaranteed Sparse Recovery under Linear Transformation (JL, LY, JY), pp. 91–99.
ICML-c3-2013-ChattopadhyayFDPY #learning
Joint Transfer and Batch-mode Active Learning (RC, WF, ID, SP, JY), pp. 253–261.
KDD-2013-SunXY #analysis #component #robust
Robust principal component analysis via capped norms (QS, SX, JY), pp. 311–319.
KDD-2013-WangY #learning #query
Querying discriminative and representative samples for batch mode active learning (ZW, JY), pp. 158–166.
KDD-2013-XiangYFWTY #learning #multi #predict
Multi-source learning with block-wise missing data for Alzheimer’s disease prediction (SX, LY, WF, YW, PMT, JY), pp. 185–193.
KDD-2013-YangWFZWY #algorithm #multi #performance #problem
An efficient ADMM algorithm for multidimensional anisotropic total variation regularization problems (SY, JW, WF, XZ, PW, JY), pp. 641–649.
KDD-2013-ZhouLSYWY #identification #named
FeaFiner: biomarker identification from medical data through feature generalization and selection (JZ, ZL, JS, LY, FW, JY), pp. 1034–1042.
KDD-2012-ChattopadhyayWFDPY #probability
Batch mode active sampling based on marginal probability distribution matching (RC, ZW, WF, ID, SP, JY), pp. 741–749.
KDD-2012-GongYZ #learning #multi #robust
Robust multi-task feature learning (PG, JY, CZ), pp. 895–903.
KDD-2012-HuZLYH #matrix
Accelerated singular value thresholding for matrix completion (YH, DZ, JL, JY, XH), pp. 298–306.
KDD-2012-XiangZSY #rank
Optimal exact least squares rank minimization (SX, YZ, XS, JY), pp. 480–488.
KDD-2012-YangYLSWY #graph
Feature grouping and selection over an undirected graph (SY, LY, YCL, XS, PW, JY), pp. 922–930.
KDD-2012-YuanWTNY #analysis #learning #multi
Multi-source learning for joint analysis of incomplete multi-modality neuroimaging data (LY, YW, PMT, VAN, JY), pp. 1149–1157.
KDD-2012-ZhouLNY #modelling
Modeling disease progression via fused sparse group lasso (JZ, JL, VAN, JY), pp. 1095–1103.
KDD-2011-ChattopadhyayYPFD #adaptation #detection #multi
Multi-source domain adaptation and its application to early detection of fatigue (RC, JY, SP, WF, ID), pp. 717–725.
KDD-2011-ChenZY #learning #multi #rank #robust
Integrating low-rank and group-sparse structures for robust multi-task learning (JC, JZ, JY), pp. 42–50.
KDD-2011-HuangLYFCWR #effectiveness #modelling #network
Brain effective connectivity modeling for Alzheimer’s disease by sparse Gaussian Bayesian network (SH, JL, JY, AF, KC, TW, ER), pp. 931–939.
KDD-2011-ZhouYLY #learning #multi #predict
A multi-task learning formulation for predicting disease progression (JZ, LY, JL, JY), pp. 814–822.
KDD-2010-ChenLY #learning #multi #rank
Learning incoherent sparse and low-rank patterns from multiple tasks (JC, JL, JY), pp. 1179–1188.
KDD-2010-LiuYY #algorithm #performance #problem
An efficient algorithm for a class of fused lasso problems (JL, LY, JY), pp. 323–332.
KDD-2010-SunCY #approach #reduction #scalability
A scalable two-stage approach for a class of dimensionality reduction techniques (LS, BC, JY), pp. 313–322.
ICML-2009-ChenTLY #learning #multi
A convex formulation for learning shared structures from multiple tasks (JC, LT, JL, JY), pp. 137–144.
ICML-2009-JiY
An accelerated gradient method for trace norm minimization (SJ, JY), pp. 457–464.
ICML-2009-LiuY #linear #performance
Efficient Euclidean projections in linear time (JL, JY), pp. 657–664.
ICML-2009-SunJY #machine learning #problem
A least squares formulation for a class of generalized eigenvalue problems in machine learning (LS, SJ, JY), pp. 977–984.
ICML-2009-XuJYLK #feature model
Non-monotonic feature selection (ZX, RJ, JY, MRL, IK), pp. 1145–1152.
ICML-2009-YangJY #learning #online
Online learning by ellipsoid method (LY, RJ, JY), pp. 1153–1160.
KDD-2009-JiYLZKY #interactive #using
Drosophila gene expression pattern annotation using sparse features and term-term interactions (SJ, LY, YXL, ZHZ, SK, JY), pp. 407–416.
KDD-2009-LiuCY #scalability
Large-scale sparse logistic regression (JL, JC, JY), pp. 547–556.
KDD-2009-ShenJY #matrix #mining
Mining discrete patterns via binary matrix factorization (BHS, SJ, JY), pp. 757–766.
KDD-2009-SunPLCWLRY #estimation #mining
Mining brain region connectivity for Alzheimer’s disease study via sparse inverse covariance estimation (LS, RP, JL, KC, TW, JL, ER, JY), pp. 1335–1344.
ICML-2008-ChenY #kernel
Training SVM with indefinite kernels (JC, JY), pp. 136–143.
ICML-2008-SunJY #analysis #canonical #correlation
A least squares formulation for canonical correlation analysis (LS, SJ, JY), pp. 1024–1031.
KDD-2008-ChenJCLWY #classification #kernel #learning
Learning subspace kernels for classification (JC, SJ, BC, QL, MW, JY), pp. 106–114.
KDD-2008-JiTYY #classification #multi
Extracting shared subspace for multi-label classification (SJ, LT, SY, JY), pp. 381–389.
KDD-2008-SunJY #classification #learning #multi
Hypergraph spectral learning for multi-label classification (LS, SJ, JY), pp. 668–676.
KDD-2008-YeCWLZPBJLAR #data fusion #semistructured data
Heterogeneous data fusion for Alzheimer’s disease study (JY, KC, TW, JL, ZZ, RP, MB, RJ, HL, GEA, ER), pp. 1025–1033.
KDD-2008-ZhaoWLYC #data flow #identification #multi #semistructured data
Identifying biologically relevant genes via multiple heterogeneous data sources (ZZ, JW, HL, JY, YC), pp. 839–847.
ICML-2007-Ye #analysis #linear
Least squares linear discriminant analysis (JY), pp. 1087–1093.
ICML-2007-YeCJ #kernel #learning #parametricity #programming
Discriminant kernel and regularization parameter learning via semidefinite programming (JY, JC, SJ), pp. 1095–1102.
KDD-2007-ChenZYL #adaptation #clustering #distance #learning #metric
Nonlinear adaptive distance metric learning for clustering (JC, ZZ, JY, HL), pp. 123–132.
KDD-2007-YeJC #analysis #kernel #learning #matrix #polynomial #programming
Learning the kernel matrix in discriminant analysis via quadratically constrained quadratic programming (JY, SJ, JC), pp. 854–863.
CIKM-2006-YeXLJBCK #analysis #linear #performance
Efficient model selection for regularized linear discriminant analysis (JY, TX, QL, RJ, JB, VC, CK), pp. 532–539.
ICML-2006-YeX #analysis #linear #null #orthogonal
Null space versus orthogonal linear discriminant analysis (JY, TX), pp. 1073–1080.
KDD-2006-YeW #analysis
Regularized discriminant analysis for high dimensional, low sample size data (JY, TW), pp. 454–463.
ICML-2004-Ye #approximate #matrix #rank
Generalized low rank approximations of matrices (JY).
ICML-2004-YeJLP #analysis #feature model #linear
Feature extraction via generalized uncorrelated linear discriminant analysis (JY, RJ, QL, HP).
KDD-2004-YeJL #image #named #performance #reduction #retrieval
GPCA: an efficient dimension reduction scheme for image compression and retrieval (JY, RJ, QL), pp. 354–363.
KDD-2004-YePLJXK #algorithm #composition #incremental #named #reduction
IDR/QR: an incremental dimension reduction algorithm via QR decomposition (JY, QL, HX, HP, RJ, VK), pp. 364–373.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.