Travelled to:
1 × Australia
1 × Canada
1 × Finland
1 × Ireland
1 × Singapore
1 × The Netherlands
2 × China
2 × United Kingdom
7 × USA
Collaborated with:
S.E.Robertson Z.Dou M.P.Kato T.Yamamoto M.Iwata R.Song T.Kokubu T.Miyanishi K.Nogami K.S.Jones J.Wen C.L.A.Clarke Y.Song Y.Chen W.Webber A.Moffat J.Zobel G.J.F.Jones M.Kajiura K.Sumita Y.Qian J.Ye Q.Zheng C.Li K.Zhou M.Lalmas R.Cummins J.M.Jose Y.Saito Y.Ichimura M.Koyama M.Ekstrand-Abueg V.Pavlu C.Yu K.Tanaka Z.Ma J.Lu Y.Liu S.Nishio Y.Liu M.Zhang T.Kitani Y.Ogawa T.Ishikawa H.Kimoto I.Keshi J.Toyoura T.Fukushima K.Matsui Y.Ueda T.Tokunaga H.Tsuruoka H.Nakawatase T.Agata
Talks about:
evalu (12) search (8) japanes (6) retriev (5) queri (5) system (4) metric (4) intent (4) compar (4) relev (4)

Person: Tetsuya Sakai

DBLP: Sakai:Tetsuya

Facilitated 1 volume:

SIGIR 2013 (Ed)

Contributed to:

CIKM 2014
CIKM 2013
SIGIR 2013
CIKM 2012
SIGIR 2012
CIKM 2011
SIGIR 2011
SIGIR 2009
CIKM 2008
SIGIR 2008
SIGIR 2007
SIGIR 2006
SIGIR 2004
SIGIR 2003
SIGIR 2002
SIGIR 2001
SIGIR 1998

Wrote 31 papers:

CIKM-2014-Sakai #design
Designing Test Collections for Comparing Many Systems (TS), pp. 61–70.
CIKM-2013-QianSYZL #mining #query
Dynamic query intent mining from a search log stream (YnQ, TS, JY, QZ, CL), pp. 1205–1208.
CIKM-2013-ZhouLSCJ #metric #on the #reliability
On the reliability and intuitiveness of aggregated search metrics (KZ, ML, TS, RC, JMJ), pp. 689–698.
SIGIR-2013-Ekstrand-AbuegPKSYI #automation #evaluation
Exploring semi-automatic nugget extraction for Japanese one click access evaluation (MEA, VP, MPK, TS, TY, MI), pp. 749–752.
SIGIR-2013-KatoSYI #bound #evaluation #robust
Report from the NTCIR-10 1CLICK-2 Japanese subtask: baselines, upperbounds and evaluation robustness (MPK, TS, TY, MI), pp. 753–756.
SIGIR-2013-MiyanishiS #query
Time-aware structured query suggestion (TM, TS), pp. 809–812.
SIGIR-2013-SakaiD #evaluation #framework #information management #retrieval #summary
Summaries, ranked retrieval and sessions: a unified framework for information access evaluation (TS, ZD), pp. 473–482.
SIGIR-2013-SakaiDC #evaluation
The impact of intent selection on diversified search evaluation (TS, ZD, CLAC), pp. 921–924.
SIGIR-2013-SakaiDYLZKSI #mining #summary #topic
Summary of the NTCIR-10 INTENT-2 task: subtopic mining and search result diversification (TS, ZD, TY, YL, MZ, MPK, RS, MI), pp. 761–764.
CIKM-2012-YamamotoSIYWT #clustering #mining #query
The wisdom of advertisers: mining subgoals via query clustering (TY, TS, MI, CY, JRW, KT), pp. 505–514.
SIGIR-2012-IwataSYCLWN #named #visualisation #web
AspecTiles: tile-based visualization of diversified web search results (MI, TS, TY, YC, YL, JRW, SN), pp. 85–94.
SIGIR-2012-MaCSSLW #assessment #query
New assessment criteria for query suggestion (ZM, YC, RS, TS, JL, JRW), pp. 1109–1110.
SIGIR-2012-Sakai #evaluation #information retrieval #mobile #towards #what
Towards zero-click mobile IR evaluation: knowing what and knowing when (TS), pp. 1157–1158.
CIKM-2011-SakaiKS #information management
Click the search button and be happy: evaluating direct and immediate information access (TS, MPK, YIS), pp. 621–630.
SIGIR-2011-SakaiS #using
Evaluating diversified search results using per-intent graded relevance (TS, RS), pp. 1043–1052.
SIGIR-2009-SakaiN #analysis #query #wiki
Serendipitous search via wikipedia: a query log analysis (TS, KN), pp. 780–781.
CIKM-2008-Sakai #bias #metric #robust
Comparing metrics across TREC and NTCIR: the robustness to system bias (TS), pp. 581–590.
SIGIR-2008-Sakai #bias #metric #robust
Comparing metrics across TREC and NTCIR: the robustness to pool depth bias (TS), pp. 691–692.
SIGIR-2008-WebberMZS
Precision-at-ten considered redundant (WW, AM, JZ, TS), pp. 695–696.
SIGIR-2007-Sakai
Alternatives to Bpref (TS), pp. 71–78.
SIGIR-2006-Sakai #evaluation #metric
Evaluating evaluation metrics based on the bootstrap (TS), pp. 525–532.
SIGIR-2006-Sakai06a #documentation
Give me just one highly relevant document: P-measure (TS), pp. 695–696.
SIGIR-2004-SakaiSIKK #evaluation
The effect of back-formulating questions in question answering evaluation (TS, YS, YI, TK, MK), pp. 474–475.
SIGIR-2003-Sakai #evaluation #multi #performance #retrieval
Average gain ratio: a simple retrieval performance measure for evaluation with multiple relevance levels (TS), pp. 417–418.
SIGIR-2003-SakaiK #performance #question #retrieval #what
Evaluating retrieval performance for Japanese question answering: what are best passages? (TS, TK), pp. 429–430.
SIGIR-2002-SakaiR #case study #comparative #information retrieval
Relative and absolute term selection criteria: a comparative study for English and Japanese IR (TS, SER), pp. 411–412.
SIGIR-2001-SakaiJ #information retrieval #summary
Generic Summaries for Indexing in Information Retrieval (TS, KSJ), pp. 190–198.
SIGIR-2001-SakaiR #feedback #flexibility #optimisation #pseudo #using
Flexible Pseudo-Relevance Feedback Using Optimization Tables (TS, SER), pp. 396–397.
SIGIR-1998-JonesSKS #retrieval #using
Experiments in Japanese Text Retrieval and Routing Using the NEAT System (GJFJ, TS, MK, KS), pp. 197–205.
SIGIR-1998-KitaniOIKKTFMUSTTNA #information retrieval #lessons learnt
Lessons from BMIR-J2: A Test Collection for Japanese IR Systems (TK, YO, TI, HK, IK, JT, TF, KM, YU, TS, TT, HT, HN, TA), pp. 345–346.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.