BibSLEIGH

Tag #multimodal

228 papers:

EDM-2019-AngertS #natural language
Augmenting Transcripts with Natural Language Processing and Multimodal Data (TA, BS).
CoG-2019-LiapisKMSY #learning
Fusing Level and Ruleset Features for Multimodal Learning of Gameplay Outcomes (AL, DK, KM, KS, GNY), pp. 1–8.
CoG-2019-RingerWN #game studies #recognition
Multimodal Joint Emotion and Game Context Recognition in League of Legends Livestreams (CR, JAW, MAN), pp. 1–8.
VS-Games-2019-GiariskanisPM #3d #architecture #design #music #named
ARCHIMUSIC3D: Multimodal Playful Transformations between Music and Refined Urban Architectural Design (FG, PP, KM), pp. 1–4.
CIKM-2019-JenkinsFWL #learning #representation
Unsupervised Representation Learning of Spatial Data via Multimodal Embedding (PJ, AF, SW, ZL), pp. 1993–2002.
CIKM-2019-ShahVLFLTJS #classification #image
Inferring Context from Pixels for Multimodal Image Classification (MS, KV, CTL, AF, ZL, AT, CJ, CS), pp. 189–198.
CIKM-2019-SrivastavaLF #adaptation #community #modelling #platform #visual notation
Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms (AS, HWL, SF), pp. 1421–1430.
ECIR-p2-2019-ShengLM #image #question #retrieval
Can Image Captioning Help Passage Retrieval in Multimodal Question Answering? (SS, KL, MFM), pp. 94–101.
ICML-2019-FongLH #parametricity #scalability
Scalable Nonparametric Sampling from Multimodal Posteriors with the Posterior Bootstrap (EF, SL, CCH), pp. 1952–1962.
KDD-2019-ChenJMFKSPSYMSS #metric
Developing Measures of Cognitive Impairment in the Real World from Consumer-Grade Multimodal Sensor Streams (RC, FJ, NM, LF, LK, AS, MP, JS, RY, VM, MS, HHS, HJJ, BT, AT), pp. 2145–2155.
ECIR-2018-ZangerleTWS #analysis #music #named #towards
ALF-200k: Towards Extensive Multimodal Analyses of Music Tracks and Playlists (EZ, MT, SW, GS), pp. 584–590.
ICPR-2018-LiCGJH #gesture #recognition #using
Multimodal Gesture Recognition Using Densely Connected Convolution and BLSTM (DL, YC, MkG, SJ, CH), pp. 3365–3370.
ICPR-2018-SoleymaniDKDN #abstraction #identification #network
Multi-Level Feature Abstraction from Convolutional Neural Networks for Multimodal Biometric Identification (SS, AD, HK, JMD, NMN), pp. 3469–3476.
ICPR-2018-Sun0L #detection #image
Multimodal Face Spoofing Detection via RGB-D Images (XS0, LH0, CL), pp. 2221–2226.
ICPR-2018-ZhangLWCZ #automation #framework #image #set
An Automated Point Set Registration Framework for Multimodal Retinal Image (HZ, XL, GW, YC, WZ), pp. 2857–2862.
KDD-2018-HuF #analysis #sentiment
Multimodal Sentiment Analysis To Explore the Structure of Emotions (AH, SRF), pp. 350–358.
KDD-2018-XuBDMS #monitoring #named
RAIM: Recurrent Attentive and Intensive Model of Multimodal Patient Monitoring Data (YX, SB, SRD, KOM, JS), pp. 2565–2573.
AIIDE-2017-MinMRTWBL #game studies #recognition
Multimodal Goal Recognition in Open-World Digital Games (WM, BWM, JPR, RGT, ENW, KEB, JCL), pp. 80–86.
CIKM-2017-XiangJ #learning #network
Common-Specific Multimodal Learning for Deep Belief Network (CX, XJ), pp. 2387–2390.
CIKM-2017-XuM #analysis #named #network #semantics #sentiment
MultiSentiNet: A Deep Semantic Network for Multimodal Sentiment Analysis (NX, WM), pp. 2399–2402.
ECIR-2017-GoelGRTS #exclamation #retrieval #user interface #video
“Hey, vitrivr!” - A Multimodal UI for Video Retrieval (PG, IG, LR, CT, HS), pp. 749–752.
KDD-2017-McNamaraVY #feature model #framework
Developing a Comprehensive Framework for Multimodal Feature Extraction (QM, AdlV, TY), pp. 1567–1574.
EDM-2016-ChenLXSMD #problem #process
Riding an emotional roller-coaster: A multimodal study of young child's math problem solving activities (LC, XL, ZX, ZS, LPM, AD), pp. 38–45.
EDM-2016-MinWPVBMFWL #interactive #predict #student
Predicting Dialogue Acts for Intelligent Virtual Agents with Multimodal Student Interaction Data (WM, JBW, LP, AKV, KEB, BWM, MF, ENW, JCL), pp. 454–459.
EDM-2016-SharmaBGPD16a #education #named #network #predict
LIVELINET: A Multimodal Deep Recurrent Neural Network to Predict Liveliness in Educational Videos (AS, AB, AG, SP, OD), pp. 215–222.
CIKM-2016-YanXGHW #retrieval #robust
Supervised Robust Discrete Multimodal Hashing for Cross-Media Retrieval (TKY, XSX, SG, ZH, XW), pp. 1271–1280.
ECIR-2016-BergRW #named #summary
Scenemash: Multimodal Route Summarization for City Exploration (JvdB, SR, MW), pp. 833–836.
ICPR-2016-GuanCSRR #image #stack
Image stack surface area minimization for groupwise and multimodal affine registration (BHG, JC, MS, SR, AR0), pp. 4196–4201.
ICPR-2016-GurpinarKS #estimation
Multimodal fusion of audio, scene, and face features for first impression estimation (FG, HK, AAS), pp. 43–48.
ICPR-2016-ZamzamiPGKAS #analysis #approach #automation
An approach for automated multimodal analysis of infants' pain (GZ, CYP, DBG, RK, TA, YS0), pp. 4148–4153.
KDD-2016-LynchAA #image #learning #rank #scalability #semantics #visual notation
Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank (CL, KA, JA), pp. 541–548.
IJCAR-2016-NalonHD #proving
KSP: A Resolution-Based Prover for Multimodal K (CN, UH, CD), pp. 406–415.
EDM-2015-JanningSS15a #how #recognition
How to Aggregate Multimodal Features for Perceived Task Difficulty Recognition in Intelligent Tutoring Systems (RJ, CS, LST), pp. 566–567.
CIG-2015-Martin-Niedecken #flexibility #game studies #middleware #physics #quote
“RehabConnex”: A middleware for the flexible connection of multimodal game applications with input devices used in movement therapy and physical exercising (ALMN, RB, RM, UG), pp. 496–502.
VS-Games-2015-BevilacquaBE #analysis #game studies
Proposal for Non-Contact Analysis of Multimodal Inputs to Measure Stress Level in Serious Games (FB, PB, HE), pp. 1–4.
CHI-2015-AkshitaSILB #feedback #interactive #towards #visual notation
Towards Multimodal Affective Feedback: Interaction between Visual and Haptic Modalities (A, HAS, BI, EL, YB), pp. 2043–2052.
CHI-2015-DerbovenMS #analysis #case study #design
Multimodal Analysis in Participatory Design with Children: A Primary School Case Study (JD, MVM, KS), pp. 2825–2828.
CHI-2015-PolitisBP
To Beep or Not to Beep?: Comparing Abstract versus Language-Based Multimodal Driver Displays (IP, SAB, FEP), pp. 3971–3980.
DUXU-DD-2015-CamposBFNC #experience #guidelines #heuristic #industrial
Combining Principles of Experience, Traditional Heuristics and Industry Guidelines to Evaluate Multimodal Digital Artifacts (FC, RB, WF, EVN, WC), pp. 130–137.
HCI-IT-2015-AlmeidaTRBFDSAC
Giving Voices to Multimodal Applications (NA, AJST, AFR, DB, JF, MSD, SSS, JA, CC, NS), pp. 273–283.
HCI-IT-2015-TokdemirACMB #case study #design #interactive #interface #navigation #representation #ubiquitous
Multimodal Interaction Flow Representation for Ubiquitous Environments — MIF: A Case Study in Surgical Navigation Interface Design (GT, GA, NEÇ, HHM, AOB), pp. 797–805.
HCI-UC-2015-DarzentasBC #feedback
Designed to Thrill: Exploring the Effects of Multimodal Feedback on Virtual World Immersion (DPD, MAB, NC), pp. 384–395.
PLATEAU-2015-CuencaBLC #case study #domain-specific language #execution #interactive #performance #programming #user study
A user study for comparing the programming efficiency of modifying executable multimodal interaction descriptions: a domain-specific language versus equivalent event-callback code (FC, JVdB0, KL, KC), pp. 31–38.
SAC-2015-GimenesGRG #analysis #graph #repository
Multimodal graph-based analysis over the DBLP repository: critical discoveries and hypotheses (GPG, HG, JFRJ, MG), pp. 1129–1135.
CASE-2015-FantiIU
A decision support system for multimodal logistic management (MPF, GI, WU), pp. 63–68.
CASE-2015-FohringZ #distributed #towards
Towards decentralized electronic market places and agent-based freight exchanges for multimodal transports (RF, SZ), pp. 249–254.
EDM-2014-GrafsgaardWBWL #data type #learning #predict #tutorial
Predicting Learning and Affect from Multimodal Data Streams in Task-Oriented Tutorial Dialogue (JFG, JBW, KEB, ENW, JCL), pp. 122–129.
EDM-2014-Schneider #collaboration #detection #learning #towards
Toward Collaboration Sensing: Multimodal Detection of the Chameleon Effect in Collaborative Learning Settings (BS), pp. 435–437.
EDM-2014-WorsleyB #learning #using
Using Multimodal Learning Analytics to Study Learning Mechanisms (MW, PB), pp. 431–432.
CHI-2014-GongSOHGKP #flexibility #interactive #named
PrintSense: a versatile sensing technique to support multimodal flexible surface interaction (NWG, JS, SO, SH, NEG, YK, JAP), pp. 1407–1410.
CHI-2014-PolitisBP
Evaluating multimodal driver displays under varying situational urgency (IP, SAB, FEP), pp. 4067–4076.
DUXU-DP-2014-DiasPdS #communication #design #interface #named #people
TAC-ACCESS — Technologies to Support Communication from Interfaces Accessible and Multimodal for People with Disabilities and Diversity: Context-Centered Design of Usage (CdOD, LMP, CdCL, EGS), pp. 141–151.
DUXU-TMT-2014-NetoC #case study #interface #usability
Evaluating the Usability on Multimodal Interfaces: A Case Study on Tablets Applications (EVN, FFCC), pp. 484–495.
HCI-AIMT-2014-NavarrettaL #behaviour #interactive
Multimodal Behaviours in Comparable Danish and Polish Human-Human Triadic Spontaneous Interactions (CN, ML), pp. 462–471.
HCI-TMT-2014-LatoschikF #reuse #scalability
Engineering Variance: Software Techniques for Scalable, Customizable, and Reusable Multimodal Processing (MEL, MF), pp. 308–319.
HIMI-DE-2014-AlghamdiT #image #mining #retrieval #semantics #towards #using
Towards Semantic Image Retrieval Using Multimodal Fusion with Association Rules Mining (RAA, MT), pp. 407–418.
LCT-NLE-2014-VasiliouIZ #case study #experience #learning #student
Measuring Students’ Flow Experience in a Multimodal Learning Environment: A Case Study (CV, AI, PZ), pp. 346–357.
ECIR-2014-KellyDKHGJLM
Khresmoi Professional: Multilingual, Multimodal Professional Medical Search (LK, SD, SK, AH, LG, GJFJ, GL, HM), pp. 754–758.
ICML-c2-2014-KirosSZ #modelling
Multimodal Neural Language Models (RK, RS, RSZ), pp. 595–603.
ICPR-2014-CadoniLG #case study #comparative #recognition
Iconic Methods for Multimodal Face Recognition: A Comparative Study (MC, AL, EG), pp. 4612–4617.
ICPR-2014-KeceliC #approach #using
A Multimodal Approach for Recognizing Human Actions Using Depth Information (ASK, ABC), pp. 421–426.
ICPR-2014-KumarK #adaptation #recognition #security #set #using
Adaptive Security for Human Surveillance Using Multimodal Open Set Biometric Recognition (AK, AK), pp. 405–410.
SIGIR-2014-WangariZA #case study #interface
Discovering real-world use cases for a multimodal math search interface (KDVW, RZ, AA), pp. 947–950.
SAC-2014-KawsarAL #detection #process #smarttech #using
Smartphone based multimodal activity detection system using plantar pressure sensors (FAK, SIA, RL), pp. 468–469.
SAC-2014-RolimBCCAPM #approach #recommendation
A recommendation approach for digital TV systems based on multimodal features (RR, FB, AC, GC, HOdA, AP, AFM), pp. 289–291.
CASE-2014-HabibRSP #named #simulation
SkinSim: A simulation environment for multimodal robot skin (AH, IR, KS, DOP), pp. 1226–1231.
ECSA-2013-HeseniusG #interactive #mvc #named
MVIC — An MVC Extension for Interactive, Multimodal Applications (MH, VG), pp. 324–327.
DocEng-2013-NguyenOC
Bag of subjects: lecture videos multimodal indexing (NVN, JMO, FC), pp. 225–226.
DRR-2013-YouSADT #documentation #image #retrieval
Annotating image ROIs with text descriptions for multimodal biomedical document retrieval (DY, MSS, SA, DDF, GRT).
JCDL-2013-BahraniK #documentation
Multimodal alignment of scholarly documents and their presentations (BB, MYK), pp. 281–284.
CHI-2013-LaputDWCALA #editing #image #interface #named
PixelTone: a multimodal interface for image editing (GL, MD, GW, WC, AA, JL, EA), pp. 2185–2194.
DUXU-NTE-2013-SonntagZSWT #artificial reality #cyber-physical #information management #towards
Towards Medical Cyber-Physical Systems: Multimodal Augmented Reality for Doctors and Knowledge Discovery about Patients (DS, SZ, CHS, MW, TT), pp. 401–410.
HCI-III-2013-CaonAYKM
Context-Aware Multimodal Sharing of Emotions (MC, LA, YY, OAK, EM), pp. 19–28.
HCI-IMT-2013-GuoCCJT
Intent Capturing through Multimodal Inputs (WG, CC, MC, YJ, HT), pp. 243–251.
HCI-IMT-2013-Jokinen #feedback #interactive
Multimodal Feedback in First Encounter Interactions (KJ), pp. 262–271.
HCI-IMT-2013-LeMPNT #interactive
Multimodal Smart Interactive Presentation System (HAL, KNCM, TAP, VTN, MTT), pp. 67–76.
HCI-IMT-2013-MedjkouneMPV #recognition #speech
Multimodal Mathematical Expressions Recognition: Case of Speech and Handwriting (SM, HM, SP, CVG), pp. 77–86.
HCI-IMT-2013-NovickG
Grounding and Turn-Taking in Multimodal Multiparty Conversation (DGN, IG), pp. 97–106.
HCI-IMT-2013-TungGKM #human-computer #interactive #using
Multi-party Human-Machine Interaction Using a Smart Multimodal Digital Signage (TT, RG, TK, TM), pp. 408–415.
HIMI-D-2013-GhoshJT #empirical #evaluation #interactive
Empirical Evaluation of Multimodal Input Interactions (SG, AJ, ST), pp. 37–47.
CIKM-2013-LiGLYS #framework
A multimodal framework for unsupervised feature fusion (XL, JG, HL, LY, RKS), pp. 897–902.
ECIR-2013-SantosCSM #image #ranking
Multimodal Re-ranking of Product Image Search Results (JMdS, JMBC, PCS, ESdM), pp. 62–73.
KEOD-2013-CholewaACR #integration #modelling #network
Multimodal Statement Networks for Diagnostic Knowledge Modeling and Integration (WC, MA, PC, TR), pp. 140–147.
SIGIR-2013-WangHWZ0M #learning #search-based
Learning to name faces: a multimodal learning scheme for search-based face annotation (DW, SCHH, PW, JZ, YH, CM), pp. 443–452.
SAC-2013-LoTNCLC #development #framework #platform
i*Chameleon: a platform for developing multimodal application with comprehensive development cycle (KWKL, WWWT, GN, ATSC, HVL, SCFC), pp. 1103–1108.
DATE-2013-HuNRK #detection #hardware #using
High-sensitivity hardware trojan detection using multimodal characterization (KH, ANN, SR, FK), pp. 1271–1276.
ICPR-2012-MaLWZH #authentication #security
Enhancing biometric security with wavelet quantization watermarking based two-stage multimodal authentication (BM, CL, YW, ZZ, DH), pp. 2416–2419.
ICPR-2012-MitraKGSMLOVM #clustering #performance
Spectral clustering to model deformations for fast multimodal prostate registration (JM, ZK, SG, DS, RM, XL, AO, JCV, FM), pp. 2622–2625.
ICPR-2012-SeredinMTRW #pattern matching #pattern recognition #recognition
Convex support and Relevance Vector Machines for selective multimodal pattern recognition (OS, VM, AT, NR, DW), pp. 1647–1650.
ICPR-2012-YanoZL #authentication
Multimodal biometric authentication based on iris pattern and pupil light reflex (VY, AZ, LLL), pp. 2857–2860.
ICPR-2012-YilmazYK #network #process
Non-linear weighted averaging for multimodal information fusion by employing Analytical Network Process (TY, AY, MK), pp. 234–237.
KDD-2012-ZhenY #learning #probability
A probabilistic model for multimodal hash function learning (YZ, DYY), pp. 940–948.
REFSQ-2012-BruniFST #analysis #automation #perspective #requirements #research
Automatic Analysis of Multimodal Requirements: A Research Preview (EB, AF, NS, GT), pp. 218–224.
WICSA-2011-SanchezEAB #architecture #framework #named #recognition
ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework (JGS, MECE, RKA, WB), pp. 187–193.
DocEng-2011-Wieschebrink #collaboration #editing
Collaborative editing of multimodal annotation data (SW), pp. 69–72.
DRR-2011-ChengAST #automation #documentation #image #retrieval #segmentation
Automatic segmentation of subfigure image panels for multimodal biomedical document retrieval (BC, SA, RJS, GRT), pp. 1–10.
CIG-2011-SchrumM #evolution #game studies #network
Evolving multimodal networks for multitask games (JS, RM), pp. 102–109.
CHI-2011-JunuzovicIHZTB #overview #using #what
What did I miss?: in-meeting review using multimodal accelerated instant replay (AIR) conferencing (SJ, KI, RH, ZZ, JCT, CB), pp. 513–522.
CHI-2011-KimK #design #feedback #performance
Designing of multimodal feedback for enhanced multitasking performance (GK, HCK), pp. 3113–3122.
CHI-2011-McGee-LennonWB
User-centred multimodal reminders for assistive living (MRML, MKW, SAB), pp. 2105–2114.
DHM-2011-ElepfandtS #artificial reality #interactive
Multimodal, Touchless Interaction in Spatial Augmented Reality Environments (ME, MS), pp. 263–271.
DHM-2011-SchafferSM #human-computer #interactive
A Model of Shortcut Usage in Multimodal Human-Computer Interaction (SS, RS, SM), pp. 337–346.
DHM-2011-SchmuntzschR #user interface
Multimodal User Interfaces in IPS2 (US, MR), pp. 347–356.
DUXU-v1-2011-TambasciaDM #authentication #mobile
Methodology for Evaluating Multimodal Biometric Authentication on Mobile Devices (CdAT, RED, EMDM), pp. 668–677.
HCD-2011-HajekPJB #behaviour
Influence of a Multimodal Assistance Supporting Anticipatory Driving on the Driving Behavior and Driver’s Acceptance (HH, DP, MJ, KB), pp. 217–226.
HCI-DDA-2011-FeuerstackP #execution #interface #modelling
Building Multimodal Interfaces Out of Executable, Model-Based Interactors and Mappings (SF, EBP), pp. 221–228.
HCI-ITE-2011-CarrinoTMKI #approach #interface
Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds (FC, JT, EM, OAK, RI), pp. 222–230.
HCI-MIIE-2011-ParkCKK #interface #optimisation
Multimodal Interface for Driving-Workload Optimization (HP, JC, HJK, KhK), pp. 452–461.
HCI-MIIE-2011-WalterSSGHSBLTS #behaviour #classification
Multimodal Emotion Classification in Naturalistic User Behavior (SW, SS, MS, MG, DH, MS, RB, KL, HCT, FS), pp. 603–611.
HIMI-v1-2011-LifOLHS
Multimodal Threat Cueing in Simulated Combat Vehicle with Tactile Information Switching between Threat and Waypoint Indication (PL, PAO, BL, JH, JS), pp. 454–461.
HIMI-v2-2011-Otsuka #analysis #behaviour #comprehension #people
Multimodal Conversation Scene Analysis for Understanding People’s Communicative Behaviors in Face-to-Face Meetings (KO), pp. 171–179.
ICEIS-v3-2011-Zang #development #research
Research on International Multimodal Transport Development Strategy in China (XZ), pp. 333–336.
CIKM-2011-GamperBCI #network
Defining isochrones in multimodal spatial networks (JG, MHB, WC, MI), pp. 2381–2384.
ECIR-2011-ArampatzisZC #database #image #retrieval #scalability
Dynamic Two-Stage Image Retrieval from Large Multimodal Databases (AA, KZ, SAC), pp. 326–337.
ECIR-2011-ArampatzisZC11a #retrieval
Fusion vs. Two-Stage for Multimodal Retrieval (AA, KZ, SAC), pp. 759–762.
ICML-2011-NgiamKKNLN #learning
Multimodal Deep Learning (JN, AK, MK, JN, HL, AYN), pp. 689–696.
KEOD-2011-SeinturierMB #data transformation #knowledge-based #query #representation
Knowledge-based Multimodal Data Representation and Querying (JS, EM, EB), pp. 152–158.
SIGIR-2011-ChatzichristofisZA #image #retrieval
Bag-of-visual-words vs global image descriptors on two-stage multimodal retrieval (SAC, KZ, AA), pp. 1251–1252.
SIGIR-2011-LiWLZS #image #optimisation #ranking #web
Optimizing multimodal reranking for web image search (HL, MW, ZL, ZJZ, JS), pp. 1119–1120.
SAC-2011-PedrosaMMT #component #interactive
A multimodal interaction component for digital television (DP, JACMJ, ELM, CACT), pp. 1253–1258.
CASE-2011-GhirardiPS #case study #platform #throughput
Maximizing the throughput of multimodal logistic platforms by simulation-optimization: The Duferco case study (MG, GP, DS), pp. 52–57.
DRR-2010-YouADRGT #image #retrieval #using
Biomedical article retrieval using multimodal features and image annotations in region-based CBIR (DY, SA, DDF, MMR, VG, GRT), pp. 1–10.
ECDL-2010-CamargoCG #image #matrix #using #visualisation
Multimodal Image Collection Visualization Using Non-negative Matrix Factorization (JEC, JCC, FAG), pp. 429–432.
CHI-2010-HogganB #interface #named #testing #using
Crosstrainer: testing the use of multimodal interfaces in situ (EEH, SAB), pp. 333–342.
CHI-2010-HornofZH
Knowing where and when to look in a time-critical multimodal dual task (AJH, YZ, TH), pp. 2103–2112.
CHI-2010-OulasvirtaB #flexibility
A simple index for multimodal flexibility (AO, JBL), pp. 1475–1484.
ICPR-2010-ChiaSN #linear #towards
Towards a Best Linear Combination for Multimodal Biometric Fusion (CC, NS, LN), pp. 1176–1179.
ICPR-2010-GiannakopoulosPT #approach #detection #video
A Multimodal Approach to Violence Detection in Video Sharing Sites (TG, AP, ST), pp. 3244–3247.
ICPR-2010-GiotHR #2d #low cost #recognition
Low Cost and Usable Multimodal Biometric System Based on Keystroke Dynamics and 2D Face Recognition (RG, BH, CR), pp. 1128–1131.
ICPR-2010-HuangWFBHL #classification
Multimodal Sleeping Posture Classification (WH, AAPW, FSF, JB, CCH, KL), pp. 4336–4339.
ICPR-2010-KarpovRKRA #interactive
Multimodal Human Computer Interaction with MIDAS Intelligent Infokiosk (AK, AR, ISK, ALR, LA), pp. 3862–3865.
ICPR-2010-MaLWZW #adaptation #authentication
Block Pyramid Based Adaptive Quantization Watermarking for Multimodal Biometric Authentication (BM, CL, YW, ZZ, YW), pp. 1277–1280.
ICPR-2010-PutzeJS #recognition
Multimodal Recognition of Cognitive Workload for Multitasking in the Car (FP, JPJ, TS), pp. 3748–3751.
KMIS-2010-SonntagR #process #semantics #towards
Towards a Process of Building Semantic Multimodal Dialogue Demonstrators (DS, NR), pp. 322–331.
SEKE-2010-KongZLR #adaptation #design #interface #pervasive
A Cross-Layer Design for Adaptive Multimodal Interfaces in Pervasive Computing (JK, WZ, JL, AGR), pp. 726–731.
SEKE-2010-NetoFRR #design #evaluation #interface #named #reuse #usability
MMWA-ae: boosting knowledge from Multimodal Interface Design, Reuse and Usability Evaluation (ATN, RPdMF, RGR, SOR), pp. 355–360.
SAC-2010-MontagnuoloMF #framework #named
HMNews: a multimodal news data association framework (MM, AM, MF), pp. 1823–1824.
ECDL-2009-DammKFC #concept #library #music #query #using
A Concept for Using Combined Multimodal Queries in Digital Music Libraries (DD, FK, CF, MC), pp. 261–272.
ECDL-2009-RomeroLATV #image #interactive
A Web-Based Demo to Interactive Multimodal Transcription of Historic Text Images (VR, LAL, VA, AHT, EV), pp. 459–460.
ICDAR-2009-RegmiW #collaboration #documentation #interface
A Collaborative Interface for Multimodal Ink and Audio Documents (AR, SMW), pp. 901–905.
ICDAR-2009-WangSB #documentation #information management
Information Extraction from Multimodal ECG Documents (FW, TFSM, DB), pp. 381–385.
DHM-2009-ClavelM #approach #modelling #named #permutation
PERMUTATION: A Corpus-Based Approach for Modeling Personality and Multimodal Expression of Affects in Virtual Characters (CC, JCM), pp. 211–220.
HCD-2009-NakanoR #analysis #corpus #usability
Multimodal Corpus Analysis as a Method for Ensuring Cultural Usability of Embodied Conversational Agents (YIN, MR), pp. 521–530.
HCI-NIMT-2009-BannatGRRRW #industrial
A Multimodal Human-Robot-Interaction Scenario: Working Together with an Industrial Robot (AB, JG, TR, WR, GR, FW), pp. 303–311.
HCI-NIMT-2009-BeinhauerH #evaluation #mobile #using
Using Acoustic Landscapes for the Evaluation of Multimodal Mobile Applications (WB, CH), pp. 3–11.
HCI-NIMT-2009-ChoumaneS #interactive #modelling #using
Modeling and Using Salience in Multimodal Interaction Systems (AC, JS), pp. 12–18.
HCI-NIMT-2009-DuarteSC #collaboration #interactive
Exploring Multimodal Interaction in Collaborative Settings (LD, MdS, LC), pp. 19–28.
HCI-NIMT-2009-Jain #human-computer #using
Value of Using Multimodal Data in HCI Methodologies (JJ), pp. 48–57.
HCI-NIMT-2009-JainGD
Multimodal Shopping Lists (JJ, RG, MD), pp. 39–47.
HCI-NIMT-2009-MetzeWSSM #evaluation #reliability
Reliable Evaluation of Multimodal Dialogue Systems (FM, IW, SS, JS, SM), pp. 75–83.
HCI-NIMT-2009-Olmedo-RodriguezMC #3d #evaluation #framework #integration #interactive
Evaluation Proposal of a Framework for the Integration of Multimodal Interaction in 3D Worlds (HOR, DEM, VCP), pp. 84–92.
HCI-NIMT-2009-QueirozFBF #approach #evaluation #towards #user interface
Towards a Multidimensional Approach for the Evaluation of Multimodal Application User Interfaces (JERdQ, JMF, AEVB, DdSF), pp. 29–38.
HCI-NIMT-2009-SunSCC
Building a Practical Multimodal System with a Multimodal Fusion Module (YS, Y(S, FC, VC), pp. 93–102.
HCI-NIMT-2009-VerdurandCPG #evaluation #interactive #modelling #performance
Modeling Multimodal Interaction for Performance Evaluation (EV, GC, FP, OG), pp. 103–112.
HCI-NIMT-2009-WechsungESSMM #evaluation #interface #question #usability
Usability Evaluation of Multimodal Interfaces: Is the Whole the Sum of Its Parts? (IW, KPE, SS, JS, FM, SM), pp. 113–119.
HIMI-DIE-2009-LaquaiAPR #3d #interactive #user interface #using
Using 3D Touch Interaction for a Multimodal Zoomable User Interface (FL, MA, TP, GR), pp. 543–552.
OCSC-2009-BreitfussPI #automation #behaviour #generative
Automatic Generation of Non-verbal Behavior for Agents in Virtual Worlds: A System for Supporting Multimodal Conversations of Bots and Avatars (WB, HP, MI), pp. 153–161.
SAC-2009-NetoBFF #case study #interface #usability #web
Developing and evaluating web multimodal interfaces — a case study with usability principles (ATN, TJB, RPdMF, KF), pp. 116–120.
ECDL-2008-KurthDFMC #framework #music
A Framework for Managing Multimodal Digitized Music Collections (FK, DD, CF, MM, MC), pp. 334–345.
SIGITE-2008-SchmalzC #concept #education
IT/CS workshop: multimodal, multimedia courseware for teaching technical concepts in humanistic context (MSS, LC), pp. 23–30.
CHI-2008-PlimmerCBB #collaboration #people
Multimodal collaborative handwriting training for visually-impaired people (BP, AC, SAB, RB), pp. 393–402.
ICEIS-HCI-2008-ReisSC #design #mobile
Designing Mobile Multimodal Artefacts (TR, MdS, LC), pp. 78–85.
ICEIS-J-2008-ReisSC08a #design #mobile
Designing Universally Accessible Mobile Multimodal Artefacts (TR, MdS, LC), pp. 334–347.
ICPR-2008-PohK #authentication #bound #fault #on the #using
On using error bounds to optimize cost-sensitive multimodal biometric authentication (NP, JK), pp. 1–4.
ICPR-2008-YanZ #correlation #using
Multimodal biometrics fusion using Correlation Filter Bank (YY, YJZ), pp. 1–4.
ECSA-2007-PereiraHK #architecture #distributed #staged
A Distributed Staged Architecture for Multimodal Applications (ACP, FH, KK), pp. 195–206.
ECDL-2007-MullerKDFC #music #navigation #retrieval
Lyrics-Based Audio Retrieval and Multimodal Navigation in Music Collections (MM, FK, DD, CF, MC), pp. 112–123.
CHI-2007-KaiserBEC #interactive #speech
Multimodal redundancy across handwriting and speech during computer mediated human-human interactions (ECK, PB, CE, PRC), pp. 1009–1018.
CHI-2007-TseSGF #how
How pairs interact over a multimodal digital table (ET, CS, SG, CF), pp. 215–218.
CHI-2007-WilliamsonMH #interactive #mobile #named
Shoogle: excitatory multimodal interaction on mobile devices (JW, RMS, SH), pp. 121–124.
DHM-2007-Soltysinski #human-computer #interactive #novel
Novel Methods for Human-Computer Interaction in Multimodal and Multidimensional Noninvasive Medical Imaging (TS), pp. 717–726.
DHM-2007-WashburnSG #maintenance #using
Using Multimodal Technologies to Enhance Aviation Maintenance Inspection Training (CW, PS, AKG), pp. 1018–1026.
HCI-IDU-2007-TaibR #deployment #design #interface
Wizard of Oz for Multimodal Interfaces Design: Deployment Considerations (RT, NR), pp. 232–241.
HCI-MIE-2007-ChenCW #correlation #interactive
Exploiting Speech-Gesture Correlation in Multimodal Interaction (FC, EHCC, NW), pp. 23–30.
HCI-MIE-2007-FreardJBPB #interactive #metric
Subjective Measurement of Workload Related to a Multimodal Interaction Task: NASA-TLX vs. Workload Profile (DF, EJ, OLB, GP, VB), pp. 60–69.
HCI-MIE-2007-KimCPH #feedback #user interface
A Tangible User Interface with Multimodal Feedback (LK, HC, SHP, MH), pp. 94–103.
HCI-MIE-2007-LepreuxHRTTK #composition #towards #user interface
Towards Multimodal User Interfaces Composition Based on UsiXML and MBD Principles (SL, AH, JR, DT, JCT, CK), pp. 134–143.
HCI-MIE-2007-RigasA #design #empirical #interface #tool support
A Toolkit for Multimodal Interface Design: An Empirical Investigation (DIR, MMA), pp. 196–205.
HCI-MIE-2007-VilimekHO #interface
Multimodal Interfaces for In-Vehicle Applications (RV, TH, BO), pp. 216–224.
HCI-MIE-2007-WangYCI #interactive #interface #realtime #using
Character Agents in E-Learning Interface Using Multimodal Real-Time Interaction (HW, JY, MHC, MI), pp. 225–231.
HCI-MIE-2007-YecanSBC #behaviour
Tracing Users’ Behaviors in a Multimodal Instructional Material: An Eye-Tracking Study (EY, ES, BB, ), pp. 755–762.
HIMI-IIE-2007-PostECK #comparison
Experimental Comparison of Multimodal Meeting Browsers (WP, EE, AHMC, WK), pp. 118–127.
HIMI-IIE-2007-Sonntag #design #implementation #interactive #interface #mobile #semantics #web
Interaction Design and Implementation for Multimodal Mobile Semantic Web Interfaces (DS), pp. 645–654.
KDD-2007-GuoZXF #data mining #database #learning #mining
Enhanced max margin learning on multimodal data mining in a multimedia database (ZG, ZZ, EPX, CF), pp. 340–349.
SEKE-2007-FerriGP #approach #human-computer #interactive
An Approach to Multimodal Input Interpretation in Human-Computer Interaction (FF, PG, SP), pp. 664–669.
VLDB-2006-JoshiDZWFLW #architecture #image #named #query #web
PARAgrab: A Comprehensive Architecture for Web Image Management and Multimodal Querying (DJ, RD, ZZ, WPW, MF, JL, JZW), pp. 1163–1166.
CHI-2006-KuriharaGOI #predict #recognition #speech
Speech pen: predictive handwriting based on ambient multimodal recognition (KK, MG, JO, TI), pp. 851–860.
CSCW-2006-VoidaM #analysis #challenge
Challenges in the analysis of multimodal messaging (AV, EDM), pp. 427–430.
ICEIS-HCI-2006-DoyleWBW #interface #mobile #personalisation
A Multimodal Interface for Personalising Spatial Data in Mobile GIS (JD, JW, MB, DCW), pp. 71–78.
ICPR-v3-2006-LiPKZ #using
Multimodal Registration using the Discrete Wavelet Frame Transform (SL, JP, JTK, JZ), pp. 877–880.
ICPR-v3-2006-SuSDW #recognition
A Multimodal and Multistage Face Recognition Method for Simulated Portrait (GS, YS, CD, JW), pp. 1013–1017.
ICPR-v3-2006-WangB06b #performance #predict
Performance Prediction for Multimodal Biometrics (RW, BB), pp. 586–589.
ICPR-v4-2006-HaindlZ #image #segmentation
Multimodal Range Image Segmentation by Curve Grouping (MH, PZ), pp. 9–12.
ICDAR-2005-MekhaldiLI #documentation
From Searching to Browsing through Multimodal Documents Linking (DM, DL, RI), pp. 924–929.
CHI-2005-OviattLC #difference #integration #question #what #why
Individual differences in multimodal integration patterns: what are they and why do they exist? (SLO, RL, RC), pp. 241–249.
ICML-2005-TorreK #analysis
Multimodal oriented discriminant analysis (FDlT, TK), pp. 177–184.
JCDL-2004-PeruginiMRPSRWF #interactive #interface #usability #visualisation
Enhancing usability in CITIDEL: multimodal, multilingual, and interactive visualization interfaces (SP, KM, RR, MAPQ, RS, NR, CW, EAF), pp. 315–324.
CHI-2004-BeamishMF #interactive #music
Manipulating music: multimodal interaction for DJs (TB, KEM, SF), pp. 327–334.
CHI-2004-JackoBKMEES #feedback #visual notation
Isolating the effects of visual impairment: exploring the effect of AMD on the utility of multimodal feedback (JAJ, LB, TK, KPM, PJE, VKE, FS), pp. 311–318.
ICPR-v3-2004-LanMZ #detection #using
Multi-level Anchorperson Detection Using Multimodal Association (DJL, YFM, HZ), pp. 890–893.
ICPR-v3-2004-WanX #automation #generative #performance
Efficient Multimodal Features for Automatic Soccer Highlight Generation (KW, CX), pp. 973–976.
CHI-2003-BrewsterLBHT #interactive #smarttech
Multimodal “eyes-free” interaction techniques for wearable devices (SAB, JL, MB, MH, ST), pp. 473–480.
CHI-2003-JackoSSBEEKMZ #feedback #performance #question #visual notation #what
Older adults and visual impairment: what do exposure times and accuracy tell us about performance gains associated with multimodal feedback? (JAJ, IUS, FS, LB, PJE, VKE, TK, KPM, BSZ), pp. 33–40.
KDD-2003-WuGLYC
The anatomy of a multimodal information filter (YLW, KG, BL, HY, EYC), pp. 462–471.
SIGIR-2003-LinNNNSTNA #using #video
User-trainable video annotation using multimodal cues (CYL, MRN, AN, CN, JRS, BLT, HJN, WHA), pp. 403–404.
JCDL-2002-LyuYS #library #video
A multilingual, multimodal digital video library system (MRL, EY, SKSS), pp. 145–153.
CHI-2002-McGeeCWH #tool support
Comparing paper and tangible, multimodal tools (DM, PRC, RMW, SH), pp. 407–414.
ICPR-v3-2002-HongH #mining
Multimodal Temporal Pattern Mining (PH, TSH), pp. 465–472.
ICPR-v3-2000-KoubaroulisMK #performance
The Multimodal Signature Method: An Efficiency and Sensitivity Study (DK, JM, JK), pp. 3379–3382.
DL-1999-DingMS #video
Multimodal Surrogates for Video Browsing (WD, GM, DS), pp. 85–93.
CHI-1999-SuhmMW #empirical #evaluation #fault #interactive #modelling
Model-Based and Empirical Evaluation of Multimodal Interactive Error Correction (BS, BAM, AW), pp. 584–591.
HCI-CCAD-1999-Machate #concept #interactive #smarttech #using
Being natural — on the use of multimodal interaction concepts in smart homes (JM), pp. 937–941.
HCI-EI-1999-CarbonellD #empirical #gesture #human-computer #speech #using
Empirical data on the use of speech and gestures in a multimodal human-computer environment (NC, PD), pp. 446–450.
HCI-EI-1999-HienzMSA #communication #human-computer
Multimodal Human-Computer Communication in Technical Applications (HH, JM, RS, SA), pp. 755–759.
HCI-EI-1999-SteffanKB #3d #design #feedback #interactive
Design of Multimodal Feedback Mechanisms for Interactive 3D Object Manipulation (RS, TK, FB), pp. 461–465.
CHI-1998-YangSMW #interactive #visual notation
Visual Tracking for Multimodal Human Computer Interaction (JY, RS, UM, AW), pp. 140–147.
CHI-1997-OviattDK #human-computer #integration #interactive
Integration and Synchronization of Input Modes during Multimodal Human-Computer Interaction (SLO, ADA, KK), pp. 415–422.
HCI-CC-1997-GlinertK #interface #ubiquitous
MultiModal Multi-Interface Environments for Accessible Ubiquitous Computing (EPG, RLK), pp. 445–448.
HCI-SEC-1997-KeysonS #design #framework #interface #platform
TacTool v2.0: An Object-Based Multimodal Interface Design Platform (DVK, LvS), pp. 311–314.
HCI-SEC-1997-Marshall #human-computer #interactive #modelling
Modeling Multimodal Human-Computer Interaction: Semiotics, Proxemics and Kinesics (RM), pp. 671–674.
HCI-SEC-1997-RuyterV #interactive #modelling
Modeling and Evaluating Multimodal Interaction Styles (BERdR, JHMdV), pp. 711–714.
HCI-SEC-1997-TakahashiTK
Multimodal Display for Enhanced Situation Awareness Based on Cognitive Diversity (MT, ST, MK), pp. 707–710.
CHI-1996-Oviatt #interactive #interface
Multimodal Interfaces for Dynamic Interactive Maps (SLO), pp. 95–102.
CHI-1995-NigayC #challenge #framework #platform
A Generic Platform for Addressing the Multimodal Challenge (LN, JC), pp. 98–105.
ICML-1995-Hekanaho #concept #learning
Symbiosis in Multimodal Concept Learning (JH), pp. 278–285.
HCI-SHI-1993-FahnrichH #aspect-oriented #human-computer #interactive
Aspects of Multimodal and Multimedia Human-Computer Interaction (KPF, KHH), pp. 440–445.
INTERCHI-1993-NigayC #concurrent #data fusion #design
A design space for multimodal systems: concurrent processing and data fusion (LN, JC), pp. 172–178.
ILPS-1993-BaldoniGM #logic programming
A Multimodal Logic to Define Modules in Logic Programming (MB, LG, AM), pp. 473–487.

Bibliography of Software Language Engineering in Generated Hypertext (BibSLEIGH) is created and maintained by Dr. Vadim Zaytsev.
Hosted as a part of SLEBOK on GitHub.