CHI 2013: Social movement trajectories are visualized as interactive path animations, allowing analysts to track group mobilization over time and geography. By mapping social media streams onto a dynamic spatial interface, the system highlights hotspots of activity and narrative shifts. Evaluation with activists shows improved understanding of movement evolution and identification of influential events.
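To make the hotspot idea concrete, here is a minimal sketch: geotagged posts are binned into a coarse spatio-temporal grid and the densest cells are reported. The post format, cell size, and field names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

# Illustrative sketch: bin geotagged posts into a coarse spatial grid per
# time window and report the densest cells as activity "hotspots".
posts = [
    {"lat": 30.04, "lon": 31.24, "hour": 14},
    {"lat": 30.05, "lon": 31.23, "hour": 14},
    {"lat": 41.01, "lon": 28.98, "hour": 15},
]

def hotspot_cells(posts, cell_deg=0.1, top_k=2):
    """Count posts per (time window, grid cell) and return the busiest."""
    counts = Counter(
        (p["hour"], round(p["lat"] / cell_deg), round(p["lon"] / cell_deg))
        for p in posts
    )
    return counts.most_common(top_k)

for (hour, gy, gx), n in hotspot_cells(posts):
    print(f"hour={hour} cell=({gy},{gx}) posts={n}")
```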
PervasiveHealth 2012: Paper-digital interface captures handwritten medical notes during emergencies and displays patient data on tablets instantly, improving coordination and reducing errors under pressure. The system integrates pen-based input with live digital visualization, ensuring critical information is always visible to care teams. Simulation evaluations demonstrate significant speed and accuracy gains compared to manual record-keeping.
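A minimal sketch of the pen-to-display path, assuming a simple publish-subscribe model: recognized note fields are broadcast to every subscribed tablet as they are written. Class and field names are invented for illustration, and handwriting recognition is stubbed out.

```python
class TabletDisplay:
    def __init__(self, name):
        self.name = name

    def show(self, record):
        print(f"[{self.name}] {record}")

class EmergencyNoteHub:
    def __init__(self):
        self.tablets = []

    def subscribe(self, tablet):
        self.tablets.append(tablet)

    def on_pen_stroke(self, field, recognized_text):
        # Broadcast each recognized field immediately so the whole
        # care team sees the same data without waiting for transcription.
        record = {field: recognized_text}
        for tablet in self.tablets:
            tablet.show(record)

hub = EmergencyNoteHub()
hub.subscribe(TabletDisplay("trauma-bay-1"))
hub.on_pen_stroke("blood_pressure", "120/80")
```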
CSCW 2012: Pen-based toolkit enables educators to create collaborative language-learning exercises by combining paper worksheets with digital pens, automatically syncing handwritten responses to shared tablets. Instructors define prompts on paper; student answers are captured digitally for group review, fostering interactive peer feedback. Usability tests show increased engagement and seamless transition between analog and digital environments.
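A minimal sketch of how handwritten answers could be routed to prompts, assuming each worksheet prompt is registered as a rectangular answer region on the printed page. Region bounds and coordinates are invented for illustration.

```python
# Worksheet prompts are registered as answer regions; incoming pen
# strokes are routed to the matching prompt and collected for review.
REGIONS = {
    "prompt_1": (0, 0, 200, 50),    # (x0, y0, x1, y1) on the printed page
    "prompt_2": (0, 60, 200, 110),
}

answers = {prompt: [] for prompt in REGIONS}

def route_stroke(x, y, ink):
    for prompt, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            answers[prompt].append(ink)
            return prompt
    return None

route_stroke(50, 20, "la maison")   # lands in prompt_1
route_stroke(50, 80, "le chat")     # lands in prompt_2
print(answers)
```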
CHI 2012: Digital pen-and-paper system records researchers’ handwritten field notes and automatically tags them with audio/video captured during observations, enabling seamless review of multimodal data. The system links ink strokes to contextual media, improving recall and reducing manual transcription efforts. Field studies show significant increases in data accuracy and efficiency compared to traditional note-taking methods.
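The ink-to-media link can be sketched as a timestamp lookup: each stroke's capture time selects the media chunk recorded around that moment. The timestamps and chunk boundaries below are illustrative assumptions, not the paper's data.

```python
import bisect

# Every stroke carries a capture timestamp, so tapping a note can replay
# the audio/video recorded around that moment (seconds from session start).
stroke_times = [12.4, 87.9, 143.2]           # when each note was written
media_markers = [0.0, 60.0, 120.0, 180.0]    # start of each media chunk

def media_chunk_for(stroke_t):
    """Return the media chunk whose window contains the stroke time."""
    i = bisect.bisect_right(media_markers, stroke_t) - 1
    return i, media_markers[i]

for t in stroke_times:
    chunk, start = media_chunk_for(t)
    print(f"stroke at {t:6.1f}s -> chunk {chunk} (starts {start:.0f}s)")
```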
ETRA 2012: Mobile eye-tracking system records pilot gaze during flight simulations to study cockpit workload and information-seeking behavior. Data analysis reveals patterns of visual attention linked to task complexity, informing design of adaptive cockpit interfaces. Field tests show the platform collects high-fidelity gaze data without interfering with pilot performance.
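A minimal sketch of the kind of attention analysis described, assuming fixations are already mapped to cockpit areas of interest (AOIs): dwell time is aggregated per AOI within a flight phase. The fixation records are invented for illustration.

```python
# Aggregate fixation time per cockpit AOI to compare visual attention
# across task phases.
fixations = [
    {"aoi": "attitude_indicator", "ms": 420, "phase": "approach"},
    {"aoi": "altimeter",          "ms": 310, "phase": "approach"},
    {"aoi": "attitude_indicator", "ms": 150, "phase": "cruise"},
]

def dwell_by_aoi(fixations, phase):
    totals = {}
    for f in fixations:
        if f["phase"] == phase:
            totals[f["aoi"]] = totals.get(f["aoi"], 0) + f["ms"]
    return totals

print(dwell_by_aoi(fixations, "approach"))
# {'attitude_indicator': 420, 'altimeter': 310}
```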
ITS 2012: Detailed study of reading activities uses video and gaze tracking to capture how users annotate, scroll, and reference documents on desktop displays. Analysis identifies patterns such as frequent context-switching and multitasking, guiding design of interactive workspaces that integrate pen, touch, and keyboard inputs. Recommendations include adaptive layouts and gesture shortcuts to reduce cognitive load during active reading sessions.
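One of the reported patterns, context-switching, can be measured with a simple pass over an interaction log, as in this sketch; the event stream is invented for illustration.

```python
# Count context switches, i.e. transitions between documents (or input
# modes) in an active-reading session log.
events = ["doc_A", "doc_A", "doc_B", "doc_A", "doc_B", "doc_B"]

def context_switches(stream):
    return sum(1 for prev, cur in zip(stream, stream[1:]) if prev != cur)

print(context_switches(events))  # 3 switches between documents
```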
ICMI 2012: Multimodal system analyzes speech, gesture, and facial cues from group interactions to predict individual expertise and leadership roles in collaborative learning. By training machine learning models on synchronized audio-visual data, the approach identifies patterns of influence and knowledge sharing. Results show the model predicts leadership emergence with over 80% accuracy, guiding interventions for effective team facilitation.
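A sketch of the general approach (not the paper's exact features or model): per-person speech, gesture, and gaze cues are fused into one vector, and a standard classifier is trained to flag emergent leaders. The feature values and labels below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: speaking time (s), gestures/min, fraction of gaze received
X = np.array([
    [310.0, 4.2, 0.61],
    [ 95.0, 1.1, 0.12],
    [220.0, 3.0, 0.45],
    [ 60.0, 0.8, 0.09],
])
y = np.array([1, 0, 1, 0])  # 1 = rated as leader by annotators

model = LogisticRegression().fit(X, y)
print(model.predict([[280.0, 3.8, 0.55]]))  # -> likely flagged as leader
```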
CHI 2012: The TAP & PLAY toolkit allows educators to design interactive language exercises on paper, where learners tap printed prompts with a digital pen to receive instant audio feedback and gamified quizzes on tablet devices. The system bridges paper and digital learning by converting pen strokes into digital events, fostering active participation and immediate correction. Deployment in classrooms shows increased student motivation and measurable language gains.
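A minimal sketch of the paper-to-digital event path, assuming each printed prompt is addressed by a grid cell: a pen tap resolves to a prompt and enqueues audio and quiz events for the tablet. The IDs, file names, and event queue are illustrative assumptions.

```python
# A pen tap at printed coordinates is resolved to a prompt, which
# triggers an audio cue and a quiz event on the tablet.
PROMPTS = {
    (1, 1): {"word": "gato", "audio": "gato.mp3"},
    (1, 2): {"word": "perro", "audio": "perro.mp3"},
}

def on_tap(row, col, tablet_queue):
    prompt = PROMPTS.get((row, col))
    if prompt is None:
        return
    tablet_queue.append(("play_audio", prompt["audio"]))
    tablet_queue.append(("show_quiz", prompt["word"]))

queue = []
on_tap(1, 1, queue)
print(queue)  # [('play_audio', 'gato.mp3'), ('show_quiz', 'gato')]
```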
IUI 2012: Overlays allow multiple users to annotate ultra-scale display walls simultaneously by projecting personalized transparent layers, enabling private notes and group discussion without obscuring base content. Each participant’s input appears on separate virtual layers, reducing interference and enhancing coordination. User feedback indicates overlays improve task parallelism and decrease communication overhead in collaborative teams.
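The layering model can be sketched as selective compositing: a viewer sees the base content, their own private layer, and any layer marked shared. The data model below is an illustrative assumption, not the system's actual representation.

```python
# Each participant annotates a transparent layer over the shared base
# content; rendering composites only the layers a viewer may see.
layers = {
    "base":  [("chart", (100, 100))],
    "alice": [("note: check Q3 spike", (120, 110))],
    "bob":   [("circle", (400, 250))],
}
visibility = {"alice": "private", "bob": "shared"}

def composite(viewer):
    items = list(layers["base"])
    for user, marks in layers.items():
        if user == "base":
            continue
        if user == viewer or visibility[user] == "shared":
            items.extend(marks)
    return items

print(composite("alice"))  # base + alice's private notes + bob's shared marks
```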
IUI 2011: PaperSketch bridges paper and digital whiteboards for remote teams by capturing sketches on paper and streaming them live to a shared canvas. Low-latency ink recognition ensures that collaborators see each other’s drawings in real time, enabling a fluid, co-located feel despite physical distance. User studies indicate increased creativity and coordination compared to voice-only collaboration.
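A minimal sketch of the streaming path, with the network transport stubbed out as a callback: pen samples are flushed to the shared canvas in small batches so remote strokes appear to grow in near real time. Batch size and sample format are illustrative assumptions.

```python
# Pen samples are batched into small packets and flushed to the shared
# canvas as they arrive; a real system would send packets over the network.
def stream_stroke(samples, send, batch_size=3):
    batch = []
    for point in samples:
        batch.append(point)
        if len(batch) == batch_size:
            send(batch)        # flush a partial stroke immediately
            batch = []
    if batch:
        send(batch)            # flush the tail of the stroke

pen_samples = [(10, 10), (12, 11), (15, 13), (19, 16), (24, 20)]
stream_stroke(pen_samples, send=lambda pkt: print("to canvas:", pkt))
```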