Best Papers

Directing Exploratory Search: Reinforcement Learning from User Interactions with Keywords

Dorota Glowacka, Tuukka Ruotsalo, Ksenia Konuyshkova, Kumaripaba Athukorala, Samuel Kaski, Giulio Jacucci

Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than letting them direct the exploration of the information space. We present an interactive information retrieval system that combines reinforcement learning techniques with a novel user interface design to engage users actively in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and reinforcement learning is used to model the user, allowing the system to trade off between exploration and exploitation. This gives users the opportunity to direct their search more effectively: nearer to, farther from, or along a given direction in the information space. A task-based user study with 20 participants, comparing our system to a traditional query-based baseline, indicates that our system significantly improves the effectiveness of information retrieval, providing access to more relevant and novel information without requiring more time to acquire it.
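
The abstract leaves the learning machinery unspecified, but the exploration/exploitation trade-off it describes is commonly realized with an upper-confidence-bound linear bandit over keyword features. The sketch below is a minimal illustration under that assumption; the class and parameter names are ours, not the authors':

```python
import numpy as np

class KeywordBandit:
    """UCB-style linear bandit: estimates keyword weights from user
    feedback and ranks documents by predicted relevance plus an
    exploration bonus (illustrative sketch, not the authors' code)."""

    def __init__(self, n_keywords, alpha=1.0, reg=1.0):
        self.alpha = alpha                 # exploration strength
        self.A = reg * np.eye(n_keywords)  # regularized Gram matrix
        self.b = np.zeros(n_keywords)      # feedback-weighted feature sum

    def update(self, x, reward):
        """x: keyword feature vector of a shown document;
        reward: the user's relevance feedback on it."""
        self.A += np.outer(x, x)
        self.b += reward * x

    def rank(self, X):
        """Score candidate documents (rows of X): an exploitation term
        plus an uncertainty bonus that favors unexplored keywords."""
        A_inv = np.linalg.inv(self.A)
        w = A_inv @ self.b  # current estimate of keyword relevance
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', X, A_inv, X))
        return np.argsort(-(X @ w + self.alpha * bonus))

# Toy usage: 5 documents over 4 keywords; the user likes keyword 0.
X = np.array([[1,0,0,0],[1,1,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]], float)
bandit = KeywordBandit(n_keywords=4)
bandit.update(X[0], reward=1.0)  # positive feedback on a keyword-0 document
print(bandit.rank(X))            # exploitation pulls toward keyword 0 while
                                 # the bonus still surfaces unseen keywords
```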


Dynamic Text Management for See-through Wearable and Heads-up Display Systems

Jason Orlosky, Kiyoshi Kiyokawa, Haruo Takemura

Reading text safely and easily while mobile has been an issue with see-through displays for many years. For example, to use optical see-through head-mounted displays (HMDs) or heads-up display (HUD) systems effectively in constantly changing environments, variables such as lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with.

This paper introduces a new intelligent text management system that actively manages the movement of text in a user's field of view. Research to date lacks a method for migrating user-centric content, such as e-mail or text messages, throughout a user's environment while mobile. Unlike most current annotation and view management systems, we use camera tracking to find, in real time, dark, uniform regions along the route a user is travelling. We then move text from one viable location to the next to maximize readability. A pilot experiment with 19 participants shows that our system's text placement is preferred over fixed-location configurations.
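
The abstract does not detail the region search, but the core idea it describes can be sketched as scoring candidate rectangles in a grayscale camera frame by darkness and uniformity. A minimal illustration, with the window size, stride, and score weights chosen arbitrarily:

```python
import numpy as np

def best_text_region(gray, win=(60, 200), stride=20, w_bright=1.0, w_var=0.01):
    """Scan a grayscale frame (2-D uint8 array) and return the top-left
    corner of the window that is darkest and most uniform -- a plausible
    spot for overlaying light text. Sketch only; the authors' tracker
    and scoring function are not published in the abstract."""
    h, w = win
    best, best_score = None, np.inf
    for y in range(0, gray.shape[0] - h + 1, stride):
        for x in range(0, gray.shape[1] - w + 1, stride):
            patch = gray[y:y+h, x:x+w].astype(float)
            # Low mean = dark background; low variance = uniform texture.
            score = w_bright * patch.mean() + w_var * patch.var()
            if score < best_score:
                best, best_score = (y, x), score
    return best

# Toy frame: mostly bright with one dark, flat band near the bottom.
frame = np.full((480, 640), 200, dtype=np.uint8)
frame[380:460, 100:340] = 30
print(best_text_region(frame))  # lands inside the dark band
```

In a deployed system the chosen region would additionally be smoothed across frames so the text does not jump with every camera movement, in line with the paper's goal of moving text from one viable location to the next.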


Best Paper Nominees

Recommending Targeted Strangers from Whom to Solicit Information on Social Media

Jalal Mahmud, Michelle X. Zhou, Nimrod Megiddo, Jeffrey Nichols, Clemens Drews

We present an intelligent, crowd-powered information collection system that automatically identifies and asks targeted strangers on Twitter for desired information (e.g., the current wait time at a nightclub). Our work includes three parts. First, we identify a set of features that characterize a person's willingness and readiness to respond based on their exhibited social behavior, including the content of their tweets and their social interaction patterns. Second, we use the identified features to build a statistical model that predicts a person's likelihood of responding to information solicitations. Third, we develop a recommendation algorithm that selects a set of targeted strangers using the probabilities computed by our statistical model, with the goal of maximizing the overall response rate. Our experiments, including several in the real world, demonstrate the effectiveness of our work.
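
The model and features are described only at a high level; a minimal sketch of the general pipeline, assuming three hypothetical numeric behavioral features and a logistic-regression response model (the paper's actual feature set and model are richer):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral features per candidate: tweets per day,
# fraction of tweets that are replies, past responsiveness to questions.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
# Synthetic labels: more interactive users are likelier to respond.
y_train = (X_train @ [0.5, 2.0, 1.5] + rng.normal(0, 0.3, 200) > 1.9).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def recommend(model, X_candidates, k):
    """Select the k candidates with the highest predicted response
    probability; with independent solicitations, this greedy choice
    maximizes the expected number of responses."""
    p = model.predict_proba(X_candidates)[:, 1]
    top = np.argsort(-p)[:k]
    return top, p[top]

idx, probs = recommend(model, rng.random((50, 3)), k=5)
print(idx, probs.round(2))
```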


User-Adaptive Information Visualization: Using Eye Gaze Data to Infer Visualization Tasks and User Cognitive Abilities

Ben Steichen, Giuseppe Carenini, Cristina Conati

Information visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real time.
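
The abstract reports prediction results but not the pipeline; a common realization is to aggregate raw fixations into summary features and compare a trained classifier against a majority-class baseline. A minimal sketch on synthetic data, where the feature set and the trial generator are our illustrative assumptions:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(fixations):
    """Aggregate one trial's fixations [(x, y, duration_ms), ...] into
    summary features of the kind used in gaze-based prediction:
    fixation count, mean and std of duration, mean saccade length.
    (Illustrative subset; the paper's feature set is larger.)"""
    f = np.asarray(fixations, dtype=float)
    saccades = np.linalg.norm(np.diff(f[:, :2], axis=0), axis=1)
    return [len(f), f[:, 2].mean(), f[:, 2].std(), saccades.mean()]

rng = np.random.default_rng(1)

def synth_trial(task):
    """Hypothetical generator: assume task 1 induces longer fixations."""
    xy = rng.random((20, 2)) * [800, 600]           # screen coordinates
    dur = rng.normal(250 + 80 * task, 40, (20, 1))  # fixation durations
    return np.hstack([xy, dur])

y = rng.integers(0, 2, 100)                          # two visualization tasks
X = np.array([gaze_features(synth_trial(t)) for t in y])

clf = RandomForestClassifier(random_state=0)
baseline = DummyClassifier(strategy="most_frequent")
print("gaze classifier:  ", cross_val_score(clf, X, y, cv=5).mean())
print("majority baseline:", cross_val_score(baseline, X, y, cv=5).mean())
```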
