Accepted Papers

Multi-Tap Sliders: Advancing Touch Interaction for Parameter Adjustment

Sashikanth Damaraju, Jinsil Hwaryoung Seo, Tracy Hammond, Andruid Kerne

Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resulting change in the virtual object. However, as we attempt to design for more complex operations, the expectation of direct spatial manipulation becomes infeasible.

We introduce Multi-tap Sliders for operation in what we call abstract parametric spaces that do not have an obvious literal spatial representation, such as exposure, brightness, contrast and saturation for image editing. This new widget design promotes multi-touch interaction for prolonged use in scenarios that require adjustment of multiple parameters as part of an operation. The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface.

Our research emphasizes ergonomics, clear visual design, and fluid transitions between the selection of parameters and their subsequent adjustment for a given operation. We demonstrate a new technique for quickly selecting and adjusting multiple numerical parameters. A preliminary user study shows improvements over traditional sliders.

Haptic Interface for Non-Visual Steering

Burkay Sucu, Eelke Folmer

Glare significantly diminishes visual perception and is a significant cause of traffic accidents. Existing haptic automotive interfaces typically indicate when and in which direction to steer, but they don't convey how much to steer, as a driver typically determines this using visual feedback. We present a novel haptic interface that relies on an intelligent vehicle-positioning system to indicate when, in which direction and how far to steer, so as to facilitate steering without any visual feedback. Our interface may improve driving safety when a driver is temporarily blinded, for example due to glare or fog. We performed three user studies: the first examines driving using visual feedback, the second evaluates two different haptic encoding mechanisms with no visual feedback present, and the third evaluates the supplemental effect of haptic feedback when used in conjunction with visual feedback. Our studies show that this interface allows for blind steering in small curves and that it can improve a driver's lane-keeping ability when combined with visual feedback.

Functionality-Based Clustering using Short Textual Description: Helping Users to Find Apps Installed on their Mobile Device

David Lavid Ben Lulu, Tsvi Kuflik

In recent years, we have witnessed the incredible popularity and widespread adoption of mobile devices. Millions of Apps are being developed and downloaded by users at an amazing rate. These are multi-feature Apps that address a broad range of needs and functions. Nowadays, every user has dozens of Apps on his mobile device, and as time goes on it becomes increasingly difficult simply to find the desired App among those installed. Despite several attempts to address it, no good solution to this growing problem has yet been found. In this paper we suggest the use of unsupervised machine learning for clustering Apps based on their functionality, to allow users to access them easily. The functionality is elicited from App descriptions as retrieved from various App stores and enriched by content from professional blogs. The Apps are clustered according to their functionality and presented hierarchically to the user in order to facilitate search on the small screen of the mobile device.
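
To make the approach concrete, here is a minimal sketch of functionality-based clustering over short app descriptions, assuming scikit-learn; the app names and descriptions are invented, and the paper's own pipeline additionally enriches descriptions with blog content and builds a hierarchy.

```python
# Hedged sketch: cluster apps by the TF-IDF similarity of their short
# descriptions (hypothetical data, not the authors' corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

descriptions = {
    "RunTrack": "track your runs, distance, pace and calories burned",
    "FitLog":   "log workouts and monitor calories and fitness goals",
    "PhotoFix": "edit photos, adjust brightness, contrast and filters",
    "SnapArt":  "apply artistic filters and effects to your pictures",
}

X = TfidfVectorizer(stop_words="english").fit_transform(descriptions.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for app, label in zip(descriptions, labels):
    print(label, app)  # apps with similar functionality land in the same cluster
```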

SmartDCap: Semi-Automatic Capture of Higher Quality Document Images from a Smartphone

Francine Chen, Scott Carter, Laurent Denoue, Jayant Kumar

People frequently capture photos with their smartphones, and some are starting to capture images of documents. However, the quality of captured document images is often lower than expected, even when an application that performs post-processing to improve the image is used. To improve the quality of captured images before post-processing, we developed the Smart Document Capture (SmartDCap) application, which provides real-time feedback to users about the likely quality of a captured image. The quality measures capture the sharpness and framing of a page or regions on a page, such as a set of one or more columns, a part of a column, a figure, or a table. With our approach, as users adjust the camera position, the application automatically determines when to take a picture of the document to produce a good-quality result. We performed a subjective evaluation comparing SmartDCap and the Android Ice Cream Sandwich (ICS) camera application; we also used raters to evaluate the quality of the captured images. Our results indicate that users find SmartDCap to be as easy to use as the standard ICS camera application. Also, images captured using SmartDCap are on average sharper and better framed than images captured using the ICS camera application.
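
As an illustration of the kind of per-frame quality check such an application performs, the sketch below scores sharpness with the variance of the Laplacian, assuming OpenCV; SmartDCap's actual sharpness and framing measures are not detailed in this abstract.

```python
# A plausible stand-in for a real-time sharpness check, not SmartDCap's
# actual measure: high Laplacian variance indicates crisp edges.
import cv2

def is_sharp_enough(frame, threshold=100.0):
    """Return True if the captured frame is likely in focus."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    focus = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus >= threshold

# A capture loop would trigger the shutter only once successive frames
# stay above the threshold while the user steadies the camera.
```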

Mind the Gap: Collecting Commonsense Data about Simple Experiences

Jerry S. Weltman, S. Sitharama Iyengar, Michael Hegarty

In natural language, there are many gaps between what is stated and what is understood. Speakers and listeners fill in these gaps, presumably from some life experience, but no one knows how to get this experiential data into a computer. As a first step, we have created a methodology and software interface for collecting commonsense data about simple experiences. This work is intended to form the basis of a new resource for natural language processing.

We model experience as a sequence of comic frames, annotated with the changing intentional and physical states of the characters and objects. To create an annotated experience, our software interface guides non-experts in identifying facts about experiences that humans normally take for granted. As part of this process, the system asks questions using the Socratic Method to help users notice difficult-to-articulate commonsense data. A test on ten subjects indicates that non-experts are able to produce high-quality experiential data.

SIDNIE: Scaffolded Interviews Developed by Nurses in Education

Lauren Cairco Dukes, Toni Bloodworth Pence, Larry F. Hodges, Nancy Meehan, Arlene Johnson

One of the most common clinical education methods for teaching patient interaction skills to nursing students is role-playing established scenarios with classmates. Unfortunately, this is far from simulating the real-world experiences they will soon face, and it does not provide the immediate, impartial feedback necessary for developing interviewing skills. We present a system for Scaffolded Interviews Developed by Nurses In Education (SIDNIE) that supports baccalaureate nursing education by providing multiple guided interview practice sessions with virtual characters. Our scenario depicts a mother who has brought her five-year-old child to the clinic. In this paper we describe our system and report on a preliminary usability evaluation conducted with nursing students.

Real-time Gait Classification for Persuasive Smartphone Apps: Structuring the Literature and Pushing the Limits

Oliver S. Schneider, Karon E. MacLean, Kerem Altun, Idin Karuei, Michael M.A. Wu

Persuasive technology is now mobile and context-aware. Intelligent analysis of accelerometer signals in smartphones and other specialized devices has recently been used to classify activity (e.g., distinguishing walking from cycling) to encourage physical activity, sustainable transport, and other social goals. Unfortunately, results vary drastically due to differences in methodology and problem domain. The present report begins by structuring a survey of current work within a new framework that highlights comparable characteristics between studies; this provides a tool by which we and others can understand the current state of the art and guide research towards existing gaps. We then present a new user study, positioned in an identified gap, that pushes the limits of current success with a challenging problem: the real-time classification of 15 similar and novel gaits suitable for several persuasive application areas, focused on the growing phenomenon of exercise games. We achieve a mean correct classification rate of 78.1% over all 15 gaits, with a minimal amount of personalized classifier training for each participant, when the device is carried in any of 6 different locations (not known a priori). When narrowed to a subset of four gaits and one known location, this improves to means of 92.2% with and 87.2% without personalization. Finally, we group our findings into design guidelines and quantify the variation in accuracy when an algorithm is trained for a known location and participant.
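
To give a flavor of the classification pipeline such studies build, here is a hedged sketch that windows a 3-axis accelerometer stream into simple statistical features and trains a classifier, assuming NumPy and scikit-learn; the study's own 15-gait feature set, classifier, and personalization protocol are not reproduced here.

```python
# Sketch: windowed accelerometer features feeding an SVM gait classifier.
# The synthetic "walking"/"jogging" signals below are illustrative only.
import numpy as np
from sklearn.svm import SVC

def window_features(accel, win=128, step=64):
    """accel: (n_samples, 3) array; returns simple per-window statistics."""
    feats = []
    for start in range(0, len(accel) - win, step):
        mag = np.linalg.norm(accel[start:start + win], axis=1)  # orientation-robust
        feats.append([mag.mean(), mag.std(), mag.min(), mag.max(),
                      np.abs(np.diff(mag)).mean()])             # "jerkiness"
    return np.array(feats)

rng = np.random.default_rng(0)
walk = rng.normal(0.0, 1.0, (1000, 3))   # synthetic, low-variance gait
jog = rng.normal(0.0, 2.5, (1000, 3))    # synthetic, high-variance gait
Xw, Xj = window_features(walk), window_features(jog)
X, y = np.vstack([Xw, Xj]), np.r_[np.zeros(len(Xw)), np.ones(len(Xj))]
clf = SVC(kernel="rbf").fit(X, y)             # "personalization" = a little
print(clf.predict(window_features(jog)[:3]))  # labeled data per user
```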

LinkedVis: Exploring Social and Semantic Career Recommendations

Svetlin Bostandjiev, John O'Donovan, Tobias Hollerer

This paper presents LinkedVis, an interactive visual recommender system that combines social and semantic knowledge to produce career recommendations based on the LinkedIn API. A collaborative (social) approach is employed to identify professionals with similar career paths and produce personalized recommendations of both companies and roles. To unify semantically identical but lexically distinct entities and arrive at better user models, we employ lightweight natural language processing and entity resolution using semantic information from a variety of end-points on the web. Elements of the underlying recommendation algorithm are exposed through an interactive interface that allows users to manipulate different aspects of the algorithm and the data it operates on, and to explore a variety of "what-if" scenarios around their current profile. We evaluate LinkedVis through leave-one-out accuracy and diversity experiments on a data corpus collected from 47 users and their LinkedIn connections, as well as through a supervised study of 27 users exploring their own profiles and recommendations interactively. Results show that our approach outperforms a benchmark recommendation algorithm without semantic resolution in terms of accuracy and diversity, and that the ability to tweak recommendations interactively by adjusting profile-item and social-connection weights further improves predictive accuracy. Questionnaires on the user experience with the explanatory and interactive aspects of the application reveal very high user acceptance and satisfaction.

Vibrobelt: Tactile Navigation Support for Cyclists

Haska Steltenpohl, Anders Bouwer

Tactile displays can be used without demanding attention from the human visual system, which makes them attractive for wayfinding contexts, where visual attention should be directed at traffic and other information in the environment. To investigate the potential of tactile navigation for cyclists, we have designed and implemented Vibrobelt. This belt, worn around the waist, gives waypoint, distance and endpoint information using directional tactile cues. We evaluated Vibrobelt by comparing it to a visual navigation application. Twenty participants were asked to cycle two routes, each with a different application. We measured spatial knowledge acquisition and analyzed the visual focus of the participants. We found that Vibrobelt successfully guided all participants to their destinations over an unfamiliar route. Participants using Vibrobelt showed a lower error rate when recognizing images from the route than users of the visual system. Users of the visual system generally navigated faster and were better at recalling the route, showing a higher contextual route understanding. The endpoint distance encoding was not always correctly interpreted. Future research will improve Vibrobelt by making a clearer distinction between waypoint and endpoint information, and will test users in more complex navigational situations.

Recommending Targeted Strangers from Whom to Solicit Information on Social Media

Jalal Mahmud, Michelle X. Zhou, Nimrod Megiddo, Jeffrey Nichols, Clemens Drews

We present an intelligent, crowd-powered information collection system that automatically identifies and asks targeted strangers on Twitter for desired information (e.g., current wait time at a nightclub). Our work includes three parts. First, we identify a set of features that characterize one's willingness and readiness to respond based on their exhibited social behavior, including the content of their tweets and social interaction patterns. Second, we use the identified features to build a statistical model that predicts one's likelihood to respond to information solicitations. Third, we develop a recommendation algorithm that selects a set of targeted strangers using the probabilities computed by our statistical model, with the goal of maximizing the overall response rate. Our experiments, including several in the real world, demonstrate the effectiveness of our work.
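
The third step can be pictured as scoring each candidate with the response model and greedily asking the top-scored strangers. The sketch below uses logistic regression over invented features, assuming scikit-learn and NumPy; the paper's actual behavioral features and model are its own.

```python
# Hedged sketch: predict response probability per candidate, then ask the
# k candidates with the highest probability to maximize expected responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_hist = rng.random((200, 3))  # hypothetical features of past solicitations
y_hist = (X_hist @ np.array([2.0, 1.0, -1.5])
          + rng.normal(0, 0.5, 200)) > 0.5    # synthetic respond/ignore labels
model = LogisticRegression().fit(X_hist, y_hist)

candidates = rng.random((50, 3))              # features of current candidates
p_respond = model.predict_proba(candidates)[:, 1]
ask = np.argsort(p_respond)[::-1][:5]         # top-5 strangers to contact
print(ask, p_respond[ask])
```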

Informing Intelligent User Interfaces by Inferring Affective States from Body Postures in Ubiquitous Computing Environments

Chiew Seng Sean Tan, Kris Luyten, Johannes Schöning, Karin Coninx

Intelligent User Interfaces can benefit from knowledge of the user's emotion. However, current implementations for detecting affective states often constrain the user's freedom of movement by instrumenting her with sensors, which prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone's affective state from their body postures. This is done without any user instrumentation, using off-the-shelf and inexpensive commodity hardware: a depth camera tracks the body postures of users, which also serve as an indicator of their openness. By combining the posture information with physiological sensor measurements, we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study also provides insights into how this opens up new possibilities for IUI to access the affective states of users from body postures in a nonintrusive way.

Recommending Energy Tariffs and Load Shifting Based on Smart Household Usage Profiling

Joel E. Fischer, Sarvapali D. Ramchurn, Michael Osborne, Oliver Parson, Trung Dong Huynh, Muddasser Alam, Nadia Pantidi, Stuart Moran, Khaled Bachour, Steve Reece, Enrico Costanza, Tom Rodden, Nicholas R. Jennings

We present a system and study of personalized energy-related recommendations. AgentSwitch utilizes electricity usage data collected from users' households over a period of time to provide a range of smart energy-related recommendations on energy tariffs, load detection and usage shifting. The web service is driven by a third-party real-time energy tariff API (uSwitch), an energy data store, a set of algorithms for usage prediction, and appliance-level load disaggregation. We present the system design and a user evaluation consisting of interviews and interface walkthroughs. To evaluate the personalized recommendations in AgentSwitch, we recruited participants from a previous study during which three months of their households' energy use was recorded. Our contributions are (a) a systems architecture for personalized energy services, and (b) findings from the evaluation that reveal challenges in designing energy-related recommender systems. In response to these challenges we formulate design recommendations to mitigate barriers to switching tariffs, to incentivize load shifting, and to automate energy management.

Tailoring Recommendations to Groups of Users: A Graph Walk-based Approach

Heung-Nam Kim, Majdi Rawashdeh, Abdulmotaleb El Saddik

With the rapidly growing popularity of smart devices, users can easily and conveniently access rich multimedia content. Consequently, the need for recommender services, from both individual users and groups of users, has arisen. In this paper, we present a graph-based approach to a recommender system that can make recommendations most notably to groups of users. From rating information, we first model a signed graph that contains both positive and negative links between users and items. On this graph we examine two distinct random walks to separately quantify the degree to which a group of users would like or dislike items. We then employ a differential ranking approach to tailor recommendations to the group. Our empirical evaluations on the MovieLens dataset demonstrate that the proposed group recommendation method performs better than existing alternatives. We also demonstrate the feasibility of Folkommender for smartphones.
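
A toy rendering of the core idea, assuming NumPy: run one personalized random walk over the positive links and one over the negative links, then rank items by the difference of the two scores. Graph sizes and ratings here are invented; the paper's walk definitions are richer.

```python
# Hedged sketch of signed-graph group recommendation via two random walks.
import numpy as np

def walk_scores(adj, group, alpha=0.85, iters=50):
    """Personalized random walk restarted at the group's users."""
    P = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    restart = group / group.sum()
    s = restart.copy()
    for _ in range(iters):
        s = alpha * (s @ P) + (1 - alpha) * restart
    return s

# 4 users (nodes 0-3) and 3 items (nodes 4-6), one adjacency per link sign
pos, neg = np.zeros((7, 7)), np.zeros((7, 7))
pos[0, 4] = pos[4, 0] = 1    # user 0 likes item 0
pos[1, 5] = pos[5, 1] = 1    # user 1 likes item 1
neg[2, 6] = neg[6, 2] = 1    # user 2 dislikes item 2

group = np.array([1., 1., 1., 0., 0., 0., 0.])   # users 0-2 form the group
like, dislike = walk_scores(pos, group)[4:], walk_scores(neg, group)[4:]
print(np.argsort(like - dislike)[::-1])          # differential item ranking
```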

Dynamic Text Management for See-through Wearable and Heads-up Display Systems

Jason Orlosky, Kiyoshi Kiyokawa, Haruo Takemura

Reading text safely and easily while mobile has been an issue with see-through displays for many years. For example, in order to use optical see-through Head-Mounted Displays (HMDs) or Heads-Up Display (HUD) systems effectively in constantly changing dynamic environments, variables like lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be handled robustly.

This paper introduces a new intelligent text management system that actively manages the movement of text in a user's field of view. Research to date lacks a method for migrating user-centric content such as e-mail or text messages throughout a user's environment while mobile. Unlike most current annotation and view management systems, we use camera tracking to find dark, uniform regions along the route on which a user is travelling in real time. We then move text from one viable location to the next to maximize readability. A pilot experiment with 19 participants shows that the text placement of our system is preferred to fixed-location text configurations.
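
One simple way to picture the "dark, uniform regions" step, assuming OpenCV and NumPy: grid each camera frame and score cells by brightness plus variance, then place text in the lowest-scoring cell. The grid heuristic is ours for illustration; the paper's real-time tracking and migration policy are more involved.

```python
# Hedged sketch: pick the darkest, most uniform grid cell for text placement.
import cv2
import numpy as np

def best_text_region(frame, grid=(4, 4)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    gh, gw = h // grid[0], w // grid[1]
    best, best_score = None, np.inf
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = gray[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            score = cell.mean() + cell.std()   # dark + uniform = low score
            if score < best_score:
                best, best_score = (r * gh, c * gw, gh, gw), score
    return best  # (y, x, height, width) of the most readable placement
```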

glueTK: A Framework for Multi-Modal, Multi-Display Human-Machine Interaction

Florian van de Camp, Rainer Stiefelhagen

As new input modalities allow interaction not only in front of a single display but throughout the whole room, application developers face new challenges. They have to handle many new input modalities, each with its own interface and pre-processing requirements; deal with multiple displays; and support applications that are distributed across multiple machines.

We present glueTK, a framework that abstracts away the complexities of these input modalities, allows the design of interfaces for a wide range of display sizes, and makes distribution across multiple machines transparent to the developer as well as the user. With an example application we demonstrate the wide range of input modalities glueTK can support and the functionality this enables. GlueTK moves away from the focus on point- and touch-like input modalities, enabling the design of applications tailored towards interactive rooms instead of the traditional desktop environment.

AppFunnel: A Framework for Usage-centric Evaluation of Recommender Systems that Suggest Mobile Applications

Matthias Böhmer, Lyubomir Ganev, Antonio Krüger

Mobile phones have evolved from communication devices into multi-purpose devices that assist people with applications in various contexts and tasks. The mobile ecosystem is steadily growing and new applications become available every day. This increasing number of applications makes it difficult for end-users to find good applications, so recommender systems that suggest mobile applications are being built to help people find valuable ones. Since the nature of mobile applications differs from that of classical recommendation items (e.g. books, movies, other goods), not only can new approaches for recommendation be developed, but new paradigms for evaluating recommender system performance are also advisable. During the lifecycle of mobile applications, different events can be observed that provide insights into users' engagement with particular applications. This gives rise to new approaches for the evaluation of recommender systems. In this paper, we present AppFunnel: a framework that allows for usage-centric evaluation considering different stages of application engagement. We present a case study and discuss capabilities for evaluating recommender engines by applying metrics to the AppFunnel.

Optimizing Temporal Topic Segmentation for Intelligent Text Visualization

Shimei Pan, Michelle X. Zhou, Yangqiu Song, Weihong Qian, Fei Wang, Shixia Liu

We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, here we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by satisfying a diverse set of semantic, temporal, and visualization constraints simultaneously. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline.
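
The flavor of constraint-driven segmentation can be shown with a much-reduced version of the problem, assuming NumPy: split a 1-D topic-strength timeline into k contiguous segments by dynamic programming, minimizing within-segment variance. The paper's objective combines far richer semantic, temporal, and visualization constraints.

```python
# Hedged sketch: optimal k-way segmentation of a timeline by dynamic
# programming (within-segment variance stands in for the paper's richer cost).
import numpy as np

def segment(signal, k):
    n = len(signal)
    def cost(i, j):                       # cost of one segment signal[i:j]
        seg = signal[i:j]
        return ((seg - seg.mean()) ** 2).sum()
    D = np.full((k + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    back = np.zeros((k + 1, n + 1), dtype=int)
    for m in range(1, k + 1):             # m segments covering signal[:j]
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = D[m - 1, i] + cost(i, j)
                if c < D[m, j]:
                    D[m, j], back[m, j] = c, i
    cuts, j = [], n
    for m in range(k, 0, -1):
        j = back[m, j]
        cuts.append(j)
    return sorted(cuts)[1:]               # interior sub-topic boundaries

timeline = np.r_[np.ones(10), 5 * np.ones(10), 2 * np.ones(10)]
print(segment(timeline, 3))               # -> [10, 20]
```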

Visualizing Recommendations to Support Exploration, Transparency and Controllability

Katrien Verbert, Denis Parra, Peter Brusilovsky, Erik Duval

Research on recommender systems has traditionally focused on the development of algorithms that improve the accuracy of recommendations. So far, little research has been done to enable user interaction with such systems as a basis for supporting exploration and control by end users. In this paper, we present our research on the use of information visualization techniques for interacting with recommender systems. We investigated how information visualization can improve user understanding of the typically black-box rationale behind recommendations, in order to increase the recommendations' perceived relevance and meaning and to support exploration and user involvement in the recommendation process. Our study was performed using TalkExplorer, an interactive visualization tool developed for attendees of academic conferences. The results of user studies performed at two conferences allowed us to obtain interesting insights for enhancing user interfaces that integrate recommendation technology. More specifically, effectiveness and probability of item selection both increase when users are able to explore and interrelate multiple entities, i.e. items bookmarked by users, recommendations and tags.

Indexing Cognitive Workload Based on Pupillary Response under Luminance and Emotional Changes

Weihong Wang, Zhidong Li, Yang Wang, Fang Chen

Pupillary response is a popular physiological index of cognitive workload that can be used for the design and evaluation of adaptive interfaces in various areas of human-computer interaction (HCI) research. In practice, however, various confounding factors unrelated to workload, including changes in luminance and emotional arousal, can degrade pupillary-response-based workload measures such as the commonly used mean pupil diameter. This work investigates pupillary response as a cognitive workload measure under the influence of such confounding factors. A video-based eye tracker is used to record pupillary response during arithmetic tasks under luminance and emotional changes. Machine-learning-based feature selection and classification techniques are proposed to robustly index cognitive workload from pupillary response even under the influence of noisy factors unrelated to workload.

Making Graphic-Based Authentication Secure against Smudge Attacks

Emanuel von Zezschwitz, Anton Koslow, Alexander De Luca, Heinrich Hussmann

Most of today's smartphones and tablet computers feature touchscreens as the main means of interaction. As users operate these touchscreens, oily residue from their fingers (smudge) remains on the device's display. Because this smudge can be used to deduce formerly entered data, authentication tokens are jeopardized. Most notably, grid-based authentication methods, like the Android pattern scheme, are prone to such attacks.

Based on a thorough development process using low-fidelity and high-fidelity prototyping, we designed three graphic-based authentication methods that leave smudge traces which are not easy to interpret. We present one grid-based and two randomized graphical approaches, and report on two user studies that we performed to prove the feasibility of these concepts. The authentication schemes were compared to the widely used Android pattern authentication and analyzed in terms of performance, usability and security. The results indicate that our concepts are significantly more secure against smudge attacks while maintaining high input speed.

Giving Users Full Control Over Personalization

Fedor Bakalov, Marie-Jean Meurs, Birgitta König-Ries, Bahar Sateli, René Witte, Greg Butler, Adrian Tsang

Personalization is nowadays a commodity in a broad spectrum of computer systems. Examples range from online shops recommending products identified from the user's previous purchases to web search engines sorting search hits based on the user's browsing history. The aim of such adaptive behavior is to help users find relevant content more easily and quickly. However, this behavior also has a number of negative aspects. Adaptive systems have been criticized for violating the usability principles of direct-manipulation systems, namely controllability, predictability, transparency, and unobtrusiveness. In this paper, we propose an approach to controlling adaptive behavior in recommender systems. It allows users to get an overview of personalization effects, view the user profile that is used for personalization, and adjust the profile and personalization effects to their needs and preferences. We present this approach using the example of a personalized portal for biochemical literature, whose users are biochemists, biologists and genomicists. We also report on a user study evaluating the impact of controllable personalization on the usefulness, usability, user satisfaction, transparency, and trustworthiness of personalized systems.

Directing Exploratory Search: Reinforcement Learning from User Interactions with Keywords

Dorota Glowacka, Tuukka Ruotsalo, Ksenia Konuyshkova, Kumaripaba Athukorala, Samuel Kaski, Giulio Jacucci

Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than allowing them to direct the exploration of the information space. We present an interactive information retrieval system that combines reinforcement learning techniques with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and reinforcement learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to direct their search more effectively: closer to, further from, or along a chosen direction. A task-based user study conducted with 20 participants, comparing our system to a traditional query-based baseline, indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without requiring more time to acquire it.
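
As a minimal illustration of the exploration/exploitation trade-off over keywords, the sketch below uses an epsilon-greedy bandit as a stand-in; the paper's actual reinforcement learning model of the user is not detailed in this abstract, and the keywords and feedback values are invented.

```python
# Hedged sketch: epsilon-greedy choice over keyword "arms", updated from
# the user's direct manipulation of keyword weights (relevance in [0, 1]).
import random

value = {"neural": 0.0, "retrieval": 0.0, "bandits": 0.0}  # estimated interest
count = {k: 0 for k in value}

def choose_keyword(epsilon=0.2):
    if random.random() < epsilon:          # explore: surface a new direction
        return random.choice(list(value))
    return max(value, key=value.get)       # exploit: follow known interests

def update(keyword, relevance):
    """Incremental mean of the user's manipulated weight for this keyword."""
    count[keyword] += 1
    value[keyword] += (relevance - value[keyword]) / count[keyword]

update(choose_keyword(), 1.0)   # user dragged the shown keyword to "relevant"
```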

Subtle Gaze-Dependent Techniques for Visualising Display Changes in Multi-Display Environments

Jakub Dostal, Per Ola Kristensson, Aaron Quigley

This paper explores techniques for visualising display changes in multi-display environments. We present four subtle gaze-dependent techniques for visualising change on unattended displays called FreezeFrame, PixMap, WindowMap and Aura. To enable the techniques to be directly deployed to workstations, we also present a system that automatically identifies the user's eyes using computer vision and a set of web cameras mounted on the displays. An evaluation confirms this system can detect which display the user is attending to with high accuracy. We studied the efficacy of the visualisation techniques in a five-day case study with a working professional. This individual used our system eight hours per day for five consecutive days. The results of the study show that the participant found the system and the techniques useful, subtle, calm and non-intrusive. We conclude by discussing the challenges in evaluating intelligent subtle interaction techniques using traditional experimental paradigms.

Combining Acceleration and Gyroscope Data for Motion Gesture Recognition using Classifiers with Dimensionality Constraints

Sven Kratz, Michael Rohs, Georg Essl

Motivated by the addition of gyroscopes to a large number of new smartphones, we study the effects of combining accelerometer and gyroscope data on the recognition rate of motion gesture recognizers with dimensionality constraints. Using a large data set of motion gestures, we analyze results for the following algorithms: Protractor3D, Dynamic Time Warping (DTW) and Regularized Logistic Regression (LR). We chose to study these algorithms because they are relatively easy to implement and thus well suited for rapid prototyping or early deployment during prototyping stages. For use in our analysis, we contribute a method to extend Protractor3D to work with the 6D data obtained by combining accelerometer and gyroscope data. Our results show that combining accelerometer and gyroscope data is beneficial even for algorithms with dimensionality constraints, and improves the gesture recognition rate on our data set by up to 4%.
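
The 6D extension can be pictured as follows, assuming NumPy: treat each sample as a 6-vector of accelerometer and gyroscope readings, resample every gesture to a fixed length, and match a candidate to templates by cosine similarity, Protractor-style. Protractor3D's optimal rotational alignment step is omitted here for brevity.

```python
# Hedged sketch: template matching over resampled 6-D (accel+gyro) gestures.
import numpy as np

def resample(gesture, n=32):
    """gesture: (m, 6) array -> (n, 6), linearly interpolated over time."""
    t_old = np.linspace(0, 1, len(gesture))
    t_new = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t_new, t_old, gesture[:, d])
                            for d in range(gesture.shape[1])])

def similarity(a, b):
    va, vb = resample(a).ravel(), resample(b).ravel()
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

def classify(candidate, templates):
    """templates: dict mapping gesture label -> (m_i, 6) recording."""
    return max(templates, key=lambda lbl: similarity(candidate, templates[lbl]))
```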

Modeling Discussion Topics in Interactions with a Tablet Reading Primer

Adrian Boteanu, Sonia Chernova

CloudPrimer is a tablet-based interactive reading primer that aims to foster early literacy skills and shared parent-child reading through user-targeted discussion topic suggestions. The tablet application records discussions between parents and children as they read a story and leverages this information, in combination with a common sense knowledge base, to develop discussion topic models. The long-term goal of the project is to use such models to provide context-sensitive discussion topic suggestions to parents during the shared reading activity in order to enhance the interactive experience and foster parental engagement in literacy education. In this paper, we present a novel approach for using commonsense reasoning to effectively model topics of discussion in unstructured dialog. We introduce a metric for localizing concepts that the users are interested in at a given moment in the dialog and extract a time sequence of words of interest. We then present algorithms for topic modeling and refinement that leverage semantic knowledge acquired from ConceptNet, a commonsense knowledge base. We evaluate the performance of our algorithms using transcriptions of audio recordings of parent-child pairs interacting with a tablet application, and compare the output of our algorithms to human-generated topics. Our results show that words of interest and discussion topics selected by our algorithm closely match those identified by human readers.

CatStream: Categorising Tweets for User Profiling and Stream Filtering

Sandra Garcia Esparza, Michael O'Mahony, Barry Smyth

Real-time information streams such as Twitter have become a common way for users to discover new information. For most users this means curating a set of other users to follow. However, at the moment the following granularity of Twitter is restricted to the level of individual users. Our research has highlighted that many following relationships are motivated by a subset of interests that are shared by the users in question. For example, user A might follow user B because of B's technology-related tweets, but shares little or no interest in B's other tweets. As a result, this all-or-nothing following relationship can quickly overwhelm users' timelines with extraneous information. To improve this situation we propose a user profiling approach based on the topical categorisation of the URLs users post. These topics can then be used to filter information streams so that users see more relevant information from the people they follow, based on their core interests. In particular, we present a system called CatStream that provides a more fine-grained way to follow users on specific topics and filter timelines accordingly. We present the results of a live-user study showing that filtered timelines offer users a better way to organise and filter their information streams. Most importantly, users are generally satisfied with the categories predicted for their profiles and tweets.

Locating User Attention using Eye Tracking and EEG for Spatio-Temporal Event Selection

Felix Putze, Jutta Hild, Rainer Kärgel, Alexander Redmann, Christian Herff, Jürgen Beyerer, Tanja Schultz

In expert video analysis, the selection of certain events in a continuous video stream is a frequently occurring operation, e.g., in surveillance applications. Due to the rich, dynamic visual input, the constantly high attention required, and the hand-eye coordination needed for mouse interaction, this is a very demanding and exhausting task, and relevant events might be missed. We propose to use eye tracking and electroencephalography (EEG) as additional input modalities for event selection. From eye tracking, we derive the spatial location of a perceived event, and from patterns in the EEG signal we derive its temporal location within the video stream. This reduces the amount of active user input required in the selection process, and thus has the potential to reduce the user's workload. In this paper, we describe the methods employed for the two localization processes and introduce the scenario in which we investigate the feasibility of this approach. Finally, we present and discuss results on the accuracy and speed of the method and investigate how the modalities interact.
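
The division of labor between the two modalities can be summarized in a few lines: the EEG stream supplies the moment an event was perceived, and the gaze stream supplies where the user was looking at that moment. The sketch below is our own simplification of that fusion step, with invented timestamps.

```python
# Hedged sketch: EEG gives *when*, gaze gives *where*. For each detected
# EEG pattern, take the gaze fixation nearest in time as the event location.
def select_events(eeg_detections, gaze_fixations):
    """eeg_detections: [t, ...]; gaze_fixations: [(t, x, y), ...]."""
    selections = []
    for t_eeg in eeg_detections:
        t, x, y = min(gaze_fixations, key=lambda f: abs(f[0] - t_eeg))
        selections.append((t_eeg, x, y))  # event time from EEG, position from gaze
    return selections

print(select_events([2.1], [(1.0, 50, 80), (2.0, 310, 220), (3.0, 90, 400)]))
```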

Towards Cooperative Brain-Computer Interfaces for Space Navigation

Riccardo Poli, Caterina Cinel, Ana Matran-Fernandez, Francisco Sepulveda, Adrian Stoica

We explored the possibility of controlling a spacecraft simulator using an analogue Brain-Computer Interface (BCI) for 2-D pointer control. This is a difficult task, for which no previous attempt has been reported in the literature. Our system relies on an active display which produces event-related potentials (ERPs) in the user's brain. These are analysed in real time to produce control vectors for the user interface. In tests, users of the simulator were told to pass as close as possible to the Sun; performance was very promising, with users on average satisfying the simulation's success criterion in 67.5% of runs. Furthermore, to study the potential of a collaborative approach to spacecraft navigation, we developed BCIs in which the system is controlled via the integration of the ERPs of two users. Performance analysis indicates that collaborative BCIs produce trajectories that are statistically significantly superior to those obtained by single users.

Recommendation System for Automatic Design of Magazine Covers

Ali Jahanian, Jan Allebach, Jerry Liu, Qian Lin, Daniel Tretter, Eamonn O'Brien-Strain, Seungyon Claire Lee, Nic Lyons

In this paper, we present a recommendation system for the automatic design of magazine covers. Our users are non-designer designers: individuals or small and medium businesses who want to create aesthetically compelling designs without hiring a professional designer. Because a design should have a purpose, we suggest a number of semantic features to the user, e.g., "clean and clear", "dynamic and active", or "formal", to describe the color mood for the purpose of his/her design. Based on these high-level features and a number of low-level features, such as the complexity of the visual balance in a photo, our system selects the best photos from the user's album for his/her design. Our system then generates several alternative designs that can be rated by the user. Consequently, our system generates future designs based on the user's style. In this fashion, our system personalizes a user's designs based on his/her preferences.

Detecting Boredom and Engagement During Writing with Keystroke Analysis, Task Appraisals, and Stable Traits

Robert Bixler, Sidney D'Mello

It is hypothesized that the ability of a system to automatically detect and respond to users' affective states can greatly enhance the human-computer interaction experience. Although there are currently many options for affect detection, keystroke analysis offers several attractive advantages over traditional methods. In this paper, we consider the possibility of automatically discriminating between natural occurrences of boredom, engagement, and neutral states by analyzing keystrokes, task appraisals, and stable traits of 44 individuals engaged in a writing task. The analyses explored several different arrangements of the data: using downsampled and/or standardized data; distinguishing between three different affective states or groups of two; and using keystroke/timing features in isolation or coupled with stable traits and/or task appraisals. The results indicated that using raw data with the feature set that combined keystroke/timing features, task appraisals, and stable traits yielded accuracies that were 11% to 38% above random guessing and generalized to new individuals. Applications of our affect detector for intelligent interfaces that provide engagement support during writing are discussed.
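
For concreteness, here is the kind of keystroke/timing feature extraction such an analysis starts from, assuming NumPy and a log of (key, press time, release time) events; the study's exact feature set, appraisal items, and trait measures are its own.

```python
# Hedged sketch: timing features from a keystroke log. Long pauses and slow
# typing are the sort of signals an affect detector might pick up on.
import numpy as np

def keystroke_features(events):
    """events: list of (key, press_t, release_t) tuples, times in seconds."""
    press = np.array([p for _, p, _ in events])
    release = np.array([r for _, _, r in events])
    dwell = release - press               # how long each key is held
    flight = press[1:] - release[:-1]     # gap between consecutive keystrokes
    return {"mean_dwell": dwell.mean(), "mean_flight": flight.mean(),
            "flight_std": flight.std(), "long_pauses": int((flight > 2.0).sum()),
            "keys_per_sec": len(events) / (release[-1] - press[0])}

print(keystroke_features([("t", 0.0, 0.1), ("h", 0.25, 0.33), ("e", 3.0, 3.1)]))
```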

Exploring 3D Gesture Metaphors for Interaction with Unmanned Aerial Vehicles

Kevin Pfeil, Seng Lee Koh, Joseph LaViola

We present a study exploring upper body 3D spatial interaction metaphors for control and communication with Unmanned Aerial Vehicles (UAVs) such as the Parrot AR Drone. We discuss the design and implementation of five interaction techniques using the Microsoft Kinect, based on metaphors inspired by UAVs, to support a variety of flying operations a UAV can perform. Techniques include a first-person interaction metaphor, where a user takes a pose like a winged aircraft; a game controller metaphor, where a user's hands mimic the control movements of console joysticks; "proxy" manipulation, where the user imagines manipulating the UAV as if it were in their grasp; and a pointing metaphor, in which the user assumes the identity of a monarch and commands the UAV as such. We examine qualitative metrics such as perceived intuitiveness, usability and satisfaction, among others. Our results indicate that novice users prefer certain 3D spatial techniques over the smartphone application bundled with the AR Drone. We also discuss the trade-offs in the technique design metrics based on results from our study.

User-Adaptive Information Visualization - Using Eye Gaze Data to Infer Visualization Tasks and User Cognitive Abilities

Ben Steichen, Giuseppe Carenini, Cristina Conati

Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information about a user's eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real time.

Leveraging the Crowd to Improve Feature-Sentiment Analysis of User Reviews

Shih-Wen Huang, Pei-Fen Tu, Wai-Tat Fu, Mohammad Amamzadeh

Crowdsourcing and machine learning are both useful techniques for solving difficult problems (e.g., computer vision and natural language processing). In this paper, we propose a novel method that harnesses and combines the strengths of these two techniques to better analyze the features, and the sentiments toward them, in user reviews. To strike a good balance between reducing information overload and preserving the original context expressed by review writers, the proposed system (1) allows users to interactively rank entities based on feature ratings, (2) automatically highlights sentences that are related to relevant features, and (3) utilizes implicit crowdsourcing by encouraging users to provide correct labels for their own reviews to improve the feature-sentiment classifier. The proposed system not only saves users the time and effort of digesting the often massive amount of user reviews, but also provides real-time suggestions on relevant features and ratings as users generate their own reviews. Results from a simulation experiment show that leveraging the crowd can significantly improve the feature-sentiment analysis of user reviews. Furthermore, results from a user study show that the proposed interface was preferred by more participants than interfaces that use traditional noun-adjective pair summarization, as it allows users to view feature-related information in the original context.

Non-Visual Skimming On Touch-Screen Devices

Faisal Ahmed, Andrii Soviak, Yevgen Borodin, I.V. Ramakrishnan

While reading on touch-screens, sighted users can quickly pan through content, skim it, and pick out bits and pieces of information before deciding to read it more carefully. In contrast, blind users have to rely on a screen reader to narrate the content to them. To go through the text quickly, blind users employ gestures that direct the screen reader to skip to the next line or the next paragraph. However, the serial audio interface of the screen reader makes it difficult for blind users to get a sense of what is important before listening to at least part of the content. This makes ad hoc skimming with gestures slow and ineffective. We address this problem in this paper; specifically, we propose a non-visual skimming interface that enables blind users to control the amount of content narrated using simple pinch-in and pinch-out gestures. This interface simulates the skimming experience enjoyed by sighted people, and enables blind users to listen to the gist of the content while controlling the speed of information intake. We report on a user study demonstrating that the proposed interface significantly outperforms the ad hoc skimming techniques employed by blind users. Our results suggest that the proposed approach holds promise in empowering blind users to access digitized information much faster.

Helping Users with Information Disclosure Decisions: Potential for Adaptation

Bart P. Knijnenburg, Alfred Kobsa

Personalization relies on personal data about each individual user. However, users are often reluctant to disclose information about themselves and to be “tracked” by a system. We investigated whether different types of rationales (justifications) for disclosure that have been suggested in the privacy literature would increase users' willingness to divulge demographic and contextual information about themselves, and would raise their satisfaction with the system. We also looked at the effect of the order of requests, owing to findings from the literature. Our experiment with a mockup of a mobile app recommender shows that there is no single strategy that is optimal for everyone. Heuristics can be defined, though, that select for each user the most effective justification to raise disclosure or satisfaction, taking the user's gender, disclosure tendency, and the type of solicited personal information into account. We discuss the implications of these findings for research aimed at personalizing privacy strategies to each individual user.

Automatic and Continuous User Task Analysis

Siyuan Chen, Julien Epps, Fang Chen

A day in the life of a user can be segmented into a series of tasks: a user begins a task, becomes loaded perceptually and cognitively to some extent by the objects and mental challenge that comprise that task, then at some point switches or is distracted to a new task, and so on. Understanding contextual task characteristics and user behavior in interaction can benefit the development of intelligent systems that aid user task management. Applications that aid the user in one way or another have proliferated as computing devices become more and more of a constant companion. However, direct and continuous observation of individual tasks in a naturalistic context and the subsequent task analysis (for example, the diary method) have traditionally been a manual process. We propose an automatic task analysis method, which monitors the user's current task and analyzes it in terms of task transitions and the perceptual and cognitive load imposed by the task. An experiment was conducted in which participants were required to work continuously on groups of three sequential tasks of different types. Three classes of eye activity, namely pupillary response, blink and eye movement, were analyzed to detect task transition and non-transition states, and to estimate three levels of perceptual load and three levels of cognitive load every second to infer task characteristics. This paper reports statistically significant classification accuracies in all cases and demonstrates the feasibility of this approach for task monitoring and analysis.

Learning Non-Myopically from Human-Generated Reward

W. Bradley Knox, Peter Stone

Recent research has demonstrated that human-generated reward signals can be effectively used to train agents to perform a range of reinforcement learning tasks. Such tasks are either episodic (conducted in unconnected episodes of activity that often end in either goal or failure states) or continuing (indefinitely ongoing). Another point of difference is whether the learning agent highly discounts the value of future reward (a myopic agent) or, conversely, values future reward appreciably. In recent work, we found that previous approaches to learning from human reward all used myopic valuation. That study additionally provided evidence for the desirability of myopic valuation in task domains that are both goal-based and episodic.

In this paper, we conduct three user studies that examine critical assumptions of our previous research: task episodicity, optimal behavior with respect to a Markov Decision Process, and lack of a failure state in the goal-based task. In the first experiment, we show that converting a simple episodic task to a non-episodic (i.e., continuing) task resolves some theoretical issues present in episodic tasks with generally positive reward and, relatedly, enables highly successful learning with non-myopic valuation in multiple user studies. The primary learning algorithm in this paper, which we call "VI-TAMER", is the first algorithm to successfully learn non-myopically from human-generated reward; we also empirically show that such non-myopic valuation facilitates a higher-level understanding of the task. Anticipating the complexity of real-world problems, we perform two subsequent user studies, one with a failure state added, that compare (1) learning when states are updated asynchronously with local bias (states quickly reachable from the agent's current state are updated more often than other states) to (2) learning with the fully synchronous sweeps across each state of the VI-TAMER algorithm. With these locally biased updates, we find that the general positivity of human reward creates problems even for continuing tasks, revealing a distinct research challenge for future work.
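
The essence of learning non-myopically from a model of human reward can be sketched in a few lines, assuming NumPy: treat the learned human-reward model as the MDP's reward function and run fully synchronous value-iteration sweeps with a high discount factor. The grid task and reward values below are invented for illustration; TAMER's actual reward-model learning is not reproduced here.

```python
# Hedged sketch of the VI-TAMER idea: synchronous value iteration over a
# learned human-reward model, so reward propagates to distant states.
import numpy as np

n_states, n_actions, gamma = 16, 4, 0.95   # 4x4 grid; gamma near 1 = non-myopic

def step(s, a):                            # deterministic grid dynamics
    r, c = divmod(s, 4)
    if a == 0: r = max(0, r - 1)
    elif a == 1: r = min(3, r + 1)
    elif a == 2: c = max(0, c - 1)
    else: c = min(3, c + 1)
    return 4 * r + c

# Stand-in for the model of human reward (TAMER fits this from live feedback):
# the trainer praises actions that reach the goal state 15.
R_hat = np.full((n_states, n_actions), -0.01)
for s in range(n_states):
    for a in range(n_actions):
        if step(s, a) == 15:
            R_hat[s, a] = 1.0

V = np.zeros(n_states)
for _ in range(200):                       # full synchronous sweeps
    V = np.array([max(R_hat[s, a] + gamma * V[step(s, a)]
                      for a in range(n_actions))
                  for s in range(n_states)])
policy = [int(np.argmax([R_hat[s, a] + gamma * V[step(s, a)]
                         for a in range(n_actions)]))
          for s in range(n_states)]
print(policy)
```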

Semi-automatic Generation of Recommendation Processes and their GUIs

Hermann Kaindl, Elmar P. Wach, Ada Okoli, Roman Popp, Ralph Hoch, Werner Gaulke, Tim Hussein

Creating and optimizing content- and dialogue-based recommendation processes and their GUIs (graphical user interfaces) manually is expensive and slow. Changes in the environment may also be found too late or even be overlooked by humans. We show how to generate such processes and their GUIs semi-automatically by using knowledge derived from unstructured data such as customer feedback on products on the Web. Our approach covers the whole lifecycle from knowledge discovery through text mining techniques to the use of this knowledge for semi-automatic generation of recommendation processes and their user interfaces as well as their comparison in real-world use within the e-commerce domain through A/B-variant tests. These tests indicate that our approach can lead to better results as well as less manual effort.

Curating and Contextualizing Twitter Stories to Assist with Social Newsgathering

Arkaitz Zubiaga, Heng Ji, Kevin Knight

While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering news stories and with researching them as users comment on them. Five journalism professionals who tested the tool found it helpful for gathering additional facts on breaking news, as well as for discovering potential information sources, such as witnesses in the geographical locations of news events.

Real-time Hand Interaction for Augmented Reality on Mobile Phones

Wendy H. Chun, Tobias Hollerer

Over the past few years, Augmented Reality has become widely popular in the form of smartphone applications; however, most smartphone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grained control of an input value) while keeping the hand-detection methodology as simple as possible to maintain good performance on smartphones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones.

Agent Metaphor for Machine Translation Mediated Communication

Chunqi Shi, Donghui Lin, Toru Ishida

Machine translation is increasingly used to support multilingual communication. Because of unavoidable translation errors, such communication cannot transfer information accurately. We propose to shift from the transparent-channel metaphor to the human-interpreter (agent) metaphor. Instead of viewing machine-translation-mediated communication as a transparent channel, the interpreter (agent) encourages the dialog participants to collaborate, as their interactivity helps reduce translation errors, the noise of the channel. We examine the translation issues raised by multilingual communication and analyze the impact of interactivity on the elimination of translation errors. We propose an implementation of the agent metaphor, which promotes interactivity between the dialog participants and the machine translator. We design the architecture of our agent, analyze the interaction process, describe decision support and autonomous behavior, and provide an example of repair strategy preparation. We conduct an English-Chinese communication task experiment on tangram arrangement. The experiment shows that, compared to the transparent-channel metaphor, our agent metaphor reduced human communication effort by 21.6%.

Team Reactions to Voiced Agent Instructions in a Pervasive Game

Stuart Moran, Nadia Pantidi, Khaled Bachour, Joel E Fischer, Martin Flintham, Tom Rodden, Simon Evans, Simon Johnson

The assumed role of humans as controllers and instructors of machines is changing. As systems become more complex and incomprehensible to humans, it will be increasingly necessary for us to place confidence in intelligent interfaces and follow their instructions and recommendations. This type of relationship becomes particularly intricate when we consider significant numbers of humans and agents working together in collectives. While instruction-based interfaces and agents already exist, our understanding of them within the field of Human-Computer Interaction is still limited.

As such, we developed a large-scale pervasive game called "Cargo", in which a semi-autonomous, rule-based agent distributes text-to-speech instructions to multiple teams of players via their mobile phones. We describe how people received, negotiated and acted upon the instructions in the game, both individually and as a team, and how players' initial plans and expectations shaped their understanding of the instructions.

Westland Row Why So Slow? Fusing Social Media and Linked Data Sources for Understanding Real-Time Traffic Conditions

Elizabeth M. Daly, Freddy Lecue, Veli Bicer

The advent of real-time traffic streaming offers users the opportunity to visualise current traffic conditions and congestion information. However, real-time information highlighting the underlying reasons for tail-backs remains largely unexplored. Broken traffic lights, an accident, a large concert, or road-works reveal important information for citizens and traffic operators alike. Providing such information in real time requires intelligent mechanisms and user interfaces in order to (i) harness heterogeneous data sources (volume, velocity, variety, veracity) and (ii) make the derived knowledge consumable, so users can visualise traffic conditions and congestion information and make better routing decisions while travelling. This work focuses on surfacing relevant information and explaining the underlying reasons behind traffic conditions. To this end, static data from event providers and planned road-works, together with dynamically emerging events such as traffic accidents, localised weather conditions or unplanned obstructions captured through social media, are fused to provide users with real-time feedback highlighting the causes of traffic congestion.
