Gesture Interfaces for Visually-Impairing Interaction Contexts
Project no: PN-II-RU-TE-2014-4-1187; Contract no: 47/01.10.2015
Principal Investigator: Radu-Daniel Vatavu
Funded by UEFISCDI, Romania
Running period: October 2015 - September 2017 (24 months)
Abstract
In this project, we design gesture interface technology that is more accessible to people with visual impairments, as well as to people without impairments who occasionally find themselves in visually-impairing situations, i.e., when they cannot get a clear, direct look at the smartphone yet interaction is mandatory. Our goal is to understand the effects of visual impairment on gesture performance on touch-sensitive mobile devices and, consequently, to inform the development of new algorithms that recognize gestures under various visually-impairing conditions, and of new interaction techniques that help mobile users overcome their visual impairment, whether physiological or situational.
Touch input on today's smart devices is entirely visual and, consequently, requires a direct view of the screen to locate, select, and control on-screen objects. Eye conditions that affect visual acuity or the field of view make interaction less efficient and, possibly, ineffective.
From left to right: (1) clear sight of a tablet, as experienced by a person with full sight; (2) visual acuity loss, caused for example by hyperopia; (3) peripheral vision loss, caused by moderate glaucoma; (4) central vision loss, caused by moderate age-related macular degeneration; (5) color blindness. Note: images were produced using the Vision and Hearing Impairment Simulator and are included with permission (courtesy of Sam Waller).
Concrete objectives
- Understand the ways in which visually-impairing conditions, whether physiological or situational, affect touch input on mobile devices such as smartphones or tablets.
- Develop efficient and robust algorithms for recognizing touch gestures performed in visually-impairing contexts (an illustrative recognizer sketch follows this list).
- Design and implement assistive touch input techniques for efficient item selection and text entry on mobile devices in visually-impairing contexts.
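To make the recognition objective concrete, below is a minimal sketch of a template-based stroke-gesture recognizer in the spirit of the $-family of recognizers: resample the stroke, normalize its position and scale, then match it to the nearest template. This is an illustrative assumption of how such a recognizer can work, not the project's algorithm; the project's actual pseudocode is linked under "Other resources" below, and all function names here (resample, normalize, path_distance, recognize) are ours.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n points spaced equally along its path."""
    pts = list(points)
    path = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step = path / (n - 1)
    resampled, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)  # measure the next segment from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(pts[-1])
    return resampled[:n]

def normalize(points):
    """Translate the stroke to its centroid and scale it to a unit box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(x for x, _ in pts) - min(x for x, _ in pts)
    h = max(y for _, y in pts) - min(y for _, y in pts)
    s = max(w, h) or 1.0  # avoid division by zero for degenerate strokes
    return [(x / s, y / s) for x, y in pts]

def path_distance(a, b):
    """Mean point-to-point Euclidean distance between two aligned strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates, n=32):
    """Classify a stroke as the label of its nearest template (1-NN)."""
    c = normalize(resample(candidate, n))
    return min(templates,
               key=lambda lbl: path_distance(c, normalize(resample(templates[lbl], n))))

if __name__ == "__main__":
    templates = {"line": [(0, 0), (100, 0)],
                 "vee": [(0, 0), (50, 50), (100, 0)]}
    print(recognize([(0, 2), (48, 51), (99, 1)], templates))  # -> vee
```

A nearest-neighbor design of this kind can be trained from a handful of templates per gesture class, which matters when collecting gesture data from participants with visual impairments is costly.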
Expected results
- Recognition algorithms for touch and free-hand gestures performed in visually-impairing contexts.
- Interaction techniques for target selection and symbol entry on smart devices with appropriate feedback to assist touch input.
- Gesture datasets collected from people with low vision and from people without visual impairments in visually-impairing situations. Accompanying software for gesture data collection in experimental settings and accompanying methodology for gesture analysis.
- Scientific publications in high-quality journals and conferences (at least 5 major publications), research and progress reports.
Team
- Radu-Daniel Vatavu, Principal Investigator
- Doina-Maria Schipor, Postdoctoral researcher (Psychology)
- Ionela Rusu, Postdoctoral researcher (Computer Science)
- Gabriel Cramariuc, PhD student (2015-2017) & Postdoctoral researcher (2017) (Computer Science)
- Bogdan-Florin Gheran, PhD student (Computer Science)
Publications
Media
Project reports (in Romanian)
- Progress reports: 2015 (PDF), 2016 (PDF), and 2017 (PDF)
- Scientific reports: 2015 (PDF) and 2016 (PDF)
- Summary scientific report (2015-2017): PDF
Other resources (software, algorithms, and datasets)
- Software for gesture data collection in experimental settings: RAR | License (see the logging sketch after this list)
- Touch gesture recognition algorithm (pseudocode): PDF
- Free-hand gesture recognition algorithm (pseudocode): PDF
- Gesture datasets. The following datasets were collected:
  - 26,625 touch tap gestures for static targets from 54 participants (27 with visual impairments)
  - 24,343 touch tap gestures for moving targets from 54 participants (27 with visual impairments)
  - 11,334 touch tap gestures for dense targets from 54 participants (27 with visual impairments)
  - 6,562 stroke gestures from 54 participants (27 with visual impairments)
  - 3,600 free-hand gestures from 30 participants (10 with visual impairments)
  - 11,383 touch tap gestures from 11 participants and 2,400 stroke gestures from 10 participants in situationally-impairing conditions
  We will make these datasets public once the corresponding publications are accepted and published.
- Demos for the 1-2-Text and 1-2-3-Text soft keyboards are available here and here. Please note that the two keyboards work with touch input only.
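As a rough illustration of what a data-collection tool of this kind records, here is a hypothetical sketch of a time-stamped touch/pointer logger. The released RAR archive contains the project's actual software, so the CSV schema used here (participant_id, trial, event, x, y, t_ms) and the use of Tkinter are assumptions for illustration only.

```python
# Hypothetical sketch: log time-stamped pointer events (as a stand-in for
# touch events) to CSV, one row per touch-down, move, or touch-up.
import csv, time, tkinter as tk

PARTICIPANT_ID, TRIAL = "P01", 1  # hypothetical session metadata

def main():
    root = tk.Tk()
    root.title("Gesture logger (sketch)")
    canvas = tk.Canvas(root, width=600, height=400, bg="white")
    canvas.pack()
    log = open("gestures.csv", "w", newline="")
    writer = csv.writer(log)
    writer.writerow(["participant_id", "trial", "event", "x", "y", "t_ms"])
    t0 = time.perf_counter()

    def record(event_type):
        def handler(e):
            t_ms = int((time.perf_counter() - t0) * 1000)
            writer.writerow([PARTICIPANT_ID, TRIAL, event_type, e.x, e.y, t_ms])
            # draw a dot so the participant sees the registered touch point
            canvas.create_oval(e.x - 1, e.y - 1, e.x + 1, e.y + 1, fill="black")
        return handler

    canvas.bind("<ButtonPress-1>", record("down"))  # touch down
    canvas.bind("<B1-Motion>", record("move"))      # stroke trajectory points
    canvas.bind("<ButtonRelease-1>", record("up"))  # touch up
    root.protocol("WM_DELETE_WINDOW", lambda: (log.close(), root.destroy()))
    root.mainloop()

if __name__ == "__main__":
    main()
```

Each logged row carries the minimal information needed to reconstruct tap locations and stroke trajectories offline, which is the form the datasets above would typically take.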