Eye Data Quality (EDQ) Standardisation Project

For information on COGAIN's previous standardisation work on the safety levels of IR light in eye trackers, which is now complete, please contact fiona.mulvey@humlab.lu.se.

Our current standardisation work addresses eye data quality (EDQ): how eye trackers compare in performance across different types of participants/users and usage scenarios, their suitability for valid measurement of different eye movement classes and types, and the design of methods and measures for valid comparison. The project is led and administered by the EMRA Association in collaboration with COGAIN, whose role is to report results as they are relevant to users and designers of gaze-interaction interfaces.
 
In April 2014 the first large-scale test of the eye data quality of commercial eye trackers, using the methods developed by the standardisation committee, was hosted by the Humanities Laboratory, Lund University, Sweden, which is the home lab of the project leadership and one of several sites used by the project for eye movement data collection. With the help of visiting students and researchers from the EDQ committee members' institutions, and the support of manufacturers in supplying systems, we recorded the fixational and smooth pursuit eye movements of 192 participants from around the world on 17 commercially available eye trackers, including high-end lab-based equipment, wearable eye tracking glasses, and low-cost plug-and-play eye trackers. Rochester Institute of Technology, NY, has also been a test centre, where recording has focussed on artificial eyes and the development of robotics. First results from the project were presented at ETRA 2012, ECEM 2013, VSS 2014, and at ECEM 2015 via an EDQ project symposium. The methods will be published, and both analysis scripts and data will be made available, in the future; in the meantime, any interested researchers who would like to get involved in the project are welcome to join the subcommittee and contribute with full attribution. Membership of the main technical committee is by election; open positions are advertised via a call for nominations from the subcommittee and among existing external collaborators, and/or through relevant eye movement mailing lists.
 
Current Members of the Standardisation Committee:
 
Brian Sullivan, tobii, Sweden
Carlos Morimoto, University of São Paulo, Brazil
Dixon Cleveland, LC Technologies, USA
Dong Wang, Rochester Institute of Technology, USA
Fiona Mulvey, Lund University, Sweden (Chair of the Committee)
Jason Babcock, Positive Science, USA
Jeff Mulligan, NASA Ames, USA
Jeff Pelz, Rochester Institute of Technology, USA
Josh Borah, Applied Science Laboratories, USA
Kara Latorella, NASA Langley, USA
Markus Joos, Interactive Minds Dresden, Germany
Mary Hayhoe, University of Texas, Austin, USA
Michael MacAskill, New Zealand Brain Research Institute, NZ
Sol Simpson, iSolver software, Canada
Walter Nistico, SensoMotoric Instruments (SMI), Germany
 
Contact the project management on: EDQ@cogain.org
 
The EMRA & COGAIN Technical Committee on eye data quality (scroll down for the subcommittee members list) aims to define standard measures of eye data quality for research reports and across systems, and to test the validity of these methods and measures in a series of large-scale experiments. Experts in eye tracker design and eye movement research are working together with the wider eye movement community to reach consensus on measures of accuracy and precision, temporal and spatial resolution, and the robustness of eye trackers under varying environmental conditions and individual characteristics, and to publish the outcome of tests for the benefit of the community at large. This work is very important to the development of eye tracking and eye movement research for a number of reasons, including:
 
1. Every research report using eye movement measures needs to say something about the quality of the recorded data, so that readers can judge whether the results are comparable to previous research, replicate them, and assess whether the conclusions drawn are valid. There is currently no formal consensus on how to measure and report data quality.
 
2. People interested in buying an eye tracker need to know whether it can produce data that supports the measures they need, for the people they wish to study and the environment they wish to record in, or, in the case of a single user, whether it can reliably drive a gaze-controlled interface.
 
3. Designers of gaze controlled interfaces need to know what range of accuracy and precision they can expect from a system in order to design an interface which will operate smoothly on it.
 
4. Manufacturers want to report, compare and optimise the quality of data their systems produce, and know that other manufacturers are using the same measures. System specifications based on standardised measures are the best way to do this.
 
Eye data quality standardisation helps to improve eye tracking in general. Everyone benefits.
Every effort has been made to record data from commercial eye trackers with set-ups that are repeatable in small labs or on manufacturer premises. Large-scale data recording requires replicating the set-up at multiple sites; this method, and the software to run it, are the main outputs of the work for the development of the field. All software is open source, and all interested manufacturers can have support (e.g. template implementations) if they want to run the standard experiments and analyses on their systems.
 
The current work of the Standardisation Committee is organised into the following workpackages:
 
Standardisation of eye data terminology
The terms used to describe data quality are often inadequately defined, or used inconsistently. Agreeing on definitions and standardising terminology, both conceptually and mathematically, will allow for consensus on the definition of measurement techniques. This work will provide precise definitions of data quality terminology based on the consensus of a large group of experts in the field, so that we can refer precisely to what we measure and how. The work is at an advanced stage within the committee and will soon be ready for general review. If you have expertise in eye tracking and eye data quality and would like to take part in reviewing draft versions, please contact us. The committee member leading this work is Kara Latorella, with all members contributing.
 
Designing a standardised set of artificial eyes
Artificial eyes are important tools for measuring system error and precision. Since real eyes vary in colour, shape, and pupil size, artificial eyes should represent a range of eye types and work equally well on diverse systems. Several design requirements follow: the eyes must work with both bright- and dark-pupil methods; system temporal performance must be measurable by simulating a change in gaze position; and robotic eyes are under consideration for measuring data quality with respect to eye dynamics. We are currently in negotiation with a manufacturer of artificial eyes used for training ophthalmologists, whose products fit these criteria, and we aim to modify these eyes to produce a standard set. Most manufacturers have asked to purchase the same set we will use for data quality measures. The committee member leading this work is Dong Wang, with Fiona Mulvey, Jeff Pelz, Dixon Cleveland and external collaborators.
 
Designing a standard experimental protocol for the collection of data for data quality measurement
This workpackage defines the actual experimental procedures for investigating data quality, both in our own large-scale study and in future experiments using the provided methods, for example as a control measure in a research project. We defined specifications so that the experiment and software can run on all systems and take account of varying intended usage; there is a high degree of collaboration with manufacturers in this work. The project has produced open source tools for calculating data quality using a simple experimental routine, and for investigating critical issues for reported data quality such as sample selection and variation in calculated results with sample rate and algorithm. This work is at an advanced stage in terms of results from testing human participants, and ongoing in terms of artificial eyes. We are collaborating with experts in vibration and robotics on the design of robustness measures and on means of testing data quality for the eye in motion, such as the validity of saccadic measures used in clinical research. We welcome collaboration from students, researchers and manufacturers interested in this work. So far, we have recorded 192 people on 17 eye trackers and are preparing the first large-scale system comparison for publication. ECEM 2015 included a symposium on the EDQ project where 5 of our upcoming papers were presented. This work is led by Fiona Mulvey, with input from all committee members.
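The committee's exact calculation methods are not yet published; as an illustration only, assuming gaze samples expressed in degrees of visual angle and the commonly used working definitions (accuracy as the angular offset of the mean gaze position from a fixation target; precision as RMS sample-to-sample distance and as dispersion around the sample mean), the core calculations can be sketched as:

```python
import math

def accuracy_precision(samples, target):
    """Sketch of common data quality measures for one fixation trial.

    samples: list of (x, y) gaze positions in degrees of visual angle,
             recorded while the participant fixates a known target.
    target:  (x, y) position of the fixation target in the same units.
    Returns (accuracy, precision_rms_s2s, precision_sd).
    """
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n

    # Accuracy: angular offset between the mean gaze position and the target.
    accuracy = math.hypot(mean_x - target[0], mean_y - target[1])

    # Precision (RMS-S2S): root mean square of sample-to-sample distances,
    # sensitive to high-frequency noise.
    rms_s2s = math.sqrt(
        sum((x2 - x1) ** 2 + (y2 - y1) ** 2
            for (x1, y1), (x2, y2) in zip(samples, samples[1:])) / (n - 1)
    )

    # Precision (SD): dispersion of samples around their own mean,
    # sensitive to slow drift as well as noise.
    sd = math.sqrt(
        sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in samples) / n
    )
    return accuracy, rms_s2s, sd
```

Note that RMS-S2S and SD capture different aspects of precision (noise versus dispersion), which is one reason terminology standardisation matters: reporting "precision" without stating the measure makes values incomparable across studies.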
 
Physiological measures of eye data quality
This workpackage focuses on the eye in motion, and uses knowledge of eye physiology to assess the quality of eye data. Methods under consideration include dual recording of eye movements with coil-based systems and DPI systems, reproducing `standard' and realistic eye movements by replaying human movements in robotic artificial eyes, and applying calculus to error detection. Other methods include identifying thresholds for maximum velocity and comparing the main sequence, peak velocity and skew of saccades. The work is ongoing, and we welcome collaboration. It is led by Fiona Mulvey with input from Mary Hayhoe, Jeff Pelz, Sol Simpson and others.
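The idea behind velocity-threshold checks can be sketched in a few lines. This is an illustration only, not a committee-defined procedure, and the 1000 deg/s ceiling is an assumption for the example (human saccades rarely exceed roughly 700-1000 deg/s, so faster apparent movement is likely tracker noise):

```python
import math

def implausible_samples(samples, max_velocity=1000.0):
    """Flag gaze samples whose implied velocity exceeds a physiological ceiling.

    samples: list of (t, x, y) with t in seconds and x, y in degrees
             of visual angle.
    Returns a list of (t, velocity) for samples arrived at faster than
    max_velocity deg/s, which are physiologically implausible.
    """
    flagged = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        # Point-to-point angular velocity between consecutive samples.
        velocity = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        if velocity > max_velocity:
            flagged.append((t1, velocity))
    return flagged
```

In practice such a filter would be one of several physiological plausibility checks, alongside main sequence comparisons, rather than a stand-alone quality measure.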
 
Integrating commercial eye trackers with the open source experimental software
This work involves manufacturers working with the committee and subcommittee to integrate as many available eye trackers as possible with the experimental software. The following systems have been implemented and are part of the large data set recorded in the Humanities Lab, Lund University, Sweden: SMI remote, tower and glasses systems; SR Research EyeLink systems; the Dual Purkinje Image tracker from Fourward Technologies; LC Technologies EyeFollower and Eye Gaze systems; the EyeTribe eye tracker; tobii eye trackers; and ASL glasses (some integration work remains for head-mounted systems). Several other manufacturers have provided systems or committed time to the work, and more eye trackers are being implemented into the experimental software all the time. Manufacturers willing to lend a system and some development time to the project can have their system integrated into ioHub collaboratively, using the ioHub software developed for the project by Sol Simpson (see 'Tools' menu, left). ioHub is not a complete experiment design and runtime API; its main focus is on device event monitoring, real-time reporting, and persistent storage of input device events on a system-wide basis. It is designed to work with PsychoPy and is now part of the general PsychoPy release. ioHub allows us to run eye data quality experiments on any integrated eye tracker, or other integrated sensors or input devices, with extremely precise device event management. This work is led by Sol Simpson, with input from several other committee members and manufacturers/commercial engineers.
 
Investigating the effects of data quality on eye movement measures
This workpackage uses the data collected for the comparison of systems to test event detection algorithms and their settings under varying data quality. It is a secondary output of the main work, but of critical importance to researchers. We are now preparing publications that investigate the effects of data quality on fixations and saccades in all systems, and on microsaccades in high-end video-based systems and a Dual Purkinje Image tracker. This work is led by Fiona Mulvey, with input from Jeff Pelz, Dixon Cleveland, and other committee and subcommittee members. First results were presented at VSS 2014 and ECEM 2015.
 
Funding
We are continually looking for funding for this research. Funding is donated via either EMRA (for all aspects of the standardisation work) or COGAIN (for EDQ standardisation work related to gaze interaction), both of which are not-for-profit organisations. We have received small donations from SR Research and SMI for committee travel, and several system loans from ASL, tobii, SMI, EyeTribe, and other manufacturers and software providers. Funding covers committee members' travel for meetings and workshops, small equipment, and publication costs. No members of the committee are paid for their work.
 
The Technical Committee is supported in its work by a large subcommittee with a broad skill base across eye tracking research and applications, listed below. If you are working in the area of eye tracking research or applications and would like to become a subcommittee member, please contact EDQ@cogain.org.

Andrew Clarke, Charite Medical School, Berlin, Germany
Andrew Duchowski, Clemson University, USA
Barney Hawes, Sensory Software International Ltd., UK
Bo Hu, Rochester Institute of Technology, USA
Bonita Sharif, Youngstown State University, Ohio, USA
Carlos Morimoto, Universidade de São Paulo, Brazil
Chip Clarke, Assistive Technology Works, Inc., USA
Dan Witzner Hansen, IT University of Copenhagen, Denmark
Detlev Droege, University of Koblenz, Germany
Edwige Pissaloux, Pierre and Marie Curie University, Paris, France
Erik Wastlund, Karlstad University, Sweden
Frank Marchak, Veridical Research and Design corporation, USA
Gintautas Daunys, Siauliai University, Lithuania
Haakon Lund, Royal School of Library and Information Science, Copenhagen, Denmark
Haijun Kang, Kansas State University, USA
Hendrik Koesling, University of Bielefeld, Germany
Henrik Eskilsson, tobii technologies, Sweden
Jacob Fiset, mirametrix, Quebec, Canada
Jan Hoffman, formerly of SensoMotoric Instruments, Germany
Jayson Turner, Lancaster University, UK
Javier San Agustin, IT University of Copenhagen, Denmark
Joakim Troenge, tobii technologies, Sweden
John Paulin Hansen, IT University of Copenhagen, Denmark
Jonas Andersson, SmartEye, Sweden
Karl Frederick Arrington, Arrington Research, Arizona, USA
Laura Rossi, University of Torino, Italy
Lester C. Loschky, Kansas State University, USA
Linnea Larsson, Lund University, Sweden
Lisa West Åkerblom, tobii technologies, Sweden
Magnus Sjolin, SmartEye, Sweden
Margret Buchholz, DART, Sweden
Marina Green Jarvinen, CTC, Tikoteekki, Finland
Martin Raubal, Swiss Federal Institute of Technology, Zurich, Switzerland
Oleg Spakov, University of Tampere, Finland
Patrick Worfolk, Synaptics, California, USA
Peter Blixt, tobii technologies, Sweden
Pieter Blignaut, University of the Free State, South Africa
Pieter Unema, Consultant, Salt Lake City, USA
Raimondas Zemblys, Siauliai University, Lithuania
Reynold Bailey, Rochester Institute of Technology, New York, USA
Ricardo Matos, tobii technologies, Sweden
Robert Allison, York University, Toronto, Canada
Rudolf Groner, Journal of Eye Movement Research, Bern, Switzerland
Sandra Marshall, San Diego State University, USA
Scott MacKenzie, York University, Canada
Thies Pfeiffer, University of Bielefeld, Germany
Zaheer Ahmed, IT University of Copenhagen, Denmark
Zoi Kapoula, Universite Paris Descartes, France