Driving is to a large extent a visual task, and safe driving demands active visual search for traffic-relevant information. Despite this, an increasing number of potentially distracting objects is continuously being added to the driving environment: advanced driver assistance systems, navigation systems, smartphones, billboards along the roads, and so on. From a human factors perspective, this development has made eye tracking one of the most important tools in traffic safety research. The objective of this presentation is to provide a bird's-eye view of how eye tracking is used in traffic safety research and, above all, to highlight many of the difficulties we have encountered in our daily work.
In naturalistic settings, where eye tracking is used over hours to months of driving without any manual recalibration, one is often surprised at how poor the tracking quality is. Tracking is lost for no apparent reason, the left and right eyes diverge in unpredictable ways, slowly varying baseline drifts of 10–20 degrees are present, and the uptime of the tracker is low. When compared to ground truth, only a fraction of eye blinks are detected, and accuracy, precision and availability deteriorate quickly as gaze targets move toward the periphery. Some eye tracking equipment has problems with sunlight, other equipment with the large pupil sizes that arise at night-time, not to mention alternating lighting conditions such as when driving under a bridge. Problems also arise due to mascara, clothing or certain types of glasses. Remote eye trackers typically have problems with the cameras' field of view, and there is usually no way of knowing whether lost tracking is due to tracking failure or because the head is outside the head box.
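To make the three quality measures above concrete, the following is a minimal sketch (not any particular tracker's API) of how accuracy, precision and availability might be computed for one known fixation target; the sample format and target coordinates are assumptions for illustration.

```python
import math

def gaze_quality(samples, target_deg=(0.0, 0.0)):
    """Summarise accuracy, precision and availability for one fixation target.

    `samples` is a list of (x, y) gaze angles in degrees, with None marking
    samples where tracking was lost -- a simplified stand-in for real
    tracker output.
    """
    valid = [s for s in samples if s is not None]
    # Availability: fraction of samples for which the tracker reported gaze.
    availability = len(valid) / len(samples)
    if not valid:
        return None, None, availability
    # Accuracy: mean angular offset between gaze and the known target.
    offsets = [math.hypot(x - target_deg[0], y - target_deg[1]) for x, y in valid]
    accuracy = sum(offsets) / len(offsets)
    # Precision: RMS of sample-to-sample angular differences (gaze jitter).
    diffs = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(valid, valid[1:])]
    precision = math.sqrt(sum(d * d for d in diffs) / len(diffs)) if diffs else 0.0
    return accuracy, precision, availability
```

For example, a recording with a constant one-degree offset and one dropped sample yields accuracy 1.0 degrees, precision 0.0 degrees and availability 0.75.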
As a wish list for eye tracker developers, we would like to see systems that can deal with quickly changing lighting conditions (including strong sunlight) as well as with mascara and glasses. We would also like well-functioning auto-calibration, including driver identification. At the top of our list, though, is better control over lost-tracking episodes. If we knew which sequences of lost tracking were due to extreme gaze angles, the data sets would be much more useful.
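The last wish could in principle be approximated post hoc, even without support from the tracker. As a hedged sketch, one could flag lost-tracking episodes whose last valid sample already lay beyond some assumed extreme-angle threshold; the threshold and sample format here are hypothetical, not from any real system.

```python
import math

def flag_dropouts(samples, extreme_deg=30.0):
    """Label each lost-tracking episode as 'extreme-angle' or 'unknown'.

    `samples`: list of (x, y) gaze angles in degrees; None = tracking lost.
    An episode whose last valid sample exceeds `extreme_deg` (an assumed
    threshold) is flagged as likely caused by an extreme gaze angle.
    """
    episodes = []
    last_valid = None
    in_gap = False
    for s in samples:
        if s is None:
            if not in_gap:  # start of a new lost-tracking episode
                if last_valid is not None and math.hypot(*last_valid) > extreme_deg:
                    episodes.append("extreme-angle")
                else:
                    episodes.append("unknown")
                in_gap = True
        else:
            last_valid = s
            in_gap = False
    return episodes
```

Such labelling is of course only a heuristic: it cannot distinguish a glance that continued past the tracker's range from a genuine tracking failure that happened to occur at a large angle, which is why we would prefer the tracker itself to report the cause.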