Alvitta Ottley standing in front of the Where's Waldo? exercise

Painting a Picture with Data

Alvitta Ottley, who recently received her doctorate in computer science, studies visualizations, their users, and how to bring the two together. 

By Lynne Powers

Alvitta Ottley understands firsthand the importance of communicating complex medical information clearly and concisely. When she attended prenatal screenings before the birth of her son, she heard a lot of numbers about the potential risks involved in pregnancy and birth. Thanks to her academic background, she understood the implications of those probabilities, but, she says, “realizing that the doctor was actually struggling to communicate this information to me was sobering.”

To help eliminate confusion, the medical community often uses visual aids to present complex probabilities to patients. Ottley chose to study how helpful these visualizations actually are for patients’ comprehension of the data.

Working with Associate Professor Remco Chang in the Visual Analytics Lab at Tufts (VALT), Ottley designed an experiment to test the efficacy of several different methods of communicating medical risk information, including storyboarding, visualizations, and unformatted plain text. She also measured participants’ spatial ability: their capacity to understand and remember the spatial relations among objects. What she found was quite interesting: in every condition, users with high spatial ability outperformed users with low spatial ability, who struggled to understand the data regardless of the method used to communicate the information. Ottley called this finding “alarming”: half of the tested population did not understand the information presented to them. The result made her want to learn more about which cognitive traits influence how users process information presented in visualizations.

Ottley was involved in a VALT study in which subjects performed the simple and familiar task of finding Waldo in a complex scene. Users were presented with a Where’s Waldo? image and simple controls to zoom and to scan left, right, up, and down. The research team tracked their clicks and found that users with an “internal locus of control,” who felt that they controlled the course of their own lives, tended to process globally, exploring at a higher level and finding Waldo relatively quickly. By contrast, users with an “external locus of control,” who ascribed more importance to destiny or a higher power in determining their lives, processed locally. They zoomed in and scrolled through small sections at a time, taking longer to spot Waldo.

“In less than a minute of capturing a user’s interactions, we were able to predict if they were going to be fast or slow at completing the task,” says Ottley. The results from the Waldo study suggest that a visualization system could be designed to predict users’ behavior and, in real time, adapt to suit their traits and needs. “For example, if a user is lost, maybe the visualization could detect this and offer an alternative area of exploration for them,” Ottley explains. “For big data, can the visualization predict that a person is about to do a calculation and pre-fetch the data for them so they don’t have to wait?”

That’s the question, and one that Ottley plans to pursue in her new role as an assistant professor in the Department of Computer Science and Engineering at Washington University in St. Louis.