
Alex Kale
Assistant Professor of Computer Science at UChicago.
Uncertainty visualization, data cognition, HCI.
Hello! I am an Assistant Professor in Computer Science and the Data Science Institute at the University of Chicago. I create and evaluate tools for helping people think with data, specializing in data visualization and reasoning with uncertainty. My work addresses gaps in dominant theories and models of what makes visualization effective for inference and decision making, with the goal of building more principled data science tools.
Before starting at UChicago, I earned my PhD at the University of Washington (UW) Information School where I worked with Jessica Hullman. During graduate school, I collaborated on statistical tools with members of the Midwest Uncertainty Collective at Northwestern University CS, the Interactive Data Lab at UW CS&E, and the Interpret ML team at Microsoft Research. I also earned my MS in Information Science at UW in 2020 and my BS in Psychology at UW in 2015.
I create and evaluate software tools to help people think with data. Visualizations and software often mediate our interactions with data, in part because these media make thinking and communication more efficient. However, our current approaches to thinking with data often fail to account for the cognitive mechanisms that guide people's interpretations of data, such as the heuristics and other dynamic mental processes that underlie human judgment and decision making. As a result, tools for reasoning with data leave us open to the failure modes of human cognition, especially in applications involving uncertainty and statistical reasoning. My research aims to address these problems by creating tools that explicitly represent the user's cognitive process and by pursuing a more theoretically grounded and empirically rigorous science behind the design of software tools for data science and visualization.
Here are some representative publications. See my CV and research statement to learn more.
Causal Support: Modeling Causal Inferences with Visualizations
VIS 2021, Honorable Mention Award 🏆
Alex Kale, Yifan Wu, and Jessica Hullman
Visual Reasoning Strategies for Effect Size Judgments and Decisions
VIS 2020, InfoVis Best Paper Award 🏆
Alex Kale, Matthew Kay, and Jessica Hullman
Boba: Authoring and Visualizing Multiverse Analyses
VIS 2020
Yang Liu, Alex Kale, Tim Althoff, and Jeffrey Heer
Adaptation and Learning Priors in Visual Inference
Position Paper
VIS 2019
Alex Kale and Jessica Hullman
Capture & Analysis of Active Reading Behaviors for Interactive Articles on the Web
EuroVis 2019
Matt Conlen, Alex Kale, and Jeffrey Heer
Here are some recent talks, in addition to those accompanying conference papers.
SIPS 2022 - Workshop: Multiverse Analyses - Introduction and Applications
Research and data science involve myriad decisions about how to collect, analyze, and report on data. These decisions impact what gets measured
and how it gets interpreted, potentially influencing the downstream conclusions that are drawn from data. For more rigorous analysis, we need
software tools that enable analysts to express a set of possible decisions and skeptically examine how robust their results are to different combinations
of choices. I present Boba, a tool for multiverse analysis created in collaboration with researchers in the University of Washington Interactive Data Lab.
Boba consists of both (1) a domain specific language for authoring multiverse analyses in Python, R, and other scripting languages, and (2) an interactive
visualization tool for exploring results of multiverse analyses which runs in a web browser. In this workshop presentation, I walk through a few example analyses
demonstrating how Boba and tools like it can improve the rigor of data analysis.
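To make the idea concrete, here is a minimal sketch of a multiverse analysis in plain Python (this is not Boba's actual DSL, and the decision points, data, and model are hypothetical): the core pattern is enumerating every combination of analysis decisions, running the same analysis under each one, and collecting the results for comparison.

```python
# Illustrative multiverse-analysis sketch (hypothetical decisions and data, not Boba's DSL).
from itertools import product

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for a real dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 0.5 * df["x"] + rng.normal(scale=1.0, size=200)

# Decision points: each combination of options defines one "universe".
decisions = {
    "outlier_rule": ["none", "trim_2sd"],
    "transform": ["identity", "rank"],
}

results = []
for outlier_rule, transform in product(*decisions.values()):
    data = df.copy()
    if outlier_rule == "trim_2sd":
        data = data[np.abs(data["y"] - data["y"].mean()) < 2 * data["y"].std()]
    if transform == "rank":
        data["y"] = data["y"].rank()
    fit = smf.ols("y ~ x", data=data).fit()
    results.append({
        "outlier_rule": outlier_rule,
        "transform": transform,
        "coef_x": fit.params["x"],
        "p_x": fit.pvalues["x"],
    })

# One row per universe; inspect how the estimate varies across choices.
multiverse = pd.DataFrame(results)
print(multiverse)
```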
SDSS 2020 - User Testing Statistical Graphics
Conventional statistical graphics tend to emphasize point estimates and omit uncertainty information. However, given that research in
visualization, psychology, and behavioral economics shows that people often satisfice (i.e., use heuristics that deviate from the optimal strategy)
when reasoning with uncertainty, users of data visualizations may not recognize or correctly interpret uncertainty. I argue that we need to understand
users’ potential reasoning strategies in order to design graphical interfaces that steer users toward more systematic ways of reasoning with uncertainty.
In this talk, I present empirical evidence on how users satisfice, both when reading individual charts and when conducting analysis, and
I discuss ways we are designing statistical graphics and interfaces for data analysis that anticipate users’ tendency to satisfice.
In my role as an Assistant Professor in Computer Science and the Data Science Institute at the University of Chicago, I will teach courses on data visualization, introductory programming, and advanced topics in data science (to be decided).
You can learn more about my teaching by reading my teaching statement.