Alex Kale

Assistant Professor of Computer Science at UChicago.
Uncertainty visualization, data cognition, HCI.

Bio

Hello! I am an Assistant Professor in Computer Science and the Data Science Institute at the University of Chicago. I create and evaluate tools for helping people think with data, specializing in data visualization and reasoning with uncertainty. My work addresses gaps in dominant theories and models of what makes visualization effective for inference and decision making, with the aim of building more principled data science tools.


Before starting at UChicago, I earned my PhD at the University of Washington (UW) Information School where I worked with Jessica Hullman. During graduate school, I collaborated on statistical tools with members of the Midwest Uncertainty Collective at Northwestern University CS, the Interactive Data Lab at UW CS&E, and the Interpret ML team at Microsoft Research. I also earned my MS in Information Science at UW in 2020 and my BS in Psychology at UW in 2015.

Research

I create and evaluate software tools to help people think with data. Visualizations and software often mediate our interactions with data, in part because these media facilitate efficiency in thinking and communication. However, our current approaches to thinking with data often fail to account for the cognitive mechanisms that guide people's interpretations of data, such as heuristics and other dynamic processes of the mind that underlie human judgment and decision making. As a result, tools for reasoning with data leave us open to the failure modes of human cognition, especially in applications involving uncertainty and statistical reasoning. My research aims to address these problems by creating tools that explicitly represent the user's cognitive process and by pursuing a more theoretically grounded and empirically rigorous science behind the design of software tools for data science and visualization.


Here are some representative publications. See my CV and research statement to learn more.

Causal Support: Modeling Causal Inferences with Visualizations
VIS 2021, Honorable Mention Award 🏆
Alex Kale, Yifan Wu, and Jessica Hullman

Modeling causal inferences with visualizations: (A) Users view and may interact with data visualizations; (B) Ideally, users reason through a series of comparisons that allow them to allocate subjective probabilities to possible data generating processes; and (C) We elicit users’ subjective probabilities as a Dirichlet distribution across possible causal explanations and compare these causal inferences to a computed benchmark of causal support, which we derive from Bayesian inference across possible causal models.
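
The causal support benchmark amounts to Bayesian model comparison: score how likely the observed data are under each candidate causal explanation, then normalize those scores into a normative allocation of belief to compare against participants' elicited probabilities. The sketch below is not the paper's implementation, only a minimal illustration assuming hypothetical log marginal likelihoods and a uniform prior over candidate models.

    import numpy as np

    def causal_support_benchmark(log_marginal_likelihoods):
        """Turn log marginal likelihoods P(data | model) for each candidate
        causal model into normative posterior probabilities, assuming a
        uniform prior over models (normalized in log space for stability)."""
        log_ml = np.asarray(log_marginal_likelihoods, dtype=float)
        log_post = log_ml - np.logaddexp.reduce(log_ml)
        return np.exp(log_post)

    # Hypothetical log marginal likelihoods for three candidate explanations
    # (e.g., "no effect", "treatment causes outcome", "confounded"):
    benchmark = causal_support_benchmark([-42.7, -39.1, -41.0])

    # Hypothetical elicited subjective probabilities from one participant:
    elicited = np.array([0.5, 0.3, 0.2])

    print(benchmark, elicited)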

Visual Reasoning Strategies for Effect Size Judgments and Decisions
VIS 2020, InfoVis Best Paper Award 🏆
Alex Kale, Matthew Kay, and Jessica Hullman

We present a mixed design experiment on Mechanical Turk which tests eight uncertainty visualization designs: intervals, hypothetical outcome plots, densities, and quantile dotplots, each with and without means added. Participants estimate the effect size of an intervention and make an incentivized decision whether or not to pay for that intervention. Our results suggest that many users rely on the sub-optimal strategy of judging the distance between distributions while ignoring uncertainty. Visualization designs that support the least biased estimation do not support the best decisions, suggesting that a chart user’s sense of the signal in a chart may vary for different tasks.
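
For readers unfamiliar with quantile dotplots, the dots are placed at evenly spaced quantiles of a predictive distribution so that each dot carries an equal share of the probability mass. The sketch below only illustrates that construction, assuming a normal predictive distribution rather than the study's actual stimuli.

    import numpy as np
    from scipy import stats

    def quantile_dots(dist, n_dots=20):
        """Place n_dots dots at evenly spaced quantiles of a predictive
        distribution, so each dot represents 1/n_dots of the probability mass."""
        probs = (np.arange(n_dots) + 0.5) / n_dots  # midpoints of equal-probability bins
        return dist.ppf(probs)

    # Hypothetical predictive distribution for the effect of an intervention:
    dots = quantile_dots(stats.norm(loc=1.2, scale=0.8), n_dots=20)
    print(dots)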

Boba: Authoring and Visualizing Multiverse Analyses
VIS 2020
Yang Liu, Alex Kale, Tim Althoff, and Jeffrey Heer

Boba is an integrated domain-specific language (DSL) and visual analysis system for authoring and visually exploring multiverse analyses. The Boba DSL enables analysts to specify decision spaces in data analysis, supporting multiple scripting languages. The Boba Visualizer provides linked views of the decision space and model results to enable rapid, systematic assessment of robustness of results to decision alternatives, sampling uncertainty, and model fit.
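
The snippet below is not the Boba DSL, just a bare-bones sketch of the idea such tools automate: declare a space of analysis decisions, enumerate every combination of options (each combination one "universe"), and run the same analysis under each. The decision names and the run_analysis stub are hypothetical.

    from itertools import product

    # Hypothetical decision space: each key is an analysis decision,
    # each list is the set of reasonable options for that decision.
    decisions = {
        "outlier_rule": ["none", "iqr", "z>3"],
        "transform": ["raw", "log"],
        "model": ["ols", "robust"],
    }

    def run_analysis(spec):
        """Stand-in for one analysis script; in a real multiverse each spec
        would produce an effect estimate to compare across universes."""
        return {"spec": spec, "estimate": None}

    # The full multiverse is the cross product of all decision options.
    universes = [dict(zip(decisions, combo)) for combo in product(*decisions.values())]
    results = [run_analysis(spec) for spec in universes]
    print(len(results), "universes")  # 3 * 2 * 2 = 12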

Adaptation and Learning Priors in Visual Inference
Position Paper
VIS 2019
Alex Kale and Jessica Hullman

We review the vision science literature on adaptation, a set of processes by which the visual system adjusts incoming sensory signals based on previous visual experience. We present an explanation of visual adaptation that is tailored to the visualization community and consider the implications of adaptation when designing for inferences from visualized data.

Capture & Analysis of Active Reading Behaviors for Interactive Articles on the Web
EuroVis 2019
Matt Conlen, Alex Kale, and Jeffrey Heer

Interactive articles are increasingly popular online, yet their effectiveness at engaging readers has not been widely studied with real-world audiences. We developed tools for instrumentation and analysis to make it easier for researchers and publishers to understand how readers are reacting to this new medium.

Invited Talks

These are my recent talks, in addition to those accompanying conference papers.

Using Boba to Author and Visualize Multiverse Analyses

SIPS 2022 - Workshop: Multiverse Analyses - Introduction and Applications
Research and data science involve myriad decisions about how to collect, analyze, and report on data. These decisions impact what gets measured and how it gets interpreted, potentially influencing the downstream conclusions that are drawn from data. For more rigorous analysis, we need software tools that enable analysts to express a set of possible decisions and skeptically examine how robust their results are to different combinations of choices. I present Boba, a tool for multiverse analysis created in collaboration with researchers in the University of Washington Interactive Data Lab. Boba consists of both (1) a domain-specific language for authoring multiverse analyses in Python, R, and other scripting languages, and (2) an interactive visualization tool for exploring results of multiverse analyses which runs in a web browser. In this workshop presentation, I walk through a few example analyses demonstrating how Boba and tools like it can improve the rigor of data analysis.

Expect Users to Satisfice: Designing Interfaces for Reasoning with Uncertainty

SDSS 2020 - User Testing Statistical Graphics
Conventional statistical graphics tend to emphasize point estimates and omit uncertainty information. However, given that research in visualization, psychology, and behavioral economics shows that people often satisfice (i.e., use heuristics that deviate from the optimal strategy) when reasoning with uncertainty, users of data visualizations may not recognize or correctly interpret uncertainty. I argue that we need to understand users’ potential reasoning strategies in order to design graphical interfaces that steer users toward more systematic ways of reasoning with uncertainty. In this talk, I present empirical evidence on how users satisfice, both when reading individual charts and when conducting analysis, and I discuss ways we are designing statistical graphics and interfaces for data analysis that anticipate users’ tendency to satisfice.

Teaching

In my role as an Assistant Professor in Computer Science and the Data Science Institute at the University of Chicago, I will teach courses on data visualization, introductory programming, and advanced topics in data science (to be decided).


You can learn more about my teaching by reading my teaching statement.

Recent and upcoming courses

  • Fall 2022. CMSC 14100. Introduction to Computer Science I.

Get in touch

kalea@uw.edu

or follow me online