no code implementations • 27 Jan 2024 • Ziyang Guo, Yifan Wu, Jason Hartline, Jessica Hullman
We argue that the current definition of appropriate reliance used in such research lacks formal statistical grounding and can lead to contradictions.
no code implementations • 25 Jan 2024 • Jessica Hullman, Alex Kale, Jason Hartline
We argue that to attribute loss in human performance to forms of bias, an experiment must provide participants with the information that a rational agent would need to identify the normative decision.
no code implementations • 16 Jan 2024 • Dongping Zhang, Angelos Chatzimparmpas, Negar Kamali, Jessica Hullman
As deep neural networks are more commonly deployed in high-stakes domains, their black-box nature makes uncertainty quantification challenging.
no code implementations • 30 Nov 2023 • Jake M. Hofman, Angelos Chatzimparmpas, Amit Sharma, Duncan J. Watts, Jessica Hullman
Amid rising concerns of reproducibility and generalizability in predictive modeling, we explore the possibility and potential benefits of introducing pre-registration to the field.
no code implementations • 21 Aug 2023 • Jessica Hullman, Ari Holtzman, Andrew Gelman
In this essay, we focus on an unresolved tension when we bring this dilemma to bear in the context of generative AI: are we looking for proof that generated media reflects something about the conditions that created it or some eternal human essence?
no code implementations • 15 Aug 2023 • Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, Jessica Hullman, Michael A. Lones, Momin M. Malik, Priyanka Nanayakkara, Russell A. Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J. Salganik, Marta Serra-Garcia, Brandon M. Stewart, Gilles Vandewiele, Arvind Narayanan
Machine learning (ML) methods are proliferating in scientific research.
no code implementations • 10 Aug 2023 • Hariharan Subramonyam, Jessica Hullman
Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models.
no code implementations • 12 Mar 2022 • Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan
We conclude by discussing risks that arise when sources of errors are misdiagnosed and the need to acknowledge the role of human inductive biases in learning and reform.
2 code implementations • 28 Jul 2020 • Alex Kale, Matthew Kay, Jessica Hullman
We also see that the visualization designs that support the least biased effect size estimation do not support the best decision-making, suggesting that a chart user's sense of effect size may not be the same when they use the same information for different tasks.
1 code implementation • 23 Apr 2020 • Sungsoo Ray Hong, Jessica Hullman, Enrico Bertini
As the use of machine learning (ML) models in product development and data-driven decision-making has become pervasive across many domains, people's focus has increasingly shifted from building a well-performing model to understanding how their model works.