When algorithms make decisions, is there any room for discretion? Is new media technology making democratic politics impossible? What are the implications of the information explosion unleashed by large corporations such as Google? How has social life been transformed by new media technologies? What transformations have emerged in art and performative cultures under the impact of interactive media technologies? How has our understanding of the ‘human’ been transformed by advances in genetic engineering and artificial intelligence? Humanities Informatics is emerging as a new field in response to these questions.

In June 2018, the Humanities Informatics Lab at UVA hosted the Consortium of Humanities Centers and Institutes annual meeting on this theme. Scholars from universities across the globe gathered to discuss the ubiquity of data in our lives and showcase the power of the humanities to address urgent questions about the ‘human’ in our information age. You can find podcasts of the lectures here.


Upcoming Events

HMI Weekly Meeting: Algorithmic Fairness with Deborah Hellman, Law, UVA

Wednesday, October 16, 2019

12:15pm - 1:30pm in Wilson 142 (lunch served)

Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught because it requires that we agree about what fairness is and what it entails. Unfortunately, we do not. The technical literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces be equally accurate for members of legally protected groups, blacks and whites for example. According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both dimensions. This fact raises a pressing question: which type of measure should we prioritize, and why?

This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. First, equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill-suited as a measure of fairness. This is the Article’s conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, it provides important reasons to suspect that unfairness exists. This is the Article’s normative contribution. Interestingly, improving the overall accuracy of algorithms will lessen this unfairness. Unfortunately, a common assumption that antidiscrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article’s third contribution is to show that the law poses less of a barrier than many assume.
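The tension between the two families of measures described in the abstract can be made concrete with a small sketch. The code below (an illustration under assumptions, not material from the talk or the Article: the groups, predictions, and labels are invented) computes a calibration-style measure (positive predictive value) alongside false positive and false negative rates for two hypothetical groups with different underlying base rates, showing that the first can be equal across groups while the second diverges.

```python
def measures(preds, labels):
    """Compute PPV, FPR, and FNR from binary predictions and true labels."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    ppv = tp / (tp + fp)   # "a score means the same thing": accuracy of a positive score
    fpr = fp / (fp + tn)   # rate of false positives among the truly negative
    fnr = fn / (fn + tp)   # rate of false negatives among the truly positive
    return ppv, fpr, fnr

# Hypothetical groups with different base rates (invented for illustration).
# Group A: 8 flagged (6 correctly), 4 not flagged (2 incorrectly).
group_a_preds  = [1] * 8 + [0] * 4
group_a_labels = [1] * 6 + [0] * 2 + [1] * 2 + [0] * 2
# Group B: 4 flagged (3 correctly), 8 not flagged (3 incorrectly).
group_b_preds  = [1] * 4 + [0] * 8
group_b_labels = [1] * 3 + [0] * 1 + [1] * 3 + [0] * 5

ppv_a, fpr_a, fnr_a = measures(group_a_preds, group_a_labels)
ppv_b, fpr_b, fnr_b = measures(group_b_preds, group_b_labels)

# A positive score is equally accurate for both groups (PPV = 0.75 for each),
# yet the error-rate measures diverge: FPR is 0.5 vs ~0.17, FNR is 0.25 vs 0.5.
print(ppv_a, ppv_b, fpr_a, fpr_b, fnr_a, fnr_b)
```

Because the two groups' base rates differ, equalizing the first measure forces the error-rate measures apart, which is the impossibility the abstract points to.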

HMI Weekly Meeting: Representation in Deep Neural Networks with Cameron Buckner, Philosophy, University of Houston

Friday, October 25, 2019

12:15pm - 1:30pm, Wilson 142

No HMI Meeting: Thanksgiving

Wednesday, November 27, 2019

HMI Conference

Friday, December 6, 2019


HMI Weekly Meeting: Applications of Machine Learning to Marketing Big Data with Natasha Foutz, Commerce School, UVA

Wednesday, January 29, 2020

12:15pm - 1:30pm in Wilson 142 (lunch served)

HMI Weekly Meeting: Computer Ethics with John Basl, Philosophy, Northeastern University

Wednesday, February 12, 2020

12:15pm - 1:30pm in Wilson 142 (lunch served)

HMI Weekly Meeting: Criminal Justice and Big Data with Sarah Brayne, Sociology, University of Texas

Wednesday, March 4, 2020

12:15pm - 1:30pm in Wilson 142 (lunch served)

No HMI Meeting: Spring Break

Wednesday, March 11, 2020

Humanities Informatics Lab Showcase

Thursday, April 2, 2020 to Saturday, April 4, 2020

Wilson 142