I am an Assistant Professor in the Department of Public Administration and International Affairs in the Maxwell School at Syracuse University.
My research is at the intersection of practical ethics, political philosophy, and technology. Currently, I work on the governance of AI and the philosophy of data science.
As part of the Governance of AI Research Group, I co-edited the forthcoming Oxford Handbook of AI Governance.
My research concerns the governance of emerging technologies, such as self-driving cars, autonomous weapons systems, and machine learning in the public sector. I am currently working on a book on the philosophy of data science. For several years, my work investigated the nature of human and collective agency and how it relates to moral responsibility. I still draw on this research when thinking about the ethics of artificial intelligence.
Some of my work is in practical political philosophy. I have examined what states owe to refugees and how the digital economy affects global justice.
Some time ago, a computer science magazine asked me to talk about my work on self-driving cars.
co-authored with Désirée Lim
This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. The chapter then presents the concept of structural injustice as introduced by the philosopher Iris Marion Young. The chapter moreover argues that structural injustice is well suited as an approach to the governance of AI and compares this approach to alternative approaches that start from analyses of harms and benefits or from value statements. The chapter suggests that structural injustice provides methodological and normative foundations for the values and concerns of diversity, equity, and inclusion (DEI). The chapter closes with a look at the ideas of "structure" and responsibility. The idea of structure is central to justice. An open theoretical research question is to what extent AI is itself part of the structure of society. Finally, the practice of responsibility is central to structural injustice. Even if they cannot be held responsible for the existence of structural injustice, every individual and every organization has some responsibility to address structural injustice going forward.
This chapter reviews and evaluates different ways in which digital technologies may affect democracy. Specifically, the chapter develops a framework for evaluating democratic practices that is rooted in the tradition of deliberative democracy. The chapter then applies this framework to evaluate proposals for how technology may improve democracy. The chapter distinguishes three families of proposals depending on the depth of the change they effect. Mere changes, such as automatic fact checking on social media, augment existing practices. Moderate reforms, such as apps that enable and reward participation in local government, facilitate new practices. Radical revisions, such as using artificial intelligence to replace parliaments, are constitutive of new practices, often replacing existing ones. The chapter then concentrates on three radical revisions (Wiki democracy, avatar democracy, and data democracy) and identifies meaningful benefits in the first and deep problems in the latter two.
This paper argues against the call to democratize artificial intelligence (AI). Several authors call for broader and deeper public participation in order to reap its purported benefits: in the governance of AI, more people should be more involved in more decisions about AI, from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims that are associated with the call to "democratize AI". It then argues that such a democratization of AI (1) rests on weak grounds because it does not answer to a demand for legitimization, (2) is redundant in that it overlaps with existing governance structures, (3) is resource intensive, which leads to injustices, (4) is morally myopic and thereby creates popular oversights and moral problems of its own, and finally, (5) is neither theoretically nor practically the right kind of response to the injustices that animate the call. The paper concludes by suggesting that AI should be democratized not by broadening and deepening participation but by increasing the democratic quality of the administrative and executive elements of collective decision making. In a slogan: The question is not so much whether AI should be democratized but how.
co-authored with Joshua Cohen
This article describes a teaching plan for a discussion-driven introduction to moral reasoning and explains its philosophical and pedagogical rationale. The teaching plan consists of a sequence of thought experiments that build on one another, and ends with participants addressing some morally complex, real-life issues. The plan rests on extensive experience teaching moral reasoning in several different professional learning environments. The main contribution of this article is practical. The goal is to equip educators with a pedagogical approach and ready-to-use teaching materials. To this end, the article offers the methodological background, identifies learning objectives as well as pitfalls of teaching the trolley problem, and describes the pedagogy of the session.
co-authored with Matthew M Young, Justin Bullock and Kyoung-Cheol Kim
Artificial intelligence (AI) offers challenges and benefits to the public sector. We present an ethical framework to analyze the effects of AI in public organizations, guide empirical and theoretical research in public administration, and inform practitioner deliberation and decision-making on AI adoption. We put forward six propositions on how the use of AI by public organizations may facilitate or prevent unnecessary harm. The framework builds on the theory of administrative evil and contributes to it in two ways. First, we interpret the theory of administrative evil through the lens of agency theory. We examine how the mechanisms stipulated by the former relate to the underlying mechanisms of the latter. Specifically, we highlight how mechanisms of administrative evil can be analyzed as information problems in the form of adverse selection and moral hazard. Second, we describe possible causal pathways of the theory of administrative evil and associate each with a level of analysis: individual (micro), organizational (meso), and cultural (macro). We then develop both descriptive and normative propositions on AI’s potential to increase or decrease the risk of administrative evil. The article hence contributes an institutional and public administration lens to the growing literature on AI safety and value alignment.
The ongoing debate on the ethics of self-driving cars typically draws on two approaches: moral philosophy and social science. I argue that these two approaches are both lacking. We should neither deduce answers from individual moral theories nor should we expect social science to give us complete answers. To supplement these approaches, we should turn to political philosophy. The issues we face are collective decisions that we make together rather than individual decisions we make in light of what we each have reason to value. Political philosophy adds three basic concerns to our conceptual toolkit: reasonable pluralism, human agency, and legitimacy. These three concerns have so far been largely overlooked in the debate on the ethics of self-driving cars.
co-authored with Matthew M Young, Danylo Honcharov, and Sucheta Soundarajan
Administrative errors in unemployment insurance (UI) decisions give rise to a public values conflict between efficiency and efficacy. We analyze whether artificial intelligence (AI) – in particular, methods in machine learning (ML) – can be used to detect administrative errors in UI claims decisions, in terms of both accuracy and normative tradeoffs. We use 16 years of US Department of Labor audit and policy data on UI claims to analyze the accuracy of seven different random forest and deep learning models. We further test weighting schemas and synthetic data approaches to correcting imbalances in the training data. A random forest model using gradient descent boosting is more accurate along several measures, and preferable in terms of public values, than every deep learning model tested. Adjusting model weights produces significant recall improvements for low-n outcomes, at the expense of precision. Synthetic data produces attenuated improvements and drawbacks relative to weights.
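The weighting schemas the abstract mentions can be illustrated with a minimal, hypothetical sketch (not the paper's actual code): inverse-frequency class weights upweight rare outcomes, such as erroneous claim decisions, which is one standard way to trade precision for recall on low-n classes.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class weights inversely proportional to class frequency.

    Rare classes (e.g., improper UI claim decisions) receive larger
    weights, so a model trained with these weights is penalized more
    for missing them, improving recall at the expense of precision.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    # weight_c = n / (k * count_c); perfectly balanced classes all get 1.0
    return {c: n / (k * counts[c]) for c in counts}

# Toy label set: 90 correct decisions, 10 administrative errors
labels = ["correct"] * 90 + ["error"] * 10
weights = inverse_frequency_weights(labels)
```

Here the rare "error" class gets weight 5.0 and the common "correct" class gets roughly 0.56; such weights can be passed to most ML libraries as per-class or per-sample weights.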
co-authored with Sebastian Köhler
The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. In support of this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate the conceptual choices we have in light of a systematic understanding of why the concept is important in the first place—in short, the way forward is to engage in conceptual engineering. The paper then illustrates what approaching the responsibility gap problem as a conceptual engineering problem looks like. It outlines argumentative pathways out of the responsibility gap problem and relates these to existing contributions to the dispute.
co-authored with Matthew M Young, Danylo Honcharov, and Sucheta Soundarajan
This article explores the extent to which machine learning can be used to detect administrative errors. It concentrates on administrative errors in unemployment insurance (UI) decisions, which give rise to a public values conflict between efficiency and effectiveness. This conflict is first described and then highlighted in the history of the US UI regime. Machine learning may not only mitigate this conflict but also help to combat fraud and reduce the backlog of claims associated with economic crises such as the COVID-19 pandemic. The article uses data about improper UI payments throughout the US from 2002 through 2018 to analyze the accuracy of random forest and deep learning models. We find that a random forest model using gradient descent boosting is more accurate, along several measures, than every deep learning model tested. This finding could be explained by the goodness-of-fit between the machine learning method and the available data. Alternatively, deep learning performance could be attenuated by necessary limits to publicly accessible claims data.
This chapter evaluates the global digital economy from the perspective of political, socio-economic, and intergenerational justice. I argue that the digital economy poses a problem of political injustice in the form of, broadly, illegitimate power.
The paper begins by arguing that the "digital economy" should be defined as infrastructure that is provided or accessed online. This definition has several advantages. First, the economic analysis of the digital economy becomes a special case of the economics of infrastructure. Second, seeing the digital economy as infrastructure allows us to set aside ethical concerns that are orthogonal: the digital economy is much broader than the data economy and its associated concerns about privacy. Third, seeing the digital economy as digital infrastructure brings the relevance of justice into clearer view. Not only does the digital economy, on this analysis, have global reach; it also facilitates the production of goods across all aspects of life.
I argue that the digital economy raises no distinctive concerns from the perspective of socio-economic or intergenerational justice. Instead, I argue that the crucial problem is one of political justice. The digital economy poses four problems: (a) an abridgment of state power, (b) a degradation of economic opportunities and political relations, (c) support of authoritarian politics, and (d) leverage of international dominance. Each phenomenon is backed up by a political-economic analysis and illustrated with examples.
Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.
Trolley cases are widely considered central to the ethics of autonomous vehicles. I caution against this by identifying four problems. (1) Given technical limitations, trolley cases rest on assumptions that are in tension with one another. (2) Trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. (3) Trolley cases seem to demand a moral answer when a political answer is called for. (4) Trolley cases might be epistemically problematic in several ways. To put forward a positive proposal, I illustrate how ethical challenges arise from mundane driving situations. I argue that mundane situations are relevant because of the specificity they require and the scale they exhibit. I then illustrate some of the ethical challenges arising from optimizing for safety, balancing safety with other values such as mobility, and adjusting to incentives of legal frameworks.
Related publications: “The everyday ethical challenges of self-driving cars,” The Conversation, syndicated in The Boston Globe, and others.
The asylum system faces problems on two fronts. States undermine it with populist politics, and migrants use it to satisfy their migration preferences. To address these problems, asylum services should be commodified: states should be able to pay other states to provide determination and protection elsewhere. In this article, I aim to identify a way of implementing this idea that is both feasible and desirable. First, I sketch a policy proposal for a commodification of asylum services. Then, I argue that this policy proposal is not only compatible with the right to asylum, but also supported by moral considerations. Despite some undesirable moral features, a market in asylum facilitates the provision of asylum to those who need it.
Related publications: This proposal was also included in the book Wenn ich mir etwas wünschen dürfte (Steidl 2017), published on the occasion of the German general elections, and discussed in the Change My View subreddit here.
This paper addresses the problem of profligate omissions. The problem is that counterfactual definitions of causation identify as a cause anything that could have prevented an effect but did not actually occur, which is a highly counterintuitive result. Many solutions to the problem of profligate omissions appeal to normative, epistemic, pragmatic, or metaphysical considerations. These existing solutions are in some sense substantive. In contrast to such substantive answers, this paper puts forward a technical proposal. I propose to weaken the centering condition of the semantics used to evaluate counterfactuals. This allows us to distinguish between proximate and distant possibilities and requires a greater-than-singleton set of proximate possibilities relative to which the truth of conditionals is evaluated. This proposal captures an abstraction shared by many of the existing solutions: depending on how the distance ordering underlying the weak centering condition is constructed and interpreted, some of these existing solutions can be recovered.
The disappearing agent problem is an argument in the metaphysics of agency. Proponents of the agent-causal approach argue that the rival event-causal approach fails to account for the fact that an agent is active. This paper examines an analogy between this disappearing agent problem and the exclusion problem in the metaphysics of mind. I develop the analogy between these two problems and survey existing solutions. I suggest that some solutions that have received significant attention in response to the exclusion problem have seen considerably less attention in response to the disappearing agent problem. For example, one solution to the exclusion problem is to reject the exclusion assumption. Analogously, one solution to the disappearing agent problem could be to deny the claim that the agent-causal approach and the event-causal approach are mutually exclusive. Similarly, proportionality theories of causation, a solution to the exclusion problem, can be transferred to the disappearing agent problem. After establishing the plausibility of the analogy between the two problems, I examine how this latter solution in particular can be transferred from the one problem to the other.
A central dispute in social ontology concerns the existence of group minds and actions. I argue that some authors in this dispute rely on rival views of existence without sufficiently acknowledging this divergence. I proceed in three steps in arguing for this claim. First, I define the phenomenon as an implicit higher-order disagreement by drawing on an analysis of verbal disputes. Second, I distinguish two theories of existence – the theory-commitments view and the truthmaker view – in both their eliminativist and their constructivist variants. Third, I examine individual contributions to the dispute about the existence of group minds and actions to argue that these contributions have an implicit higher-order disagreement. This paper serves two purposes. First, it is a study to apply recent advances in meta-ontology. Second, it contributes to the debate on social ontology by illustrating how meta-ontology matters for social ontology.
co-authored with Holly Lawford-Smith
Punishing groups raises a difficult question, namely, how their punishment can be justified at all. Some have argued that punishing groups is morally problematic because of the effects that the punishment entails for their members. In this paper we argue against this view. We distinguish the question of internal justice – how punishment-effects are distributed – from the question of external justice – whether the punishment is justified. We argue that issues of internal justice do not in general undermine the permissibility of punishment. We also defend the permissibility of what some call “random punishment.” We argue that, for some kinds of collectives, there is no general obligation to internally distribute the punishment-effects equally or in proportion to individual contribution.
This paper develops a taxonomy of kinds of actions that can be seen in group agency, human–machine interactions, and virtual realities. These kinds of actions are special in that they are not embodied in the ordinary sense. I begin by analysing the notion of embodiment into three separate assumptions that together comprise what I call the Embodiment View. Although this view may find support in paradigmatic cases of agency, I suggest that each of its assumptions can be relaxed. With each assumption that is given up, a different kind of disembodied action becomes available. The taxonomy gives a systematic overview and suggests that disembodied actions have the same theoretical relevance as the actions of any ordinarily embodied human.
This paper is about the status of collective actions. According to one view, collective actions metaphysically reduce to individual actions because sentences about collective actions are merely a shorthand for sentences about individual actions. I reconstruct an argument for this view and show via counterexamples that it is not sound. The argument relies on a paraphrase procedure to unpack alleged shorthand sentences about collective actions into sentences about individual actions. I argue that the best paraphrase procedure that has been put forward so far fails to produce adequate results.
Related publications: The paper prompted a discussion note, which you can find here.
co-authored with Jason McKenzie Alexander and Chris Thompson
This paper examines two questions about scientists' search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue, pace Weisberg and Muldoon (2009), that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that epistemic reasons exist for the division of cognitive labor, and we identify the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
Additional material: The model used for this article is written using NetLogo. The source code of our model is available here. It involves a swarm strategy, which draws on the model by Couzin et al. (2005) and the Boids model. You can find a simple simulation that I wrote to study the behaviour of this model here.
We are responsible for some things but not for others. In this thesis, I investigate what it takes for an entity to be responsible for something. This question has two components: agents and actions. I argue for a permissive view about agents. Entities such as groups or artificially intelligent systems may be agents in the sense required for responsibility. With respect to actions, I argue for a causal view. The relation in virtue of which agents are responsible for actions is a causal one. I claim that responsibility requires causation and I develop a causal account of agency. This account is particularly apt for addressing the relationship between agency and moral responsibility and sheds light on the causal foundations of moral responsibility.