Welcome

I am an Assistant Professor in the Department of Public Administration and International Affairs in the Maxwell School at Syracuse University.

My research is at the intersection of practical ethics, political philosophy, and technology. Currently, I work on AI governance and the philosophy of data science.

In the past few years, I have thought a bit about self-driving cars, written about their ethics, and talked about their politics.

As part of the Governance of AI Research Group, I co-edited The Oxford Handbook of AI Governance.

You can also find me on PhilPeople, Twitter, LinkedIn and my bicycle.

Research

My research concerns the governance of emerging technologies, such as self-driving cars, autonomous weapons systems, and machine learning in the public sector. Currently, I am writing a book on the philosophy of data science. For several years, my work investigated the nature of human and collective agency and how it relates to moral responsibility. I still draw on this research when thinking about the ethics of artificial intelligence.

Some of my work is in practical political philosophy. I have examined what states owe to refugees and how the digital economy affects global justice.

Some time ago, a computer science magazine asked me to talk about my work on self-driving cars.

Publications


AI Governance

AI and Structural Injustice: Foundations for Equity, Values, and Responsibility   In The Oxford Handbook of AI Governance, Justin Bullock et al. (eds.)   2022

co-authored with Désirée Lim

This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. The chapter then presents the concept of structural injustice as introduced by the philosopher Iris Marion Young. The chapter moreover argues that structural injustice is well suited as an approach to the governance of AI and compares this approach to alternative approaches that start from analyses of harms and benefits or from value statements. The chapter suggests that structural injustice provides methodological and normative foundations for the values and concerns of diversity, equity, and inclusion (DEI). The chapter closes with a look into the idea of “structure” and responsibility. The idea of structure is central to justice. An open theoretical research question is to what extent AI is itself part of the structure of society. Finally, the practice of responsibility is central to structural injustice. Even if they cannot be held responsible for the existence of structural injustice, every individual and every organization has some responsibility to address structural injustice going forward.

Chapter Published Online
Should we Automate Democracy?   In Oxford Handbook of Digital Ethics, Carissa Véliz (ed.)   2022

This chapter reviews and evaluates different ways in which digital technologies may affect democracy. Specifically, the chapter develops a framework to evaluate democratic practices that is rooted in the tradition of deliberative democracy. The chapter then applies this framework to evaluate proposals of how technology may improve democracy. The chapter distinguishes three families of proposals depending on the depth of the change that they effect. Mere changes, such as automatic fact checking on social media, augment existing practices. Moderate reforms, such as apps that enable and reward participation in local government, facilitate new practices. Radical revisions, such as using artificial intelligence to replace parliaments, are constitutive of new practices often replacing existing ones. This chapter then concentrates on three radical revisions — Wiki democracy, avatar democracy, and data democracy — and identifies meaningful benefits in the first and deep problems in the latter two proposals.

Chapter Published Online
Against “Democratizing AI”   AI & Society

This paper argues against the call to democratize artificial intelligence (AI). Several authors call for direct and broad participation in order to reap its purported benefits: in the governance of AI, more people should be more involved in more decisions about AI, from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims that are associated with the call to “democratize AI”. It then argues that such a democratization of AI (1) rests on weak grounds because it does not answer to a demand of legitimization, (2) is redundant in that it overlaps with existing governance structures, (3) is resource intensive, which leads to injustices, (4) is morally myopic and thereby creates popular oversights and moral problems of its own, and finally, (5) is neither theoretically nor practically the right kind of response to the injustices that animate the call. The paper concludes by suggesting that AI should be democratized not by broadening and deepening participation but by increasing the democratic quality of the administrative and executive elements of collective decision making. In a slogan: The question is not so much whether AI should be democratized but how.

Paper Published Online
Teaching Moral Reasoning: Why and How to Use the Trolley Problem   Journal of Public Affairs Education 2021

co-authored with Joshua Cohen

This article describes a teaching plan for a discussion-driven introduction to moral reasoning and explains its philosophical and pedagogical rationale. The teaching plan consists of a sequence of thought experiments that build on one another, and ends with participants addressing some morally complex, real-life issues. The plan rests on extensive experience teaching moral reasoning in several different professional learning environments. The main contribution of this article is practical. The goal is to equip educators with a pedagogical approach and ready-to-use teaching materials. To this end, the article offers the methodological background, identifies learning objectives as well as pitfalls of teaching the trolley problem, and describes the pedagogy of the session.

Paper Published Online
Artificial Intelligence and Administrative Evil   Perspectives on Public Management and Governance 2021

co-authored with Matthew M Young, Justin Bullock and Kyoung-Cheol Kim

Artificial intelligence (AI) offers challenges and benefits to the public sector. We present an ethical framework to analyze the effects of AI in public organizations, guide empirical and theoretical research in public administration, and inform practitioner deliberation and decision-making on AI adoption. We put forward six propositions on how the use of AI by public organizations may facilitate or prevent unnecessary harm. The framework builds on the theory of administrative evil and contributes to it in two ways. First, we interpret the theory of administrative evil through the lens of agency theory. We examine how the mechanisms stipulated by the former relate to the underlying mechanisms of the latter. Specifically, we highlight how mechanisms of administrative evil can be analyzed as information problems in the form of adverse selection and moral hazard. Second, we describe possible causal pathways of the theory of administrative evil and associate each with a level of analysis: individual (micro), organizational (meso), and cultural (macro). We then develop both descriptive and normative propositions on AI’s potential to increase or decrease the risk of administrative evil. The article hence contributes an institutional and public administration lens to the growing literature on AI safety and value alignment.

Paper Published Online
Ethics of Technology needs more Political Philosophy   Communications of the ACM 2020

The ongoing debate on the ethics of self-driving cars typically focuses on two approaches to answering such questions: moral philosophy and social science. I argue that these two approaches are both lacking. We should neither deduce answers from individual moral theories nor should we expect social science to give us complete answers. To supplement these approaches, we should turn to political philosophy. The issues we face are collective decisions that we make together rather than individual decisions we make in light of what we each have reason to value. Political philosophy adds three basic concerns to our conceptual toolkit: reasonable pluralism, human agency, and legitimacy. These three concerns have so far been largely overlooked in the debate on the ethics of self-driving cars.

Paper Published Online Video Abstract

Practical Ethics

Justice and the Global Digital Economy   In Justice in Global Economic Governance, Axel Berger, Clara Brandi, Eszter Kollar (eds.), Edinburgh University Press   forthcoming

This chapter evaluates the global digital economy from the perspective of political, socio-economic, and intergenerational justice. I argue that the digital economy poses a problem of political injustice in the form of, broadly, illegitimate power.

The paper begins by arguing that the “digital economy” should be defined as infrastructure that is provided or accessed online. This definition has several advantages. First, the economic analysis of the digital economy becomes a special case of the economics of infrastructure. Second, seeing the digital economy as a matter of infrastructure allows us to set aside ethical concerns that are orthogonal. It is argued that the digital economy is much broader than the data economy and its associated concerns about privacy. Third, seeing the digital economy as digital infrastructure brings into clearer view the relevance of justice. Not only does the digital economy, on this analysis, have global reach; it also facilitates the production of goods across all aspects of life.

I argue that the digital economy raises no distinctive concerns from the perspective of socio-economic or intergenerational justice. Instead, I argue that the crucial problem is one of political justice. The digital economy poses four problems: (a) an abridgment of state power, (b) a degradation of economic opportunities and political relations, (c) support of authoritarian politics, and (d) leverage of international dominance. Each phenomenon is backed up by a political-economic analysis and illustrated with examples.

Paper
No Wheel but a Dial: Why and how passengers in self-driving cars should decide how their car drives   Ethics and Information Technology

Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper is on the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments — those for passenger ethics settings and for mandatory ethics settings respectively — and argues that they fail. Although the arguments are not successful, they serve as the basis to formulate desiderata that any approach to regulating the driving behavior of self-driving cars ought to fulfill. The paper then proposes one way of designing passenger ethics settings that meets these desiderata.

Paper Published Online
Using Artificial Intelligence to Identify Administrative Errors in Unemployment Insurance   Government Information Quarterly

co-authored with Matthew M Young, Danylo Honcharov, and Sucheta Soundarajan

Administrative errors in unemployment insurance (UI) decisions give rise to a public values conflict between efficiency and efficacy. We analyze whether artificial intelligence (AI) – in particular, methods in machine learning (ML) – can be used to detect administrative errors in UI claims decisions, in terms of both accuracy and normative tradeoffs. We use 16 years of US Department of Labor audit and policy data on UI claims to analyze the accuracy of 7 different random forest and deep learning models. We further test weighting schemas and synthetic data approaches to correcting imbalances in the training data. A random forest model using gradient descent boosting is more accurate along several measures, and preferable in terms of public values, than every deep learning model tested. Adjusting model weights produces significant recall improvements for low-n outcomes, at the expense of precision. Synthetic data produces attenuated improvements and drawbacks relative to weights.

Published Online
Responsible AI Through Conceptual Engineering   Philosophy & Technology

co-authored with Sebastian Köhler

The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate the conceptual choices we have, in the light of a systematic understanding of why the concept is important in the first place—in short, the way forward is to engage in conceptual engineering. The paper then illustrates what approaching the responsibility gap problem as a conceptual engineering problem looks like. It outlines argumentative pathways out of the responsibility gap problem and relates these to existing contributions to the dispute.

Paper Published Online
The Right Tool for The Job? Assessing the Use of Artificial Intelligence for Identifying Administrative Errors   ACM DG.O2021 Proceedings

co-authored with Matthew M Young, Danylo Honcharov, and Sucheta Soundarajan

This article explores the extent to which machine learning can be used to detect administrative errors. It concentrates on administrative errors in unemployment insurance (UI) decisions, which give rise to a public values conflict between efficiency and effectiveness. This conflict is first described and then highlighted in the history of the US UI regime. Machine learning may not only mitigate this conflict but it may also help to combat fraud and reduce the backlog of claims associated with economic crises such as the COVID-19 pandemic. The article uses data about improper UI payments throughout the US from 2002 through 2018 to analyze the accuracy of random forests and deep learning models. We find that a random forest model using gradient descent boosting is more accurate, along several measures, than every deep learning model tested. This finding could be explained by the goodness-of-fit between the machine learning method and the available data. Alternatively, deep learning performance could be attenuated by necessary limits to publicly-accessible claims data.

Paper Published Online
Responsibility for Killer Robots   Ethical Theory and Moral Practice 2019

Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.

Paper Published Online
Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations   Ethical Theory and Moral Practice 2018

Trolley cases are widely considered central to the ethics of autonomous vehicles. I caution against this by identifying four problems. (1) Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, (2) trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, (3) trolley cases seem to demand a moral answer when a political answer is called for. Finally, (4) trolley cases might be epistemically problematic in several ways. To put forward a positive proposal, I illustrate how ethical challenges arise from mundane driving situations. I argue that mundane situations are relevant because of the specificity they require and the scale they exhibit. I then illustrate some of the ethical challenges arising from optimizing for safety, balancing safety with other values such as mobility, and adjusting to incentives of legal frameworks.

Paper Published Online

Related publications: “The everyday ethical challenges of self-driving cars,” The Conversation, syndicated in The Boston Globe, and others.

Asylum for Sale: A Market between States that is Feasible and Desirable   Journal of Applied Philosophy 2019

The asylum system faces problems on two fronts. States undermine it with populist politics, and migrants use it to satisfy their migration preferences. To address these problems, asylum services should be commodified. States should be able to pay other states to provide determination and protection elsewhere. In this article, I aim to identify a way of implementing this idea that is both feasible and desirable. First, I sketch a policy proposal for a commodification of asylum services. Then, I argue that this policy proposal is not only compatible with the right to asylum, but also supported by moral considerations. Despite some undesirable moral features, a market in asylum facilitates the provision of asylum to those who need it.

Paper Published Online

Related publications: This proposal also made it into the book Wenn ich mir etwas wünschen dürfte (Steidl 2017), published on the occasion of the German general elections, and into a discussion in the Change My View subreddit here.


Agency and Social Ontology

Difference-making and the Control Relation that Grounds Responsibility in Hierarchical Groups   Philosophical Studies

Hierarchical groups shape social, political, and personal life. This paper concerns the question of how individuals within such groups can be responsible. The paper explores how individual responsibility can be partially grounded in difference-making. The paper concentrates on the control condition of responsibility and takes into view three distinct phenomena of responsibility in hierarchical groups. First, a superior can be responsible for outcomes that her subordinates bring about. Second, a subordinate can be responsible although she is unable to prevent the outcome she brings about. Third, a superior can sometimes be responsible to a greater degree than her subordinates. It is argued that difference-making, as an interpretation of the control condition that partially grounds responsibility, accounts for all three of these phenomena within a limited but significant range of circumstances and can hence partially ground individual moral responsibility in hierarchical groups. The paper provides an element of a theory of individual responsibility to complement theories of corporate responsibility.

Paper Published Online
What Killed your Plant? Profligate Omissions and Weak Centering   Erkenntnis

This paper is on the problem of profligate omissions. The problem is that counterfactual definitions of causation identify as a cause anything that could have prevented an effect but that did not actually occur, which is a highly counterintuitive result. Many solutions to the problem of profligate omissions appeal to normative, epistemic, pragmatic, or metaphysical considerations. These existing solutions are in some sense substantive. In contrast to such substantive answers, this paper puts forward a technical proposal. I propose to weaken the centering condition of the semantics that is used to evaluate counterfactuals. This makes it possible to distinguish between proximate and distant possibilities and requires the existence of a non-singleton set of proximate possibilities relative to which the truth of conditionals is evaluated. This proposal captures an abstraction that is shared by many of the existing solutions: depending on how the distance ordering underlying the weak centering condition is constructed and interpreted, some of these existing solutions can be recovered.

Paper Published Online
The Disappearing Agent as an Exclusion Problem   Inquiry

The disappearing agent problem is an argument in the metaphysics of agency. Proponents of the agent-causal approach argue that the rival event-causal approach fails to account for the fact that an agent is active. This paper examines an analogy between this disappearing agent problem and the exclusion problem in the metaphysics of mind. I develop the analogy between these two problems and survey existing solutions. I suggest that some solutions that have received significant attention in response to the exclusion problem have seen considerably less attention in response to the disappearing agent problem. For example, one solution to the exclusion problem is to reject the exclusion assumption. Analogously, one solution to the disappearing agent problem could be to deny the claim that the agent-causal approach and the event-causal approach are mutually exclusive. Similarly, proportionality theories of causation, a solution to the exclusion problem, can be transferred to the disappearing agent problem. After establishing the plausibility of the analogy between the two problems, I examine how this latter solution in particular can be transferred from the one problem to the other.

Paper Published Online
Existence, Really? Tacit Disagreements about “Existence” in Disputes about Group Minds and Corporate Agents   Synthese 2021

A central dispute in social ontology concerns the existence of group minds and actions. I argue that some authors in this dispute rely on rival views of existence without sufficiently acknowledging this divergence. I proceed in three steps in arguing for this claim. First, I define the phenomenon as an implicit higher-order disagreement by drawing on an analysis of verbal disputes. Second, I distinguish two theories of existence – the theory-commitments view and the truthmaker view – in both their eliminativist and their constructivist variants. Third, I examine individual contributions to the dispute about the existence of group minds and actions to argue that these contributions have an implicit higher-order disagreement. This paper serves two purposes. First, it is a study to apply recent advances in meta-ontology. Second, it contributes to the debate on social ontology by illustrating how meta-ontology matters for social ontology.

Paper Published Online
Punishing Groups   The Monist 2019

co-authored with Holly Lawford-Smith

Punishing groups raises a difficult question, namely, how their punishment can be justified at all. Some have argued that punishing groups is morally problematic because of the effects that the punishment entails for their members. In this paper we argue against this view. We distinguish the question of internal justice – how punishment-effects are distributed – from the question of external justice – whether the punishment is justified. We argue that issues of internal justice do not in general undermine the permissibility of punishment. We also defend the permissibility of what some call “random punishment.” We argue that, for some kinds of collectives, there is no general obligation to internally distribute the punishment-effects equally or in proportion to individual contribution.

Paper Published Online
Agency and Embodiment: Groups, Human–Machine Interactions, and Virtual Realities   Ratio 2018

This paper develops a taxonomy of kinds of actions that can be seen in group agency, human–machine interactions, and virtual realities. These kinds of actions are special in that they are not embodied in the ordinary sense. I begin by analysing the notion of embodiment into three separate assumptions that together comprise what I call the Embodiment View. Although this view may find support in paradigmatic cases of agency, I suggest that each of its assumptions can be relaxed. With each assumption that is given up, a different kind of disembodied action becomes available. The taxonomy gives a systematic overview and suggests that disembodied actions have the same theoretical relevance as the actions of any ordinarily embodied human.

Paper Published Online
The Paraphrase Argument against Collective Actions   Australasian Journal of Philosophy 2017

This paper is about the status of collective actions. According to one view, collective actions metaphysically reduce to individual actions because sentences about collective actions are merely a shorthand for sentences about individual actions. I reconstruct an argument for this view and show via counterexamples that it is not sound. The argument relies on a paraphrase procedure to unpack alleged shorthand sentences about collective actions into sentences about individual actions. I argue that the best paraphrase procedure that has been put forward so far fails to produce adequate results.

Paper Published Online

Related publications: The paper prompted a discussion note, which you can find here.


Other Projects

Epistemic landscapes, optimal search and the division of cognitive labor   Philosophy of Science 2015

co-authored with Jason McKenzie Alexander and Chris Thompson

This paper examines two questions about scientists' search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue, pace Weisberg and Muldoon (2009), that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that epistemic reasons exist for the division of cognitive labor, and we identify the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.

Paper Published Online

Additional material: The model used for this article is written in NetLogo. The source code of our model is available here. It involves a swarm strategy, which draws on the model by Couzin et al. (2005) and the Boids model. You can find a simple simulation that I wrote to study the behaviour of this model here.



Book Reviews

Understanding Institutions: The Science and Philosophy of Living Together, by Francesco Guala   Journal of Social Ontology 2017
From Individual to Collective Intentionality: New Essays, edited by Sara Rachel Chant, Frank Hindriks, and Gerhard Preyer   Economics and Philosophy 2015


Dissertation

Agency as Difference-making   London School of Economics 2016

We are responsible for some things but not for others. In this thesis, I investigate what it takes for an entity to be responsible for something. This question has two components: agents and actions. I argue for a permissive view about agents. Entities such as groups or artificially intelligent systems may be agents in the sense required for responsibility. With respect to actions, I argue for a causal view. The relation in virtue of which agents are responsible for actions is a causal one. I claim that responsibility requires causation and I develop a causal account of agency. This account is particularly apt for addressing the relationship between agency and moral responsibility and sheds light on the causal foundations of moral responsibility.

Published Online

Teaching

  • Public Administration and Democracy
    Syracuse University, since Spring 2021
  • Philosophy and Ethics of Data Science
    Syracuse University, since Spring 2020
  • Ethics of Emerging Technology
    Syracuse University, since Fall 2019
  • Political Ontology: State, Law, and Gender
    Humboldt-Universität zu Berlin, Summer 2017
  • Authority, Democracy, and Freedom: Themes in Political Philosophy
    Humboldt-Universität zu Berlin, Summer 2017
  • Philosophy of the Social Sciences
    Humboldt-Universität zu Berlin, Winter 2016
  • Hard Choices and Transformative Experience
    Humboldt-Universität zu Berlin, Summer 2016
  • Equality
    University of Bayreuth, Summer 2015
  • Agency, Responsibility and Artificial Intelligence
    University of Bayreuth, Winter 2014
  • Ethics of Markets
    University of Bayreuth, Summer 2013
  • Policy Evaluation and Cost-Benefit-Analysis
    University of Bayreuth, Winter 2012

  • since 2019: Syracuse University

    Since summer 2019, I have been an Assistant Professor in the Department of Public Administration and International Affairs in the Maxwell School at Syracuse University.
  • 2017 — 2019: Stanford University and Apple

    I worked on the ethics of machine learning and autonomous systems as a postdoctoral fellow in the McCoy Family Center for Ethics in Society, while spending part of my time at Apple University.
  • 2016 — 2017: Humboldt-Universität zu Berlin

    I was a postdoctoral fellow at the Department of Philosophy and at IRI THESys.
  • 2010 — 2016: London School of Economics

    I read for a master's degree in Philosophy and Public Policy and did my PhD in Philosophy, supervised by Christian List and Richard Bradley.
  • 2006 — 2010: University of Bayreuth

    Studying Philosophy and Economics was formative for me. Although I planned to work in journalism, I dabbled in social systems epistemology.

Download CV

You can download my CV as a PDF.