This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, various aspects related to sociotechnical systems must be accounted for in order to get the full picture of potential effects of AI systems on human autonomy. It is clear how human agents can, for example, via coercion or manipulation, hinder each other's autonomy, or how they can respect each other's autonomy. AI systems can promote or hinder human autonomy, but can they literally respect or disrespect a person's autonomy? We argue for a philosophical view according to which AI systems, while not moral agents or bearers of duties, and unable to literally respect or disrespect, are governed by so-called "ought-to-be norms." This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and ought-to-do norms, which correspond to these ought-to-be norms. It is relatively clear that AI technology can make a difference to the conditions of human autonomy, and it would be surprising if the difference it makes could not be negative or positive. The more ubiquitous AI technology becomes, the more important it is to understand its ethical effects.

Faculty of Social Sciences, Tampere University, Tampere, Finland.