Since at least the 1960s, scholars, pundits and the general public have worried about AI hype – the tendency to project more power, agency and intelligence onto AI than it really has. To counter this tendency, academics often warn against anthropomorphizing AI or assuming that its impacts on society are inevitable. While these are sensible cautions, taken too far they risk obscuring the way the design of AI actually can shape and condition how power and authority are distributed in society – and in academia. How, then, to see AI? Is it merely a tool people use for their own ends? Or can it hijack those ends? This paper argues that Langdon Winner provides provocative frameworks for tackling these questions.
