Since at least the 1960s, scholars, pundits and the general public have worried about AI hype – the tendency to project more power, agency and intelligence onto AI than it actually has. To counter this tendency, academics often warn against anthropomorphizing AI or assuming that its impacts on society are inevitable. While these are sensible cautions, taken too far they risk obscuring the way the design of AI can in fact shape and condition how power and authority are distributed in society – and in academia. How, then, should we see AI? Is it merely a tool people use for their own ends? Or can it hijack those ends? This paper argues that Langdon Winner provides provocative frameworks for tackling these questions.

Pages: 232–238
All content is freely available without charge to users or their institutions. Users may read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without seeking prior permission from the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Also in this issue:

- Automated plagiarism
- Generative artificial intelligence in qualitative analysis: a critical examination of tools, trust and rigor
- 'Foreignize yourself'. What has translation to do with innovation? A translation studies approach to hybrid innovation
- From tools to symbols: exploring the complex nexus of smartphones in Bangladesh
- Impoverishing peer review