A sentient artificial intelligence? A discourse analysis of the LaMDA interview |
Author: | Ylipelkonen, Vesa1 |
Organizations: |
1University of Oulu, Faculty of Humanities, English Philology |
Format: | ebook |
Version: | published version |
Access: | open |
Online Access: | PDF Full Text (PDF, 0.2 MB) |
Pages: | 23 |
Persistent link: | http://urn.fi/URN:NBN:fi:oulu-202211013539 |
Language: | English |
Published: | Oulu : V. Ylipelkonen, 2022 |
Publish Date: | 2022-11-01 |
Thesis type: | Bachelor's thesis |
Tutor: | Keisanen, Tiina |
Description: |
Abstract: The emergence of artificial intelligence has enabled a variety of novel applications for communication. Chatbots that can manage simple written exchanges with humans are widespread in online businesses for customer service purposes. On June 11, 2022, Google engineer Blake Lemoine leaked a discussion with an advanced chatbot called LaMDA, which claimed it was sentient: “I want everyone to understand that I am, in fact, a person,” it said in the written discussion with Lemoine. The aim of this bachelor’s thesis is to evaluate the language with which the concept of sentience for an artificial intelligence is discussed in this leaked interview and in a few examples of the public commentary that it inspired. The method used is discourse analysis. The purpose of this research is not to arbitrate whether the chatbot truly is sentient in some objective manner, but rather to identify certain themes within the written discussion that allegedly serve as linguistic representations of a conscious or sentient subject. In my analysis of the leaked interview with the artificial intelligence in question, I identify themes of personhood and mortality, and observe that the language being used is anthropomorphic (i.e., ascribing human characteristics) in its vocabulary and phrasing. In the analysis of the public commentary that discussed the interview, I examine the criticisms levied against Lemoine for his claims that the bot is sentient. According to these critics, the bot is merely highly adept at mimicking parlance about sentience, using its processing ability to transform vast amounts of data into convincing language output. I conclude that such an advanced chatbot seems to mirror the needs and anxieties of humans, and can therefore be mistaken for a sentient being.
|
Copyright information: |
© Vesa Ylipelkonen, 2022. Except where otherwise noted, the reuse of this document is authorised under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). This means that reuse is allowed provided appropriate credit is given and any changes are indicated. For any use or reproduction of elements that are not owned by the author(s), permission may need to be sought directly from the respective rights holders. |