An examination on AI ethics : How does ChatGPT respond to ethical dilemmas?
Kovero, Riku (2024-05-16)
This publication is subject to copyright regulations. The work may be read and printed for personal use. Commercial use is prohibited.
Open access
The permanent address of this publication is:
https://urn.fi/URN:NBN:fi-fe2024052738087
Abstract
This thesis examines the ethical decision making of artificial intelligence: specifically, which ethical doctrines it adheres to when tasked with making ethical choices, how it applies those doctrines, and how consistently it applies them. I also provide an overview of the complexities that arise when different human-made constructs become intertwined, in this case AI and ethics.
The research subject is ChatGPT 4.0. I presented ChatGPT with a multitude of binary-choice ethical dilemmas, in the form of the Trolley Problem or similar scenarios. I then analysed the material it provided, primarily focusing on how ChatGPT applies its ethical framework. I attempted to find patterns and hierarchies in its application of ethics, to assess how consistent it was based on the answers and justifications it gave, and to identify which types of variables would cause shifts in its ethical alignment.
I found the results to be multifaceted. If we interpret ChatGPT’s answers and ethical choices as natural language, as if presented by a sentient being, then I found it to be frequently inconsistent. It mainly adhered to either a utilitarian or a deontological framework, often switching between the two in a seemingly inconsistent manner. On the other hand, if we treat ChatGPT as what it is, a narrow AI language model, then the results can be interpreted quite differently. Considering that ChatGPT’s ethical framework, and by extension its ethical decision making, is based on data and algorithms, it can be argued that ChatGPT is extremely consistent in its application of ethics. Any perceived inconsistency can be attributed to the algorithm hitting a certain breaking point, which causes a shift in its ethical alignment. These breaking points would trigger when certain variables were introduced or altered within the ethical scenarios it was tasked with answering. If the algorithm works as intended, then ChatGPT was completely consistent in applying its ethical framework. The conclusion can thus be reduced to the following: ChatGPT primarily applies utilitarian or deontological ethics when tasked with solving binary-choice ethical dilemmas; however, without full access to the data and mechanisms of its algorithm and inner workings, the consistency with which it applies them cannot be stated definitively.