A feasibility study for the application of AI-generated conversations in pragmatic analysis

Chen, Xi (ORCID: 0000-0003-2393-532X), Li, Jun and Ye, Yuting (2024) A feasibility study for the application of AI-generated conversations in pragmatic analysis. Journal of Pragmatics, 223. ISSN 0378-2166

PDF (AAM) - Accepted Version (617kB)
Restricted to Repository staff only until 21 February 2025.
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Official URL: https://doi.org/10.1016/j.pragma.2024.01.003

Abstract

This study explores the potential of including AI-generated language in pragmatic analysis, a field that has primarily been concerned with human language use. With the rapid growth of large language models, AI-generated texts and AI-human interactions constitute a growing area into which pragmatics research is expanding. Language data over which humans used to hold full authorship may now also involve modifications made by AI. The foremost concern is thus the pragmatic quality of AI-generated language, such as whether and to what extent AI data mirror the pragmatic patterns found in human speech behaviours. In this study, we compare 148 ChatGPT-generated conversations with 82 human-written ones and 354 human evaluations of these conversations. The data are analysed using various methods, including traditional speech strategy coding, four computational methods developed in NLP, and four statistical tests. The findings reveal that ChatGPT performs as well as human participants in four out of the five tested pragmalinguistic features and five out of the six sociopragmatic features. Additionally, the conversations generated by ChatGPT exhibit higher syntactic diversity and a greater sense of formality than those written by humans. As a result, our participants are unable to distinguish ChatGPT-generated conversations from human-written ones.
