Simulating Subjects: The Promise and Peril of Artificial Intelligence Stand-Ins for Social Agents and Interactions
Sociological Methods & Research ( IF 6.5 ) Pub Date : 2025-06-02 , DOI: 10.1177/00491241251337316
Austin C. Kozlowski, James Evans
Large language models (LLMs), through their exposure to massive collections of online text, learn to reproduce the perspectives and linguistic styles of diverse social and cultural groups. This capability suggests a powerful social scientific application—the simulation of empirically realistic, culturally situated human subjects. Synthesizing recent research in artificial intelligence and computational social science, we outline a methodological foundation for simulating human subjects and their social interactions. We then identify six characteristics of current models that are likely to impair the realistic simulation of human subjects: bias, uniformity, atemporality, disembodiment, linguistic cultures, and alien intelligence. For each of these areas, we discuss promising approaches for overcoming their associated shortcomings. Given the rate of change of these models, we advocate for an ongoing methodological program for the simulation of human subjects that keeps pace with rapid technical progress, and caution that validation against human subjects data remains essential to ensure simulation accuracy.
Updated: 2025-06-02