Exploring social media users’ experiences with algorithmic transparency cues
New Media & Society (IF 4.5) Pub Date: 2025-05-19, DOI: 10.1177/14614448251339493
Anne Oeldorf-Hirsch, Lili R Romann, Isabella Witkowich, Jiayi Chen

All mainstream social media platforms now use algorithms to display recommended content, and some (e.g. Instagram, LinkedIn) have started showing what we call algorithmic transparency cues about why certain posts are recommended. However, little is known about which cues users see on their own feeds and how they experience them. Thus, using an online survey (N = 515) of adult U.S. social media users, we gathered data on two research questions: (1) what types of algorithmic cues users find in their own feeds, and (2) how they experience algorithms and their transparency. Content analysis of user-submitted screenshots and cue descriptions shows that most transparency cues refer to users' own behaviors, the behaviors of others in their network, and sponsored posts. Furthermore, open-ended responses indicate that users hold critical opinions about algorithms, call for greater algorithmic transparency on social media, and offer suggestions for researchers and platform designers moving forward.

Updated: 2025-05-19