信息资源管理学报 ›› 2025, Vol. 15 ›› Issue (2): 108-122.doi: 10.13365/j.jirm.2025.02.108

• 研究论文 •

用户与生成式人工智能交互的隐私披露多因素影响模型研究

孙国烨1 吴丹1,2 刘静3 邓宇扬1   

  1. 武汉大学信息管理学院,武汉,430072; 
    2.武汉大学人机交互与用户行为研究中心,武汉,430072; 
    3.四川大学公共管理学院,成都,610065
  • 出版日期:2025-03-26 发布日期:2025-04-11
  • 作者简介:孙国烨,博士研究生,研究方向为用户信息行为;吴丹(通讯作者),博士,教授,博士生导师,研究方向为人机交互,Email:woodan@whu.edu.cn;刘静,博士,研究方向为人工智能素养;邓宇扬,硕士,研究方向为可解释人工智能。
  • 基金资助:
    本文系国家自然科学基金“可解释、可通用的下一代人工智能方法”重大研究计划培育项目“人机交互视角下数据与知识双驱动的可解释智能决策方法研究”(92370112)及湖北省自然科学基金创新群体项目“以人为本的人工智能创新应用”(2023AFA012)的阶段性成果之一。

Exploring a Multi-factor Model of Privacy Disclosure in User-Generative AI Interaction

Sun Guoye1 Wu Dan1,2 Liu Jing3 Deng Yuyang1   

  1. School of Information Management, Wuhan University, Wuhan, 430072; 
    2.Center of Human-Computer Interaction and User Behavior, Wuhan University, Wuhan, 430072; 
    3.School of Public Administration, Sichuan University, Chengdu, 610065
  • Online:2025-03-26 Published:2025-04-11
  • About author: Sun Guoye, Ph.D. candidate, research interests include user information behavior; Wu Dan (corresponding author), Ph.D., professor, Ph.D. supervisor, research interests include human-computer interaction, Email: woodan@whu.edu.cn; Liu Jing, Ph.D., research interests include artificial intelligence literacy; Deng Yuyang, master's degree, research interests include explainable artificial intelligence.
  • Supported by:
    This paper is an interim result of the cultivation project "Research on Explainable Intelligent Decision-Making Methods Driven by Both Data and Knowledge from a Human-Computer Interaction Perspective" (92370112) under the major research program "Explainable and Generalizable Next-Generation Artificial Intelligence Methods" of the National Natural Science Foundation of China, and of the Hubei Provincial Natural Science Foundation Innovation Group Project "Human-Centered Innovative Applications of Artificial Intelligence" (2023AFA012).

摘要: 生成式人工智能的广泛应用为人机交互带来了独特的隐私挑战。本研究聚焦用户与生成式人工智能交互中的隐私披露,结合大语言模型与人工编码,识别用户与生成式人工智能交互中披露的常见隐私类型。在此基础上,根据情境脉络完整性理论,采取用户标注与半结构化访谈的研究方法,揭示用户隐私披露受到用户的隐私态度、技术信任、隐私风险感知的共同影响,而系统的数据管理透明度则通过影响技术信任间接影响隐私披露。基于研究结果,本研究构建了用户与生成式人工智能交互的隐私披露多因素影响模型,可为开发更具隐私友好性的生成式人工智能系统提供理论参考。

关键词: 隐私披露, 生成式人工智能, 隐私风险, 数据管理透明度, 技术信任

Abstract: The widespread application of generative artificial intelligence (Generative AI) has brought unique privacy challenges to human-computer interaction. This study focuses on privacy disclosure in user-Generative AI interaction, combining large language models with manual coding to identify the common types of privacy users disclose when interacting with Generative AI. Drawing on contextual integrity theory, this study employs user annotation and semi-structured interviews to explore the mechanisms influencing user privacy disclosure. The findings reveal that user privacy disclosure is jointly shaped by users' privacy attitudes, technology trust, and privacy risk perception, while the system's data management transparency indirectly influences privacy disclosure through technology trust. Based on these results, this study constructs a multi-factor influence model of privacy disclosure in user-Generative AI interaction, providing a theoretical reference for developing more privacy-friendly Generative AI systems.

Key words: Privacy disclosure, Generative AI, Privacy risk, Data management transparency, Technology trust
