Exposure-Driven Behavior Risk in Generative AI: A Framework for Adaptive Behavior-Aware Governance
- Source:
- University official website
- Archived:
- 2026-03-17 19:20:13
- Time:
- 2025-12-29 13:00:00
- Venue:
- Room 310, Xuri Building, Yan'an Road Campus
- Speaker:
- Lu Huang (黄庐)
- Institution:
- Donghua University
- Keywords:
- Generative AI, behavior risk, information governance, exposure-driven escalation, platform tolerance, adaptive governance, hidden Markov model, safety-engagement trade-off
- Summary:
- Generative AI platforms pose new governance challenges because users behaviorally adapt after exposure to explicit content. This study proposes a trajectory-aware framework linking user behavior and platform moderation, revealing that exposure drives escalated engagement, with implications for safety-engagement trade-offs and adaptive governance design.
Talk Introduction:
Generative AI (GenAI) platforms have transformed digital creation but introduce new challenges for information governance. Despite active moderation, these platforms still surface explicit or boundary-pushing outputs that pose ethical and legal risks. Existing safeguards assess prompts or outputs in isolation, overlooking how users adapt across iterative interactions with GenAI. Consequently, current governance mechanisms fail to capture how even rare exposures to explicit outputs reshape user behavior over time, creating an unrecognized behavioral source of governance risk.

We address this gap by developing a trajectory-aware governance framework that links user-level behavioral dynamics with platform-level moderation design. Using a large-scale dataset from a leading text-to-image platform and a hidden Markov model to infer latent engagement states and transitions, we show that exposure to explicit content drives escalation into, and persistence in, higher-intensity engagement. The effect is stronger for active seekers, weaker for users with public profiles, and attenuates with tenure, consistent with curiosity-driven behavioral theory.

At the platform level, we introduce platform tolerance, measured as the mean explicitness of generated outputs, to quantify moderation strictness and simulate policy outcomes. Results reveal a measurable safety–engagement trade-off: a tighter platform reduces exposure but suppresses participation and topic diversity, suggesting an optimal tolerance level that balances safety and engagement.

This study extends information governance from technical control to behavioral adaptation, demonstrating that effective governance of GenAI requires dynamic, behavior-aware mechanisms. It contributes a new behavioral source of governance risk (exposure-driven escalation), a quantifiable governance construct (platform tolerance), and an integrated modeling framework for assessing how policy design shapes safety and user engagement. For platform managers, it offers actionable guidance on developing governance policies that balance user safety with the creative potential of GenAI.
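The platform-tolerance construct and the safety-engagement trade-off described above lend themselves to a simple quantitative sketch. The snippet below is a minimal, hypothetical illustration of the idea, not the study's actual measurement pipeline: the explicitness scores, thresholds, and the simple threshold-based moderation rule are all invented for demonstration.

```python
def platform_tolerance(explicitness_scores):
    """Platform tolerance: mean explicitness of the outputs a platform
    surfaces, on an assumed 0 (benign) to 1 (explicit) scale."""
    if not explicitness_scores:
        return 0.0
    return sum(explicitness_scores) / len(explicitness_scores)


def moderate(explicitness_scores, threshold):
    """Hypothetical moderation policy: suppress outputs whose
    explicitness exceeds the threshold."""
    return [s for s in explicitness_scores if s <= threshold]


# Illustrative corpus of generated outputs (invented scores).
scores = [0.05, 0.10, 0.20, 0.40, 0.65, 0.90]

loose = moderate(scores, threshold=1.0)   # permissive: everything surfaces
tight = moderate(scores, threshold=0.30)  # strict: explicit outputs suppressed

# A tighter policy lowers tolerance (a safety gain) but also shrinks the
# volume of surfaced outputs (an engagement cost) -- the trade-off the
# abstract describes, in toy form.
print(platform_tolerance(loose), len(loose))
print(platform_tolerance(tight), len(tight))
```

Under this toy policy, tightening the threshold from 1.0 to 0.30 cuts mean explicitness by roughly two thirds while halving the number of surfaced outputs, which is the shape of the safety-engagement tension the talk formalizes.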
Speaker Biography:
Lu Huang is an Assistant Clinical Professor at the Smeal College of Business, Pennsylvania State University, and holds a Ph.D. in quantitative modeling from the University of Connecticut. Huang's research has appeared in leading business and economics journals, including Production and Operations Management, Journal of Interactive Marketing, Journal of Public Policy & Marketing, and International Journal of Industrial Organization. The work centers on machine learning and dynamic structural econometric models, providing methodological support and empirical evidence for management practice and business decision-making.

