Authors: Hui Sun (Stockholm School of Economics)
Abstract: Dominant views in information systems research regard algorithmic governance of user-generated reviews (i.e., algorithmic rating moderation) as a tool for fraud detection. Yet recent organizational and sociological studies challenge this platform-centric approach and highlight users’ reactivity toward algorithms. Building on the sociological theory of quantification, we propose an alternative theory: algorithmic rating moderation serves as a value integration device. To test this theory, we leveraged state-of-the-art natural language processing techniques to analyze music ratings and reviews from a Chinese online crowd-based evaluation platform. Our findings suggest that the platform employed heavier algorithmic rating moderation when raters’ values diverged more, supporting our theory of value integration. We also found that heavier algorithmic rating moderation was associated with higher user exit, indicating that the traditional view of algorithmic rating moderation as fraud detection risks losing users through unaddressed value conflicts. We conclude by discussing how UGC platforms can better use algorithms to manage value divergence at scale.
Host: Amy Zhao-Ding