We talk about platform accountability a lot. Congressional hearings, op-eds, regulatory proposals — the discourse is voluminous. Yet what we actually do about it is limited. I want to argue that the gap between talk and action is not accidental.
Genuine accountability would require platforms to accept responsibility for outcomes that they currently externalize. Content that causes harm, markets that disadvantage sellers, algorithms that shape elections — platforms have largely avoided ownership of these effects.
The most common framing of platform accountability focuses on individual content decisions — what should or should not be moderated. This is the wrong level of analysis. The important questions concern systemic design choices that shape what content gets produced and promoted in the first place.
Platform defenders often argue that any specific moderation decision is contestable, which is true. But this argument deflects attention from system-level accountability, where the evidence is clearer.
Mandated transparency about algorithmic decision-making would be a meaningful step. Not public exposure of proprietary systems, but audited disclosure to regulators and researchers. This would enable the kind of oversight that is currently impossible.
Platform liability for systemic harms, rather than for individual pieces of content, would change incentive structures substantially. Platforms would design for safer systems instead of optimizing for engagement while using legal doctrines as shields against responsibility.