Social Media, Technology and Peacebuilding | By Lisa Schirch | 29 May 2025
Rethinking Digital Platform Design: A Systems Approach

Image: MarcoVector/shutterstock.com
A better internet that supports democracy rather than undermines it is possible.
In 2025, we stand at a crossroads in the digital era. Our platforms have become the new public squares, but rather than fostering democracy and dignity, many are optimized for manipulation, division, and profit. The Council on Technology and Social Cohesion's "Blueprint on Prosocial Tech Design Governance" offers a systems-level response to this crisis.
Digital harms are not accidental. They stem from deliberate choices embedded in how platforms are built and monetized. Infinite scroll, addictive recommendation systems, and deceptive patterns are not technical inevitabilities—they are design policies that reward engagement over truth, attention over well-being, and outrage over dialogue. These antisocial designs have proven devastating: eroding mental health, fuelling polarisation, spreading disinformation, and concentrating power in a handful of corporate actors.
Tech companies blame users for harmful content online, but this deflects their own responsibility for how they design their platforms. The Blueprint shifts the focus from downstream content moderation to upstream platform design.
No technology has a neutral design. Companies make choices about what a platform allows users to do, what it prevents them from doing, and what its design persuades, incentivises, amplifies, highlights, or manipulates them to do or not do online.
Prosocial Building Codes
Modelled on building codes in architecture, the tiered certification system the report proposes outlines five levels of increasing ambition for prosocial tech, from minimum safety standards to fully participatory, socially cohesive platforms. This is not window-dressing. It is a structural intervention to address the root causes of harmful tech design.
Tier 1 begins with establishing baseline protections: Safety by Design, Privacy by Design, and User Agency by Design. These aren't abstract ideals but concrete practices that give users control over what they see, how they're tracked, and whether manipulative features are opt-in rather than default. Tier 2 scales up with low-barrier user experience tools like empathy-oriented reaction buttons, friction to slow down impulsive posting, and prompts to reflect before sharing.
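To make the Tier 2 idea concrete, here is a minimal sketch of what posting friction might look like in code, assuming a hypothetical share pipeline; the types, function names, and thresholds below are illustrative inventions, not designs specified in the Blueprint.

```typescript
// Hypothetical sketch of a Tier 2 "reflect before sharing" friction step.
// None of these names come from the Blueprint; they are illustrative only.

interface Draft {
  text: string;
  linkUrl?: string;
  linkOpenedByUser: boolean; // did the user actually open the linked article?
}

type FrictionPrompt =
  | { kind: "none" }
  | { kind: "read-first"; message: string }
  | { kind: "cool-down"; message: string; delayMs: number };

// Decide whether to slow the user down before a post goes out.
function frictionFor(draft: Draft): FrictionPrompt {
  // Prompt users to read a link before resharing it, a pattern some
  // platforms have tested for unread-article reshares.
  if (draft.linkUrl && !draft.linkOpenedByUser) {
    return {
      kind: "read-first",
      message: "You haven't opened this link. Read it before sharing?",
    };
  }
  // Add a short cool-down for all-caps posts, a crude stand-in for
  // whatever "impulsive posting" signal a real platform would use.
  const letters = draft.text.replace(/[^a-zA-Z]/g, "");
  const isShouting = letters.length > 20 && letters === letters.toUpperCase();
  if (isShouting) {
    return {
      kind: "cool-down",
      message: "Take a moment before posting?",
      delayMs: 5_000,
    };
  }
  return { kind: "none" };
}
```

The design point sits upstream of moderation: nothing is removed, the interface simply slows the impulsive path and leaves the user in control.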
In Tier 3, prosocial algorithms that highlight areas of common ground and diverse ideas replace engagement-maximising recommender systems that offer news feeds skewed toward polarising topics. Tier 4 introduces civic tech and deliberative platforms explicitly built for democratic engagement, and Tier 5 pushes for middleware solutions that restore data sovereignty and interoperability.
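One way such a Tier 3 recommender could work, in the spirit of "bridging" ranking proposals, is to up-weight items endorsed across otherwise divided groups rather than items that merely maximise engagement. A minimal sketch, assuming a toy two-group model and invented scores:

```typescript
// Toy sketch of a "bridging" re-ranker: items endorsed across two
// otherwise divided groups outrank items that only one group likes.
// The two-group model and all weights are illustrative assumptions.

interface Item {
  id: string;
  engagementScore: number; // 0..1, what an engagement-first ranker optimises
  approvalByGroup: { a: number; b: number }; // 0..1 approval per group
}

// Score common ground: high only when BOTH groups approve.
function bridgingScore(item: Item): number {
  const { a, b } = item.approvalByGroup;
  // The geometric mean rewards cross-group approval and punishes
  // items loved by one side and loathed by the other.
  return Math.sqrt(a * b);
}

// Blend engagement with bridging; lambda = 1 recovers pure engagement.
function rank(items: Item[], lambda = 0.3): Item[] {
  const score = (i: Item) =>
    lambda * i.engagementScore + (1 - lambda) * bridgingScore(i);
  return [...items].sort((x, y) => score(y) - score(x));
}

// Example: a divisive post with high engagement drops below a post
// that both groups endorse.
const feed = rank([
  { id: "divisive", engagementScore: 0.9, approvalByGroup: { a: 0.9, b: 0.1 } },
  { id: "common-ground", engagementScore: 0.5, approvalByGroup: { a: 0.7, b: 0.7 } },
]);
console.log(feed.map((i) => i.id)); // ["common-ground", "divisive"]
```

The geometric mean is only one possible aggregator; anything that rewards cross-group approval and penalises one-sided enthusiasm would serve the same goal.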
Research Transparency and Protections
The report highlights the need for research into how platform design shapes society, safe-harbour laws to protect independent researchers, and open data standards for measuring social trust and cohesion. It calls for mandated platform audits and public infrastructure to enable independent scrutiny of algorithmic systems and user experiences. Without these safeguards, crucial insight into systemic harms such as manipulation, bias, and disinformation remains inaccessible.
The paper offers a set of prosocial metrics across three areas of social cohesion: individual agency and well-being, the ability of users to make informed choices and participate meaningfully; social trust and intergroup pluralism, the quality of interaction across diverse social, cultural, and political groups; and public trust, the strength of the relationship between users and public institutions.
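To illustrate how an open data standard might carry these three metric areas, here is a hypothetical reporting schema; every field name and scale below is an assumption of mine, not a specification from the paper.

```typescript
// Hypothetical open-data schema for the three prosocial metric areas
// named in the paper. Field names and scales are illustrative only.

interface ProsocialMetricsReport {
  platform: string;
  period: { start: string; end: string }; // ISO 8601 dates

  // 1. Individual agency and well-being
  individualAgency: {
    optInFeatureShare: number;   // fraction of engagement features that are opt-in
    informedChoiceScore: number; // survey-based, 0..1
  };

  // 2. Social trust and intergroup pluralism
  intergroupPluralism: {
    crossGroupInteractionRate: number; // share of interactions across group lines
    bridgingContentShare: number;      // share of feed items approved across groups
  };

  // 3. Public trust
  publicTrust: {
    institutionalTrustIndex: number; // survey-based, 0..1
  };
}

// A platform could publish such reports on a fixed cadence so that
// independent researchers can compare cohesion trends across platforms.
const example: ProsocialMetricsReport = {
  platform: "example.social",
  period: { start: "2025-01-01", end: "2025-03-31" },
  individualAgency: { optInFeatureShare: 0.8, informedChoiceScore: 0.62 },
  intergroupPluralism: { crossGroupInteractionRate: 0.34, bridgingContentShare: 0.21 },
  publicTrust: { institutionalTrustIndex: 0.55 },
};
```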
Shifting Market Forces
The report concludes with a set of market reforms to shift incentives toward prosocial tech innovation. Market forces drive antisocial and deceptive tech design. Venture capital (VC) funding, the main source of financing for many major tech platforms in their early and growth stages, entrenches antisocial design by demanding rapid scaling, high returns, and market dominance, often at the expense of ethical development.
Market concentration inhibits innovation and locks users into systems that prioritise profit over well-being. Many large technology companies function as monopolies, employing opaque strategies and dominating value chains. When a handful of tech giants control infrastructure, data, and user attention, smaller platforms with ethical, inclusive, or democratic designs struggle to achieve visibility and viability.
The report recommends shifting market forces by codifying liability for platform-induced harms, enforcing antitrust to level the playing field for ethical alternatives, and identifying a range of options for funding and monetising prosocial tech startups.
Too often, piecemeal tech regulation has failed to stem the flood of toxicity online. Taking a systems approach, the report offers a comprehensive plan to make prosocial tech not only possible but competitive and sustainable. Just as we expect bridges to be safe and banks to be audited, the Blueprint insists we treat digital infrastructure with the same seriousness. Platforms should not be allowed to profit from harm while hiding behind the myth of neutrality.
At its core, the Blueprint argues that platform design is social engineering. Platforms that currently amplify outrage could, with the right design and incentives, foster empathy, cooperation, and truth.
Now the question is political will. Will regulators adopt tiered certifications that reward responsibility? Will investors fund platforms that prioritise well-being over profit? Will designers centre the needs of marginalised communities in their user experience decisions? The Blueprint gives us the tools. The next step is collective action by governments, technologists, and civil society alike.
Related articles:
How technology can build trust in the Israeli-Palestinian context (3-minute read)
Mapping tech design regulation in the Global South (10-minute read)
Deliberative technology: Designing AI and computational democracy for peacebuilding in highly-polarized contexts (10-minute read)
Building tech "trust and safety" for a digital public sphere (3-minute read)
Dr. Lisa Schirch is Research Fellow with the Toda Peace Institute and is on the faculty at the University of Notre Dame in the Keough School of Global Affairs and Kroc Institute for International Peace Studies. She holds the Richard G. Starmann Sr. Endowed Chair and directs the Peacetech and Polarization Lab. A former Fulbright Fellow in East and West Africa, Schirch is the author of eleven books, including The Ecology of Violent Extremism: Perspectives on Peacebuilding and Human Security and Social Media Impacts on Conflict and Democracy: The Tech-tonic Shift. Her work focuses on tech-assisted dialogue and decision-making to improve state-society relationships and social cohesion.