a convergent initiative

Exploring the paths to convergent human discourse

The Problem

Many of our attitudes and opinions about other humans in society are shaped by the way we consume information. This has historically been a function of the public sphere: services provided by journalistic institutions, government entities, and physical common spaces. Today, the institutions responsible for the way this information flows are a handful of increasingly powerful technology companies. These companies have outsized leverage over the information we consume: our knowledge of our friends, the goings-on within our communities, and our understanding of the world at large.

The architecture of these systems is designed with the primary goal of extracting value from the attention of their users. That attention is sold to advertisers, who pay a premium for the targeting made possible by the enormous quantities of user data these companies collect.

This effort to extract attention from users at all costs has created a new set of deep structural flaws in our public sphere:


Perceptual Dehumanization
In the real world, contentious social interactions are mitigated by social norms and innate empathic responses. Online, many of these incentives for civil discourse are inverted: the social benefits of objectifying someone who disagrees with us outweigh the costs. We see unresponsive avatars instead of saddened human faces. We're accountable not to the people who share our physical space, but to online audiences who share our ideological fervor. These behaviors tend to be inherently engaging, and they scale to billions of interactions online, even as they objectify and dehumanize others.


Factual Fragmentation
Much of the content we consume online today is pushed through digital filters that determine what we see. We no longer share consistent channels of information; each of us can select our own version of ideological reality. Though facts still matter, this fragmented information ecosystem has allowed misinformation, disinformation, and propaganda campaigns to obscure them more than at any other time in recent history. These are profound structural vulnerabilities and critical threats to free and open democratic institutions around the world.


Mass-Moralization of Media
News providers have come to rely upon social platforms for targeting, optimization, and editorial decisions. In this hyper-competitive attentional environment, content creators have found both efficiencies and profits in skewing coverage toward partisanship, sensationalism, fear, and outrage. Because the medium itself rewards media that attracts attention, this kind of moralized content has been shown to carry a viral advantage: ideological content with moral and emotional valence propagates faster and farther than other content online, consistently providing mass exposure to morally divisive issues.

We can mitigate these structural flaws by establishing clear evidence, raising awareness, breaking institutional silos, and offering an ethical design framework for fixing them. Our collaborators include early engineers at Twitter and Google, researchers at NYU and Stanford, and The Center for Humane Technology.

For additional information, please reach out directly here.