Thanks for reading my second letter. The first was on self-driving cars in China. I’d love feedback about topics and ideas—reach out if you have thoughts.
We’re in a bit of a mess when it comes to information on the internet, and things are going to get worse before they get better.
Anytime I engage with a discussion on Twitter or Reddit, I assume that a lot, often most, of the content I’m seeing is “inauthentic”: made by paid actors, amplified through bot networks, or at least someone’s sock puppet. I feel fairly certain coordinated inauthentic content is already prevalent under popular threads, and I expect it to spread further as the cost of generating it falls: coordination software is maturing, and machine-assisted content production is becoming accessible to anyone. High-quality deepfakes are also about to take off and drive another nail into the coffin of trust.
The early internet was built on the assumption that actors on the network could be trusted. We’re still escaping that assumption 30 years on; it seems baked into the internet’s technological culture, and maybe into human optimism. We built the internet faster than we could secure it, and now we’ve built communication products faster than we can secure them. Maybe we’ll shift to a paranoid mindset once the ill effects of weak network security become an everyday experience. We haven’t yet.
I think the era of the public-sphere internet is drawing to a close. It was fun while it lasted, though it probably was never a good idea: predators certainly found prey on the internet, and our natural instincts for avoiding danger did not immediately translate into the digital realm. The social products we use today will need to change or go away. Products that throw anonymous and real, potentially sybilized identities together are irresponsible, and we should transition away from such models. We need products that capture the complexity of trust in the real world.
To date, social platforms have tried to build trust through manual verification (the blue check) or through authentic-identity policies. The blue check doesn’t scale, and it invites politicization of who gets it and who doesn’t. Authentic-names policies at least establish a norm of authentic identity, but in products like FB Groups that norm cuts both ways: a fake account that slips through inherits everyone’s assumption that it is real. Heavy-handed mass verification schemes, like tying accounts to government ID, might work, but I don’t see people happily handing over their IDs to platforms without some incentive.
Another Way
I want a layer on top of Twitter built around a decentralized, self-centered concept of verification. Every time an account appears in my feed, I’d like to see some indication of my trust level with it. I know a hundred people on Twitter are who they say they are; I’ve seen them with my own eyes. I trust them to tell me who they know is real. The UI should tell me they are trusted. I trust friends of friends some, but less; the UI should tell me that too. If an account is four or more degrees away from me, I do not know them from a stranger, and I don’t want to see their content in most contexts.
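The degrees-of-trust idea above amounts to a breadth-first search over a personal verification graph. Here is a minimal sketch of that computation in Python; the graph, the names, and the tier cutoffs are all illustrative assumptions, not a real Twitter API or product spec.

```python
from collections import deque

# Hypothetical trust graph: each account lists the accounts it has
# personally verified ("I've seen them with my own eyes").
TRUST_EDGES = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
    "dave":  ["eve"],
}

def trust_degree(graph, root, target):
    """Breadth-first search: number of verification hops from root
    to target, or None if target is unreachable."""
    if root == target:
        return 0
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in seen:
                if nbr == target:
                    return depth + 1
                seen.add(nbr)
                queue.append((nbr, depth + 1))
    return None

def trust_tier(degree):
    """Map graph distance to the UI treatment described above."""
    if degree is None or degree >= 4:
        return "stranger: hide by default"
    if degree == 1:
        return "verified by me"
    return "friend-of-friend: show with caution"
```

In this toy graph, `trust_degree(TRUST_EDGES, "me", "eve")` is 4, so `eve` lands in the hidden-by-default tier even though a chain of real endorsements connects us; where exactly to draw that line is a product decision, not a property of the graph.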
In a decentralized model, the potential harm of a bad actor is limited. Botnets are welcome to follow and amplify each other, but I should never encounter them. If one of my friends is compromised and endorses fake accounts, I will see manipulated content, but the effect is contained to the few hundred people who trust that friend. I can also opt to stop trusting them. Self-centered authenticity webs are not as limited as they sound, either: my extended, 3-degree social network is plenty large and diverse. It should capture almost every real person on the internet.
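The containment claim can be illustrated with a tiny reachability check: revoking trust in one compromised friend prunes everything that reached me only through their endorsements. The graph and names below are made up for illustration.

```python
# Hypothetical endorsement graph; "mallory" is a compromised friend
# who has endorsed a cluster of fake accounts.
ENDORSEMENTS = {
    "me":      ["alice", "mallory"],
    "alice":   ["carol"],
    "mallory": ["fake1", "fake2", "fake3"],
}

def reachable(graph, root):
    """All accounts reachable through chains of endorsements."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def revoke(graph, root, distrusted):
    """Drop every endorsement of a distrusted account and return
    the recomputed reachable set."""
    pruned = {k: [v for v in vs if v != distrusted]
              for k, vs in graph.items()}
    return reachable(pruned, root)
```

Before revocation, the fake cluster is inside my web; after `revoke(ENDORSEMENTS, "me", "mallory")`, it vanishes while `alice` and `carol` survive. The damage was always bounded by one friend's edge.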
I think this product could be built as a browser extension at first. I can imagine Twitter and other platforms, which actively do not want to manage real identity, would be happy to opt into a trust broker if one took off. I can also imagine a fully decentralized architecture for such a product; it looks like a killer app for blockchain storage. Who do we trust to own and protect the global trust graph? Probably no one, and that is exactly the case where decentralized architectures shine.
We’ve told the social networks a lot about our social graphs, and we have gotten too little in return. They’ve given us lackluster trust models: Twitter’s single context, Reddit’s heavily manipulated popular subreddits, FB Groups. A user-centered concept of trust will serve us better.