If we’ve built totally ‘trustless’ systems, we may never have to trust another human being again, right? Right??
Wormhole is a crypto protocol that lets people transfer funds across different chains. It’s supposed to work like this:

But in February 2022, a hacker exploited Wormhole and took $320m. So it looked more like this:

Of course, hacks aren’t new to crypto. The interesting thing about this one was that Jump Crypto — an investment firm — replenished the funds. It seems they did this because they are a significant investor in one of the chains involved in the exploit. In other words, they made the protocol whole because it was in their own financial interests.

This got me thinking: who or what are we actually trusting when using crypto? When crypto enthusiasts say that blockchains are ‘trustless,’ what does that mean? It probably doesn’t mean that an investment firm is here to subsidize any and all future losses. Is ‘trust’ even the right word? I’ve got questions!
Web2 has its own trust-producing infrastructures — how might these be different from blockchains? According to socio-legal researcher Balazs Bodo, Web2 encouraged two different types of trust: “reputation” and “insight.” For example, Bodo argues that “the sole product of AirBnB is the trustworthiness information of its users, and the trustworthiness of the transaction infrastructure it has built.” But the “reputation” scores these platforms develop are detached from their local context. They are global, standardized metrics such as ratings, response times, text-based feedback, and cancellations. In this way, reputation-based systems produce a version of trust that “disembeds trust from social relations, strips away the local, idiosyncratic characteristics of trustworthiness signals, and re-molds them according to the choices, preferences, interests of the private trust producer.”
“Insight”-based trust is another form of trust produced by Web2 tools. According to Bodo, predictive algorithms promise to “produce trust by reducing future uncertainty.” In this way, they reaffirm a version of trust that “enables action through the reduction of complexity and the [...] elimination of future contingencies in order to sustain the illusion of a navigable, and manageable present.” So, predictive algorithms purport to reduce the uncertainty of the future, enabling a feeling of control in a world that seems totally predictable.
These systems of reputation- and insight-based trust haven’t worked great. In service of scale — as I’ve written before — these private producers of trust commodify and distort the social nature of trust. As Bodo writes,
“The commodification of trust denies the opportunity for the trustor to directly assess, through its own eyes, the trustworthiness of the trustee [...] It is not just that they have to invest less cognitive resources into such labor, but that they can completely outsource it in exchange for a monetary payment.”
These systems create a version of trust that is anti-social. By ‘anti-social,’ I’m not referring to high-school me, sitting in the corner with Dashboard Confessional blaring in my headphones, but the desire to separate oneself from social relations, norms, and obligations — to live apart from, and independent of, one another. It’s an attempt, according to Bodo, to remove trust from its communal roots where it once operated “based on norms of reciprocity” and was “produced by and for the members of the well-defined group.” This also follows a long history of technologists trying to move ‘trust’ out of the social realm and shoehorn it into a technology.
But here’s the problem: you can’t separate these things so easily. Existing social relations shape the data that become a proxy for one’s reputation. Take AirBnB again: research has found that “Inquiries from guests with distinctively African American sounding names were 16% less likely to get a yes from the hosts than those with white-sounding names.” This means that reputation (which is meant to help build trust) isn’t an objective data score; it’s shaped by people.
Okay, so Web2 has already done a lot to distort how we experience and understand trust — what’s next?
The basic idea with blockchains is that they remove the need to trust the person you’re transacting with. If you simply trust the smart contract and the underlying code, the need to trust actual people disappears! As cryptographer Nick Szabo put it, “blockchains substitute [t]rust in the secret and arbitrarily mutable activities of a private computation” with “verifiable confidence in the behaviour of a generally immutable public computation.” Indeed, the original Bitcoin whitepaper argued for abandoning social trust in favor of algorithmic regulation. Not surprisingly, the expression “Don’t trust, verify” became a mantra of early blockchain communities.
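To make “Don’t trust, verify” concrete, here’s a toy sketch in plain Python — not any real blockchain’s data format, and all names here are invented for illustration. The point it demonstrates: anyone holding a copy of the chain can recompute every hash link themselves, rather than taking a counterparty’s word that the history is intact.

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash a block's full contents
    # (its data plus its link to the previous block).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_block=None):
    # Each block commits to its predecessor by embedding its hash.
    prev_hash = block_hash(prev_block) if prev_block else "0" * 64
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain):
    # "Don't trust, verify": recompute each link yourself instead of
    # trusting anyone's claim that the history hasn't been altered.
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

genesis = make_block("alice pays bob 5")
b1 = make_block("bob pays carol 2", genesis)
b2 = make_block("carol pays dave 1", b1)
chain = [genesis, b1, b2]

print(verify_chain(chain))               # True: every link checks out
chain[1]["data"] = "bob pays carol 200"  # tamper with history
print(verify_chain(chain))               # False: the altered block breaks the chain
```

Notice, though, what the sketch quietly assumes: you trust the hash function, the software doing the verifying, and whoever wrote both — which is where the next point comes in.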
The first point to make is that while blockchains purport to do away with trust, they actually just shift it from a central intermediary to all of the participants involved. Crypto users are implicitly trusting developers to build secure software, miners not to collude, and people to vote on governance decisions in ways that don’t screw the pooch. More recently, crypto users also have to trust the decisions of companies like Infura and Alchemy, whose infrastructure application-layer projects rely on to interact with blockchains. It’s hard to grok the idea that trust can be replaced by computation when there are so many people involved in the verification, security, and legitimacy of a given transaction. The narrative frame persists — it’s trustless I tell ya! — even if the technology and its surrounding ecosystem both rely quite a lot on actual trust.
The second point, though, is a bit more existential: I’m not sure ‘trust’ is the right word, and I’m not alone. Let’s start with a definition. Where Web2 wants to reduce future uncertainty and build trust that way, blockchains — according to Bodo — produce a form of trust that seeks to extend control over real-time transactions, and in turn, the present. Bodo writes,
“Blockchain systems follow a different logic, that of control. These systems try to minimize the need for trust and produce confidence by hard-coding rules into the system, both at the level of infrastructure and in their application (smart contracts). This ensures that the behavior of the system is predictable.”
For Bodo, blockchains produce trust through “technical coercion and control.” But can control lead to trust? Scholars of trust don’t think so. As trust researcher Charles Feltman puts it, trust is “choosing to risk making something you value vulnerable to another person’s actions.” Trust expert Rachel Botsman defines it as a “confident relationship to the unknown.” Trust, in other words, isn’t about control — quite the opposite, it depends on being vulnerable and embracing uncertainty. That takes hard work, and in my case, therapy!
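Bodo’s point about hard-coded rules can be sketched with a toy ‘escrow’ in plain Python — a deliberately simplified stand-in for a smart contract, with every name invented for illustration. The release conditions are fixed when the contract is created and applied mechanically afterwards: predictability through control, with no room for goodwill, renegotiation, or appeal.

```python
class Escrow:
    """Toy stand-in for a smart contract: the rules are fixed at
    creation and enforced mechanically, not negotiated socially."""

    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.buyer_approved = False
        self.state = "FUNDED"

    def approve(self, caller):
        # Only the pre-agreed buyer may approve delivery.
        if caller != self.buyer:
            raise PermissionError("only the buyer can approve")
        self.buyer_approved = True

    def release(self):
        # Funds move if and only if the hard-coded condition holds.
        if not (self.state == "FUNDED" and self.buyer_approved):
            raise RuntimeError("release conditions not met")
        self.state = "RELEASED"
        return (self.seller, self.amount)

deal = Escrow(buyer="alice", seller="bob", amount=100)
deal.approve("alice")
print(deal.release())  # ('bob', 100): the rule fired; no judgment involved
```

What the sketch makes visible is exactly the distinction at issue: the buyer gains predictability about what the code will do, but she still has to trust whoever wrote and deployed it — control over the transaction, not trust in a person.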
Primavera De Filippi argues that what blockchains produce is actually more akin to “confidence,” which does not entail vulnerability but does depend on the predictability of future events. She actually refers to blockchains as a ‘confidence machine.’ It’s not that blockchains are ‘trustless’ — remember all the stakeholders you’re still dependent upon? — it’s that they are meant to “maximize the degree of confidence in the system as a means to indirectly reduce the need for trust.” You’re not any less vulnerable to the actions of others, but you can be more confident that future transactions will follow the predetermined logic of the blockchain.
That there are challenges to the claim that blockchains enable ‘trustlessness’ makes the righteous teenager deep inside me smirk. But, at 37, it’s less satisfying to say ‘a-ha, you’re wrong about this thing that you care about!’ So, I feel the need to assert the stakes: the ‘trustless’ frame is problematic because it perpetuates the notion of self-reliance and the myth of meritocracy while hiding them both. As sociologists Silvia Semenzin and Alessandro Gandini argue, it creates the impression that trust is “something that may be created in a vacuum, delinked from its underlying social relations and power structures,” while at the same time it “reproduces and exacerbates the libertarian, hyper-individualized vision of society that animated its creators’ vision of the world.” This shouldn’t be surprising — crypto was started by anarcho-libertarian communities, remember? Thus there is a prioritization of pseudonymity and financial incentives premised on rational self-interest, and a de-prioritization of all things social, like trust. In Semenzin’s research, she finds that this idea “fits perfectly with the neoliberal culture of meritocracy that characterizes the startup world, by which the onus of success and failure falls firmly onto the individual and that individual’s hard work.”
If you supposedly don’t have to trust anyone, who can you rely upon but yourself? If you alone are responsible for your success and failure, how can you account for all of the social, political, and economic systems that bear on your opportunities, and your livelihood? Maybe that’s what ‘trustless’ means — we’re all on our own, and if something goes wrong, we have only ourselves to hold responsible. Unless, of course, an investment firm is there to save the day.