The price of distrust
What lengths would you be willing to go to in order to prevent humanitarian aid funds, raised by hardworking taxpayers on the other side of the world, from falling into the hands of a corrupt warlord? You could think of the bribes, skimming, and inefficiency as a kind of levy – an informal tax we pay to feed the hungry and heal the sick. With the rule of law and civil liberty, this levy would be avoidable, but building such systems is a long-term project that goes beyond even the most ambitious humanitarian effort. Meanwhile, disease and famine continue to claim innocent lives, condemning the futures of successive generations. Addressing these basic needs is the foundation upon which all social progress depends.
Yet distrust about value for money and actual impact is not unjustified. By accepting the levy, you are condoning and, more importantly, funding the social structures that are at least partly responsible for the deprivation you are trying to alleviate. By making aid conditional on proper governance and fair access to the benefits of progress for all, you create a moral and economic incentive for reform in the communities best placed to achieve it.
There is no objectively right answer, a fact that plays out clearly in the politics of international development in 2025. However, as long as some aid reaches those in need, withholding it means more people die of starvation and treatable disease than if the levy were simply paid. This is the price of distrust, whether or not it is justified.
This dilemma is far from unique to humanitarian aid; it is a structural feature of any complex system that deploys shared resources at scale (e.g. tax revenue) to solve a complex problem (e.g. famine). Whenever resources are pooled and distributed – whether by governments, charities, or corporations – the instinct to guard against misuse inevitably shapes how they are deployed to create value. In humanitarian aid, that instinct takes the form of conditions, audits, and due-diligence processes that dilute its impact but promise to protect donors from scandal. In publicly funded academia, it takes the form of grant applications, ethics committees, and progress reports – transparency mechanisms designed to reassure the proverbial taxpayer of the value of their investment. In corporations, business cases, project plans and status reports are the cornerstones of what conventional wisdom would recognise as governance best practice.
In all three cases, the system is optimised for oversight and control, not for building and maintaining trust. It produces transparency at the expense of progress. This is not a critique of accountability itself – it’s a recognition that, under our present systems, accountability and progress are too often inversely correlated. The path less travelled by most major institutions (public or private) lies somewhere between reckless generosity and procedural gridlock, where trust and progress meet.
The Academic Paradox
Ezra Klein and Derek Thompson describe this dynamic vividly in Abundance through the story of Katalin Karikó, whose work on mRNA laid the foundations for the Covid-19 vaccines [1]. For decades, Karikó struggled to secure funding because her research was counter-cultural, focusing on the therapeutic potential of messenger RNA rather than DNA, the dominant idea in her field at the time. Her proposals did not promise short-term results or predictable returns, at a time when the Human Genome Project was dominating headlines and grant committees. The lesson is uncomfortable but clear: research institutions optimised for return on investment reward those who can justify funding, not necessarily those who can transform the world once they receive it.
Most public funding streams (as opposed to corporate ones) are designed to minimise uncertainty, not to maximise discovery. Research proposals are evaluated by panels of scientific peers who must justify their decisions to a public or political audience, so they rely on signals of credibility when allocating scarce funds. Those signals are far easier to find in research that looks familiar. When a proposal aligns with an already-popular field – one with many citations, active labs, and a strong evidence base – reviewers can more easily assess its likelihood of success. The methods are known, the outcomes are easier to quantify, and the risks appear bounded. Most importantly, these projects have a precedent of delivering results: other reputable researchers are already working in the area, so it must be worth pursuing. Incremental research, by definition, sits within an established field where the risk of failure is more knowable and comparatively low.
By contrast, a genuinely novel proposal lacks those signals. It may cross disciplines, use unorthodox methods, or produce results without a clear path to commercial value. This makes it harder to evaluate against standard review criteria such as feasibility, deliverability, and value for money. On paper, novel research looks like a bad investment: a known cost with an unknown return. At worst, novel research can threaten experts in existing, established fields, putting their institutional dominance and future funding streams at risk. The peer review system – comprising these very experts and built to discern good research proposals from bad – thus unconsciously filters out novelty. The somewhat recursive, but nevertheless revealing, research about research corroborates this. A study of grant patterns found that proposals scoring highly on novelty were less likely to be funded than those that built on existing, well-cited work, even after controlling for quality and feasibility [2]. Another study found that reviewers systematically reward scientific conformity over originality, citing predictable deliverables and alignment with prior findings as proxies for reliability [3]. These biases are not malicious or intentional; they are simply the rational outcome of a process designed to avoid embarrassment. In this regard, the great academic institutions of today are more like conservative luxury brands than stalwarts of Enlightenment thinking and the scientific method.
Once funding is secured, the burden of distrust intensifies further. Every mechanism designed to ensure the prudent use of grant money – peer reviews, ethics committees, progress reports, and pathways to impact – slowly displaces the work it was meant to protect. An Australian study found that an average grant proposal took 38 working days to prepare, or 66 if it required resubmission. Aggregated across one funding round, that amounted to 550 working years of effort and roughly A$66 million in salary cost – about 14% of the agency’s total budget [4]. A survey of U.S. scientists reached similar conclusions, finding no correlation between the time invested in writing a proposal and its probability of being funded [5]. The hours spent demonstrating worthiness did not, in practice, make the work more fundable. Complying with post-award conditions and reporting requirements consumes another large share of the researcher’s time [6]. The outcome is that the most successful academics become experts in justifying research, not necessarily in doing it.
Klein and Thompson contrast this with moments in history when research institutions embraced risk instead of avoiding it. During the COVID-19 pandemic, Operation Warp Speed demonstrated that governments can accelerate invention by underwriting uncertainty and simplifying oversight [7]. By guaranteeing markets, removing bureaucratic friction, and tolerating potential waste, they created the conditions for bringing a new vaccine to market in less than a year – a feat that typically takes 10 to 15 years. Although a crisis inevitably sharpens the mind, there are peacetime examples that are no less inspiring. The modestly funded U.S. Defense Advanced Research Projects Agency (DARPA) became a titan of innovation in the 1960s owing to its unique setup. Its lead researchers were empowered to assemble unconventional teams, bypass layers of peer review, and tolerate early-stage failure. The result was a string of breakthroughs that defined the information age – the ARPANET (precursor to the internet), GPS, and early graphical user interfaces among them [8].
DARPA’s genius was not merely in who it funded but in how it operated: small teams, short feedback loops, and trust in the judgement of those close to the work. The lead researchers were encouraged to take personal responsibility for bold bets and to shut down projects that failed to show promise, without fear of reputational ruin. Oversight existed, but it was lean and proportional. DARPA’s culture of delegated trust and tolerance for waste created a space where invention could breathe — an operating model almost unimaginable in today’s academic funding ecosystem.
This does not mean romanticising or accepting chaos. Systems of accountability exist for good reason – to prevent fraud and waste; the question is one of balance. When every project proposal must guarantee its success in advance, the system silently selects for the safe and the incremental. The true scandal is not the rare instances of fraud and waste, but the distrust levy exacted on recipients of public funding.
The Corporate Parallel: Governance as Performance Art
The corporate world has its own version of the academic paradox. It goes by another name – governance. The purpose of governance is ostensibly to create trust throughout the corporate hierarchy that resources are being managed effectively and are delivering the promised results. Every corporation is a chain of delegation over how resources are deployed and the value they create: shareholders delegate to boards, boards to executives, executives to their teams. Each link in the chain must explain its decisions and inspire confidence in the one above it, or risk reputational damage or, worse, unemployment. Predictability and transparency therefore become a proxy for trust. The result is a kind of bureaucratic alchemy in which uncertainty is transfigured into confidence through the careful manipulation of language and colour: red, amber, green. Executives want confidence that their investment is being converted into value; the project manager, in turn, produces dashboards to demonstrate progress; those doing the value-creating work beneath them learn to translate their messy, uncertain reality into the tidy grammar of deliverables. The deeper the hierarchy, the more the system rewards the appearance of control over the creation of substantive value. Too often in practice, governance reanimates the lifeless corpse of trust and presents it as transparency and control.
Over the past three decades, project management has evolved from a niche discipline into a global industry. Once associated primarily with engineering and construction, it now spans every sector – from pharmaceuticals and financial services to IT and the transformation of public services. Associations for project management practitioners report rapidly growing memberships and host annual congresses that draw thousands of delegates. The software ecosystem underpinning project management has expanded accordingly – valued at roughly $7bn today and forecast to exceed $20bn by 2030 [9]. Yet the ascendancy of the profession has not produced a corresponding rise in the delivery of successful projects. Decades of data show that large projects, particularly infrastructure megaprojects, still fail at astonishing rates. Nine out of ten megaprojects go over budget; on average, they cost 60–80% more and take 50% longer than planned [10]. Technology projects perform little better: McKinsey’s analysis of 5,000 IT initiatives found that most fall short of their promises and that 17% threatened their company’s existence through cost or scope blowouts [11].
The paradox is striking. While we have industrialised the governance of projects through certifications and tools, we have not consistently improved their outcomes. It is therefore hard to see governance as a tool for progress rather than as theatre for performative reassurance. Its rituals – the business case, the Gantt chart, the RAG status report – create an illusion of control that comforts those who hold the purse strings, even as it constrains the work that creates value in the first place. This is the same pathology that afflicts academia’s grant economy. Both systems are driven by the same fear: that effort, if left unsupervised, will be wasted. And both have developed elaborate processes to manage that fear – progress reports, milestones, and metrics – that consume precisely the energy they were meant to protect. Both, at root, are responses to the same anxiety: a lack of trust in human judgement. The paradox deepens: the more an organisation seeks to eliminate risk, the less capable it becomes of producing reward. The most dangerous part of this dynamic is not the waste of time or talent, but the behavioural incentives it creates. When governance demands certainty in advance, uncertainty becomes a liability to be hidden, quietly narrowing ambitions to within the comfort zone of achievability. Innovation, which depends on exploring the unknown, is thus sacrificed at the altar of governance as distrust is domesticated into a process.
The Perils of Organised Shared Endeavour
The distinguishing feature of our species, as Yuval Noah Harari argues in Sapiens, is not strength, speed, or even intelligence in the narrow sense, but imagination – the ability to conceive of abstract concepts and to invent and share stories about them [12]. This capacity for shared fiction allows humans to cooperate flexibly in large numbers – a phenomenon observed in no other species at comparable scale. Other mammals and insects such as bees can cooperate, but their collaboration is either limited to small kin groups (in the case of mammals) or governed by genetically programmed behaviour (in the case of insects). Humans alone form complex social structures that unite thousands, even millions, of strangers, bound not by kinship or instinct but by shared beliefs. This ability is the foundation of civilisation itself and the reason we have universities, banks and governments. It is what allows us to send spacecraft to Mars and perform keyhole surgery on a beating heart. Each of these feats is the result of millennia of cumulative progress built on shared beliefs in systems – money, science, politics – that exist only because we agree they do. Perhaps the best example of the power of shared stories to mobilise effort across space and time is humanitarian aid: strangers separated by oceans willingly donate resources to help people they will never meet, trusting that abstract entities like charities or the United Nations will translate their empathy into action.
The capacity to pool resources and coordinate action beyond the family or tribe is what enabled Homo sapiens to transcend the biological and environmental limits that bound every other species. It is our superpower – and also, as history shows, our perennial vulnerability. As collaboration scales, so do its costs. Larger groups are less stable, and coordination requires structure – hierarchies, roles, procedures, rules. These structures are the scaffolding of shared belief. They provide stability, but they also introduce distance: between intention and outcome, between the storyteller and the listener, between the ideal and its imperfect execution. Over time, the uniting narrative begins to fray under the weight of its own success as more people join the cause and bring with them different experiences, perspectives, and personal objectives. Absent wars and pandemics, the diversity of individuals in a group starts to dilute the purity of its founding story, forcing the shared myth to simplify, to round its edges and shed its detail so it can stretch across those differences. The story becomes easier to tell, but harder to believe.
This dynamic - the tension between unity and individuality - is built into human nature. We are, by temperament, pack animals who long to belong, yet we are also individuals who crave distinction. Every institution, from a start-up to a superpower, lives within that tension. The shared narrative must be strong enough to create productive alignment, but open enough to allow constructive dissent and evolution. When it fails, systems fracture into disorder; when it overreaches, it suppresses creativity. This is how the machinery of collective endeavour begins to drift toward bureaucracy. As the founding narrative erodes and the original spark of imagination starts to flicker, trust in the story gives way to trust in process. Our greatest strength - the ability to unite through shared belief - carries within it the seed of its own decay; the more people a story must unite, the less it can say.
What to Do: Designing for Trust
If distrust is an inevitable consequence of scale, and the price of distrust is progress, the solution is neither to abandon ambition nor to rely on naïve faith, but to design better social structures.
- Trust through precedent
 
The easiest way to earn trust is through evidence of past performance, not perpetual proof of current compliance. High-performing teams tend to cluster around credible leaders with proven track records. They don’t need weekly justification rituals because their work speaks for itself. Designing around that principle means giving autonomy in proportion to demonstrated competence. Amazon’s two-pizza teams [13] and DARPA’s small, accountable project groups operate on this logic. They decentralise control, contain risk, and enable scale without bureaucracy. The goal isn’t to eliminate oversight but to make it proportionate to the risk it seeks to avoid.
- Trust through bounded risk
 
Design systems where failure is contained but permissible. Innovation depends on the freedom to explore uncertainty without fear of consequences. Instead of demanding certainty before funding an idea, organisations can limit exposure by running smaller, faster experiments. Start-ups and venture funds understand this intuitively: make many small bets, not one existential one (the toy simulation below sketches the logic). When failure is cheap, trust becomes easier. This is how science and corporate R&D used to work – incremental, iterative, self-correcting – before procedure displaced trust.
Once an organisation’s instinctive response to failure shifts from “who is to blame?” to “what did we learn?”, it becomes an unstoppable force for progress.
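To make the arithmetic behind “many small bets” concrete, here is a minimal, hypothetical sketch in Python. The budget, success probability, and payoff multiple are invented purely for illustration and are not drawn from this essay or any of the cited studies; the point is only that splitting the same budget across many independent bets leaves the expected return unchanged while making total loss vanishingly rare.

```python
# Toy simulation (illustrative assumptions only): one large bet versus many
# small bets with the same total cost, success probability, and payoff.
import random

random.seed(42)

BUDGET = 1_000_000      # total funds at stake (hypothetical)
P_SUCCESS = 0.1         # each bet pays off with 10% probability (assumed)
PAYOFF_MULTIPLE = 15    # a successful bet returns 15x its cost (assumed)
TRIALS = 10_000         # number of simulated funding rounds

def portfolio_return(n_bets: int) -> float:
    """Spend the whole budget across n equal bets; return the total payoff."""
    stake = BUDGET / n_bets
    return sum(
        stake * PAYOFF_MULTIPLE
        for _ in range(n_bets)
        if random.random() < P_SUCCESS
    )

for n_bets in (1, 100):
    outcomes = [portfolio_return(n_bets) for _ in range(TRIALS)]
    total_loss = sum(1 for o in outcomes if o == 0) / TRIALS
    mean = sum(outcomes) / TRIALS
    print(f"{n_bets:>3} bet(s): mean return ~{mean:,.0f}, "
          f"chance of losing everything ~{total_loss:.1%}")
```

Under these made-up numbers, both strategies have the same expected return, but the single bet loses everything roughly 90% of the time, while the portfolio of a hundred small bets almost never does – which is precisely why cheap, contained failure makes trust easier to extend.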
A Cure Worse than the Ailment
Distrust is rooted in good intentions – the wish to safeguard scarce resources – but it metastasises into a system that protects nothing but itself. The cure is not less oversight but better oversight: one that distinguishes between recklessness and creativity.
Because the real price of distrust is not money wasted or targets missed. It is the slow erosion of our collective capacity to imagine, to take risks, to meet uncertainty with courage and curiosity. And that, in every field from aid to academia to enterprise, is the most expensive loss of all.
The irony is that modern civilisation was built on and is still powered by belief.
Belief that public servants act in good faith.
Belief that innovators aren’t reckless.
Belief that employees, given discretion, will do the right thing.
Without that belief, no amount of process can save us.
1. Klein, E., & Thompson, D. (2024). Abundance: The Story of Us and Our Endless Inventiveness. Chapter 4: Invent. New York: Scribner.
2. Wang, J., Veugelers, R., & Stephan, P. (2017). Bias against novelty in science: A cautionary note for users of bibliometric indicators. Research Policy, 46(8), 1416–1436. https://doi.org/10.1016/j.respol.2017.06.006
3. Park, M., Leahey, E., & Funk, R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613, 138–144. https://doi.org/10.1038/s41586-022-05543-x
4. Herbert, D. L., Barnett, A. G., & Graves, N. (2013). On the time spent preparing grant proposals: An observational study of Australian researchers. BMJ Open, 3(5), e002800. https://pmc.ncbi.nlm.nih.gov/articles/PMC3664356
5. von Hippel, T., & von Hippel, C. (2015). A survey analysis of grant writing costs and benefits. Research Evaluation, 24(3), 236–245. https://pmc.ncbi.nlm.nih.gov/articles/PMC4349454
6. Lacey, K., Corrigan, K., & Hobson, C. (2023). Post-award grant management and the administrative burden on researchers: A mixed-methods study. Research Integrity and Peer Review, 8(13). https://pmc.ncbi.nlm.nih.gov/articles/PMC10513312
7. Harvard T.H. Chan School of Public Health. (2021). Inside Operation Warp Speed and the U.S. COVID-19 response. https://hsph.harvard.edu/news/inside-operation-warp-speed-and-the-u-s-covid-19-response/
8. Defense Advanced Research Projects Agency (DARPA). (n.d.). The ARPANET and Beyond. https://www.darpa.mil/news/features/arpanet
9. Grand View Research. (2024). Project Management Software Market Size, Share & Trends Analysis Report, 2024–2030. https://www.grandviewresearch.com/industry-analysis/project-management-software-market-report
10. Flyvbjerg, B. (Ed.). (2017). The Oxford Handbook of Megaproject Management. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198732242.001.0001
11. Bloch, M., Blumberg, S., & Laartz, J. (2012). Delivering large-scale IT projects on time, on budget, and on value. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/delivering-large-scale-it-projects-on-time-on-budget-and-on-value#/
12. Harari, Y. N. (2014). Sapiens: A Brief History of Humankind. London: Harvill Secker.
13. Slater, D. (2022). Powering Innovation and Speed with Amazon’s Two-Pizza Teams. AWS Executive Insights. https://aws.amazon.com/executive-insights/content/amazon-two-pizza-team/