Underwriting
Superintelligence
Rune Kvist, Rajiv Dattani, Brandon Wang
July 15, 2025
Insurance Unlocks Secure AI Progress
We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century. If we accelerate recklessly, accidents will halt progress, as with nuclear power.
Insurance, standards, and audits together create skin in the game for quantifying, communicating, and reducing AI risks, helping us walk this tightrope. We call this the “Incentive Flywheel.”
Benjamin Franklin first discovered the Incentive Flywheel when fires threatened Philadelphia’s growth. He gathered neighbors and founded America's first fire insurance company. They created volunteer fire departments and established the first building safety standards.
Since then, this Flywheel has been at the heart of balancing progress and security for new technology waves like electricity and the automobile.
But the Incentive Flywheel won’t appear fast enough on its own for AI: we need to jumpstart it. This essay outlines 25 actions entrepreneurs and policymakers must take by 2030 across agents, foundation models, and data centers.
Markets are a uniquely Western solution to risks. The Incentive Flywheel adapts faster than regulation, accelerates progress rather than slowing it down, and has more teeth than voluntary commitments.
Benjamin Franklin and the Incentive Flywheel
Houses in Philadelphia in the 1700s had a bad habit of burning down. Made of wood and packed closely together, they caught fire easily, and flames spread quickly, killing many. Homeowners could not assess their own fire risk, nor did they bear the full cost of their negligence. Ad-hoc volunteer responses failed. A single uncontained fire would often destroy entire city blocks.
As the population of Philadelphia grew tenfold in the 1700s, residents were building houses faster than the systems meant to contain fires could keep up.
Progress requires security. An accident could cause significant damage and threaten America’s lead in AI. Nuclear power's promise of abundant energy died for a generation after accidents like Three Mile Island and Chernobyl accelerated public backlash and regulatory scrutiny. The same will be true if AI causes major harm: courts and voters will shut AI progress down.
Security powers progress. ChatGPT was created using an AI alignment technique called RLHF (reinforcement learning from human feedback) that made systems more steerable, and thus more useful. Steerable, reliable AI systems are simply more valuable.
More secure than voluntary commitments: the rapid pace of AI progress and the associated catastrophic risks mean that AI companies’ voluntary commitments will not inherently create security. The flywheel will align incentives and create accountability.
Figure 1: The Incentive Flywheel of Market-based Governance
The market mechanics are already taking shape:
Once this flywheel is spinning, investing in security will let AI companies grow faster by enabling confident customer adoption. Standards and audits help enterprise risk teams distinguish hype from reality, just as bond ratings help (1) investors act with confidence and (2) governments and regulators oversee financial institutions.
Historical Blueprint: Fire & Car Safety
This is not a new model.
When electricity created new fire hazards around the turn of the 20th century, the Chicago Fire Underwriters' Association and the Western Insurance Union funded Underwriters Laboratories (UL) to research risks, certify products, and develop safety standards. The lightbulbs and toasters in your house are almost certainly UL certified and marked today.
When demand for cars increased after WWII, the insurance industry established the Insurance Institute for Highway Safety (IIHS) in 1959, nearly a decade before federal government action. IIHS ratings and premium discounts created direct incentives to adopt seatbelts and airbags before they became mandatory. Deaths per mile plummeted 80% while driving surged 200%.
This Flywheel reduced risks, letting entrepreneurs build governance capacity long before government intervention.
Skin in the game is the driving force at play. Financial markets rely on risk assessments, but assessors without skin in the game fail: Moody's assigned AAA bond ratings to toxic mortgage securities before 2008 because it was paid by issuers and did not bear the losses. Insurance is therefore the necessary skin in the game: when insurers misprice risk, they go bankrupt.
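To make that feedback loop concrete, here is a minimal sketch in Python with entirely hypothetical numbers: an insurer that charges less than expected losses burns through its capital, while one that prices the same risk accurately stays solvent and keeps underwriting.

```python
# Toy illustration with made-up numbers: an insurer that underprices risk
# depletes its capital and exits the market; one that prices accurately survives.

def remaining_capital(premium: float, loss_rate: float, avg_claim: float,
                      policies: int, capital: float, years: int) -> float:
    """Simulate expected capital after `years` of underwriting."""
    for _ in range(years):
        capital += premium * policies                 # premiums collected
        capital -= loss_rate * policies * avg_claim   # expected claims paid
        if capital <= 0:
            return 0.0  # insolvent: the market removes the bad risk assessor
    return capital

# Suppose 2% of policies generate a $100k claim each year (hypothetical).
accurate = remaining_capital(premium=2_500, loss_rate=0.02, avg_claim=100_000,
                             policies=1_000, capital=5_000_000, years=10)
underpriced = remaining_capital(premium=1_000, loss_rate=0.02, avg_claim=100_000,
                                policies=1_000, capital=5_000_000, years=10)
print(f"accurate pricing:  ${accurate:,.0f} of capital remaining")    # grows
print(f"underpricing risk: ${underpriced:,.0f} of capital remaining")  # wiped out
```

A rating agency paid by the issuer faces no such feedback loop, which is exactly the gap the flywheel closes.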
Agents, Foundation Models, and Data Centers
The Incentive Flywheel secures AI progress across all three critical layers of AI development:
Applications represent the majority of real-world AI agent deployments today. Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.
Foundation model developers are racing to build superintelligence. They must build the confidence of their customers and stakeholders, including the public, to earn the right to continue investing in and deploying these capabilities.
Data center infrastructure is critical for application and model developers alike. Data center developers must build the confidence of their customers and stakeholders (including governments) to earn the right to scale investments to trillions of dollars and protect what could become the most valuable asset in the world.
Figure 2: Applying the Incentive Flywheel across AI development
Faster Than Legislation
Crafting comprehensive laws like the EU AI Act takes longer than it took AI capabilities to advance from preschool-level to undergraduate-level intelligence. In the last two years, two factors have completely changed the regulatory premise: token costs have dropped by more than 99% while open-source alternatives have emerged.
Letting markets lead regulation is a more effective way to satisfy all parties. For most types of risk, insurers are incentivized to develop and quickly iterate on core safety measures. Those risks can then be codified into fewer, simpler pieces of regulation once proven (e.g. mandating airbags). Market-based governance prices in risk changes in real time: insurance rates adjust monthly based on new data, enabling markets to clear the fog.
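As a rough sketch of how such a monthly adjustment might work, the toy Python example below uses a credibility-weighted rate update; the formula choice and every number are ours for illustration, not any actual insurer's method. As claims data accumulates, the observed loss experience earns more weight and the premium converges toward the true cost of the risk.

```python
# Minimal sketch with illustrative numbers: blend a prior rate with observed
# losses, giving the data more weight (credibility) as claims accumulate.

def credibility_weighted_rate(prior: float, total_losses: float,
                              policy_months: int, n_claims: int,
                              full_credibility_claims: int = 50) -> float:
    """Blend the prior monthly rate with observed loss per policy-month."""
    observed = total_losses / policy_months if policy_months else 0.0
    z = min(1.0, (n_claims / full_credibility_claims) ** 0.5)  # credibility factor
    return (1 - z) * prior + z * observed

prior_rate = 1_000.0   # initial monthly premium per policy (hypothetical)
policies = 100         # insured deployments reporting each month (hypothetical)
monthly_claims = [[120_000.0], [90_000.0, 200_000.0], [150_000.0]]

total_losses, policy_months, n_claims = 0.0, 0, 0
for month, claims in enumerate(monthly_claims, start=1):
    total_losses += sum(claims)
    policy_months += policies
    n_claims += len(claims)
    rate = credibility_weighted_rate(prior_rate, total_losses, policy_months, n_claims)
    print(f"month {month}: adjusted premium ≈ ${rate:,.0f}")
```

No statute updates on that cadence; the price itself carries the new information about risk.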
Only the government can deal with certain national security risks (e.g. ensuring the international proliferation of standards, securing critical infrastructure, and providing national defense). In these areas governments should lead, partnering with the market to support the development and deployment of technologies as needed.
More Secure Than Voluntary Commitments
The accelerationist approach correctly identifies that markets excel at experimentation, learning, and adaptation. Capital chases promising ideas and bad products disappear over time (e.g. FTX crashed, while Coinbase thrives). However, the nascent AI markets suffer from market failures that prevent secure-by-default outcomes. Misaligned incentives and the speed of progress mean companies do not face the consequences of cutting corners, while customers and investors lack the information to accurately assess security. There is a missing market to address these challenges:
Figure 3: Summary of why voluntary commitments are insufficient for secure AI progress
The Flywheel Is Already Emerging
Established insurers like Munich Re (est. 1880) have teams dedicated to addressing generative AI risks. Cyber insurance companies like Coalition and Resilience (both valued at $1B+) have proven how to bundle insurance with deep technical expertise. Organizations like METR, Transluce, Haize Labs, and Virtue AI are pushing the technical evaluation frontier. AI labs have coalesced around the “Frontier AI Safety Commitments” and share information and best practices through the Frontier Model Forum, while NIST has published its AI Risk Management Framework.
At the same time, an intellectual ecosystem is emerging. Jack Clark and Gillian Hadfield proposed regulatory markets; more recently, Hadfield has explored the role of insurance in regulatory markets; Dean Ball has suggested private AI governance with audits; and Miles Brundage has written about how the triad of insurance, standards, and audits can align incentives.
25 Immediate Actions to Accelerate the Flywheel
Below are 25 actions required in the coming years. Most can be led by private industry. The actions focus on what we will eventually need, but we can get started with much less. The place to start is insuring the near-term harms that already have clear liability, or where contractual indemnity can be established: in the case of agents, for example, hallucinations, IP infringement, bias, and harmful outputs. Insuring these risks with AI-specific insurance will incentivize data collection across risk types, research into standards, and adherence to best practices by developers. Insuring million-dollar risks will pave the way to insuring the billion-dollar risks.
Figure 4: 25 Immediate Actions to Accelerate the Flywheel
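As a concrete, entirely hypothetical illustration of that starting point, the sketch below prices an AI-agent policy from per-risk frequency and severity estimates for the harm types named above. The figures are invented; in practice they are exactly the data the flywheel exists to collect.

```python
# Hypothetical sketch: expected-loss pricing for an AI-agent policy covering
# near-term harms. All frequencies, severities, and loadings are made up.

from dataclasses import dataclass

@dataclass
class RiskLine:
    name: str
    annual_frequency: float  # expected incidents per insured deployment per year
    avg_severity: float      # expected cost per incident, in USD

    @property
    def expected_loss(self) -> float:
        return self.annual_frequency * self.avg_severity

lines = [
    RiskLine("hallucination",   annual_frequency=0.50, avg_severity=20_000),
    RiskLine("IP infringement", annual_frequency=0.05, avg_severity=250_000),
    RiskLine("bias",            annual_frequency=0.02, avg_severity=500_000),
    RiskLine("harmful outputs", annual_frequency=0.10, avg_severity=100_000),
]

pure_premium = sum(line.expected_loss for line in lines)
loading = 1.4  # margin for expenses, uncertainty, and profit (hypothetical)

for line in lines:
    print(f"{line.name:<16} expected loss ≈ ${line.expected_loss:>9,.0f}/year")
print(f"indicative annual premium ≈ ${pure_premium * loading:,.0f}")
```

Better data on any one line tightens the whole price, which is why underwriting the million-dollar risks builds the track record needed for the billion-dollar ones.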
Building The Movement
This is merely a starting point. The fog around AI’s trajectory demands that we experiment with incentives quickly, fail, learn, and adapt. As the rapid advances in AI research and application development show, the stakes have never been higher and the timelines never more compressed. Now is the time to act.
Applying the incentive flywheel to underwrite secure AI progress needs the technologist’s ingenuity, the actuary’s care, the business leader’s pragmatism, the economist’s incentive analysis, the legal scholar’s historical grounding, and the researcher’s willingness to explore unusual futures.
The authors are building the incentive flywheel right now. If you are interested in contributing, reach out at rk@aiuc.com.
Footnotes
1. Table 2 below outlines examples across fire risk, car safety and AI.
5. Sean Heelan used OpenAI’s o3 model to find a zero day in the Linux Kernel’s SMB implementation, https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
9. For example, the EU is considering pausing its flagship AI Act before it has even come into effect. https://www.dlapiper.com/en-gb/insights/publications/ai-outlook/2025/the-european-commission-considers-pause-on-ai-act-entry-into-application
11. Air Canada’s customer service chatbot hallucinated its refund policy. Courts found that companies are responsible for the promises their AI agents make: https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
12
19. Examples: Microsoft shut down its AI chatbot Tay in 2016 after it spewed racist and Nazi ideology (link); Google’s Gemini outputted photos of people of color in Nazi uniforms in 2024 (link); OpenAI rolled back an overly sycophantic version of ChatGPT in April 2025 (link); Google committed to publishing safety papers for significant AI model releases, but shipped Gemini 2.5 Pro without the promised safety documentation (link).
29. Sandbagging refers to AI systems deliberately changing behavior when they know they are being evaluated.