Research and Experimentation at Optimism

Why We Experiment: Building a Culture of Experimentation

At Optimism, we’re committed to a bold vision: Build an equitable internet, where ownership and decision-making power are decentralized across developers, users, and creators. We’ve realized that if we want to achieve this goal and pioneer a new model of digital democratic governance, we need to understand what works and what doesn’t. And just like clinical drug trials or impact evaluations in development economics, running controlled experiments is how we truly learn about cause and effect.

Designing a successful decentralized governance system is uncharted territory, so there’s no shortage of open questions about cause and effect that we need to understand. For instance: Do delegation reward programs improve delegation? Do prediction markets make better decisions than councils? Do veto powers increase legitimacy? Do various voting mechanisms decrease collusion? Does deliberation increase consensus? Do airdrops increase engagement? To name just a few.

With the amount of talent and energy across the Collective, there’s also no shortage of interesting ideas and initiatives to implement. In Optimism’s early days, tackling open design questions sometimes involved a less scientific, trial-and-error approach: for example, trying multiple things at once with no clear way to measure impact beyond anecdotal feedback. We’ve always been committed to taking an iterative approach to learning and governance design, but we’ve realized along the way that we need a more rigorous, data-driven approach to truly understand how to build the best system.

This document provides an overview of Optimism’s approach to research and experimentation, highlighting (1) our experimental design principles, (2) our research prioritization framework, and (3) some examples of ongoing experiments as well as other important non-experimental research topics we’re working on.

How We Experiment: Principles for Designing Experiments

Below are the key principles guiding our approach to experimental design. Each principle reflects our goal of taking a thoughtful, data-driven approach as we iteratively design a resilient governance system.

Principles for Designing Experiments

A note on the principle that randomization = causal learning: randomly assigning participants to treatment ensures that other characteristics that might affect the outcome are balanced evenly between the treatment and control groups. This removes bias that would otherwise confound the results, so whenever possible we randomly assign participants to treatment and control groups in our experiments.
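
To make this concrete, here is a minimal simulation sketch (in Python, with made-up numbers and a hypothetical “prior activity” confounder) showing how random assignment balances a confounder so that a simple difference in means recovers the true effect:

```python
# Illustrative simulation: random assignment balances confounders.
# All numbers here are made up for demonstration purposes.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# A confounder (e.g., prior governance activity) that affects the outcome.
prior_activity = rng.normal(loc=0.0, scale=1.0, size=n)

# Random assignment: treatment is independent of the confounder.
treated = rng.random(n) < 0.5

# Outcome = true treatment effect (0.3) + confounder effect + noise.
outcome = 0.3 * treated + 0.5 * prior_activity + rng.normal(0, 1, n)

# Balance check: the confounder's mean is ~equal across groups...
print(prior_activity[treated].mean(), prior_activity[~treated].mean())

# ...so a simple difference in means recovers the true effect (~0.3).
print(outcome[treated].mean() - outcome[~treated].mean())
```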

In practice, though, it’s sometimes impractical or even unethical to randomly assign participants to an intervention. If this is the case and we still want to understand cause and effect, we can leverage a quasi-experiment to evaluate the effects of an intervention even without random assignment.

Examples of quasi-experimental approaches to teasing out causal effects include:

  • Natural experiments, where an external event or policy change assigns an intervention in an as-if-random way
  • Difference-in-differences designs, which compare changes over time between groups that did and did not receive an intervention
  • Regression discontinuity designs, which compare participants just above and just below an eligibility cutoff
  • Instrumental variables, matching, and synthetic control methods
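
To illustrate one of these, here is a minimal difference-in-differences sketch on simulated data; the groups, numbers, and effect sizes are assumptions for demonstration only, and the design rests on the parallel-trends assumption (both groups would have trended alike absent the intervention):

```python
# Illustrative difference-in-differences on simulated panel data.
# Group labels and numbers are hypothetical, for demonstration only.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Two pre-existing groups (no random assignment): adopters vs. non-adopters.
adopter = rng.random(n) < 0.4

# Baseline outcomes differ between groups (selection, not treatment).
pre = 1.0 + 0.8 * adopter + rng.normal(0, 1, n)

# Post-period: a common time trend (+0.5) plus a true effect (+0.3)
# that applies only to adopters.
post = pre + 0.5 + 0.3 * adopter + rng.normal(0, 0.2, n)

# A naive post-period comparison is biased by the baseline difference...
print(post[adopter].mean() - post[~adopter].mean())  # ~1.1, not 0.3

# ...but differencing out each group's own baseline and the shared time
# trend (the non-adopters' change) recovers the true effect.
did = (post[adopter] - pre[adopter]).mean() - (post[~adopter] - pre[~adopter]).mean()
print(did)  # ~0.3
```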

If teasing out causation via a quasi-experiment is also not possible, then we simply interpret accordingly (i.e., inferring correlation rather than causation).

When We Experiment: Prioritization Framework

As we’ve discussed above, experiments are well suited to a specific type of question (i.e., questions about cause and effect), though it’s sometimes impractical to experiment with human behavior. Experiments also take resources and time to execute well. With this in mind, here’s how we think about when to experiment (a small illustrative sketch of this decision flow follows the list below):

Should this be an experiment?

  1. Is this mission-critical?
    1. If yes (the research question concerns existential parameters), we want to experiment, provided the next two answers are also “yes”
    2. If no:
      1. If the research question is of medium importance, we take a trial-and-error approach and make sure to have clear outcome measurements
      2. If the research question is of low importance, we go ahead and ship
  2. Is this causal?
    1. If yes (we are testing a hypothesis about cause and effect), we want to experiment, provided the next answer is also “yes”
    2. If no, we use different research tools for non-causal mission-critical research questions (e.g., deep research or data analysis), as described in a later section
  3. Is this feasible?
    1. If yes (it makes sense to run an experiment given practical constraints), we design an experiment!
    2. If no, but this is a mission-critical question about cause and effect, we either:
      1. Redefine the research question (tackle a piece of the topic that lends itself to behavioral experimentation), OR
      2. Redefine the research method (answer the question via non-experimental methods such as running simulations, analyzing existing data, collecting user research, or conducting deep research workstreams)
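
Here is that sketch: a purely illustrative encoding of the decision flow as a small Python function. The labels and return values are our own shorthand, not an actual Optimism tool:

```python
# Hypothetical encoding of the prioritization flow above; labels and
# return values are illustrative shorthand, not an official tool.
from enum import Enum

class Approach(Enum):
    EXPERIMENT = "design a controlled experiment"
    TRIAL_AND_ERROR = "trial and error with clear outcome measurements"
    SHIP = "just ship it"
    OTHER_RESEARCH = "non-causal tools (deep research, data analysis)"
    REDEFINE = "redefine the question or the method"

def choose_approach(importance: str, causal: bool, feasible: bool) -> Approach:
    """Map the three questions in the framework to a research approach.

    importance: "high" (mission-critical), "medium", or "low".
    """
    if importance == "low":
        return Approach.SHIP
    if importance == "medium":
        return Approach.TRIAL_AND_ERROR
    # Mission-critical questions:
    if not causal:
        return Approach.OTHER_RESEARCH
    if not feasible:
        return Approach.REDEFINE
    return Approach.EXPERIMENT

print(choose_approach("high", causal=True, feasible=True))   # Approach.EXPERIMENT
print(choose_approach("high", causal=True, feasible=False))  # Approach.REDEFINE
```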

What We Experiment: Ongoing Studies at Optimism

We’ll continue to update this section as we analyze and publish ongoing studies. Some questions we’re currently studying experimentally include:

  • Does a deliberative process increase informed decision-making, social trust, or consensus on a contested topic?
  • Does a sample of guest voters allocate resources differently than web-of-trust voters? What is the relationship between social graph connections, vote clustering, and survey data on self-dealing and collusion?
  • Does civic duty, system security, or economic self-interest motivate participation in governance?
    • Intervention underway
  • Do Airdrop 5 recipients exhibit a higher retention rate than non-participants? Does receiving the delegation bonus increase the median delegation time compared to non-recipients?
    • Full analysis coming soon
  • Are prediction markets a more accurate mechanism for capital allocation decisions than the council structure?

When We Don’t Experiment: Other Non-Experimental Research Is Important, Too

While experiments let us answer causal questions without confounding, there is also a significant amount of important non-causal research we need in order to design the best governance system. Fortunately for us, many of these non-experimental studies are collaborations with very smart research partners. And these techniques can often lay the groundwork for further experimental research.

Some of these non-experimental approaches (and specific examples) include:

| Research approach | Ongoing study (selected examples) |
| --- | --- |
| Deep “desk research” workstreams | Designing a system with dynamic veto designs |
| Modeling & simulations | Evaluating Voting Design Tradeoffs for Retro Funding Mission Request |
| Network analysis | Social graph data analysis (Github, Twitter, and Farcaster) across the Collective; Measuring the Concentration of Power in the Collective Mission Request |
| Performance tracking | OP Labs data team’s OP Superchain Health dashboard |
| Recurring survey data | Badgeholder post-voting survey; Collective Feedback Commission participant survey |
| Voting behavior analysis | Analysis of Retro Funding vote clustering; Analysis of Retro Funding capital allocation distributions and growth grants |
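
To give a flavor of the last row, here is a minimal, purely illustrative vote-clustering sketch on randomly generated ballots; the ballot matrix, distance metric, and clustering threshold are assumptions for demonstration, not Optimism’s actual analysis pipeline:

```python
# Illustrative vote-clustering sketch on a hypothetical voter x project
# ballot matrix; the data here is randomly generated, not real votes.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Rows = voters, columns = projects, values = share of ballot allocated.
ballots = rng.dirichlet(np.ones(20), size=100)  # 100 voters, 20 projects

# Cosine distance between ballots: near 0 means near-identical allocations.
dist = pdist(ballots, metric="cosine")

# Agglomerative clustering; tight clusters can flag coordinated voting
# blocs worth a closer (qualitative) look.
clusters = fcluster(linkage(dist, method="average"), t=0.3, criterion="distance")
for c in np.unique(clusters):
    size = (clusters == c).sum()
    if size > 1:
        print(f"cluster {c}: {size} voters with similar allocations")
```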

Does any of this sound interesting? If you’d like to get involved, please visit our Grants page for details on how to get a grant, including links to open RFPs.