Conducted by Eugene Leventhal and Mashal Waqar
Timeframe: Late June to early September 2023
Interviews: 20 individuals across 13 organizations
Grant Programs: Active, Sunset, and Quadratic Funding Operators
Total Grants Issued: Over $500 million by programs including Algorand, Ethereum, NEAR, and Solana
Number of Grants: 5,900+
This report aims to demystify web3 grants programs, covering their evolution, types, and governance. It draws from both qualitative and quantitative data to provide a comprehensive view.
Aligning Intentions and Program Objectives
Mission-Driven Approach: Establish a clear mission statement for each grant program to guide decision-making.
Program Differentiation: Create specialized grant programs with distinct goals, such as an "Ecosystem Support Fund" for community projects and a "Growth Fund" for scaling initiatives.
Naming Conventions: Use clear and descriptive names for programs to avoid confusion, which in turn helps in selecting appropriate metrics and targeting the right audience.
Operations are Key
Dedicated Resources: Assign team members who are solely responsible for managing the program and deploying capital effectively.
Community Updates: Regularly update the community on grant progress, upcoming opportunities, and key performance indicators.
Experimental Approaches: Test different grant types and reporting mechanisms; the Stacks Foundation, for example, takes an explicitly experimental approach to both.
Exploring Grant Type Landscape
There are several types of grant approaches, and each can be optimized for a given objective.
Quadratic Funding: Effective for community-driven projects but requires strong marketing and community efforts to attract contributors (a minimal matching sketch follows this list).
Prospective Grants: These are open calls that can attract a large number of applicants but may require significant resources to review.
Request for Proposals (RFPs): These are targeted calls that require upfront due diligence but often result in high-quality projects.
Retrospective Grants: These are well suited to supporting ongoing or existing projects, but they may be counterproductive when projects need immediate support.
Research Grants: These require technical reviewers and a mindset shift to account for the more open-ended nature of research relative to engineering.
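For readers less familiar with the mechanics behind quadratic funding, the following is a minimal sketch of the matching math in Python. The function name, project names, contribution amounts, and pool size are illustrative only, and production rounds (Gitcoin’s included) layer on refinements such as matching caps and sybil resistance that this sketch omits.

```python
import math

def quadratic_match(contributions_by_project, matching_pool):
    """Split a matching pool in proportion to each project's
    (sum of square roots of contributions) squared -- one common
    formulation of the quadratic funding formula."""
    weights = {
        project: sum(math.sqrt(c) for c in contributions) ** 2
        for project, contributions in contributions_by_project.items()
    }
    total = sum(weights.values())
    return {project: matching_pool * w / total for project, w in weights.items()}

# Illustrative numbers only: ten $1 donors vs. one $10 donor.
contributions = {
    "project_a": [1] * 10,   # broad community support
    "project_b": [10],       # single large contributor
}
print(quadratic_match(contributions, matching_pool=100))
# project_a receives roughly $91 of the match despite both projects
# raising the same $10 total -- breadth of participation drives the match.
```

The example also shows why the marketing and community effort noted above matters: the formula rewards breadth of participation, so a round without many small contributors loses most of its advantage.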
Benefits and Dangers of Rubrics
Pros: A well-defined rubric can streamline the assessment process, making it easier for reviewers (a scoring sketch follows below).
Cons: Over-reliance on a rubric can narrow the scope of projects and may not align with the program's broader mission.
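To make the streamlining benefit concrete, here is a minimal sketch of a weighted rubric scorer; the criteria, weights, and 1-5 scale are hypothetical and not drawn from any program interviewed for this report.

```python
# Hypothetical weighted rubric; criteria and weights are illustrative only.
RUBRIC = {
    "mission_alignment": 0.30,
    "team_track_record": 0.25,
    "feasibility": 0.25,
    "ecosystem_impact": 0.20,
}

def score_application(scores, rubric=RUBRIC):
    """Combine per-criterion reviewer scores (1-5) into a weighted total,
    failing loudly if a criterion is skipped so reviews stay comparable."""
    missing = set(rubric) - set(scores)
    if missing:
        raise ValueError(f"missing criteria: {sorted(missing)}")
    return sum(scores[criterion] * weight for criterion, weight in rubric.items())

# Example review: strong alignment, weaker feasibility.
print(score_application({
    "mission_alignment": 5,
    "team_track_record": 4,
    "feasibility": 2,
    "ecosystem_impact": 4,
}))  # 3.8 out of 5
```

The danger noted above applies directly: a fixed list of criteria like this is exactly what can narrow the scope of fundable projects if it is treated as a substitute for the program's broader mission.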
Role of Transparency
Varied Approaches: Programs like Aave and Uniswap offer complete transparency, providing detailed accounting of funded projects. This approach is often lauded for its openness but comes with its own challenges, such as influencing future submissions. Programs that use Questbook’s tools, such as Compound (whose program Questbook also administered) and TON, make all proposals visible via the Questbook app. By contrast, larger programs like the Ethereum Foundation's ESP and the Solana Foundation have adopted a more selective approach. They have found that too much transparency can lead to unintended consequences, such as the community misinterpreting data and conforming to perceived norms around program priorities. Similarly, Gitcoin shifted from an open approach with reviewers’ comments to a more closed approach, to protect reviewers from backlash and personal attacks.
Community Impact: Transparency is widely seen as essential to giving the community meaningful input into the grant program.
Grant Governance
Feedback loops, decision-making, and accountability are key factors in governance. Establish mechanisms for ongoing feedback from both grantees and the community. Use smart contracts or on-chain tools to ensure transparent and accountable fund deployment.
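As one illustration of the on-chain tooling point, the sketch below models milestone-gated disbursement in Python; it is an assumption-heavy off-chain analogue of how an escrow-style contract might gate funds, not a description of any specific program's setup, and the names and amounts are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    description: str
    amount: float
    approved: bool = False   # flipped by whoever is accountable for review
    paid: bool = False

@dataclass
class Grant:
    grantee: str
    milestones: list[Milestone] = field(default_factory=list)

    def release_next_payment(self) -> float:
        """Release funds only for milestones that reviewers have approved,
        mirroring how an escrow-style contract might gate deployment."""
        for m in self.milestones:
            if m.approved and not m.paid:
                m.paid = True
                return m.amount
        return 0.0

grant = Grant("example_team", [Milestone("MVP shipped", 10_000),
                               Milestone("Audit completed", 15_000)])
grant.milestones[0].approved = True
print(grant.release_next_payment())  # 10000 released
print(grant.release_next_payment())  # 0.0 -- second milestone not yet approved
```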
When planning grants governance, think about:
Goals
Decision-making structures
Transparency and accountability to the community
Issue resolution
Tools/mechanisms for review and deployment
Grant categories
Feedback and accountability loops
All in service of empowering experts while ensuring accountability to the community.
Inter-Program Collaborations
Inter-program collaboration is rare but necessary for tackling complex issues. Its absence prevents progress on some of web3’s biggest problems, especially deeply complex and technical ones that would benefit from large-scale efforts.
The lack of collaboration also limits the ability to build networks of reviewers that can serve multiple ecosystems, rather than each ecosystem competing for the same review expertise.
Need for Consistency
Programs can collectively benefit from greater consistency in grant applications and in the data that programs share.
Grant programs differ notably in their applications, even for the portions that could be standardized (e.g., project name, team members, project description). Standardization could enable innovation in grant applications: for example, an open-source application layer that integrates across tooling options. Such a tool would help create a reputation layer for basic grantee information across programs, which in turn could help reduce grant farming.
Another area for consistency is grant data sharing and metadata. Standardizing this would create a shared conception of data transparency for grant programs and make it easier to analyze grants and grant programs. A metadata standard would also make it easier to maintain a verifiable database of the number of grants funded and the total amount issued, along with any other data most programs feel comfortable sharing.
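To make the metadata idea concrete, here is a minimal sketch of what a shared grant record might contain; the field names and structure are hypothetical rather than an existing standard adopted by any of the programs discussed here.

```python
from dataclasses import dataclass, field

# Hypothetical shared schema; field names are illustrative, not an adopted standard.
@dataclass
class GrantRecord:
    program: str                # issuing foundation, DAO, or program name
    project_name: str
    team_members: list[str]
    description: str
    amount_usd: float
    date_awarded: str           # ISO 8601 date, e.g. "2023-07-01"
    status: str = "active"      # e.g. active / completed / abandoned
    links: list[str] = field(default_factory=list)

def total_issued(records: list[GrantRecord]) -> float:
    """With a shared schema, cross-program totals become trivial to verify."""
    return sum(r.amount_usd for r in records)
```

Records like these, published by each program, would be enough to maintain the kind of verifiable cross-program database of grant counts and totals described above.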
Building Support Systems for Grantees
Maximizing grantee success requires building systems of support beyond funding alone.
This can include connecting grantees with each other, with other relevant communities, or with other relevant resources (including vetted service providers). Other examples of grantee support include grantee office hours, social events, or marketing support, to name a few. These support systems will differ depending on the goal of the grant program and the nature of the grantees. A good place to start is simply asking grantees what support they would have benefited from.
Conflicts of Interest and Accountability
Figuring out the right processes for disclosing conflicts of interest is important. This is challenging in web3, given the many anonymous contributors and the fact that conflict-of-interest disclosure is generally underdeveloped in the space. There have been some attempts at codifying such disclosures, but they are mostly self-reported with little actual accountability. We are unaware of any violations that led to more than a forum discussion or someone voluntarily resigning from a role. Given the decentralized nature of most programs and their parent organizations, it remains unclear what kind of legal recourse or non-legal processes would be followed in cases of clear disregard and violation.
It’s important to clearly state who holds whom accountable in the system. Even when relying on non-traditional legal options, having a team tasked with auditing, along with clear processes for when issues are managed internally within the organization versus escalated to a Kleros-style arbitration process, would go far toward providing more robust accountability.
Ensuring Fairness
More robust systems of checks and balances are needed among reviewing teams and individuals, particularly when the reviewers and grantees have existing relationships.
Familiarity bias may lead to the mistaken belief that a known grantee is automatically more legitimate or qualified. This poses a challenge as applicants with established profiles could be unfairly prioritized over newcomers who haven't yet had a chance to build a track record.
A rigorous and objective assessment system is essential, one that distinguishes between projects from first-time grantees and those from recurring grantees. Without such checks and balances, both the review process and the broader governance and review structures risk becoming biased. Accountability mechanisms must be in place to detect, rectify, and prevent such biases, ensuring fairness and meritocracy in grant allocations.
Impact Reporting
Impact reporting should be an integral part of the program, not an afterthought. Some programs exhibit a sense of fear around reporting, which runs counter to the transparency and growth ethos they communicate as values to their communities.
Develop a set of key performance indicators to measure the success and impact of grants. Introducing KPIs or regular metrics tracking can be one of the most effective ways to measure impact over the lifetime of a grants program. This can include pulling information from relevant experts and tapping the right people in the community (or the stakeholders most affected by the program’s impact).
Most programs lack status tracking by default, relying instead on voluntary reporting by grantees. Most programs also recognize that purely quantitative metrics do not capture the full scope of impact, so figuring out the right systems for capturing qualitative information is also important, if less clear-cut.
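A minimal sketch of what default status tracking alongside mixed quantitative and qualitative reporting could look like follows; the KPI names and status values are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass, field

# Illustrative per-grant impact record; KPI names and status values are
# hypothetical, chosen to show quantitative and qualitative tracking together.
@dataclass
class ImpactReport:
    grant_id: str
    status: str = "in_progress"  # tracked by default rather than opt-in
    kpis: dict[str, float] = field(default_factory=dict)
    qualitative_notes: list[str] = field(default_factory=list)

report = ImpactReport("grant-001")
report.kpis["monthly_active_developers"] = 12
report.qualitative_notes.append("Grantee onboarded two new ecosystem teams.")
```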
Need for Continuous Evolution and Reassessment
A continuous cycle of evolution and reassessment is required to ensure the program remains aligned and responsive to ecosystem needs and challenges. With periodic re-evaluation, it’s possible to identify emerging trends, address new challenges, and cater to the changing requirements of developers, projects, and the broader community.
Proactiveness and adaptability ensure meaningful support, foster genuine innovation, and pave the way for the sustainable growth of the ecosystem.
Eugene Leventhal is the Head of Operations and Partnerships at Metagov, a governance research nonprofit, and will step in as its Interim Executive Director as of October 1. Prior to joining Metagov, Eugene was the Executive Director of the Smart Contract Research Forum, a nonprofit project focused on spurring more conversation around web3 research. Eugene also supported what later became the Secure Blockchain Initiative at CMU, where he worked as a project manager for two years after finishing his policy master’s there, all of which came after seven years in professional services in the finance industry. He first got into the space in 2016 when he worked on eduDAO, a DAO meant to help schools and nonprofits crowdfund more transparently. Eugene is passionate about DAOs and governance as a means of pushing toward a more cooperatively rooted future.
Contact: Email: eugene@metagov.org | Twitter: @bbeats1
Mashal Waqar is the Head of Growth & Partnerships at Bankless Publishing (the publishing arm of BanklessDAO) and Managing Director at Milestone Ventures. Her recent projects in web3 include heading operations at a web3 venture studio, researching token models for Seed Club, QF and UNICEF research for Gitcoin, and serving as an affiliate researcher in the Ethereum Foundation’s Summer of Protocols. She initially entered the space by starting the Security Practices and Research Student Association (SPARSA)’s RIT Dubai chapter in 2015 and writing security newsletters while pen-testing fintech products at TPS (Transaction Processing Systems). After that, Mashal co-founded a global media company (The Tempest) and an accelerator program for early-stage female founders, and has co-authored a paper on the challenges they face. Since 2021, she’s been DAOing with Shefi, Protein, RADAR, and BanklessDAO. Mashal holds a B.S. in Computing Security with a minor in International Business from Rochester Institute of Technology (RIT). She’s a Forbes Middle East 30 Under 30 honoree and winner of the 19th WIL Economic Forum Young Leader of the Year award.
Contact: Email: mashal@milestoneventures.co | Twitter: @arlery