Sigmaverse Update: Towards a More Transparent Ergo Ecosystem
Summary
Currently, applications within the Ergo ecosystem that share similar missions, such as Bene and MewFund, operate under radically different trust models. Bene is a trustless and P2P protocol, whereas MewFund is based on a centralized system that requires trust in its developers as intermediaries. This fundamental difference is not apparent to the average user, creating a potential risk.
The current practice of including “Know Your Assumptions” (KyA) pop-ups in applications is insufficient, as it suffers from potential developer bias and causes information overload fatigue for the user.
This proposal advocates for revitalizing Sigmaverse, under the stewardship of the Ergo Foundation, to become a neutral and community-managed source of truth. Through a system of public Pull Requests (PRs) and a visual quality standard with icons (the Sigmaverse Quality Standard), we can offer users a quick and reliable way to understand the trust assumptions of each application, fostering a more transparent and secure ecosystem.
Context
I am one of the developers of Bene, a fundraising campaign platform from Stability Nexus, an organization known for protocols like Djed (SigmaUSD) and Gluon (GluonGold).
A brief explanation of contribution campaigns: In a typical model, someone creates a campaign by defining a base token (e.g., SigUSD) and a contribution token. Those who wish to support the project send the base token and, in return, receive the contribution token at a fixed exchange rate. There is usually a minimum funding goal; if it is not reached by a deadline, contributors get their funds back.
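The mechanics above can be sketched in a few lines of Python. This is a minimal illustration of the contribution-campaign model as described (fixed exchange rate, minimum goal, refund on a missed deadline), not the actual logic of Bene or MewFund; all names and types are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    """Illustrative model of a contribution campaign (all names hypothetical)."""
    base_token: str          # e.g. "SigUSD"
    contribution_token: str  # token handed to supporters
    rate: float              # contribution tokens issued per unit of base token
    min_goal: float          # minimum amount of base token for the campaign to succeed
    deadline: int            # block height (or timestamp) after which refunds apply
    raised: float = 0.0
    contributions: dict = field(default_factory=dict)

    def contribute(self, sender: str, amount: float) -> float:
        """Record a contribution and return the contribution tokens owed."""
        self.contributions[sender] = self.contributions.get(sender, 0.0) + amount
        self.raised += amount
        return amount * self.rate

    def refunds(self, now: int) -> dict:
        """If the deadline passed and the goal was missed, every contributor is refunded."""
        if now >= self.deadline and self.raised < self.min_goal:
            return dict(self.contributions)
        return {}
```

In Bene this kind of logic lives in an on-chain script; in MewFund the equivalent bookkeeping is performed off-chain by the Mew team, which is exactly the difference in trust models discussed below.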
Mew Finance, one of the most used applications in the ecosystem, launched MewFund shortly before we launched Bene. It is a similar application but with more features and functionality. After almost a year, I have investigated how MewFund works by exploring the traceability of a fundraising campaign and speaking with its lead developers, who very kindly answered all my questions.
Following this conversation, I have confirmed that MewFund does not have any type of smart contract in its architecture. The collected funds are sent to a P2PK address belonging to the Mew developers, while the campaign creator holds custody of the contribution tokens until they need to be sent. The Mew team acts as a trusted intermediary throughout the process, in addition to providing tools that improve usability for both contributors and campaign creators.
Furthermore, being a centralized system, there is a risk that the developers could block access to the funds or the platform at any time during a campaign, either due to an unintentional error or a malicious action in their own self-interest.
Bene, on the other hand, is a much more limited protocol in terms of features, but it requires no server, and each contribution campaign is controlled by an on-chain script (smart contract), which allows for zero trust in the developers. It is a purely P2P application that only connects to an Ergo node and the explorer.
It is clear that although these applications appear to serve the same purpose, they are fundamentally different and have distinct pros and cons:
- MewFund requires trusting the developers and the campaign creator (even though the traceability of funds can be followed), but it offers a wider range of options due to its low implementation cost.
- Bene requires no trust in the developers or the campaign creator (as they have no control over the funds at any point), but its range of features is limited due to the high cost of covering all assumptions in trustless scenarios.
Depending on the use case, one application may be more suitable than the other. A similar case is the difference between algorithmic and fiat-backed stablecoins, or between trustless and centralized bridges.
Within the community, there has been an effort to add a “Know Your Assumptions” (KyA) message at the start of applications, usually as a pop-up that the user must accept. These texts should clearly state the assumptions the user agrees to when using the application. MewFund does not currently have a KyA, but the developers told me they plan to add one somewhere in their application.
The Problem
On-application KyAs, while a step in the right direction, present certain issues:
- Potential Bias: They are presented by the application’s own developer, who may not be incentivized to display all of the application’s assumptions with full transparency and associated risks.
- Cognitive Overload: Exhaustively detailing all assumptions requires presenting too much information to the user. This creates a counterproductive effect known as information overload or cognitive fatigue, as a wall of text discourages reading and, therefore, genuine understanding of the risks.
Proposed Solution
A simple option that can solve this problem in the short term is to “dust off” Sigmaverse, a somewhat outdated website that lists the ecosystem’s applications and tools. Sigmaverse is a project maintained and hosted, as I understand it, by the Ergo Foundation or by Sigmanauts—institutions reputable enough to carry out its mission fairly (a completely trustless solution would be excessively complex to implement in the short term).
My proposal is to update Sigmaverse to allow for the following:
- To solve developer bias (Problem 1):
  - Become a community-managed source of truth: Sigmaverse should contain the trust assumptions for each application, allowing anyone—not just the project’s developers—to open a Pull Request (PR) on its repository to propose updates.
  - Foster peer review: Invite all users and developers in the ecosystem to contribute to these KyAs. This creates a system where the community can verify that the assumptions are correct and complete. It allows other developers to investigate third-party solutions, judge their assumptions, and, if necessary, initiate a technical discussion through the GitHub PR itself to reach a consensus.
- To solve information overload (Problem 2):
  - Create the “Sigmaverse Quality Standard”: Based on other quality standards, we can agree on certain key characteristics associated with visual icons. These labels would be displayed prominently, facilitating a quick understanding of the most important assumptions.
Sigmaverse Quality Standard - Specification (DRAFT)
Core Principle: Action-Centric Analysis
The standard operates under the Principle of Action-Centric Analysis. A system is broken down into its fundamental actions (e.g., “create proposal,” “claim funds”). Each action is analyzed across two dimensions: the Trust Category of the process that authorizes it and the Access Category required to execute it, each with a numerical level.
Trust Categories
Ranked numerically, where a lower level indicates greater decentralization and less reliance on external actors.
- Level 1: Direct Contract Validation (Trust-Minimized)
  The action’s validity is exclusively determined by the immutable rules encoded in the smart contract script. Permission is entirely contained within the verifiable logic of the contract itself. Any actor who can construct a transaction that satisfies these rules can execute the action, without needing the intervention or permission of an external mediator.
- Level 2: Action Mediated by a Crypto-economic Actor (Crypto-economic Security)
  The execution or validity of the action depends on the intervention of an external actor from a dynamic and permissionless set (e.g., oracles, keepers, validators). Confidence that these actors will behave honestly is based on explicit economic incentives, such as rewards for correct behavior or penalties (slashing) for malicious behavior.
- Level 3: Action Mediated by a Fiduciary Actor (Requires Fiduciary Trust)
  The execution or validity of the action depends on the intervention of an external actor from a static and permissioned set (e.g., the developer’s address, a multisig council, a governance committee). Trust is placed in the reputation, identity, or integrity of this specific group, as there are no direct crypto-economic mechanisms to guarantee their behavior.
Access Categories
Ranked numerically, where a lower level indicates greater user sovereignty.
- Level 1: Verifiable Artifact
  The action is executed via a software artifact (e.g., a desktop app, a command-line interface, a client-side web app) that the user downloads and runs in their own environment. The user has full control and does not depend on a third-party service to interact with the blockchain.
- Level 2: Centralized Service Dependency
  The action’s execution depends on a service hosted and operated by a third party (e.g., a project’s website, a centralized API). The availability and integrity of this service are necessary for the user to interact with the protocol.
Final Scores
For a quick summary, two final scores are calculated from the detailed analysis matrix of the application.
- Weakest Link Score: This score reflects the single greatest risk in the system, determined by the highest numerical level found in either the Trust or Access categories across the entire system.
- Average Risk Score: This score offers a holistic view of the system’s overall design, calculated as the average of all numerical levels assigned to all critical actions.
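The two scores can be computed mechanically from the action matrix. The sketch below is a hedged illustration of how I imagine the calculation, assuming each action is annotated with a `(trust_level, access_level)` pair as defined above; the exact aggregation rule would be settled in the standard itself.

```python
def quality_scores(actions: dict) -> tuple:
    """Compute the two summary scores from an action matrix.

    `actions` maps an action name to a (trust_level, access_level) pair,
    using the numeric levels defined above (lower = more trust-minimized).
    """
    levels = [level for pair in actions.values() for level in pair]
    weakest_link = max(levels)                  # the single greatest risk
    average_risk = sum(levels) / len(levels)    # holistic view of the design
    return weakest_link, average_risk

# Hypothetical example: claiming funds requires a fiduciary actor (Trust 3)
# through a centralized service (Access 2), while campaign creation is
# contract-validated (Trust 1) but still served from a hosted site (Access 2).
matrix = {
    "create campaign": (1, 2),
    "claim funds": (3, 2),
}
```

For this matrix, the Weakest Link Score would be 3 and the Average Risk Score 2.0, which matches the intent that one fiduciary dependency dominates the headline risk even when most actions are trust-minimized.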
This standard allows the average user to quickly identify an application’s key properties through scores and icons and, if interested, delve deeper by reading the detailed assumptions in a structured matrix.
It is important to emphasize that the information on Sigmaverse should be exclusively formal and technical, keeping any commercial or advertising commentary separate. The best way to proceed would be to establish very specific guidelines on what a KyA should and should not contain.
Proposed Structure for Sigmaverse Guides
To implement this proposal in an orderly and standardized manner, I would like to suggest a basic structure for each application’s guide on Sigmaverse. We can consider each application as a “brand” (name, logo, etc.) that encompasses a set of technical components.
For each application, the following should be documented:
- Category and Subcategories: This allows for grouping applications with similar missions (e.g., “DeFi > DEX,” “Crowdfunding,” “Tools”) to facilitate search and comparison.
- General Information: This would include the brand (name, logo), a clear description of its purpose, a basic user guide, and official access points (links to the website, GitHub, social media).
- Features: A list of the functionalities and services the application offers the user. This section focuses on “what it does.”
- Trust Assumptions: A transparent breakdown of the trust vectors and potential points of failure, aligned with the Sigmaverse Quality Standard. This is where questions are answered, such as: Who or what must the user trust? Which parts are centralized? What could go wrong? This analysis will culminate in the calculation of the application’s final scores: the Weakest Link Score (highlighting the biggest risk) and the Average Risk Score (reflecting the overall system design).
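To make the structure concrete, here is one way a single Sigmaverse entry could be expressed as structured data. This is purely a hypothetical schema sketch; every field name is an assumption of mine, not an existing Sigmaverse format, and the link values are placeholders.

```python
# Hypothetical Sigmaverse entry for a crowdfunding app (all field names illustrative).
entry = {
    "brand": {"name": "Bene", "logo": "bene.svg"},
    "category": ["Crowdfunding"],
    "general": {
        "description": "Trustless P2P fundraising campaigns on Ergo.",
        "links": {"website": "https://example.org", "github": "https://example.org"},
    },
    "features": [
        "fixed-rate contributions",
        "refunds on missed funding goal",
    ],
    "trust_assumptions": {
        # action name -> (trust_level, access_level) per the Quality Standard
        "create campaign": (1, 1),
        "contribute": (1, 1),
        "refund": (1, 1),
    },
}

# The summary scores fall out of the matrix directly:
levels = [level for pair in entry["trust_assumptions"].values() for level in pair]
weakest_link = max(levels)
average_risk = sum(levels) / len(levels)
```

A flat, declarative format like this (whether Python, JSON, or YAML in practice) is what makes the community PR workflow viable: a reviewer can diff an entry line by line and dispute a single level assignment without touching the rest of the guide.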
Following this structure, our previous example would be perfectly framed. Both MewFund and Bene would be in the same “Crowdfunding” category. Each would have its general information, and when it comes to comparison, a user could see clearly—almost like in a product comparison table—that MewFund excels in the “Features” section with its wide range of options, while Bene excels in the “Trust Assumptions” section thanks to its low (trust-minimized) levels under the Sigmaverse Quality Standard.
Final Words
To be clear, this is not a criticism of the ecosystem or the Mew team. I believe their contribution has been and will continue to be very positive. However, failing to maintain clear and accessible trust assumptions greatly disincentivizes developers who are striving to build truly decentralized solutions.
People reason with ideas, and many users come and will come to Ergo precisely because of the trust assumptions it offers: a P2P system, more secure thanks to its smart contracts, and a model that encourages stateless applications.
Users must be clear about these same assumptions within the ecosystem. Otherwise, the exploitation of one of these centralized trust vectors (for example, a security failure or abuse by an intermediary) would not only harm the affected users but would damage the reputation and brand of the entire Ergo ecosystem.