New Order

Computer Aided Governance

Computer Aided Governance is a methodology increasingly used in DAOs and other decentralized projects to improve the quality of decisions by incorporating evidence from data, analysis, and especially integrative "complex systems" simulation modeling.
In general, the types of evidence used during DAO governance decision making, ordered from most to least "value added", are: reports and analyses; integrative simulation models; supervised AI/ML and statistical models; clustering and unsupervised learning; data visualization; and raw data.
The figure below describes a prototypical Evidence-Based decision making process that incorporates Computer Aided Governance.
A Prototypical Evidence-Based Decision-Making Process
The New Order DAO will have a forum with subsections dedicated to a variety of topics, including governance-related discussions. Specifically, there will be a subsection dedicated to data science and modeling, where contributors source, share, and discuss evidence in support of decision making.
At the outset, imagine a proposal for a decision is before the DAO, such as deciding how to allocate resources. To support this decision, data is collected and analyzed with the help of simulation and other models, and the analysis is discussed by decision makers. As a consequence of the discussion, it may become apparent that more evidence is needed to arrive at an informed decision, in which case more data is collected, analyzed, and discussed.
If it is apparent from the discussion that there is enough evidence to form a rough consensus around the decision, then the process moves to a binding vote. Once the decision is actuated, the state of the DAO, the controlled system, and the environment changes, which generates new information that can lead to additional questions and decisions in the future.
Here we leave out routine data science details such as sourcing, cleaning, and transforming raw data into something more convenient for modeling and analysis. One clear extension, not discussed explicitly here, is that multiple decisions (and their processes) can be underway at once, for example when there are multiple matters for the DAO to decide at the same time.
In the future we anticipate that this loop will become increasingly automated for some types of operational decisions, such as real-time parameter adjustments. This would remove the human-mediated stages and replace voting with AI- or rules-based decision making, turning the process into a more typical control loop.

Working through a specific example

Soon after launching, the Open DeFi DAO will feature a governance discussion forum with a section dedicated to producing evidence for decision support, populated by a self-organized, data-oriented community of contributors.
Let’s say, for example, a sub-topic of the forum is related to providing evidence to support the decision of whether or not to incubate a new project involving an innovative new type of meta-vault.
Let’s say the forum topic related to this meta-vault decision contains several posts.
Two high-level reports are present, built on earlier, foundational analyses. The posted "Token Economics Report" presents and discusses results from earlier analyses and models posted on the forum. It adds value by bringing together and presenting the previous results comprehensively.
A selection of forum post titles is shown, with the “Token Economics Report” showing which previous posts it refers to and builds upon. The links are understood to be cryptographic hashes.
The “Report on New Vaults” is another top-level report that summarizes other evidence, although its constituent parts are not shown, to simplify the discussion.
In the spirit of Radical Transparency, the norms of the community require reports and analyses to reference their source material, whether the source is raw data or some other artifact resulting from earlier analysis. Cryptographic hashes are used when referring to earlier evidence, providing a chain of provenance back to the original sources.
The provenance of sources underlying the decision support evidence on the forum.
The figure above shows the provenance of data and analytics that are the foundation for the “Token Economics Report”. It shows that the evidence from the “Machine Learned Model”, “Data Visualization”, and “Market Simulation”, in turn, are based on evidence posted earlier to the forum including “Raw Market Data”, “Blockchain Data”, and “Behavioral Data”. The “Report on New Vaults” is also based on earlier analyses, models, and data, but these are not shown for simplicity here.
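To make the chain of provenance concrete, here is a minimal Python sketch of content-addressed evidence posts. The post names match those in the figure, but the hashing scheme itself (JSON payload, SHA-256, sorted parent hashes) is an illustrative assumption, not a specification of the forum's actual mechanism:

```python
import hashlib
import json

def post_hash(content: str, parent_hashes: list[str]) -> str:
    """Hash an evidence post together with the hashes of the posts it
    builds on, yielding a content-addressed provenance link."""
    payload = json.dumps({"content": content, "parents": sorted(parent_hashes)})
    return hashlib.sha256(payload.encode()).hexdigest()

# Raw sources have no parents.
raw_market = post_hash("Raw Market Data", [])
blockchain = post_hash("Blockchain Data", [])

# Derived evidence references its sources by hash, so any change to a
# source changes every downstream hash -- a verifiable chain of provenance.
market_sim = post_hash("Market Simulation", [raw_market, blockchain])
report = post_hash("Token Economics Report", [market_sim])
```

Because each hash commits to its parents' hashes, tampering with any upstream artifact invalidates every report built on it.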
Decision support evidence as a Directed Acyclic Graph (DAG) of nodes
We can form a Directed Acyclic Graph (DAG) $\mathcal{H}$ out of the evidence nodes, which we will refer to as $h_i \in H$, where $H$ is the set of nodes in $\mathcal{H}$. A directed edge between two nodes denotes that the node at the head of the edge refers to information contained in the node at its tail. In the figure above, one node incorporates information contained within nodes $h_1, h_2$.
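As a minimal sketch, the evidence DAG can be represented by mapping each node to the earlier evidence it builds on, together with a check that the "builds on" relation is in fact acyclic. The node names here are illustrative, not taken from the forum example:

```python
# Each node h_i lists its predecessors B_i (the evidence it builds on).
edges = {
    "h1": [],            # raw data (a source)
    "h2": [],            # raw data (a source)
    "h3": ["h1", "h2"],  # analysis incorporating h1 and h2
    "h4": ["h3"],        # report building on h3
}

def is_acyclic(graph: dict[str, list[str]]) -> bool:
    """Depth-first check that following 'builds on' edges never loops."""
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False  # back-edge found -> cycle
        visiting.add(node)
        ok = all(visit(p) for p in graph[node])
        visiting.discard(node)
        done.add(node)
        return ok
    return all(visit(n) for n in graph)
```

Acyclicity is guaranteed in practice because a post can only reference evidence that already exists at the time it is hashed.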
Labeling the Sinks, Sources, Successor, and Predecessor nodes for a node in an arbitrary DAG
Let $S^+$ be the set of sinks in $\mathcal{H}$, which satisfy
\forall h_s \in S^+ : deg^-(h_s) = 0
Furthermore, let $S^-$ be the set of sources, which satisfy
\forall h_s \in S^- : deg^+(h_s) = 0
Then, for any
H \neq \emptyset
|S^+| \ge 1
|S^-| \ge 1
that is, there is at least one source node and at least one sink node for any non-empty set of evidence nodes. Note that $\mathcal{H}$ may or may not be a rooted graph.
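Under the convention that each node lists the earlier evidence it cites, sources and sinks can be identified directly: a source cites nothing (raw data), while a sink is cited by nothing (e.g. a top-level report). This is a hedged sketch with illustrative node names; the degree conditions are mapped onto the citation lists rather than computed from explicit edge objects:

```python
def sources_and_sinks(builds_on: dict[str, list[str]]):
    """builds_on maps each node to the earlier evidence it cites.
    Sources cite nothing; sinks are cited by no other node."""
    cited = {p for parents in builds_on.values() for p in parents}
    sources = {n for n, parents in builds_on.items() if not parents}
    sinks = {n for n in builds_on if n not in cited}
    return sources, sinks

evidence = {"h1": [], "h2": [], "h3": ["h1", "h2"], "h4": ["h3"]}
srcs, snks = sources_and_sinks(evidence)
```

Any non-empty citation map yields at least one source and one sink, matching the property stated above.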

Utility of Posted Evidence to the Decision Making Process

Let $\mathbb{W}$ denote a utility function that maps the evidence $H$ to a utility $\mathbb{W}(H)$. In general, $\mathbb{W}$ could take many forms, and could be assigned by a subset of (or all) DAO participants, a process which could itself be modeled as a dynamic system of interacting Economic Agents.
Assume that the utility function has the additive property
\mathbb{W}(H) = \sum_{i=1}^{n} \mathbb{W}(h_i)
where $n = |H|$.

The Changing State of Available Evidence

The state, or set of evidence, in $H$ evolves through the addition of new evidence $h_j$, transitioning to a new state $H^+$. Initially the forum begins with no evidence,
H = \emptyset
and evidence is added over time:
H^+ = H \cup \{h_j\} : h_j \notin H
The new evidence $h_j$ can be primary data, i.e. a source in the graph-theory sense,
h_j \in S^-
or it can be the result of a complex transformation of a subset of the existing evidence $H_j \subseteq H$:
h_j = F_j(H_j) : H_j \subseteq H
where $F_j$ is an arbitrary function, sequence of logical operations, or nonlinear algorithm, including simulations, machine-learned transformations, or a plain-language argument.
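The state transition $H^+ = H \cup \{h_j\}$ can be sketched as follows, with a transformation $F_j$ applied to a subset of the existing evidence. The evidence names, values, and transformations are purely illustrative:

```python
def add_evidence(H: dict, name: str, transform, inputs: list[str]) -> dict:
    """State transition H+ = H ∪ {h_j}. New evidence is either primary
    data (inputs == []) or the result h_j = F_j(H_j) of transforming a
    subset H_j of the existing evidence."""
    assert name not in H, "h_j must not already be in H"
    assert all(i in H for i in inputs), "H_j must be a subset of H"
    H_plus = dict(H)  # H itself is left unchanged
    H_plus[name] = transform([H[i] for i in inputs])
    return H_plus

H = {}  # initially the forum holds no evidence
H = add_evidence(H, "raw_prices", lambda _: [100, 102, 101], [])
H = add_evidence(H, "avg_price", lambda xs: sum(xs[0]) / len(xs[0]), ["raw_prices"])
```

Returning a new dictionary rather than mutating in place mirrors the paper's framing of evidence accumulation as a sequence of states.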

Estimating Utility Contributions

We want to estimate $r_i$, the utility contribution of each piece of evidence $h_i \in H$, so that ultimately rewards can flow to the individual providers of evidence in proportion to their contribution. Let $B_i$ be the set of predecessor nodes to $h_i$, and let $J_i$ be the set of successor nodes to $h_i$.

Bottom-Up Perspective

The utility of an evidence node is the utility of its supporting evidence plus the "lift", or additional evidence, generated by performing the transformation $F_i$. Thus, we argue that:
\mathbb{W}(h_i) = \mathbb{W}(B_i) + r_i
where $r_i$ is the value, or utility, added by performing $F_i$. Then $r_i$ can be thought of as the "credit" attributed to $h_i$, and by association to the account that posted it.
We note that when
\mathbb{W}(h_i) = \mathbb{W}(B_i)
this implies that
r_i = 0
which corresponds to the case where evidence $h_i$ references sources but does not deliver any additional useful insight over the predecessor evidence. We assume that such evidence will not attract any successor evidence, and that the credit assigned directly by forum users will tend towards 0.
If we assume that the contribution of $F_i$ is approximately proportional to the utility of its source evidence:
r_i \approx \mathbb{W}(B_i)
then we can say
\mathbb{W}(h_i) = 2 r_i
or equivalently
r_i = \mathbb{W}(h_i) / 2
Note that $\mathbb{W}(h_i)$ is bounded as follows:
\mathbb{W}(h_j \in J_i) \ge \mathbb{W}(h_i) \ge \mathbb{W}(B_i)
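The bottom-up relations above reduce to two small identities, sketched here with illustrative numeric values (the utility units are arbitrary):

```python
def lift(W_hi: float, W_Bi: float) -> float:
    """Bottom-up credit: W(h_i) = W(B_i) + r_i, so r_i = W(h_i) - W(B_i)."""
    return W_hi - W_Bi

def credit_under_proportionality(W_hi: float) -> float:
    """Under the assumption r_i ≈ W(B_i), the node's utility splits
    evenly between its sources and its lift: r_i = W(h_i) / 2."""
    return W_hi / 2
```

The proportionality assumption is a modeling convenience: it lets $r_i$ be recovered from $\mathbb{W}(h_i)$ alone, without separately measuring the utility of the predecessor set.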

Top-Down Perspective

The total utility of evidence $h_i$ from a "top down" perspective can also be expressed as
\mathbb{W}(h_i) = \mathbb{W}(J_i|h_i) + \mathbb{W}(C_i)
where, as before, $J_i$ is the set of successor nodes to $h_i$; $\mathbb{W}(J_i|h_i)$ is the computed estimate of the utility of the contribution of $h_i$ to its successor nodes, and $\mathbb{W}(C_i)$ is the estimated utility as assigned directly by forum members.
An expression to describe $\mathbb{W}(J_i|h_i)$ is then:
\mathbb{W}(J_i|h_i) = \sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)}
where $\alpha$ is an attenuation factor which, in general, satisfies
1 \ge \alpha \ge 0
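The attenuated sum can be sketched directly. The value $\alpha = 0.5$ is an illustrative assumption, and $\mathbb{W}(C_i)$ is treated here simply as the forum-assigned utility passed in as a number:

```python
def topdown_utility(successor_utilities: list[float],
                    successor_indegrees: list[int],
                    W_Ci: float,
                    alpha: float = 0.5) -> float:
    """W(h_i) = Σ_{h_j ∈ J_i} α·W(h_j)/deg^-(h_j) + W(C_i).
    Each successor's utility is attenuated by alpha and split evenly
    among the deg^-(h_j) predecessors it builds on."""
    assert 0.0 <= alpha <= 1.0
    passed_back = sum(alpha * w / d
                      for w, d in zip(successor_utilities, successor_indegrees))
    return passed_back + W_Ci
```

For example, a node with two successors of utility 8 and 4, having 2 and 1 predecessors respectively, plus a directly assigned utility of 1, yields $0.5\cdot 8/2 + 0.5\cdot 4/1 + 1 = 5$.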

Combined Perspective

Finally, by substituting the expression for $\mathbb{W}(J_i|h_i)$ into the top-down expression for $\mathbb{W}(h_i)$ we obtain
\mathbb{W}(h_i) = \sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)} + \mathbb{W}(C_i)
and then, substituting this into $r_i = \mathbb{W}(h_i)/2$, we get
r_i = \frac{\sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)} + \mathbb{W}(C_i)}{2}
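Because each $\mathbb{W}(h_i)$ depends only on successor utilities, the combined formula can be evaluated over the whole DAG starting from the sinks (which have no successors) and working backwards. A sketch, with illustrative node names, credits, and $\alpha = 0.5$:

```python
def combined_credits(builds_on: dict[str, list[str]],
                     C: dict[str, float],
                     alpha: float = 0.5) -> dict[str, float]:
    """Evaluate W(h_i) = Σ_{h_j∈J_i} α·W(h_j)/deg^-(h_j) + W(C_i) by
    memoized recursion toward the sinks, then take r_i = W(h_i)/2.
    deg^-(h_j) is taken as the number of predecessors h_j cites."""
    successors: dict[str, list[str]] = {n: [] for n in builds_on}
    for n, parents in builds_on.items():
        for p in parents:
            successors[p].append(n)
    W: dict[str, float] = {}
    def utility(n: str) -> float:
        if n not in W:
            W[n] = C[n] + sum(alpha * utility(s) / len(builds_on[s])
                              for s in successors[n])
        return W[n]
    return {n: utility(n) / 2 for n in builds_on}

graph = {"h1": [], "h2": [], "h3": ["h1", "h2"], "h4": ["h3"]}
credit = {"h1": 0.0, "h2": 0.0, "h3": 2.0, "h4": 4.0}
r = combined_credits(graph, credit)
```

Note how the raw-data nodes h1 and h2 earn credit even though forum members assigned them none directly: utility flows back to them from the report built on their data.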

Voting Member's Credit Allocation

Each voting member in the DAO receives a number of tokens $\kappa$ per governance decision the member has voted in; these specialized tokens are used only to assign credit to evidence posted in the governance forum.
Within the set of voting members $P$, each voting member $p \in P$ assigns zero or more of their $\kappa$ tokens to signal the utility or importance of a particular piece of evidence $h_i$. If $c_{pi}$ is the credit that user $p$ assigns to $h_i$, then
C_i = \sum_{p\in P}c_{pi}
with the constraint that
\sum_{i}c_{pi} \leq \kappa
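Tallying $C_i = \sum_{p} c_{pi}$ under the per-voter budget $\kappa$ is straightforward to sketch; the value of $\kappa$ and the voter/evidence names below are illustrative assumptions:

```python
KAPPA = 10  # tokens each voter receives per decision (illustrative)

def tally_credits(votes: dict[str, dict[str, int]]) -> dict[str, int]:
    """votes[p][i] = c_pi, the tokens voter p assigns to evidence h_i.
    Enforces Σ_i c_pi ≤ κ for each voter, then returns C_i = Σ_p c_pi."""
    totals: dict[str, int] = {}
    for p, allocation in votes.items():
        assert sum(allocation.values()) <= KAPPA, f"voter {p} exceeds kappa"
        for i, c in allocation.items():
            totals[i] = totals.get(i, 0) + c
    return totals

C = tally_credits({"alice": {"h3": 4, "h4": 6}, "bob": {"h4": 3}})
```

A voter is free to leave part of their budget unspent, which the constraint ($\le \kappa$, not $= \kappa$) permits.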
Assuming that during some epoch the total rewards available for distribution are $R$, the portion of $R$ allocated to $h_i$ is
R_i = \frac{r_i}{\sum_{j \in H}{r_j}}
The corresponding reward is then sent to the address (user) who originally posted the evidence.
The above is a credit assignment model that does not require each voting member to explicitly judge the utility of a piece of evidence against the predecessor or source evidence it used.
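The final payout step can be sketched as follows, interpreting $R_i$ as the fraction of the epoch's reward pool paid to the poster of $h_i$; the credit values and pool size are illustrative:

```python
def reward_shares(r: dict[str, float], R_total: float) -> dict[str, float]:
    """R_i = r_i / Σ_j r_j; the payout for h_i is R_i · R_total."""
    denom = sum(r.values())
    assert denom > 0, "at least one piece of evidence must have credit"
    return {i: R_total * ri / denom for i, ri in r.items()}

payouts = reward_shares({"h1": 0.5, "h2": 0.5, "h3": 2.0, "h4": 2.0}, 100.0)
```

By construction the shares sum to the full pool, so the entire epoch reward is distributed across evidence posters in proportion to their estimated credit $r_i$.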