# Computer Aided Governance

Increasingly, **Computer Aided Governance** is a methodology used in DAOs and other decentralized projects to improve the quality of the decisions made, by incorporating evidence from data, analysis, and especially integrative "complex systems" simulation modeling.

In general, the types of evidence used in DAO governance decision making, ordered from most to least "value added", are: Reports / Analyses; Integrative Simulation Models; Supervised AI / ML / Statistical Models; Clustering and Unsupervised Learning; Data Visualization; Raw Data.

The figure below describes a prototypical Evidence-Based decision making process that incorporates Computer Aided Governance.

A Prototypical Evidence-Based Decision-Making Process

**The New Order DAO will have a forum with subsections dedicated to a variety of topics, including governance-related discussions. Specifically, there will be a subsection where data science and modeling contributors focus on sourcing, sharing, and discussing evidence in support of decision making.**

At the outset, imagine a proposal for a decision is before the DAO - such as deciding how to allocate resources. To support this decision, data is collected and analyzed with the help of simulation and other models, and the analysis is discussed by decision makers. As a consequence of the discussion, it may be apparent that more evidence will be needed in order to arrive at an informed decision, in which case more data is collected, analyzed and discussed.

If it is apparent from the discussion that there is enough evidence to form a rough consensus around the decision, then the process moves to a binding vote. Once the decision is actuated, the state of the DAO, the controlled system, and the environment changes, which generates new information that can lead to additional questions and decisions in the future.
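The loop described above can be sketched in code. This is a minimal illustration only, not part of any actual DAO tooling; the callables, evidence labels, and round budget are all assumptions.

```python
# Minimal sketch of the evidence-based decision loop described above.
# The callables and the round budget are illustrative assumptions.

def decision_loop(evidence, enough_evidence, vote, actuate, max_rounds=10):
    """Gather evidence until rough consensus forms, then hold a binding vote."""
    for _ in range(max_rounds):
        if enough_evidence(evidence):
            decision = vote(evidence)   # binding vote once rough consensus forms
            return actuate(decision)    # actuating the decision changes the DAO's state
        # discussion reveals more evidence is needed: collect and analyze more
        evidence = evidence + [f"analysis-{len(evidence)}"]
    return None  # no consensus reached within the round budget

# toy run: consensus is declared once three pieces of evidence exist
outcome = decision_loop(
    evidence=[],
    enough_evidence=lambda e: len(e) >= 3,
    vote=lambda e: "allocate resources",
    actuate=lambda decision: {"executed": decision},
)
```

New information generated after actuation would feed back in as the starting `evidence` of a future loop.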

Here, we leave out routine *data science* details such as sourcing, cleaning, and transforming raw data into something more convenient for creating models and analyses. An additional extension, which clearly can occur but is not discussed explicitly here, is that multiple decisions (and their processes) can be in flight at once, for example when there are multiple matters for the DAO to decide at the same time.

In the future we anticipate that this loop will become increasingly automated for some types of operational decisions, such as real-time parameter adjustments. In that version of the process the human-mediated stages are removed and voting is replaced with AI- or rules-based decision making, so that the loop becomes a more typical control loop.

Soon after launching, the Open DeFi DAO will feature a governance discussion forum, with a section dedicated to producing evidence for decision support, populated by a self-organised, data-oriented community of contributors.

Let’s say, for example, a sub-topic of the forum is related to providing evidence to support the decision whether or not to incubate a new project involving a new innovative type of meta-vault.

Let’s say the forum topic related to this meta-vault decision contains several posts.

The topic contains two high-level reports, each built on earlier, foundational analyses. The posted "Token Economics Report" presents results from, and discusses, earlier analyses and models posted on the forum. It adds value by bringing together the previous results and presenting them comprehensively.

A selection of forum post titles is shown, with the “Token Economics Report” showing which previous posts it refers to and builds upon. The links are understood to be cryptographic hashes.

The “Report on New Vaults” is another top-level report that summarizes other evidence, although its constituent parts are not shown, to simplify the discussion.

In the spirit of Radical Transparency, the norms of the community require reports and analyses to reference their source material, whether the source is raw data or some other artifact resulting from earlier analysis. Cryptographic hashes are used when referring to earlier evidence, providing a chain of provenance back to the original sources.

The provenance of sources underlying the decision support evidence on the forum.

The figure above shows the provenance of data and analytics that are the foundation for the “Token Economics Report”. It shows that the evidence from the “Machine Learned Model”, “Data Visualization”, and “Market Simulation”, in turn, are based on evidence posted earlier to the forum including “Raw Market Data”, “Blockchain Data”, and “Behavioral Data”. The “Report on New Vaults” is also based on earlier analyses, models, and data, but these are not shown for simplicity here.
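To make the hash-linked provenance concrete, here is a small sketch using SHA-256 content addressing. The forum data structure, function names, and post contents are hypothetical, chosen only to mirror the posts described above.

```python
import hashlib
import json

def post_evidence(forum, title, body, sources=()):
    """Store an evidence node that references its sources by cryptographic hash."""
    record = {"title": title, "body": body, "sources": sorted(sources)}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    forum[digest] = record
    return digest

def provenance(forum, digest):
    """Follow hash references back through every ancestor to the original sources."""
    seen, stack = set(), [digest]
    while stack:
        h = stack.pop()
        if h not in seen:
            seen.add(h)
            stack.extend(forum[h]["sources"])
    return seen

forum = {}
raw = post_evidence(forum, "Raw Market Data", "csv dump")
sim = post_evidence(forum, "Market Simulation", "results", sources=[raw])
report = post_evidence(forum, "Token Economics Report", "summary", sources=[sim])
```

Because each hash covers the post's content *and* its source hashes, tampering with any upstream post would invalidate every hash downstream of it.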

Decision support evidence as a Directed Acyclic Graph (DAG) of nodes

We can form a Directed Acyclic Graph (DAG) $\mathcal H$ out of the evidence nodes, which we will refer to as $h_i \in H$, where $H$ is the set of nodes in $\mathcal H$. A directed edge points from the node containing the source information (its tail) to the node that refers to that information (its head). In the figure above, node $h_1$ incorporates information contained within nodes $h_2$ and $h_3$.

Labeling the Sinks, Sources, Successor, and Predecessor nodes for a node in an arbitrary DAG $\mathcal H$

Let $S^+$ be the set of *sinks*, which satisfy $\forall h_s \in S^+:deg^+(h_s) = 0$. Furthermore, let $S^-$ be the set of *sources*, which satisfy $\forall h_s \in S^-:deg^-(h_s) = 0$. Then, for any $H \neq \emptyset$, $|S^+| \ge 1$ and $|S^-| \ge 1$; that is, there is at least one *source* node and at least one *sink* node for any non-empty set of evidence nodes $H$. Note that $\mathcal H$ may or may not be a *rooted* graph.

Let $\mathbb{W}(h_i)$ denote a utility function that maps the evidence $h_i$ to a utility $w_i$. In general, $\mathbb{W}$ could take many forms, and could be assigned by a subset of (or all) DAO participants, a process which could itself be modeled as a dynamic system of interacting Economic Agents. Assume that the utility function is additive:

$\mathbb{W}(H) = \sum_{i=1}^{n} \mathbb{W}(h_i)$

where $n = |H|$
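As a sketch, the source and sink sets can be computed directly from node degrees. The edge list below is a hypothetical three-node DAG in which $h_1$ builds on $h_2$ and $h_3$, matching the edge orientation used above.

```python
# Toy evidence DAG (hypothetical): an edge points from the node supplying
# information (tail) to the node that refers to it (head), so deg- counts
# a node's predecessors and deg+ its successors.
nodes = {"h1", "h2", "h3"}
edges = [("h2", "h1"), ("h3", "h1")]  # h1 builds on h2 and h3

in_deg = {n: 0 for n in nodes}
out_deg = {n: 0 for n in nodes}
for tail, head in edges:
    out_deg[tail] += 1
    in_deg[head] += 1

sources = {n for n in nodes if in_deg[n] == 0}   # S-: primary data nothing precedes
sinks = {n for n in nodes if out_deg[n] == 0}    # S+: nodes nothing else builds upon
```

For this non-empty $H$ both sets are non-empty, as the text requires: the raw inputs are the sources and the top-level report is the sole sink.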

The state, or set of evidence, $H$ evolves through the addition of new evidence $h_j$, transitioning to a new state $H^+$. Initially the forum begins with no evidence, $H = \emptyset$, and evidence is added over time:

$H^+=H\cup \{h_j\} : h_j \notin H$

$h_j$ can be primary data, that is, a *source* in the graph-theory sense: $h_j \in S^-$

Alternatively, $h_j$ can be the result of a complex transformation $F_j(H_j)$ where $H_j \subseteq H$; therefore, $h_j = F_j(H_j) : H_j \subseteq H$, where $F_j$ is an arbitrary function, sequence of logical operations, or nonlinear algorithm, including simulations, machine-learned transformations, or a plain-language argument.

We want to estimate $r_i$, the utility contribution of each $h_i \in H$, so that ultimately rewards can flow to the individual providers of $h_i$ in proportion to their contributions.

Let $B_i$ be the set of *predecessor* nodes of $h_i$, and let $J_i$ be the set of *successor* nodes of $h_i$. The utility of an evidence node is the utility of its supporting evidence *plus* the "lift", or additional evidence, generated by performing the transformation $F_i$. Thus, we argue that: (1)

$\mathbb{W}(h_i) = \mathbb{W}(B_i) + r_i$

Here $r_i$ is the value, or utility, added by performing $F_i(B_i)$. Then $r_i$ can be thought of as the "credit" attributed to $h_i$, and by association to the account that posted it.

We note that $\mathbb{W}(h_i) = \mathbb{W}(B_i)$ implies $r_i = 0$, which corresponds to the case where evidence $h_i$ references sources but does not deliver any additional useful insight over the predecessor evidence. We assume that such evidence won't attract any successor evidence, and that the credit assigned to it directly by forum users will tend towards 0.

If we assume that the contribution of $h_i$ is approximately equal to the utility of its source evidence, $r_i \approx \mathbb{W}(B_i)$, then we can say $\mathbb{W}(h_i) = 2 r_i$ and, (2)

$r_i = \mathbb{W}(h_i) / 2$

$\mathbb{W}(h_i)$ is bounded as follows:

$\mathbb{W}(h_j\in J_i)\ge\mathbb{W}(h_i)\ge \mathbb{W}(B_i)$

The total utility of evidence $h_i$ can also be expressed from a "top down" perspective as, (3)

$\mathbb{W}(h_i) = \mathbb{W}(J_i|h_i) + \mathbb{W}(C_i)$

As before, $J_i$ is the set of *successor* nodes of $h_i$. $\mathbb{W}(J_i|h_i)$ is the *computed* estimate of the utility contributed by $h_i$ to its successor nodes, and $\mathbb{W}(C_i)$ is the estimated utility *assigned* directly by forum members. An expression describing $\mathbb{W}(J_i|h_i)$ is then: (4)

$\mathbb{W}(J_i|h_i) = \sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)}$

Here $\alpha$ is an attenuation factor which, in general, satisfies $0 \le \alpha \le 1$. Finally, substituting (4) into (3),

$\mathbb{W}(h_i) = \sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)} + \mathbb{W}(C_i)$

and then substituting into (2), we get

$r_i = \frac{\sum_{h_j \in J_i}\frac{\alpha \cdot\mathbb{W}(h_j)}{deg^-(h_j)} + \mathbb{W}(C_i)}{2}$
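A minimal sketch of equations (2)-(4) on a toy four-node DAG. The node names, the assigned credits $\mathbb{W}(C_i)$, and the value of $\alpha$ are all made-up illustrative values, not outputs of any real forum.

```python
# Sketch of the credit computation in equations (2)-(4) on a toy DAG.
# Node names, assigned credits W(C_i), and alpha are illustrative values.

def utility(node, succ, in_deg, assigned, alpha, memo=None):
    """W(h_i) per equation (3): attenuated flow from successor nodes
    (equation 4) plus the credit assigned directly by forum members."""
    if memo is None:
        memo = {}
    if node not in memo:
        flow = sum(alpha * utility(s, succ, in_deg, assigned, alpha, memo) / in_deg[s]
                   for s in succ[node])              # equation (4)
        memo[node] = flow + assigned.get(node, 0.0)  # equation (3)
    return memo[node]

# raw data h1 and h2 feed a model h3, which feeds a top-level report h4
succ = {"h1": ["h3"], "h2": ["h3"], "h3": ["h4"], "h4": []}
in_deg = {"h3": 2, "h4": 1}          # deg-(h_j): number of predecessors
assigned = {"h3": 4.0, "h4": 10.0}   # W(C_i), from members' credit tokens
alpha = 0.5

memo = {}
W = {n: utility(n, succ, in_deg, assigned, alpha, memo) for n in succ}
r = {n: w / 2 for n, w in W.items()}  # equation (2): r_i = W(h_i) / 2
```

Working backwards from the sink: $\mathbb{W}(h_4) = 10$, $\mathbb{W}(h_3) = 0.5 \cdot 10 / 1 + 4 = 9$, and each raw-data node receives $0.5 \cdot 9 / 2 = 2.25$, so even nodes with no directly assigned credit earn $r_i > 0$ through their successors.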

Each voting member in the DAO receives a number of tokens $\kappa$ per governance decision in which the member voted; these specialized tokens are used only to assign credit to evidence posted in the governance forum.

In the set of voting members $P$, each voting member $p\in P$ assigns zero or more of their $\kappa$ tokens to signal the utility or importance of a particular piece of evidence $h_i$. Then $c_{pi}$ is the credit that member $p$ assigns to $h_i$, and $C_i = \sum_{p\in P}c_{pi}$, with the constraint that $\sum_{i}c_{pi} \leq \kappa$
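A small sketch of aggregating members' token allocations into $C_i$ while enforcing the per-member budget $\kappa$; the member names, node labels, and amounts are hypothetical.

```python
# Aggregate per-member credit signals c_pi into C_i = sum_p c_pi,
# enforcing each member's kappa budget. Members and amounts are hypothetical.
kappa = 5
allocations = {"alice": {"h3": 2, "h4": 3}, "bob": {"h4": 4}}

for member, alloc in allocations.items():
    spent = sum(alloc.values())
    assert spent <= kappa, f"{member} exceeded the kappa budget"

C = {}
for alloc in allocations.values():
    for node, c in alloc.items():
        C[node] = C.get(node, 0) + c
```

The resulting `C[node]` values play the role of $C_i$ in the $\mathbb{W}(C_i)$ term of equation (3).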

Assuming that during some epoch the total rewards available for distribution are $R$, the portion of $R$ allocated to $h_i$ is

$R_i = R \cdot \frac{r_i}{\sum_{h_j \in H}{r_j}}$

The reward $R_i$ is sent to the address (user) who originally posted the evidence. The above is a credit assignment model that does not require each voting member to explicitly judge the utility of a piece of evidence against the predecessor or source evidence it used.
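Finally, a sketch of the epoch reward split. The pool size $R$ and the $r_i$ values are illustrative stand-ins for the credits that equation (2) would produce.

```python
# Split the epoch reward pool R across evidence nodes in proportion to r_i.
# The pool size and credit values are illustrative.

def distribute(R, r):
    """R_i = R * r_i / sum_j r_j for each evidence node."""
    total = sum(r.values())
    return {node: R * r_i / total for node, r_i in r.items()}

rewards = distribute(100.0, {"h1": 1.125, "h2": 1.125, "h3": 4.5, "h4": 5.0})
# each reward would then be sent to the address that posted the evidence
```

By construction the shares sum to $R$, so the pool is exactly exhausted each epoch.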
