Sovereign Software: Competitive mechanism design methodology for decentralized autonomous organizations

in #gametheory • 6 years ago

This is version 1 of my paper containing general theory and advice for designing mechanisms for decentralized autonomous organizations.

Live Version:
https://docs.google.com/document/d/1Oghfq1VFfGvScxzNWD14vNg_fEAC8HVrfVVVU3Al-gA/edit?usp=sharing

Static Copy of Version 1:

Sovereign Software:
Competitive mechanism design methodology
for decentralized autonomous organizations

Written by: Ryan Garner (imkharn)

My thanks to Joey Krug for hundreds of hours of mechanism debates during the design of Augur
My thanks to Micah Zoltu for advice and editing, and dozens of hours of mechanism debates.
My thanks to Vitalik Buterin for a few mechanism debates via Skype, and for his educational blog posts.
My thanks to William Spaniel for his online game theory basics education course.
My thanks to Joshua Davis for funding the creation of a plan for decentralized insurance, and dozens of hours of mechanism debates.
My thanks to the various other people who have offered suggestions and corrections via the Google Docs comments, and for comments yet to come.

(Abstract) - Blockchain-based cloud computation enables software to plausibly exceed the capabilities of major institutions in the reliability and trustworthiness with which it handles possession of assets. Software can now possess its own assets, enter into agreements, and execute code that no entity has the power to erase or prevent. When software combines the capabilities of self-ownership, property rights, and predictable behavior with a competitive system of commerce facilitating incentive mechanisms, it can achieve sovereignty.

(Introduction to Relevant Game Theory Concepts) - A 7-hour ordered YouTube playlist has been assembled to supplement this paper. It covers the game theory education needed to better understand the terminology in this paper, and provides a general understanding of how rules and incentives can influence rational behavior in creative ways. If you struggle to understand the vocabulary and intent of multiple sections of this paper, this educational course is recommended.

(Decentralized Autonomous Organization) - An entity that exists in multiple redundant locations, has sovereign control over internal capital, and prohibits unintended modifications of its assets or mechanisms.

(Objective) - The goal of DAO architects is to design an incentive-compatible system of rules for interactions between actors that achieves a mission statement as cheaply as possible while maintaining a competitive level of resistance to collusion and external influence. As Meher Roy summarizes: “To build software rules that facilitate the creation of useful economic equilibria.”

(Mechanism) - In the context of game theory, a mechanism refers to an incentive mechanism. Mechanisms take in data related to the behavior of an agent and pass that data through a function to determine the reward or punishment the agent is subjected to. Mechanisms are made efficient by reducing punishments and rewards to the lowest levels that still make undesirable behavior irrational. Typically, many mechanisms are available to incentivize a given behavior. Choosing an appropriate mechanism to accomplish a desired behavior involves being mindful of the financial and labor costs it imposes on the agent or the system, as well as the difficulty of bypassing or exploiting the mechanism.

(Mission Statement) - Every DAO has a mission statement because its creators intend the system to serve a purpose. The mission statement represents the scope of rational behavior and mechanism modifications desired by the creators of the system. A mission statement for any mechanism architecture includes a goal such as a societal benefit, or the maximization of some value such as the total money directed to a particular destination.

(Autonomy) - Autonomy is accomplished through a mechanism scheme that pools private information or assets under the control of a system, which leverages them in mechanisms that obtain honest mission-critical information from other parties and resist modifications to the mission statement. If a system relies on information from untrusted sources, it can utilize an independent interpretation of reality to reduce the cost of incentive compatibility. This is because cost-saving mechanisms exist that achieve honest behavior for less than simply paying an amount that overcomes the desire or incentive to be dishonest.
There are varying levels of autonomy in different systems. The degree of autonomy a system has is determined by the cost of obtaining arbitrary external control over the mission statement, determinations of truth, or mission-critical mechanisms. A system is not autonomous if the incentives preventing open-ended changes to mission-critical mechanisms, or behaviors that violate the mission statement, are intentionally weaker than the plausible maximum. Put another way, if the system intentionally has a weakness that enables agents to override its intended functioning, it is not autonomous.

(Incentive Compatibility) - Incentive compatibility is achieved when, on average, a strategy of fraud is not profitable for any role on the system. Operating a system of incentive mechanisms in a public setting means that its agents are influenced by more rules and incentives than exist in the system design, including incentives related to collective action and external organizations with unknown utilities. It is challenging to design mechanisms that are simultaneously compatible with many categories of utilities and behaviors, so when designing mechanisms it is best to examine each category separately. The categories of incentive compatibility are individual, collective, and external. Individual incentives protect the system from direct interaction with agents, collective incentives protect the ability of the system to accomplish its mission statement, and external compatibility represents the ability to operate competitively in an environment of external utilities and mechanisms.

“The best protocols are protocols that work well under a variety of models and assumptions — economic rationality with coordinated choice, economic rationality with individual choice, simple fault tolerance, Byzantine fault tolerance (ideally both the adaptive and non-adaptive adversary variants), Ariely/Kahneman-inspired behavioral economic models (“we all cheat just a little”) and ideally any other model that’s realistic and practical to reason about. It is important to have both layers of defense: economic incentives to discourage centralized cartels from acting anti-socially, and anti-centralization incentives to discourage cartels from forming in the first place.” - Vitalik Buterin

(Individual Incentives) - Individual incentive compatibility is achieved when, on average, it is not profitable for any actor on the system to behave fraudulently in the absence of contact with other actors and third-party mechanisms. It is achieved through mechanisms like escrow, bonds, bets, identity, and reputation scores. Individual incentive mechanisms should be designed to achieve incentive compatibility as cheaply as possible.

(Collective Incentives) - Collective incentive compatibility is achieved when it is not profitable to collude to influence the information the system is aware of in such a way that the system misinterprets fraudulent behavior as honest. If at any point it becomes profitable to collude on a DAO, the result tends to be highly costly. Due to the relatively high damage that collusion causes, there should be virtually no tolerance for situations where collusion becomes profitable, and edge cases should be considered when attempting to secure collective incentive compatibility. Collective incentive compatibility is achieved through mechanisms like forking, decentralization of power, compartmentalization of assets, anti-sybil schemes, anti-sockpuppet schemes, auditing/whistleblowing schemes, commit-and-reveal schemes, delayed divestment of liability, futarchy, and many other schemes and mechanisms which increase the cost of successful collusion and decrease the cost to the system in the event collusion succeeds. Often, multiple anti-collusion mechanisms are implemented simultaneously in order to extend incentive compatibility to as many edge cases as possible and, as explained below, to maximize the cost of external attack by competitors. Collective incentive mechanisms should raise the difficulty of internal conspiracy as high as is competitively affordable.

(External Incentives) - External incentive compatibility is achieved when no external or internal actor finds the cost of harming the system low enough to be worth that actor's predicted benefit from the system losing a competitive advantage. Attempting to cause harm to a DAO in this way is often referred to as a Byzantine attack: an attack where the attacker loses utility within an internal mechanism scheme in order to gain external utility. Harm to the system is measured by comparing the intended distribution of assets if all parties behaved honestly to the actual distribution of assets. If agents interacting with the system collectively have X fewer assets than intended, the system is either being harmed by X or has less incentive compatibility than expected. This harm can result from either irrational or intentional behavior. External incentive compatibility is typically impossible to achieve when considering edge cases. Mechanism designers should instead attempt to discover the most cost-effective way to harm the system, measured as financial harm to the system divided by the cost of attack. Once this ratio is known, it becomes possible to estimate the total cost of causing enough harm to the system for it to no longer be a competitive option. It is important to be aware of the cost for a competitor to sabotage a system: the enemy not only knows the system, they can use creative means and mechanisms to exploit it. Trustless incentive compatibility is impressively difficult when exploiters can invent organizations that create honor even among thieves. If the cost of attacking a system is low enough for powerful adversaries to attack, this represents a risk to users that will reduce use of the system. On the other hand, one can expect, but not rely on, altruism on behalf of the community to artificially raise the cost of attack. Altruistic individuals, like attackers, are able to coordinate outside of the protocol and make use of creative means to protect a system from attack. Systems can be harmed by spam, fraud, sock-puppets, distributed denial of service, bribes, corruption, cryptographic security weaknesses, software exploits, third-party mechanisms, violence, propaganda, and public relations.

(Public Relations) - Public perception of the system influences the level of use of the system and should be considered when determining the appropriate scope of a mission statement. It is important to segregate ethical and unethical systems to avoid a loss of business for the ethical sector.

(Ethics) - Internal ethics are arguably measured by the average profitability of behavior that initiates violence or fraud, across every decision possible within every mechanism. Externally there are both consequentialist ethics, where the ends justify the means, and categorical ethics, where intent determines ethics. External consequentialist ethics are achieved when, on average, the existence of your mechanisms promotes more social good than collective harm. What constitutes “social good/harm” is of course a matter of opinion; however, widespread agreement about what is socially harmful can have a major impact on demand for and use of the system. Because a DAO is a technological tool that does not make decisions, under categorical ethics it is neither good nor bad; however, the decision to create the mechanism scheme, if made with intent to increase the initiation of violence or fraud in society, may externally be seen as a categorically unethical decision.

(External Security Mechanisms) - Fake or duplicate content and users are regulated through mechanisms such as fees, reputation, and peak-load pricing. Bribes and corruption are prevented by decentralization of power, and by always assuming, for the purpose of obtaining incentive compatibility, that assets can be traded and that actors are able to communicate and agree to unsanctioned deals. Security weaknesses in the software can be prevented by security audits, asset compartmentalization, asset time locking, simplifying code, and access restrictions. Violence, propaganda, and public relations issues can be prevented by decentralization and maintaining a positive public perception of system ethics.

(Coordinated Violent Shutdown) - Another external threat is a coordinated violent attempt to stop the system from functioning. There are 3 requirements for making violent shutdowns implausible: no servers, no buildings, and no centralization. The cost of coordinating a violent attack on the physical location of every computer critical to the system should be estimated, as it represents a possible attack strategy for adversaries. No buildings refers to avoiding a static place of conducting business that could be targeted for attack; the cost of disrupting commerce taking place at physical locations should also be estimated as a cost barrier to attacking the system. No centralization refers to the centralization of power: consider the cost of compromising the computers of, or finding and coordinating an attack on the physical locations of, enough actors on the system to harm its ability to compete. Be mindful that organizations predisposed to violence are often willing to use violence even if it is not the cheapest way to harm the system. Thus, security measures against violence often need to be more robust than security against fraud and exploitation. This is especially true for systems that threaten the existence or profitability of major institutions.

(Different Environment, Different Architecture) - Avoid assumptions that result in taking the structure of centralized autocratic organizations and duplicating it on a decentralized platform. A non-violent, pure-competition, open-source marketplace creates an environment where the most successful organizations are ones with priorities that often contrast with typical capitalism. Without a barrier to entry, the opportunity to charge customers more than operating costs is rare or negligible, because competitors can copy any DAO and undercut the original product whenever the original DAO charges more than is required to achieve incentive compatibility plus the cost of obtaining a competitive external security margin. In a nearly perfectly competitive market, the most successful organizations will have mechanisms that maximize employee/customer freedom, minimize waste, minimize profit, minimize bureaucracy, decentralize power, and maximize satisfaction. The organization should not seek the power to claim exclusive use over humans, objects, or ideas. It will eventually be possible for DAOs to have full authority over the means of production, once such assets can be inspected or guarded by agents of the DAO and insured against theft and violence by decentralized means. Until physical security is possible, the system is vulnerable to losses without recourse; thus, the means of production should whenever possible be bought and owned by the employees, including devices, bandwidth, voting assets, and accounts. Be mindful that employees are typically risk averse, and the system can achieve a lower perceived cost for actors if risk is outsourced to a separate role that exclusively accepts risk in exchange for profit.

(Violent Incentives) - Any potential liability to the system secured by threat of violence should be avoided. Any attempt by a DAO to obtain government-granted rights is a liability; this includes patents, physical property secured by government, taxes, the ability to sue or be sued, and signing government-enforced contracts. There should be no incentives present in a DAO where violence is a critical incentive mechanism, because violence is expensive, has unreliable results, and requires sharing knowledge of physical location. Seek to eliminate the incentives for violence and fraud rather than avenge victims. If an activity is profitable enough to cover its costs, that activity is inevitable, and this includes harming others for gain. If there is any scheme imaginable that would profitably harm others on your system, it will be done to the maximum extent possible. Reliance on justice, government, or violence will not be sustainable.

(Make Humans the Robots) - From the perspective of the system, humans should be seen as basic tools for reporting facts, making simple decisions, and moving matter around. For any role in the system there should be a sufficient supply of laborers willing to perform a single simple service on demand for equitable pay. Avoid giving humans power over other humans, and if hierarchical power is required, ensure it is always possible to compensate those harmed by abuse of power at the expense of the powerful. To reduce the incentive for abuse of power, whenever possible allow the autonomous logic to make decisions based on objective facts provided by actors. A decentralized autonomous organization performs the role of manager of human behavior within an organization. It is only able to get information by paying potentially untrustworthy humans to give it up, yet despite their potential incentive to misrepresent, it must be able to discern truth in order to determine whether behavior is honest or fraudulent. Humans may attempt to exploit the authority of the autonomous system by colluding to provide misinformation and by exploiting its programming. Humans can also exploit the immutability of hard-coded constant values and assumptions by exposing the system to edge cases. For the software to maintain its sovereignty it must be designed in a way that it continues to efficiently carry out its immutable mission objective and resists attempts from humans to deceive or manipulate its ability to determine relevant objective reality.


STEP BY STEP

Much of the process of creating a DAO involves continually adding and refining mechanisms and roles on the system, then again analyzing ways to exploit the system and searching for unintended consequences of the suggested mechanisms. Thus many of the steps listed below will be passed over many times as the system continues to be built. The open-ended nature of discovering exploits, and the unusual ways people may behave under even a few interacting rules, is likely too much for a single person to cover. It is best to design the mechanisms with multiple people all pitching ideas and ways to exploit or improve on those ideas. It may be impossible to discover all exploits, and many that will be discovered are ones that seem obvious only once considered.

Break your system into a list of human roles, where each role has a minimal scope comprising a single type of decision or a single type of action. Each role should represent a single attribute or skill.

For each role, represent in a formula the incentive to behave or respond in a corrupt manner in an environment where no protection mechanisms exist. This formula approximates the direct cost of making the role incentive compatible. The direct cost of incentive compatibility is the cost required for the simplest value transfer mechanism directly from one party to another.

Once you determine the cost of incentive compatibility for individual actors in each role, create another formula that compares the potential income from collusion to the cost of obtaining that level of collusion. If income is greater than cost, collusion is inherently profitable, and it becomes inevitable that third-party mechanisms will be created that eventually make collusion a strictly dominant strategy for actors in that role.
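To make these two checks concrete, here is a minimal sketch in Python with purely hypothetical numbers; the function names and values are illustrative and not part of any particular DAO design.

```python
# Hypothetical sketch: checking individual and collective incentive
# compatibility for a single role, using illustrative values.

def direct_cost_of_compatibility(corrupt_payoff, honest_payoff):
    """Minimum liability needed to make corruption irrational for one actor."""
    return max(0, corrupt_payoff - honest_payoff)

def collusion_is_profitable(collusion_income, collusion_cost):
    """If income from collusion exceeds the cost of organizing it,
    third-party coordination mechanisms will eventually appear."""
    return collusion_income > collusion_cost

# Example: a reporter could gain 500 by lying and 20 by honesty,
# so roughly 480 of liability (bond, escrow, reputation) is required.
print(direct_cost_of_compatibility(corrupt_payoff=500, honest_payoff=20))  # 480

# Example collective check: bribing enough reporters costs 10_000 but the
# attack only pays 8_000, so collusion is not inherently profitable here.
print(collusion_is_profitable(collusion_income=8_000, collusion_cost=10_000))  # False
```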

For each role create a list of all plausible benefits and costs including but not limited to: rewards, penalties, bonds, labor, changes in currency value, illiquidity of assets, and changes to reputation. The following form is strongly recommended for filling out for each role on the system:

ROLE NAME: (the preferred name of the role to be used in internal documentation and discussion)

OBJECTIVE: (the desired behavior for actors in this role)

COSTS: (every single cost associated with performing this role including opportunity cost and labor cost)

BENEFITS: (every single way an actor can benefit from performing this role)

STRATEGY: (the expected dominant strategies that will result from the above costs and benefits; multiple strategies may be needed when considering different states the system may be in, such as under attack, rapidly growing, or black swan, or when particular costs and benefits are amplified or reduced in strength)

PREDICTION: (make educated guesses as to how the above costs, benefits and strategies will change over time as the system grows and becomes more popular. For example you may predict the formation of 3rd party pools that reduce the costs or risks for this role)
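One way to keep this form alongside the design documents is as a data structure. Below is a minimal sketch in Python; the example entries, filled in for a hypothetical auditor role like the one described later in this paper, are illustrative only.

```python
# A minimal sketch of the role-analysis form above as a data structure,
# so each role's analysis can be stored and updated as mechanisms change.
from dataclasses import dataclass, field

@dataclass
class RoleAnalysis:
    role_name: str           # preferred internal name for the role
    objective: str           # desired behavior for actors in this role
    costs: list = field(default_factory=list)       # every cost, incl. labor and opportunity cost
    benefits: list = field(default_factory=list)    # every way an actor can benefit
    strategies: list = field(default_factory=list)  # expected dominant strategies per system state
    predictions: list = field(default_factory=list) # guesses at how the above evolve over time

# Hypothetical example entry for an auditor role.
auditor = RoleAnalysis(
    role_name="Auditor",
    objective="Investigate delegate decisions and report corruption",
    costs=["investigation bond", "labor of investigation", "illiquidity of bonded assets"],
    benefits=["bond returned plus labor award when corruption is found"],
    strategies=["audit only when expected award exceeds labor plus bond risk"],
    predictions=["third-party pools may form to share bond risk among auditors"],
)
```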

Power should be measured by the power to harm, and any power that has to be given to humans should be counterbalanced by the ability to lose greater power. The system must at all times be capable of executing a transfer function that inflicts negative utility upon an actor in an amount at least as large as their incentive for corruption. This agent liability is typically established through deposits, escrow, or reputation systems.

Implement a mechanism that utilizes backwards induction to drastically reduce labor costs without sacrificing security. This can be accomplished by allowing repetitive decisions to be handled under the authority of one or a few people, but adding a delay before decisions are enacted to allow time for auditing. Auditing mechanics allow a system to more efficiently utilize pooled labor, assets, and information. This audited delegation mechanism is explained further in the “Determining the Details” section below.

For each role where authority is conditionally centralized to save labor costs, create an auditor role. Auditors should have a liability equal to the harm they are able to cause to the system. Calculate the financial harm of a frivolous investigation and implement a transfer function that requires auditors to increase their liability by this amount. If an auditor behaves correctly, their liability should be reduced by this amount. In order to meet the participation constraint, every role including auditor must on average give honest actors enough profit to generate a reasonable supply of labor. As auditing with only a frivolous-cost bond is naturally zero sum, the participation constraint necessitates a mechanism that subsidizes the role at the expense of agents in other roles. To obtain funding to meet the participation constraint, auditor labor costs are typically covered by value transfers from corrupt agents in centralized power roles.

Every role must be able to meet the participation constraint. For any role that is zero sum or on average unprofitable for an honest actor, a benefit must be added where the source of the benefit is either another role, or external revenue. The amount the system spends on labor should be able to fluctuate with supply and demand for this role such that the profitability of the role naturally settles at competitive labor rates for the given skill.

Continue to update the benefit, cost, strategy, and long term behavior prediction lists for each role any time a mechanism is added.

For every role, imagine the optimal strategy for actors to take and list predicted strategies for each role. For any role where a profitable strategy exists to act in opposition to the desired function of the DAO, a mechanism should be added or modified in a way that removes this profitable strategy.

For every role, imagine how the strategies of each role will evolve over time, or under unusual conditions. Prepare for the worst. Always formulate for extreme behavior. What if everyone did this, what if no one does it? How does the system respond to everyone conspiring to harm it, how does it begin to operate differently if everyone is horrible at their task? If everyone acts honestly, do security costs decrease? What would happen if there were a sudden change in the number of people doing a role? Continue to adjust or add mechanisms to compensate for unintended or meta behavior.

For every role, imagine how an actor might exploit the system by creating rational incentive compatible mechanisms that actors can optionally participate in. Are there any incentive compatible rules you could add to your system that cause behavior outside the scope of the mission statement? Adjust or add mechanisms to prevent other mechanism designers from exploiting the system by creating a third party decentralized application. Mechanisms that rely on altruism, the lack of credible commitment, the inability to coordinate, or the compartmentalization of information are typically exploitable with third party systems.


DETERMINING THE DETAILS

(Adapt to the Environment) - Avoid immutable arbitrary numbers, especially involving limits and incentives. It is key to be able to express a formula that exactly calculates incentives. Being able to express exact calculations is very important in an environment where operating with no waste and no profit is critical, due to the tiny barrier to entry and open-source code. While profit may be possible, it is still important to understand the minimum cost required to achieve incentive compatibility. In some cases, though, it is not plausible to calculate what a value should be or argue it outright with logic; in these cases developers are often left assuming what sounds like a reasonable value. Hard-coded arbitrary numbers that just feel right, or that can be reasonably predicted to work correctly in a system, are very dangerous, both when competition uses a cheaper but still incentive-compatible value, and in the future when the use of the system drastically changes. An example of an immutable arbitrary number causing problems is the static block size limit of Bitcoin: the block size was set with a particular level and type of use predicted, instead of being flexible for various conditions. For any incentive coefficient in a system where there is no clear logical basis for determining its appropriate value, that value should float based on other values, often related to supply and demand. Consider what avoidable costs or issues the system might incur if the value were too high or too low. Determine metrics for these issues and create a formula based on these metrics that sets the value of the coefficient in correlation with its need to exist.
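A minimal sketch of a floating coefficient, assuming usage can be measured against a long-run baseline; the function name, scaling rule, and values are hypothetical, not a recommendation.

```python
# Hypothetical sketch: letting an incentive coefficient float with observed
# conditions instead of hardcoding it. Here a per-item fee scales with how
# far current usage sits above a long-run baseline.

def floating_fee(base_fee, current_usage, baseline_usage):
    """Scale the fee with demand so it never relies on a hardcoded guess."""
    if baseline_usage <= 0:
        return base_fee
    demand_ratio = current_usage / baseline_usage
    return base_fee * max(1.0, demand_ratio)

print(floating_fee(base_fee=0.01, current_usage=1200, baseline_usage=1000))  # 0.012
```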

(Labor Costs) - Allow all labor expenses to be variable in a way that influences quality. Decide on a metric or formula for determining the quality of behavior for each role. The best metric for measuring overall quality might be a single number, or might be better represented by a formula that weighs multiple variables. This metric should influence the amount spent on labor such that when quality is below target, labor costs increase, and when quality is above target, labor costs decrease. The optimal quality to target is based on what methods are competitively possible to efficiently incentivize mission-critical behavior.
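A minimal sketch of one way this could work, assuming a measurable quality metric: labor spend is nudged toward a quality target each period. The sensitivity parameter is a hypothetical tuning knob.

```python
# Hypothetical sketch: labor budget adjusts each period so that measured
# quality is pushed toward a target value.

def next_labor_budget(current_budget, measured_quality, target_quality, sensitivity=0.5):
    """Raise spend when quality is below target, lower it when above."""
    error = (target_quality - measured_quality) / target_quality
    return current_budget * (1.0 + sensitivity * error)

# Quality at 0.8 against a 0.9 target raises the budget by about 5.6%.
print(next_labor_budget(current_budget=100.0, measured_quality=0.8, target_quality=0.9))
```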

(Bonds) - Bonds are a common mechanism to prevent harm to the system from individual decisions by increasing the liability of an agent. Agents place a bond in order to gain potential power to harm the system, with the understanding that the bond is forfeited if the power is abused and returned if the power is used correctly. Mechanisms like bonds should be priced such that the liability an agent exposes the system to is equal to the liability the system imposes on the agent. An underpriced bond likely represents an exploit that can profitably harm the system, which is far more harmful than an overpriced bond, where the result is merely an inefficiency that needlessly reduces use of the system and creates an opportunity for competitors to undercut.

(Audited Delegation) - Audited delegation is a mechanism that reduces the cost of obtaining incentive compatibility for a decision-making role. It allows decisions to be made quickly and cheaply by one or a few agents with little liability relative to the potential impact of the decisions. Audited delegation is paid for by delegates, typically through bonds or fees. It creates a new role called auditor and transfers value to this role conditional upon the outcome of an investigation. Delegates should be charged an amount equivalent to the labor cost of auditors plus an amount to cover the opportunity cost of the investigation bond. Auditors should be required to pay a bond to request an investigation, and the value of this bond should equal the total financial harm the system incurs from a frivolous investigation. If the investigation is found to be frivolous, the investigation bond is paid to the parties harmed by the investigation; otherwise the investigation bond is returned to the auditor along with an award that covers labor and opportunity cost. An investigation can be any form of higher-expense determination of facts and quality, and typically causes delayed decisions, greater labor costs, and increased illiquidity of assets.
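The value flows described above might settle as in the following sketch; the amounts and function name are hypothetical, and the bond sizing follows the frivolous-harm rule from the text.

```python
# A sketch of audited-delegation settlement with hypothetical amounts.

def settle_investigation(frivolous_harm, auditor_labor, opportunity_cost, corruption_found):
    """Return (auditor_net, delegate_net) after an investigation resolves.

    The auditor posts a bond equal to the harm a frivolous investigation
    causes. If corruption is found, the bond comes back with an award that
    covers labor and opportunity cost, funded by the corrupt delegate.
    """
    bond = frivolous_harm
    if corruption_found:
        auditor_net = auditor_labor + opportunity_cost      # bond returned + award
        delegate_net = -(auditor_labor + opportunity_cost)  # corrupt delegate pays
    else:
        auditor_net = -bond   # bond paid to the parties the investigation harmed
        delegate_net = bond   # e.g. the delegate whose decision was delayed
    return auditor_net, delegate_net

print(settle_investigation(frivolous_harm=50, auditor_labor=10,
                           opportunity_cost=2, corruption_found=True))  # (12, -12)
```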

(Attention Costs) - Be aware of the labor cost agents experience when required to view information, particularly spam. Increasing the amount of information laborers or customers must digest should be seen as a cost to the system, and can be cheaply resisted by user-interface filtering and simple mechanisms. These mechanisms introduce a cost, such as money or reputation, that restricts the supply of information. Protection against spam attacks can be increased using peak-load pricing, where the cost of producing content is priced by comparing current activity to historical trends. Peak-load pricing allows a system to minimize content-creation costs except when preventing rapid increases in the demand for content creation from overwhelming the relatively inelastic attention supply.
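A minimal peak-load pricing sketch, assuming current activity and a historical average are observable; the quadratic growth rate is an arbitrary illustrative choice.

```python
# Hypothetical peak-load pricing sketch: posting costs stay minimal until
# current activity rises above the historical trend, then climb with the spike.

def posting_fee(min_fee, posts_this_hour, historical_hourly_average):
    """Charge the minimum off-peak; scale the fee up during demand spikes."""
    if historical_hourly_average <= 0 or posts_this_hour <= historical_hourly_average:
        return min_fee
    spike = posts_this_hour / historical_hourly_average
    return min_fee * spike ** 2  # superlinear growth deters spam floods

print(posting_fee(min_fee=0.001, posts_this_hour=5000, historical_hourly_average=1000))  # 0.025
```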

(Irreconcilable Dispute Resolution) - A natural backstop of security exists through forking, where the customers and employees decide which option has the better history and architecture. Forking is a natural means by which customers can overthrow those with power; however, it relies on a debate among conflicting interests while the establishment has the upper hand, resulting in a divisive, suboptimal, last-second outcome. Designers should expect disagreements large enough to cause a fork to happen occasionally. Systems should be given the capability to resolve irreconcilable disagreements through prearranged means. Include a mechanism that allows auditors to pay a large bond to request a fork over a disagreement about mechanisms or facts. Then include a mechanism for determining which system or facts customers prefer. One such method is to fork a proprietary asset to determine which fork is able to sustain a higher market cap. Such a method allows a market-based means to reduce the risk of unmediated power struggles.

(Proprietary Tokens) - In order to create proprietary tokens that maintain long-term value, the dapp you are creating must have a barrier to entry, typically accomplished through economies of scale. In general, if a dapp operates more competitively with increased assets, it creates a barrier to entry stopping someone from simply copying the architecture and editing it to utilize a more stable or preferable internal asset. Currently most DAO developers fund their development by programming a proprietary token and giving themselves all starting stake in the token. However, proprietary tokens are not just a means for financing the development of the software. Even if the creators of the software chose to give themselves none of the starting stake, the existence of an asset whose value correlates with the future success of a system component allows mechanism designers to leverage it as a predictive incentive mechanism while imposing little or no additional cost on customers.

(Liquidity Restrictions) - In general, liquidity restrictions are a useful mechanism when restricting the transfer of assets increases incentive compatibility. A common use is preventing assets with pending value reductions from being traded. Assets whose value correlates with the success of a system are useful as an incentive mechanism if you require people with the power to harm the system to hold stake in the token and be unable to sell it immediately after making decisions. An example of this mechanism in use is the Dai/MKR currency system, where the value of MKR is correlated to the demand for Dai. MKR holders are incentivized by this mechanism to only vote for changes to Dai that will increase demand for it.

(Countercoin) - Any asset that serves as a counterparty by assuming risk on behalf of another asset in order to obtain a financial attribute for the other asset. Countercoins are a mechanism that allows stake-holding decision makers to be directly incentivized by the success of another, lower-risk asset. After a decision, countercoin stakeholders should be unable to transfer assets for the length of time required for the market to react to the decision. An example countercoin is MKR, which utilizes inflation to assume financial risk for Dai in order to increase the price stability of Dai.

(Floating Supply) - Floating supply is a mechanism to increase the price stability of a collateralized asset. Supply is automatically increased as needed to reduce the price, and decreased through a mechanism that makes it the dominant strategy to destroy assets that have fallen below their collateralization requirement. Effectively, if the price ever falls below target, it becomes rational to destroy the stable asset in order to redeem the slightly more valuable collateral backing it.
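A worked sketch of that contraction incentive, with hypothetical prices:

```python
# Sketch of the supply-contraction incentive: when the stable asset trades
# below target, destroying it to redeem collateral becomes the dominant strategy.

def redemption_profit(stable_price, collateral_value_per_unit):
    """Profit per unit from destroying the stable asset to claim collateral."""
    return collateral_value_per_unit - stable_price

# If the stable asset trades at 0.97 but redeems 1.00 of collateral, burning
# it yields 0.03 per unit, shrinking supply until the price recovers.
print(redemption_profit(stable_price=0.97, collateral_value_per_unit=1.00))
```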

(Bias Normalization) - In any system that incentivizes unique identities or has a working reputation system and needs to make objective quality determinations, incentive compatibility can be increased by any method of detecting and counteracting opinion bias over time. An example of bias normalization can be found in Colony’s system for rating co-workers.

(Commit-Reveal) - A commit-reveal scheme has the system accept encrypted information from multiple parties up until a deadline. After this deadline the system stops accepting encrypted submissions and begins accepting keys to decrypt the submissions. This scheme allows private information to be collected over time without information revealed by earlier submitters affecting the behavior of later submitters. The benefits of commit-reveal are typically easy to exploit or bypass using third-party smart contracts: a third-party smart contract is able to guarantee payment for revealing information, and can pay as little as it takes to change the dominant strategy to sharing private information. A suggested counter-measure is to introduce a whistleblower system where co-conspirators can be rewarded for reporting anyone that shares their commit before the reveal. This additional mechanism raises the cost of attack but does not prevent it; the third-party smart contract can respond by making use of bonds sizeable enough to punish anyone that whistleblows, changing the dominant strategy back in favor of the attackers.
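For concreteness, here is a minimal off-chain sketch of the commitment step using a salted hash; a production scheme would implement the deadline and submissions on-chain, and all names here are illustrative.

```python
# A minimal commit-reveal sketch using a salted hash commitment.
import hashlib
import secrets

def commit(answer: str) -> tuple[str, str]:
    """Return (commitment, salt). Publish only the commitment before the deadline."""
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + answer).encode()).hexdigest()
    return commitment, salt

def reveal_is_valid(commitment: str, salt: str, answer: str) -> bool:
    """After the deadline, check a revealed answer against its commitment."""
    return hashlib.sha256((salt + answer).encode()).hexdigest() == commitment

c, s = commit("YES")
print(reveal_is_valid(c, s, "YES"))  # True
print(reveal_is_valid(c, s, "NO"))   # False
```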

(Reputation) - A reputation scheme is a system of rules for summarizing information into a value score, with the intention of that value being correlated with quality. Agents with higher reputation are generally given a competitive advantage on the system. The power to influence the ability of an agent to make future profits on the system gives the system leverage for use in mechanisms. Proprietary reputation mechanisms are very vulnerable to external influence because they only pose a credible threat to agents that rationally plan to continue commerce within the system. When designing reputation systems that influence revenue it is important to mitigate the resulting incentive to conduct an exit scam. Because reputation is an unreliable and weak incentive mechanism, the disparity of advantage created by a reputation scheme is typically implemented primarily to reduce attention costs in the user interface by sorting content, and not for mission-critical purposes.

(PID Controller) - In some circumstances a system must determine the strength of a mechanism, but a calculation of the optimal value is implausible and the cost of creating an incentive-compatible role to guess a value is prohibitively expensive. One solution to autonomously discover the optimal value of an incalculable number that presents conflicts of interest over its value is a proportional–integral–derivative controller formula. A PID controller is a learning algorithm that constantly adjusts the value of a coefficient in an attempt to steer a correlated metric towards a target value. Once its reactivity to change, cumulative change, and trends is tuned, a PID controller needs no further modification in order to endlessly attempt to approach a target output metric.
To implement a PID formula, first determine the purpose of the coefficient in regards to the overall effect it has on the behavior of the system. If the coefficient only exists to influence one measurable aspect of the system, then that measurement is the goal. Create a value or formula that calculates the optimal system behavior the PID controller should target. The goal may be a single thing like a 1% overall profit margin, or an arbitrary value derived from multiple weighted sources such as meaningful activity, reputation, or quality. Next, determine the frequency at which the PID controller re-evaluates its performance. The greater the frequency, the smoother and quicker the controller is able to respond to volatility, so set the frequency so high that it either approaches pointlessness or begins to incur noticeable costs to the system, such as overfitting or transaction fees becoming unbearable.
The first portion of the PID formula is the proportional gain which, relative to the other coefficients, determines how strongly the controller reacts to sudden changes that were not part of a growing issue. It is a simple comparison of ratios between current results and target results.

[Figure: how a PID formula responds over time to a sudden change of target (blue line), given 3 different strengths of proportional gain.]

Second is the integral gain, which determines how strongly the formula reacts to not achieving its target for a long period of time. It responds with a strength proportional to the time since the target was last achieved, multiplied by the average error margin during that period.

[Figure: how a PID formula responds over time to a sudden change of target (blue line), given 3 different strengths of integral gain.]
Third is to tune the derivative gain, which determines how strongly the system reacts to long-term trends. 75% of PID controllers discard the derivative portion of the formula, likely due to the incompatibility of extrapolating complex real-world behavior from simple linear trends, and because the integral portion already largely mitigates long excursions away from the target. However, if the difficulty of achieving your target is likely to trend in one direction as time passes, it may be worthwhile to include the derivative portion.
These gains are tuned by first making a rough guess at reasonable starting values, then subjecting various gain settings to sample scenarios and observing how many observation-and-adjustment cycles it takes your PID controller to achieve its target. A properly tuned PID controller responds to error like someone learning how hard to throw an object: quickly improving at first, then making small final adjustments. A properly tuned controller is also able to handle plausible edge cases without drastic or reverberating over-correction.

[Animation: a PID formula having all 3 parameters tuned in order to reach an unchanging target. Do not read too much into this: just because your PID controller can smoothly and rapidly handle a change from 0 to 1 does not mean it can handle other scenarios appropriately.]
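A minimal discrete-time PID controller sketch matching the three gains described above; the gains, target, and example data are hypothetical and would need the tuning process just described.

```python
# A minimal discrete PID controller: proportional, integral, and derivative
# terms combine into an adjustment applied to the controlled coefficient.

class PIDController:
    def __init__(self, kp, ki, kd, target):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target
        self.integral = 0.0      # accumulated long-run error
        self.prev_error = 0.0

    def update(self, measurement, dt=1.0):
        """Return the adjustment to apply to the controlled coefficient."""
        error = self.target - measurement
        self.integral += error * dt                  # integral: sustained error
        derivative = (error - self.prev_error) / dt  # derivative: error trend
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical example: steer a fee coefficient toward a 1% profit margin.
pid = PIDController(kp=0.5, ki=0.1, kd=0.05, target=0.01)
fee = 0.02
for observed_margin in [0.030, 0.022, 0.015, 0.011]:
    fee += pid.update(observed_margin)
    print(round(fee, 4))
```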

(Fee Schedule) - A mechanism where, in order to participate, agents are required to pay a fee determined by the behavior or type of agent. The goal of a fee schedule is to incentivize behavior among one type of agent, or to adjust the profitability of different types of agents. Fee schedules are either a chart that declares which fee each type of agent will pay, a chart that declares which actions correspond to which fees, or a formula that takes behavior and/or type of agent as input and outputs the fee required. Even if you do not plan to set fees as low as possible, it is important to understand the lowest amount the fees can be set to without losing incentive compatibility. If fees are larger than required to achieve incentive compatibility, this represents an opportunity for competitors or exploiters to avoid costs or undercut fees. This is a serious risk: remember this is not a closed environment, and anyone can add a cost-saving incentive-compatible contract to the environment that exists only to replace an overpriced mechanism within the system. Fees above the amount required for incentive compatibility can typically only be added when there exists a barrier to entry for bypassing this part of the system with external contractual agreements. These barriers to entry include economies of scale, proprietary hardware, and ease of use.

(Prediction Market) - A mechanism where the system pays agents to provide their knowledge of the future. Agents with better predictions than those provided thus far are incentivized to share their information because the system pays profits to agents that improve the accuracy of the odds. Agents with inaccurate information are incentivized not to participate because they are more likely to lose value than gain it. Prediction markets typically collect bets from agents placed on different outcomes, then pay out all collected bets exclusively to the agents that bet on the correct outcome for the event. Prediction markets have proven capable of being the most accurate way to predict the future, especially when they possess several traits:
Liquidity - Liquidity is the amount of money invested in a prediction market. Because liquidity can be placed inaccurately, it is only loosely correlated with prediction accuracy; it does, however, attract additional participants. Liquidity can be increased with a maker-taker fee schedule: a scheme where agents that make conditional trades (makers) are charged lower, or even negative, fees, while agents that accept existing offers are charged more.
Number of Agents - Strongly correlated with prediction accuracy.
Knowledge of Agents - Strongly correlated with prediction accuracy. Target groups that have high knowledge of the subject for participation.
Potential Profit to Fee Ratio - If fees are the same size as potential profit, it becomes irrational to participate. For example, if fees are set to 1% and applied to volume, then it becomes pointless to purchase shares at 99% odds. While prediction markets are more accurate with no fees, if fees are applied it is important to scale them to potential profit. This results in a quadratic fee schedule: Fee = Flat_Fee * 4 * (odds) * (1 - odds), where Flat_Fee is the effective maximum fee possible and odds is a number from 0 to 1 representing the current odds of the shares being purchased (see the sketch after this list).
Optional Participation - When participation is mandatory, accuracy is reduced because those less knowledgeable about the subject are compelled to participate.
Flexible Investment - When agents can decide the amount to risk on an outcome, the more confident participants are able to signal their higher level of confidence in their prediction, which influences the odds towards more accurate numbers.
Information Inequality - When the subject matter has a high amount of information inequality, those knowledgeable about the subject are able to earn even more profits by participating. This increased participation leads to higher accuracy.
Public Interest - If the prediction market is on a subject that people find entertaining such as elections or sports, the number of participants is increased.
Subsidies - Prediction markets are typically zero sum, or worse due to fees. Agents still choose to participate in these cases because of interest, enjoyment, and information inequality. However, if accurate predictions are desired by the system for a subject with little interest, subsidies can be used. A simple way to subsidize is to reduce fees to zero and increase the payout on winning shares above 100%. A more complex way to subsidize, which further increases the accuracy of the odds, is a maker subsidy. This can be done by distributing the full subsidy when the market closes among the agents that had successfully completed order book transactions, weighted by order size multiplied by time on the order book. Another way to increase participation is to start the market with initial liquidity by placing a large starting bet on an outcome that has little chance of happening. Check closely that there is no way to exploit a subsidy, especially if using a custom method. Vulnerable prediction market subsidies are likely to be exploited by repeating the same action over and over, or by using sockpuppet accounts.
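Here is the quadratic fee schedule from the “Potential Profit to Fee Ratio” item above as a small sketch; the flat fee value is hypothetical.

```python
# Quadratic fee schedule: fees shrink as odds approach 0 or 1, where
# potential profit is smallest. flat_fee is the effective maximum fee.

def quadratic_fee(flat_fee: float, odds: float) -> float:
    """Scale the fee to potential profit; peaks at odds = 0.5."""
    return flat_fee * 4 * odds * (1 - odds)

print(quadratic_fee(flat_fee=0.01, odds=0.5))   # 0.01 (maximum fee)
print(quadratic_fee(flat_fee=0.01, odds=0.99))  # ~0.0004 (near-certain shares)
```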

FUTURE ADDITION PLANS:
Lazy Evaluation (client side computation, with on-chain result verification)
Lazy Insertion (sorting large on-chain lists without any gas costs)
Lazy Dividends (Piper Merriam's method to only distribute dividends upon request)
Mechanical Exploitability (complexity of system reduces security)
Add a section on profit margin, and what is required to achieve it. The answer is likely to be that natural centralization allows profit (first to market, trusted coders, user interface, barrier to entry).
Discover additional mechanisms employed or discussed in the blockchain application industry.
Review Vitalik's blog posts to include terminology, philosophy, and mechanisms he invented that are missing from this paper, such as “security margin”.
Add a section of “bad ideas” that includes any mechanisms that have been empirically proven to be exploitable, or ideas that people are prone to consider for mechanism use, along with why that mechanism is exploitable.
Add discussion about different ways decentralization can be applied, along with the benefits and costs of different types of centralization. Relate to this article but instead focus on relation to design decisions. https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274#.5cr6gdy19
Any knowledge in this article that is missing and applicable: https://medium.com/@graycoding/lessons-learned-from-making-a-chess-game-for-ethereum-6917c01178b6
