A better approach to Turing Complete Smart Contracts

Ethereum is the current standard for general purpose smart contracts, but anyone who has spent time developing for Ethereum knows there are some very real challenges with its design. I have spent some time this past week working on a proof of concept for a new Turing complete smart contract system and in the process have identified some major differences in philosophical approaches.

Ethereum’s Technical Challenges

Before getting into the details of what I have learned, let's review the technical challenges faced by smart contract developers on Ethereum.

1. Performance

A performance analysis in February 2016 showed that it took the Parity Ethereum client over an hour to process six months' worth of transactions (1 million blocks). All 1M blocks predated the recent denial-of-service attack on the Ethereum network.

To put this in perspective, 6 million Steem blocks, carrying an average transaction rate significantly higher than Ethereum's, can be processed in just a few minutes. That is more than a 20x difference in processing speed.

The speed at which a single CPU thread can process the virtual machine directly impacts the potential transaction throughput of the network. The current Ethereum network can sustain little more than 20 transactions per second, while Steem can sustain 1000 transactions per second.

A recent attack on Ethereum was able to completely saturate the network and deny service to others. Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service, all without any transaction fees!

There are several reasons why the EVM (Ethereum Virtual Machine) is slow:

  1. Storage access goes through LevelDB with 32-byte key/value pairs
  2. 256-bit operations are much slower than native-width arithmetic for ordinary calculations (see the sketch below)
  3. Calculating GAS consumption is part of consensus
  4. The internal memory layout of a script is also part of consensus (see 1.)
  5. There are few opportunities to optimize EVM scripts
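
To make point 2 concrete, here is a minimal C++ sketch (my own illustration, not code from any EVM implementation) of what a single 256-bit addition costs when emulated in software: four 64-bit limb additions with carry propagation, where a native 64-bit add is a single hardware instruction.

#include <cstdint>
#include <cstdio>

// 256-bit unsigned integer stored as four 64-bit limbs, least significant first.
struct u256 { uint64_t limb[4]; };

// One EVM-style ADD: four native adds plus carry propagation in software,
// where a native 64-bit add would be a single instruction.
u256 add256( const u256& a, const u256& b ) {
    u256 r{};
    uint64_t carry = 0;
    for( int i = 0; i < 4; ++i ) {
        uint64_t s  = a.limb[i] + b.limb[i];
        uint64_t c1 = s < a.limb[i];      // carry out of a + b
        r.limb[i]   = s + carry;
        uint64_t c2 = r.limb[i] < s;      // carry out of + carry
        carry       = c1 | c2;            // total carry out is at most 1
    }
    return r;                             // wraps on overflow, like the EVM
}

int main() {
    u256 a{ { ~0ull, 0, 0, 0 } };         // 2^64 - 1
    u256 b{ { 1, 0, 0, 0 } };
    u256 c = add256( a, b );              // carries into the second limb
    printf( "limb0=%llu limb1=%llu\n",
            (unsigned long long)c.limb[0],
            (unsigned long long)c.limb[1] );  // prints limb0=0 limb1=1
    return 0;
}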

Regarding claims of Unlimited Scalability

Vitalik Buterin claims that Ethereum will offer "unlimited" scalability within two years. This claim is based on the idea that not all nodes need to process all transactions. It is a bold claim that I believe will fail for some very practical reasons, which I address below.

Their approach is to "shard" the blockchain, which can be viewed as making the blockchain "multi-threaded". Each node runs "one thread", and each "thread" is capable of 20 transactions per second. In theory they can add an unlimited number of nodes and transaction volume can scale without limit.

Let's assume there exist two completely independent smart contracts. These two contracts could each run in their own shard at 20 transactions per second. But what happens if the two contracts need to communicate with each other? The solution is to pass messages from one contract to the other. Anyone who has implemented multi-threaded programs with message passing knows that it isn't worth the effort unless the message-passing overhead is low relative to the computation performed.

The overhead of message passing among nodes over the internet is very high, and adding cryptographic validation introduces further overhead. The "cost" to "read" a single value from another shard will be significant.

Lastly, developers of multi-threaded programs are familiar with the concept of each thread "owning" the data it manages. Everything that wants to touch that data goes through its owner. What happens when a single shard owns a piece of data that receives more than 20 requests per second? At some point that single thread becomes the bottleneck.

Implementing Steem with "sharding" would end up bottlenecking on the "global state" that every vote impacts. The same thing would happen for any market processing limit orders. Sharding simply doesn't scale linearly, and certainly not in an unlimited manner, as the back-of-the-envelope sketch below illustrates.
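
A back-of-the-envelope sketch (my own illustration, reusing the 20 transactions per second per-shard figure from above) shows how quickly a hot shard caps the whole system: if some fraction of all transactions must touch one shard's data, total throughput can never exceed that shard's rate divided by that fraction, no matter how many shards exist.

#include <algorithm>
#include <cstdio>

// Effective throughput of a sharded chain where each shard sustains
// `per_shard_tps` and a fraction `hot` of all transactions must touch
// one globally shared piece of state (e.g. the vote totals in Steem).
double effective_tps( int shards, double per_shard_tps, double hot ) {
    double parallel_limit = shards * per_shard_tps;  // ideal linear scaling
    double hot_limit      = per_shard_tps / hot;     // shard owning the hot state
    return std::min( parallel_limit, hot_limit );
}

int main() {
    for( int n : { 1, 10, 100, 1000 } )
        printf( "%4d shards, 10%% hot traffic -> %5.0f tx/s\n",
                n, effective_tps( n, 20.0, 0.10 ) );
    // Throughput plateaus at 200 tx/s beyond 10 shards: adding more
    // nodes past that point buys nothing.
    return 0;
}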

2. Pricing

In order to run code on Ethereum, contracts must pay with GAS. The EVM counts every instruction executed, looks up how much gas that instruction costs, and bills for it. If the contract runs out of GAS then all changes made by the contract are reverted, but the block producer still keeps the fee for the GAS.
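
For readers unfamiliar with metering, here is a toy sketch (my own simplification, with illustrative opcode costs, not actual EVM source) of the accounting loop that every conforming EVM implementation must reproduce exactly, since GAS consumption is part of consensus:

#include <cstdint>
#include <map>
#include <stdexcept>
#include <vector>

// Toy illustration of consensus-critical gas metering: every node must
// charge the exact same amount per opcode, so the cost table itself is
// effectively frozen into the protocol.
enum Op : uint8_t { ADD, SSTORE, STOP };

const std::map<Op, uint64_t> kGasCost = {
    { ADD, 3 }, { SSTORE, 20000 }, { STOP, 0 }   // illustrative numbers
};

uint64_t execute( const std::vector<Op>& code, uint64_t gas_limit ) {
    uint64_t used = 0;
    for( Op op : code ) {
        used += kGasCost.at( op );
        // Out of gas: every state change reverts, but the fee is kept.
        if( used > gas_limit )
            throw std::runtime_error( "out of gas" );
        // ... perform the operation itself here ...
        if( op == STOP ) break;
    }
    return used;   // must be bit-for-bit identical on every node, forever
}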

Implementing something like Steem on Ethereum faces the major challenge that users would have to pay $0.01 per vote, and more per post. As the number of users grows, the network would become saturated, pushing the price of GAS higher.

Now imagine that Steem wasn't the only application running on Ethereum; imagine that Golos and Augur both became popular with a million users each. The price of GAS would go up until it stunted the growth of all three applications.

The only way to bring prices down is to increase transaction throughput by improving efficiency.

Improving efficiency isn’t a uniform process. Installing a faster disk will not improve the efficiency of computation. Making computations faster will not help with disk access. All attempts at improving efficiency will necessarily impact the relative GAS cost of each operation.

Ethereum was recently forced to execute a hard fork to change gas costs. The last time Ethereum executed a hard fork, it resulted in the creation of Ethereum Classic!

It is safe to say that all attempts to optimize the EVM will change the relative cost of its operations. The GAS price can only be reduced in proportion to whichever instruction sees the least optimization.

While optimizing some instructions may increase the profit margin of the block validators, smart contract developers are still stuck paying higher prices.

Because GAS is part of consensus, all nodes need to continue processing old blocks using the old GAS calculations up until a hard fork occurs. This means that future optimizations are constrained by the need to maintain the original accounting.

3. Optimizing Code

One of the biggest sources of optimization is not improving your computer's hardware, but improving the software. In particular, compilers can work wonders at improving the performance of code running on the same machine. Compilers are able to optimize because they have access to more information about the programmer's intent. Once the code is converted to assembly, many opportunities for optimization are lost.

Imagine someone wanted to optimize an entire contract by providing a native implementation. The native implementation would produce all the same outputs given the same inputs, except it wouldn't know how to calculate the GAS costs because it wasn't run on the EVM.

4. Programmer Intent

Ethereum smart contracts are published as compiled bytecode which the interpreter processes. In order for people to comprehend a smart contract they need to read its code, but the blockchain doesn't store source code; it stores assembly. People are forced to validate that the "compiled code" matches the expected output of the source code.

There are several problems with this approach. It requires either that all compiler developers generate the same code and make the same optimizations, or that all contracts be validated against a single chosen compiler.

In either case, the compiled code is one step removed from the expressed intent of the contract's authors. Bugs in the compiler become violations of programmer intent, and these bugs cannot be fixed by fixing the consensus interpretation, because consensus does not know the source code.

A Different Approach

The creators of the C++ language have a philosophy of defining the expected behavior of a block of code without defining how that behavior should be implemented. This means that different compilers generate different code with different memory layouts on different platforms.

It also means that developers can focus on what they want to express and they can get the results they expect without unneeded restrictions on the compiler developers or the underlying hardware. This maximizes the ability to optimize performance while still conforming to a spec.

Imagine a smart contract platform where developers publish the code they want to run, the blockchain consensus is bound to a proper interpretation of the code, but not bound to how the code should be executed.

In this example, a script could be replaced with a precompiled binary using a different algorithm, and everything would be fine so long as the inputs and outputs of the black box remain the same. This is not possible with Ethereum, because the black box would need to calculate exactly how much GAS was consumed.
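
A minimal sketch of this substitution idea (my own illustration; the names and structure are hypothetical): consensus binds only the input/output behavior, so a node is free to dispatch a call to an equivalent native implementation when one is available.

#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

using Args   = std::vector<std::string>;
using Method = std::function<void( const Args& )>;

struct ScriptEngine {
    // account -> equivalent native implementation, if one is installed
    std::map<uint64_t, Method> native_overrides;

    void call( uint64_t account, const std::string& source, const Args& args ) {
        auto it = native_overrides.find( account );
        if( it != native_overrides.end() )
            it->second( args );        // fast path: black-box-equivalent native code
        else
            interpret( source, args ); // slow path: run the published script
    }

    // Stub for the script interpreter (e.g. Wren in the proof of concept).
    void interpret( const std::string& source, const Args& args ) { /* ... */ }
};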

A better approach to GAS

GAS is a crude approach to calculating a deterministic execution time. In an ideal world we would simply use wall-clock time, but different computers with different specifications and loads will all get different results. While it may not be possible to reach deterministic consensus on exactly how much time something takes, it should be possible to reach consensus on whether or not to include the transaction.

Imagine a bunch of people in a room attempting to reach consensus on whether or not to include a transaction. Each of them measures the wall-clock time it takes them to process the transaction and, using preemptive scheduling, breaks execution if it takes too long.

After taking their measurements they all vote, and if the majority say it was "ok", then everyone includes the transaction. The network does not know "how long it took"; it only knows that the transaction took an approved amount of time. An individual computer will then execute the transactions regardless of how long they take, once it knows consensus has been reached.

From a consensus perspective, this means all scripts pay the same fee regardless of the actual computation performed. Scripts are paying for "fixed-length time slices" rather than for "computations". In terms operating system developers will recognize, scripts must execute within the allotted quantum or they will be preempted and their work lost.
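
A minimal sketch of such a quantum on a single validator (my own illustration; the Script interface is hypothetical): the script runs in small steps under a wall-clock deadline and is preempted, its work discarded, if it exceeds the allotted slice. Note that no per-instruction gas accounting is needed, so the execution engine is free to be as fast as it likes.

#include <chrono>
#include <stdexcept>

// Run a script in small steps, checking a wall-clock deadline between
// steps. Anything that exceeds its time slice is preempted, and all of
// its state changes are rolled back by the caller.
template <typename Script>
void run_with_quantum( Script& script, std::chrono::microseconds quantum ) {
    auto deadline = std::chrono::steady_clock::now() + quantum;
    while( !script.finished() ) {
        script.step();   // execute one instruction (or a small batch)
        if( std::chrono::steady_clock::now() > deadline )
            throw std::runtime_error( "exceeded time quantum" );
    }
}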

The above approach is very abstract and wouldn't be practical as a direct voting implementation, but there is a way to implement it that scales without much more overhead than Steem currently uses. For starters, all block producers are on a tight schedule to produce their blocks. If one misses its time slot, the next witness goes. This means that block producers must be able to apply their block and get it propagated across the network to the majority of nodes (including the next witness) before the next block time.

This means that the mere presence of a transaction in a block is a sign that the network was able to process the block and all of its transactions in a timely manner. Each node in the network also gets a “vote” on how long a block and its transactions took to process. In effect, a node does not need to relay a block if it thinks the transactions exceeded their allocated time.

A node that objects to a block based upon its perceived execution time will still accept new blocks building on top of the perceived “bad” block. At some point either the node will come across a longer fork and switch or the “bad” block will be buried under enough confirmations (votes) that it becomes irreversible. Once it is irreversible the node will begin relaying that block and everything after it.
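
In code terms, the relay policy just described might look like this (my own sketch of the logic above, not code from any implementation):

// A node that thinks a block ran too long withholds its "vote" by not
// relaying the block, while still building on top of it. Once enough
// confirmations make the block irreversible, the node relays it anyway.
struct BlockInfo {
    bool exceeded_local_time_budget;   // this node's own wall-clock verdict
    int  confirmations;                // blocks built on top so far
};

bool should_relay( const BlockInfo& b, int irreversibility_threshold ) {
    if( !b.exceeded_local_time_budget )
        return true;                   // normal case: relay immediately
    // "Vote no" by withholding, until the network has clearly voted yes.
    return b.confirmations >= irreversibility_threshold;
}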

A block producer who wants to get paid will make sure his blocks propagate, and will therefore be "conservative" in estimating the wall-clock time other nodes will have available. The network will need to adjust block rewards to be proportional to the number of transactions included.

Due to the natural "rate limiting" enforced by the bandwidth/quantum allowance per vesting stake, it would require a large stake for any individual miner to fill their own block just to collect the bonus.

Preventing Denial of Service

One of the challenges with scripts is that it costs an attacker nothing to generate an infinite loop. Validating nodes end up consuming resources even if the final conclusion is to reject the script. In this case the validator doesn’t get paid for the resources they consumed.

There are two ways that validators can deal with this kind of abuse:

  1. Maintain local blacklists / whitelists of scripts, accounts, and/or peers
  2. Require a proof-of-work on each script

Using proof of work, it is possible for a validator to know that the producer of the script expended a minimum amount of effort. The more work done, the greater the "wall clock" time the validator will allow the script to execute, up to a maximum limit. Someone wishing to propagate a computationally expensive transaction will need to generate a more difficult proof of work than someone generating a cheaper one.
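
As an illustration of how a validator might translate proof-of-work into an execution budget (my own sketch; the constants and the doubling policy are hypothetical):

#include <algorithm>
#include <chrono>
#include <cstdint>

// Each leading zero bit in the transaction's proof-of-work hash doubles
// the wall-clock budget the validator will grant the script, up to a
// hard ceiling so that no proof can demand unbounded execution time.
std::chrono::microseconds allowed_time( unsigned pow_leading_zero_bits ) {
    const uint64_t base_us = 50;       // budget for a trivial transaction
    const uint64_t cap_us  = 50000;    // absolute maximum (50 ms)
    uint64_t budget = base_us << std::min( pow_leading_zero_bits, 20u );
    return std::chrono::microseconds( std::min( budget, cap_us ) );
}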

This proof-of-work combined with TaPoS (Transactions as Proof of Stake) means we now have Transactions as Proof of Work which collectively secure the entire network. This approach has the side effect of preventing witnesses from “stuffing their own block” just to get paid, because each transaction they generate will require a proof of work. The blockchain can therefore reward witnesses for transactions based upon the difficulty of the proof of work as an objective proxy for the difficulty of executing the script.

Proof of Concept

I recently developed some code using the Wren scripting language integrated with experimental blockchain operations. Here is how you might implement a basic "cryptocurrency" in a single smart contract:

test_script = R"(
    class SimpleCoin {
        static transfer( from, to, amount ) {
          var from_balance = 0
          var to_balance = 0

          if( from != Db.current_account_authority().toString ) 
              Fiber.abort( "invalid authority" )

          var a = Num.fromString( amount )
          if( a < 0  ) Fiber.abort( "cannot transfer negative balance" )

          if( Db.has( from ) ) 
               from_balance = Num.fromString(Db.fetch( from ))
          if( Db.has( to ) )   
               to_balance   = Num.fromString(Db.fetch( to ))
            
          if( from_balance <= 0 && Db.script_account().toString != from) 
               Fiber.abort( "insufficient balance" )

          from_balance = from_balance - a
          to_balance   = to_balance + a

          Db.store( from, from_balance.toString )
          Db.store( to, to_balance.toString )
        }
    }
)";

 trx.operations.emplace_back( set_script{ 0, test_script } );
 trx.operations.emplace_back( 
     call_script{ 
             account_authority_level{1,1}, 0, 
             "SimpleCoin", 
              "transfer(_,_,_)", {"1","0","33"}
      }
 );

I introduced two blockchain level operations: set_script, and call_script. The first operation assigns the script to an account (account 0), and the second operation invokes a method defined by the script.

The scripting environment has access to blockchain state via the Db API. Through this API it can load and store script-specific data as well as query information about the current authority level of an operation. The call_script operation will assert that "account 1" at authority level "1", aka active authority, has approved the call. It will then invoke the script on "account 0" and call SimpleCoin.transfer( "1", "0", "33" ).

The transfer method is able to verify that the current_account_authority matches the from field of the transfer call.
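
The Db API is specific to this proof of concept. A plausible host-side shape for it, reconstructed from the script above (my own sketch, not the actual implementation), is a small string key/value store scoped to the script's account, plus two authority queries:

#include <map>
#include <string>

// Reconstructed host-side interface behind the Wren `Db` class used by
// SimpleCoin. Keys and values are strings; storage is scoped to the
// account that owns the script. (Hypothetical sketch.)
class Db {
public:
    bool has( const std::string& key ) const { return table_.count( key ) != 0; }

    std::string fetch( const std::string& key ) const { return table_.at( key ); }

    void store( const std::string& key, const std::string& value ) {
        table_[key] = value;   // in the real chain this would write to ChainBase
    }

    // The account whose authority approved the current operation, and the
    // account on which the executing script is installed.
    std::string current_account_authority() const { return authority_; }
    std::string script_account() const { return owner_; }

private:
    std::map<std::string, std::string> table_;
    std::string authority_ = "1";   // the caller in the example above
    std::string owner_     = "0";   // the account holding the script
};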

Benchmark of Proof of Concept

I ran a simulation processing thousands of transactions, each containing a single call to 'SimpleCoin.transfer', and measured the time it took to execute. All told, my machine was able to process over 1100 transactions per second through the interpreter. This level of performance is prior to any optimization and/or caching of the script; in other words, the measured 1100 transactions per second included compiling the script 1100 times. A smarter implementation would cache the compiled code for significant improvements.
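
Such a cache could be as simple as memoizing the compiled script per account, under the assumption that a compiled Wren module is reusable across calls (CompiledScript and compile below are hypothetical names):

#include <cstdint>
#include <memory>
#include <string>
#include <unordered_map>

struct CompiledScript { std::string source; /* opaque compiled module */ };

std::shared_ptr<CompiledScript> compile( const std::string& source ) {
    return std::make_shared<CompiledScript>( CompiledScript{ source } );  // stub
}

// Memoize compilation per account so repeated call_script operations skip
// the compile step that dominated the 1100 tx/s benchmark.
class ScriptCache {
public:
    std::shared_ptr<CompiledScript> get( uint64_t account,
                                         const std::string& source ) {
        auto it = cache_.find( account );
        if( it == cache_.end() )
            it = cache_.emplace( account, compile( source ) ).first;
        return it->second;
    }

    // set_script replaces an account's code, so its entry must be evicted.
    void invalidate( uint64_t account ) { cache_.erase( account ); }

private:
    std::unordered_map<uint64_t, std::shared_ptr<CompiledScript>> cache_;
};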

To put this in perspective: assuming Ethereum "Unlimited" scaled perfectly, it would take 55 nodes to process the same transactions that my single node just processed. By the time Wren is optimized with proper caching, and Ethereum is discounted for necessary synchronization overhead, a smart contract platform based upon Steem and Wren technology could be hundreds of times more efficient than Ethereum.

Why Turing Complete Smart Contracts

Decentralized governance depends in part upon decentralized enforcement of contracts. A blockchain cannot know in advance every contract that might be beneficial, and the politically centralizing requirement of hard forks to support new smart contracts limits the ability to organically discover what works.

With a proper smart contract engine, developers can experiment with “low performance scripts” and then replace their “slow scripts” with high performance implementations that conform to the same black-box interface. Eliminating the need to deterministically calculate GAS and memory layout is absolutely critical to realizing the desired performance enhancements.

Conclusion

The technology behind Steem combined with the described smart contract design will be hundreds of times more scalable than Ethereum while entirely eliminating transaction fees for executing smart contracts. This can open up thousands of new applications that are not economically viable on Ethereum.

Comments

Have you thought yet about how Wren would create and modify objects to be persisted across transactions/operations (with proper chain reorganization behavior, i.e. somehow implementing those dynamic objects specified in Wren within ChainBase)? What about billing consumption of this persistent memory held (for who knows how long) as part of the database state? The PoW + SP-based rate limiting only "bills" computation time for running the operations.

Also, if you no longer need to bill at the instruction level, why use an interpreted language like Wren? Use something that can be compiled (either JIT or AOT). It is fine if it is a managed language for safety reasons, but you could probably handle unmanaged ones as well with proper process sandboxing and IPC. This would give a lot more flexibility to the developers in their choice of a language/VM to use to develop their smart contract or DApp.

How about Java or Scala? Scala is Java but more compact, elegant, with all the benefits of type checking and annotations.

Well, we could use the JVM and Java bytecode. Then third-party developers could compile their code written in Java, Scala, or other JVM-compatible languages to Java bytecode and post that bytecode to the blockchain. Another possibility is the CLR with Mono (which could support the C# language). The good thing about both choices is that we don't need process isolation to protect against invalid memory access (which, if allowed, could let third-party code corrupt the ChainBase database), assuming that any submitted bytecode using unsafe operations or unauthorized native APIs is disallowed by the system. By doing it this way with managed languages (i.e. not allowing languages like C or C++), we can avoid costly IPC (inter-process communication). Nevertheless, how to handle a generalized object-mapping solution between the safe VM and the raw data available in the ChainBase memory region is still not very clear to me.

Keep in mind that there may be some licensing issues with the JVM. I think the licensing issues are actually better with Mono now that Microsoft acquired Xamarin (they relicensed Mono under MIT).

Nice article. Being a BitShares maximalist myself, I'm wondering what part of this article would have been different if you had substituted "the technology behind Steemit, BitShares, and other Graphene based blockchains." Is there something about Steemit that makes it the only one you can mention in this post?

In theory any graphene-based chain could easily adapt the code.

I have some questions.

  1. If it used Solidity instead of Wren, how big would the difference in performance be? Would it be too slow to run apps on Steem as well?
  2. Could Solidity be easily ported to Wren?

My point is about marketing. If Steem can adopt Solidity while being much faster and cheaper than Ethereum, we can absorb Ethereum's DApps and their developers very easily. "Same DApps on a faster and cheaper platform" would sound like a really attractive slogan to developers.
If Solidity vs. Wren is like Android vs. iOS, my suggestion is like iPhone 4 (Ethereum) vs. iPhone 7 Plus.

This would be awesome if it could be integrated with Steem. I'm sure I'm not the only one who has lost faith in Ethereum, and having used it a lot I find its blockchain to be incredibly slow compared to Steem. I would just love it if we could beat the Ethereum guys to creating a scalable smart contract solution. It would be one more use for the Steem blockchain.

I would love to have that :)

Great article, and food for some mind-twisting, since I don't (yet) understand all the principles behind the technology.

Great news also about the improvements to the UI. They will be warmly welcomed, I am certain.

A question - is it feasible to apply the sharding concept to a Graphene-based blockchain and avoid the bottlenecks that you mention? OK, you might answer that it isn't necessary nor possible but ... just asking :)

TNX!

Short answer is, yes we can apply sharding to Steem, BitShares, and any other blockchain.

Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service and all without any transaction fees!

Were those bandwidth DDoS attacks filtered by perimeter nodes, or validation attacks absorbed by validating nodes?

The price of GAS would go up until it stunted the growth of all three applications.

Incorrect. The price of GAS would increase due to higher demand, but the smaller amount of GAS then needed would still reflect the unchanged cost of validating a script at that higher price.

The native implementation would cause all the same outputs given the same inputs, except it wouldn’t know how to calculate the GAS costs because it wasn’t run on the EVM.

It could simply compute its own cost based on some counters. If it knows its optimized implementation is less costly than the EVM, then it does no harm (i.e. remains compliant) by keeping the GAS if it is depleted before the script completes. Others verifying the depletion case would run the EVM, as this wouldn't cost them more than running the native version. For non-depleted scripts, validators run their most efficient native version.

Require a proof-of-work on each script

Unless this is more expensive in resources than the cost of validating the script, the attacker has an asymmetric DoS advantage. So all you've done is shift the cost of paying the fee to generating the equivalent proof-of-work.

And unless each script consumer has access to a premium ASIC, the attacker still has an asymmetric advantage. And if you say the script consumer can farm out this proof-of-work, then you've shifted the DoS attack to said farm.

Local Blacklist / White list scripts, accounts, and/or peers

That is effective for bandwidth DDoS, but the Nash Equilibrium can be gamed in a system that is open w.r.t. submitting data for validation.

I think perhaps you don't understand why cross-sharding breaks Nash Equilibrium.

It's great to see some light shed on the practical details of smart contracts on Steem. Looking forward to getting my hands on the ChainBase upgrade and Wren scripting...

Releasing a pre-release of Steem using memory-mapped files today. Startup and shutdown times are now almost instant, and with the rare exception of crashing in the middle of applying a block, the chain database is almost never corrupted.

Is steemd able to identify that the database is corrupted on resume and do an automatic reindex, or do you just get undefined behavior? And same question but for the scenario of the computer suddenly shutting off in the middle of writing to the database?

If the computer shuts off in the middle of a write then it may result in undefined behavior; same if the program crashes while updating or if the file is modified by an outside program.

We plan to add a flag that we set when we start writing and we clear when we finish writing. Once this flag is in place then we can detect any crashes that leave the memory in undefined state.
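
A sketch of what such a flag could look like (my own illustration of the idea described above, not the shipped code):

#include <atomic>

// A dirty flag living inside the memory-mapped database file: set before
// any write, cleared after. If the process dies mid-write, the flag
// persists in the file and signals that a reindex is required on restart.
struct DatabaseHeader {
    std::atomic<uint32_t> dirty;
};

template <typename F>
void with_write( DatabaseHeader& hdr, F&& mutate ) {
    hdr.dirty.store( 1 );   // a crash after this point is detectable
    mutate();
    hdr.dirty.store( 0 );   // a clean finish leaves dirty == 0
}

bool needs_reindex( const DatabaseHeader& hdr ) { return hdr.dirty.load() != 0; }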

The OS should take care of synchronizing the memory pages if the process crashes outside of a write operation.

The blockchain log is an append-only file now, which means it should never get corrupted and you should always be able to resync.

Whether his or her SBD price can go up once a week or every month, boss?
