A better approach to Turing Complete Smart Contracts

in #blockchain • 4 years ago (edited)

Ethereum is the current standard for general purpose smart contracts, but anyone who has spent time developing for Ethereum knows there are some very real challenges with its design. I have spent some time this past week working on a proof of concept for a new Turing complete smart contract system and in the process have identified some major differences in philosophical approaches.

Ethereum’s Technical Challenges

Before getting into the details of what I have learned, let's review the technical challenges faced by smart contract developers on Ethereum.

1. Performance

A performance analysis in February 2016 showed that it took the Parity Ethereum client over an hour to process 6 months' worth of transactions (1 million blocks). All 1M blocks were prior to the recent denial-of-service attack on the Ethereum network.

To put this in perspective, 6 million Steem blocks, with an average transaction rate significantly higher than Ethereum's, can be processed in just a few minutes. This represents over a 20x difference in processing speed.

The speed at which a single CPU thread can process the virtual machine directly impacts the potential transaction throughput of the network. The current Ethereum network is able to sustain little more than 20 transactions per second. Steem, on the other hand, can sustain 1000 transactions per second.

A recent attack on Ethereum was able to completely saturate the network and deny service to others. Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service and all without any transaction fees!

There are several reasons why the EVM (Ethereum Virtual Machine) is slow.

  1. Storage access goes through LevelDB with 32-byte key/value pairs
  2. 256-bit operations are much slower than ordinary calculations
  3. Calculating GAS consumption is part of consensus
  4. The internal memory layout of a script is also part of consensus (see 1.)
  5. There are few opportunities to optimize EVM scripts
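To make reason 2 concrete, here is a minimal Python sketch (not EVM code) of what 256-bit word semantics imply: every arithmetic result must be reduced modulo 2^256, work that a 64-bit host has to emulate in software.

```python
# Illustration (not EVM code): every EVM arithmetic op works on
# 256-bit words modulo 2**256, so a 64-bit host must emulate wide
# arithmetic in software.
WORD = 2 ** 256

def evm_add(a, b):
    """256-bit wrapping addition, as the EVM's ADD opcode behaves."""
    return (a + b) % WORD

def evm_mul(a, b):
    """256-bit wrapping multiplication (EVM MUL semantics)."""
    return (a * b) % WORD

# Wraparound: the maximum word plus one rolls over to zero.
print(evm_add(WORD - 1, 1))   # → 0
# Even a modest multiply produces a wide result that must be reduced.
print(evm_mul(2 ** 63, 2 ** 63) == 2 ** 126)  # → True
```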

Regarding claims of Unlimited Scalability

Vitalik Buterin claims that Ethereum will offer “unlimited” scalability within 2 years. This claim is based upon the idea that not all nodes need to process all transactions. This is a bold claim that I believe will fail for some very practical reasons which I will address below.

Their approach is to “shard” the blockchain, which can be viewed as a way of making the blockchain “multi-threaded”. Each node will run “one thread” and each “thread” will be capable of 20 transactions per second. In theory they can add an unlimited number of nodes and the transaction volume can scale to a limitless amount.

Let's assume there exist two completely independent smart contracts. These two contracts could each run in their own shard at 20 transactions per second. But what happens if these two contracts need to communicate with each other? The solution is to pass messages from one contract to another contract. Anyone who has implemented multi-threaded programs with message passing knows that it isn't worth the effort unless the ratio of computation to message-passing overhead is high enough.

The overhead of message passing among nodes over the internet is very high. Adding cryptographic validation also introduces significant overhead. The “cost” to “read” a single value from another shard will be significant.

Lastly, developers of multi-threaded programs are familiar with the concept of each thread “owning” the data it manages. Everything that wants to touch that data goes through its owner. What happens when a single shard owns a piece of data that receives more than 20 requests per second? At some point the single thread becomes the bottleneck.

Implementing Steem with “sharding” would end up bottlenecking on the “global state” that every vote impacts. The same thing would happen for any market processing limit orders. Sharding simply doesn’t scale linearly and certainly not in an unlimited manner.
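The bottleneck argument can be sketched with a toy throughput model; the numbers and the single "hot shard" assumption are illustrative only:

```python
# A toy model (illustrative numbers only): each shard processes
# SHARD_TPS transactions per second, but a fraction `hot` of all
# transactions must also be applied by one "global state" shard.
SHARD_TPS = 20.0

def effective_tps(num_shards, hot):
    """Upper bound on network throughput with one hot shard."""
    parallel_limit = num_shards * SHARD_TPS      # ideal linear scaling
    if hot == 0:
        return parallel_limit
    hot_shard_limit = SHARD_TPS / hot            # the serial bottleneck
    return min(parallel_limit, hot_shard_limit)

# With no shared state, 100 shards scale linearly to 2000 tps ...
print(effective_tps(100, 0.0))    # → 2000.0
# ... but if 5% of transactions touch global state, adding shards
# beyond 10 buys nothing: the hot shard caps the network at 400 tps.
print(effective_tps(100, 0.05))   # → 400.0
print(effective_tps(1000, 0.05))  # → 400.0
```

This is the same shape of argument as Amdahl's law: the serial portion, not the shard count, dominates as the network grows.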

2. Pricing

In order to run code on Ethereum, contracts must pay with GAS. The EVM counts every instruction executed, looks up how much gas that instruction costs, and bills it. If the contract runs out of GAS then all changes made by the contract are reverted, but the block producer still keeps the fee for the GAS consumed.
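As a rough sketch of this billing rule (the opcodes and gas costs below are invented, not Ethereum's actual schedule):

```python
# A minimal sketch (hypothetical opcodes and costs) of the gas rule
# described above: every instruction is billed, and running out of
# gas reverts state changes while the producer still keeps the fee.
GAS_COST = {"ADD": 3, "SSTORE": 20000, "PUSH": 3}   # illustrative values

def execute(program, state, gas_limit):
    snapshot = dict(state)        # kept so we can restore on out-of-gas
    gas_used = 0
    for op, arg in program:
        gas_used += GAS_COST[op]
        if gas_used > gas_limit:
            state.clear()
            state.update(snapshot)                  # revert all changes ...
            return {"ok": False, "fee": gas_limit}  # ... fee is still kept
        if op == "SSTORE":
            key, value = arg
            state[key] = value
    return {"ok": True, "fee": gas_used}

state = {}
prog = [("PUSH", None), ("SSTORE", ("balance", 42))]
print(execute(prog, state, gas_limit=30000))  # → {'ok': True, 'fee': 20003}
print(state)                                  # → {'balance': 42}
# Re-running with too little gas reverts the write but bills the limit.
print(execute(prog, state, gas_limit=100))    # → {'ok': False, 'fee': 100}
```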

Implementing something like Steem on Ethereum has the major challenge that users would have to pay $0.01 per vote and more per post. As the number of users grows the network would get saturated pushing the price of GAS higher.

Now imagine that Steem wasn't the only application running on Ethereum; imagine that Golos and Augur both became popular with a million users each. The price of GAS would go up until it stunted the growth of all three applications.

The only way to bring prices down is to increase transaction throughput by improving efficiency.

Improving efficiency isn’t a uniform process. Installing a faster disk will not improve the efficiency of computation. Making computations faster will not help with disk access. All attempts at improving efficiency will necessarily impact the relative GAS cost of each operation.

Ethereum was recently forced to execute a Hard Fork to change gas costs. Last time Ethereum had a hard fork it resulted in the creation of Ethereum Classic!

It is safe to say that all attempts to optimize the EVM will change the relative cost of the operations. The GAS price can only be reduced by an amount proportional to the instruction that sees the least optimization.

While optimizing some instructions may increase the profit margin of the block validators, Smart Contract developers are still stuck paying higher prices.

Because GAS is part of consensus, all nodes need to continue processing old blocks using the old GAS calculations up until a hard fork occurs. This means that future optimizations are constrained by the need to maintain the original accounting.
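One way to picture this constraint is a versioned gas schedule, where replaying history selects the table in force at each block height. The fork heights and costs below are illustrative, not Ethereum's real values:

```python
# Sketch of the constraint described above: because gas is part of
# consensus, a node replaying history must bill old blocks with the
# schedule that was in force when they were produced. Fork heights
# and costs here are invented for illustration.
SCHEDULES = [
    (0,         {"SLOAD": 50,  "CALL": 40}),    # genesis schedule
    (2_463_000, {"SLOAD": 200, "CALL": 700}),   # repriced at a hard fork
]

def gas_cost(op, block_num):
    """Cost of `op` under the schedule active at `block_num`."""
    cost = None
    for fork_block, table in SCHEDULES:
        if block_num >= fork_block:
            cost = table[op]
    return cost

print(gas_cost("SLOAD", 1_000_000))  # → 50  (pre-fork accounting)
print(gas_cost("SLOAD", 3_000_000))  # → 200 (post-fork accounting)
```

Every future optimization must either fit inside the current table or wait for another hard fork that adds a new row.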

3. Optimizing Code

One of the biggest sources of optimization is not improving the hardware of your computer, but improving the software. In particular, compilers can work wonders at improving the performance of code running on the same machine. Compilers have the ability to optimize because they have access to more information about the programmer's intent. Once the code is converted to assembly, many opportunities for optimization are lost.

Imagine someone wanted to optimize an entire contract by providing a native implementation. The native implementation would produce all the same outputs given the same inputs, except it wouldn't know how to calculate the GAS costs because it wasn't run on the EVM.

4. Programmer Intent

Ethereum smart contracts are published as compiled bytecode which the interpreter processes. In order for people to process and comprehend a smart contract they need to read code, but the blockchain doesn't store source code; it stores assembly. People are forced to validate that the “compiled code” matches the expected output of the source code.

There are several problems with this approach. It requires that all compiler developers generate the same code and make the same optimizations or it requires that all contracts be validated based upon the chosen compiler.

In either case, the compiled code is one step removed from the expressed intent of the contract writers. Bugs in the compiler now become violations of programmer intent, and these bugs cannot be fixed by fixing the consensus interpretation because consensus does not know the source code.

A Different Approach

The creators of the C++ language have a philosophy of defining the expected behavior of a block of code without defining how that behavior should be implemented. This means that different compilers generate different code with different memory layouts on different platforms.

It also means that developers can focus on what they want to express and they can get the results they expect without unneeded restrictions on the compiler developers or the underlying hardware. This maximizes the ability to optimize performance while still conforming to a spec.

Imagine a smart contract platform where developers publish the code they want to run, the blockchain consensus is bound to a proper interpretation of the code, but not bound to how the code should be executed.

In this example, a script could be replaced with a precompiled binary using a different algorithm and everything would be ok so long as the inputs and outputs of the black box remain the same. This is not possible with Ethereum because the black box would need to calculate exactly how much GAS was consumed.
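The black-box property can be stated as a simple equivalence check; the functions here are hypothetical stand-ins for a published script and a native replacement:

```python
# Sketch of the "black box" idea above: a node may swap a slow script
# for a faster native implementation as long as outputs match for all
# inputs. Both functions are hypothetical examples.
def interpreted_sum(n):
    """Reference behavior: the 'published script' semantics."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def native_sum(n):
    """An optimized drop-in replacement using a closed form."""
    return n * (n + 1) // 2

# Consensus is bound to behavior, not implementation: any replacement
# is valid iff it is observationally equivalent to the script.
assert all(interpreted_sum(n) == native_sum(n) for n in range(200))
print(native_sum(1000))  # → 500500
```

Under the EVM this substitution is forbidden, because the replacement cannot reproduce the exact per-instruction GAS accounting of the interpreter.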

A better approach to GAS

GAS is a crude approach to calculate a deterministic execution time. In an ideal world we would simply use wall clock time, but different computers with different specifications and loads will all get different results. While it may not be possible to reach a deterministic consensus on exactly how much time something takes, it should be possible to reach consensus on whether or not to include the transaction.

Imagine a bunch of people in a room attempting to reach consensus on whether or not to include a transaction. Each of them measures the wall-clock time it takes them to process the transaction and, using preemptive scheduling, aborts execution if it takes too long.

After taking their measurements they all vote and if the majority say it was “ok”, then everyone includes the transaction. The network does not know “how long it took”, it only knows that the transaction took an approved amount of time. An individual computer will then execute the transactions regardless of how long they take once they know consensus has been reached.

From a consensus perspective, this means all scripts pay the same fee regardless of the actual computation performed. Scripts are paying for “fixed length time slices” rather than paying for “computations”. In terms that operating system developers may be familiar with, scripts must execute within the allotted quantum or they will be preempted and their work lost.
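A toy simulation of this voting rule (all timings and the quantum are invented) might look like:

```python
# A toy simulation of the consensus rule above: each node measures its
# own wall-clock time for a transaction, and the transaction is
# included only if a strict majority finish within their quantum.
def include_transaction(measured_times, quantum):
    """measured_times: per-node seconds; quantum: allowed time slice."""
    ok_votes = sum(1 for t in measured_times if t <= quantum)
    return ok_votes * 2 > len(measured_times)   # strict majority says "ok"

# 4 of 5 nodes finish within a 10 ms quantum -> included.
print(include_transaction([0.004, 0.006, 0.005, 0.009, 0.030], 0.010))  # → True
# Only 2 of 5 finish in time -> rejected, even though some nodes were fast.
print(include_transaction([0.004, 0.006, 0.020, 0.030, 0.050], 0.010))  # → False
```

Note that the network never learns "how long it took"; each node only contributes a binary yes/no against its own clock.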

The above approach is very abstract and wouldn’t be practical in a direct voting implementation, but there is a way to implement this that scales without much more overhead than is currently used by Steem. For starters, all block producers are on a tight schedule to produce their block. If they miss their time slot then the next witness will go. This means that block producers must be able to apply their block and get it propagated across the network to the majority of nodes (including the next witness) before the next block time.

This means that the mere presence of a transaction in a block is a sign that the network was able to process the block and all of its transactions in a timely manner. Each node in the network also gets a “vote” on how long a block and its transactions took to process. In effect, a node does not need to relay a block if it thinks the transactions exceeded their allocated time.

A node that objects to a block based upon its perceived execution time will still accept new blocks building on top of the perceived “bad” block. At some point either the node will come across a longer fork and switch or the “bad” block will be buried under enough confirmations (votes) that it becomes irreversible. Once it is irreversible the node will begin relaying that block and everything after it.

A block producer, who desires to get paid, will want to make sure that his blocks propagate and will therefore be “conservative” in his estimates of wall clock time that other nodes will have. The network will need to adjust block rewards to be proportional to the number of transactions included.

Due to the natural “rate limiting” enforced by bandwidth / quantum per vesting stake it would require a large stake for any individual miner to fill their own block just to collect the bonus.

Preventing Denial of Service

One of the challenges with scripts is that it costs an attacker nothing to generate an infinite loop. Validating nodes end up consuming resources even if the final conclusion is to reject the script. In this case the validator doesn’t get paid for the resources they consumed.

There are two ways that validators can deal with this kind of abuse:

  1. Local Blacklist / White list scripts, accounts, and/or peers
  2. Require a proof-of-work on each script

Using proof of work it is possible for a validator to know that the producer of the script consumed a minimum amount of effort. The more work done, the greater the “wall clock” time the validator will allow the script to execute up to a maximum limit. Someone wishing to propagate a transaction that is computationally expensive will need to generate a more difficult proof of work than someone generating a transaction that is less expensive.
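A sketch of that policy, assuming a simple SHA-256 leading-zero-bits proof of work and an invented mapping from difficulty to permitted wall-clock time:

```python
import hashlib

# Sketch of the rule above (all parameters invented): a transaction
# carries a proof of work, and validators allow wall-clock execution
# time in proportion to the demonstrated difficulty, up to a hard cap.
def leading_zero_bits(data):
    digest = hashlib.sha256(data).digest()
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(256)
    return len(bits) - len(bits.lstrip("0"))

def mine(payload, difficulty_bits):
    """Find a nonce whose hash shows at least `difficulty_bits` of work."""
    nonce = 0
    while leading_zero_bits(payload + nonce.to_bytes(8, "big")) < difficulty_bits:
        nonce += 1
    return nonce

def allowed_ms(difficulty_bits, base_ms=1.0, cap_ms=50.0):
    """More demonstrated work -> a longer permitted time slice, capped."""
    return min(base_ms * 2 ** difficulty_bits, cap_ms)

nonce = mine(b"call_script:transfer", 8)   # cheap demo difficulty
assert leading_zero_bits(b"call_script:transfer" + nonce.to_bytes(8, "big")) >= 8
print(allowed_ms(3))   # → 8.0 (ms budget)
print(allowed_ms(10))  # → 50.0 (capped)
```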

This proof-of-work combined with TaPoS (Transactions as Proof of Stake) means we now have Transactions as Proof of Work which collectively secure the entire network. This approach has the side effect of preventing witnesses from “stuffing their own block” just to get paid, because each transaction they generate will require a proof of work. The blockchain can therefore reward witnesses for transactions based upon the difficulty of the proof of work as an objective proxy for the difficulty of executing the script.

Proof of Concept

I recently developed some code using the Wren scripting language integrated with experimental blockchain operations. Here is how you would implement a basic “crypto currency” in a single smart contract:

test_script = R"(
    class SimpleCoin {
        static transfer( from, to, amount ) {
          var from_balance = 0
          var to_balance = 0

          if( from != Db.current_account_authority().toString )
              Fiber.abort( "invalid authority" )

          var a = Num.fromString( amount )
          if( a < 0 ) Fiber.abort( "cannot transfer negative balance" )

          if( Db.has( from ) )
               from_balance = Num.fromString( Db.fetch( from ) )
          if( Db.has( to ) )
               to_balance   = Num.fromString( Db.fetch( to ) )
          if( from_balance <= 0 && Db.script_account().toString != from )
               Fiber.abort( "insufficient balance" )

          from_balance = from_balance - a
          to_balance   = to_balance + a

          Db.store( from, from_balance.toString )
          Db.store( to, to_balance.toString )
        }
    }
)";

trx.operations.emplace_back( set_script{ 0, test_script } );
trx.operations.emplace_back( call_script{ account_authority_level{1,1}, 0,
                                          "transfer(_,_,_)", {"1","0","33"} } );

I introduced two blockchain level operations: set_script, and call_script. The first operation assigns the script to an account (account 0), and the second operation invokes a method defined by the script.

The scripting environment has access to the blockchain state via the Db API. From this API it can load and store script-specific data as well as query information about the current authority level of an operation. The call_script operation will assert that “account 1”, at authority level “1” (aka active authority), has approved the call. It will then invoke the script on “account 0” and call SimpleCoin.transfer( “1”, “0”, “33” ).

The transfer method is able to verify that the current_account_authority matches the from field of the transfer call.
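For readers unfamiliar with Wren, here is the same transfer logic restated in Python, with the Db API modeled as a plain dict. The parameter names are hypothetical stand-ins for the real Db calls, and the sketch faithfully keeps the original script's quirk of never comparing a positive balance against the amount sent:

```python
# Hypothetical stand-ins: `db` models the Db key/value store, and the
# script_account / current_authority parameters model Db.script_account()
# and Db.current_account_authority() from the Wren script above.
def transfer(db, script_account, current_authority, frm, to, amount):
    if frm != current_authority:
        raise ValueError("invalid authority")
    a = float(amount)
    if a < 0:
        raise ValueError("cannot transfer negative balance")
    from_balance = float(db.get(frm, 0))
    to_balance = float(db.get(to, 0))
    # The account carrying the script may go negative; this is how
    # new coins enter circulation.
    if from_balance <= 0 and script_account != frm:
        raise ValueError("insufficient balance")
    # As in the original Wren script, a positive balance is never
    # checked against the amount being sent.
    db[frm] = from_balance - a
    db[to] = to_balance + a

db = {}
# The issuer (account "0", which carries the script) mints by sending:
transfer(db, script_account="0", current_authority="0", frm="0", to="1", amount="33")
print(db)  # → {'0': -33.0, '1': 33.0}
# Account "1" passes coins along under its own (active) authority:
transfer(db, script_account="0", current_authority="1", frm="1", to="2", amount="10")
print(db["2"])  # → 10.0
```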

Benchmark of Proof of Concept

I ran a simulation processing thousands of transactions, each containing a single call to SimpleCoin.transfer, and measured the time it took to execute. All told, my machine was able to process over 1100 transactions per second through the interpreter. This level of performance is prior to any optimization and/or caching of the script. In other words, the measured 1100 transactions per second included compiling the script 1100 times. A smarter implementation would cache the compiled code for significant improvements.

To put this in perspective, assuming Ethereum “Unlimited” scaled perfectly it would take 55 nodes to process the same transactions that my single node just processed. By the time Wren is optimized with proper caching and Ethereum is discounted for necessary synchronization overhead a smart contract platform based upon Steem and Wren technology could be hundreds of times more efficient than Ethereum.

Why Turing Complete Smart Contracts

Decentralized governance depends in part upon decentralized enforcement of contracts. A blockchain cannot know in advance every contract that might be beneficial, and the politically centralizing requirement of hard forks to support new smart contracts limits the ability to organically discover what works.

With a proper smart contract engine, developers can experiment with “low performance scripts” and then replace their “slow scripts” with high performance implementations that conform to the same black-box interface. Eliminating the need to deterministically calculate GAS and memory layout is absolutely critical to realizing the desired performance enhancements.


The technology behind Steem combined with the described smart contract design will be hundreds of times more scalable than Ethereum while entirely eliminating transaction fees for executing smart contracts. This can open up thousands of new applications that are not economically viable on Ethereum.


Have you thought yet about how Wren would create and modify objects to be persisted across transactions/operations (with proper chain reorganization behavior, i.e. somehow implementing those dynamic objects specified in Wren within ChainBase)? What about billing consumption of this persistent memory held (for who knows how long) as part of the database state? The PoW + SP-based rate limiting only "bills" computation time for running the operations.

Also, if you no longer need to bill at the instruction level, why use an interpreted language like Wren? Use something that can be compiled (either JIT or AOT). It is fine if it is a managed language for safety reasons, but you could probably handle unmanaged ones as well with proper process sandboxing and IPC. This would give a lot more flexibility to the developers in their choice of a language/VM to use to develop their smart contract or DApp.

How about Java or Scala? Scala is Java but more compact, elegant, with all the benefits of type checking and annotations.

Well, we could use the JVM and Java bytecode. Then, third-party developers could compile their code written in Java, Scala, or other JVM-compatible languages to Java bytecode and post that bytecode to the blockchain. Another possibility is CLR with Mono (could support the C# language). The good thing about both choices is that we don't need to use process isolation to protect against invalid memory access (which, if allowed, could let the third-party code corrupt the ChainBase database), assuming that any submitted bytecode that uses unsafe operations or unauthorized native APIs is disallowed by the system. By doing it this way with these managed languages (i.e. not allowing languages like C or C++), we can avoid doing costly IPC (inter-process communication). Nevertheless, how to handle a generalized object mapping solution between the safe VM and the raw data available in the ChainBase memory region is still not very clear to me.

Keep in mind that there may be some licensing issues with the JVM. I think the licensing issues are actually better with Mono now that Microsoft acquired Xamarin (they relicensed Mono under MIT).

Nice article. Being a BitShares maximalist myself, I'm wondering what part of this article would have been different if you had substituted "the technology behind Steemit, BitShares, and other Graphene based blockchains." Is there something about Steemit that makes it the only one you can mention in this post?

In theory any graphene-based chain could easily adapt the code.


I have some questions.

  1. If it used Solidity instead of Wren, how big would the performance difference be? Would it be too slow to run apps on Steem as well?
  2. Could Solidity be easily ported to Wren?

My point is with regard to marketing. If Steem can adopt Solidity while being much faster and cheaper than Ethereum, we can absorb Ethereum's DApps and their developers very easily. "Same DApps on a faster and cheaper platform" would sound like a really attractive slogan for developers.
If Solidity vs. Wren is like Android vs. iOS, my suggestion is like iPhone 4 (Ethereum) vs. iPhone 7 Plus.

This would be awesome if it could be integrated with Steem. I'm sure I'm not the only one who has lost faith in Ethereum, and having used it a lot I find the blockchain to be incredibly slow compared to Steem. I would just love it if we could beat the Ethereum guys to create a scalable smart contract solution. It would be one more added use for the Steem blockchain.

I would love to have that :)

Great article and food for some mind twisting, since I don't understand (yet) all the principles behind the technology.

Great news also about the improvements to the UI. They will be warmly welcomed, I am certain.

A question - is it feasible to apply the sharding concept to the graphene-based blockchain and avoid the bottlenecks that you mention? OK, you might answer that it isn't necessary nor possible but ... Just asking :)


Short answer is, yes we can apply sharding to Steem, BitShares, and any other blockchain.

Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service and all without any transaction fees!

Were those bandwidth DDoS attacks filtered by perimeter nodes, or validation attacks absorbed by validating nodes?

The price of GAS would go up until it stunted the growth of all three applications.

Incorrect. The price of GAS would increase due to higher demand, but the lesser amount of GAS needed would still reflect the unchanged cost of validating a script at that higher price.

The native implementation would cause all the same outputs given the same inputs, except it wouldn’t know how to calculate the GAS costs because it wasn’t run on the EVM.

It could simply compute its own cost based on some counters. If it knows its optimized implementation is less costly than the EVM, then it doesn't harm (i.e. remains compliant) by keeping the GAS if it is depleted before the script completes. Others verifying the depletion case would run the EVM, as this wouldn't cost them more than running native version. For non-depleted scripts, validators run their most efficient native version.

Require a proof-of-work on each script

Unless this is more expensive in resources than the cost of validating the script, then the attacker has an asymmetric DoS advantage. So all you've done is shifted the cost of paying the fee to generating the equivalent proof-of-work.

And unless each script consumer has access to a premium ASIC, then the attacker still has an asymmetric advantage. And if you say the script consumer can farm out this ASIC, then you've shifted the DoS attack to the said farm.

Local Blacklist / White list scripts, accounts, and/or peers

That is effective for bandwidth DDoS, but Nash Equilibrium can be gamed by an open system w.r.t. submitting data for validation.

I think perhaps you don't understand why cross-sharding breaks Nash Equilibrium.

That's great to see some light on the practical details of smart contracts on Steem. Looking forward to getting my hands on the Chainbase upgrade and Wren scripting...

Releasing a pre-release of steem using memory-mapped files today. Startup and shutdown times are now almost instant, and with the rare exception of crashing in the middle of applying a block, the chain database is almost never corrupted.

Is steemd able to identify that the database is corrupted on resume and do an automatic reindex, or do you just get undefined behavior? And same question but for the scenario of the computer suddenly shutting off in the middle of writing to the database?

If the computer shuts off in the middle of a write then it may result in undefined behavior; same if the program crashes while updating or if the file is modified by an outside program.

We plan to add a flag that we set when we start writing and we clear when we finish writing. Once this flag is in place then we can detect any crashes that leave the memory in undefined state.

The OS should take care of synchronizing the memory pages if the process crashes outside of a write operation.

The blockchain log is an append-only file now, which means it should never get corrupted and you should always be able to resync.

whether his or her sbd hargar can go up once a week or every month boss

Hello, I'm new to Steemit. May I please use the image of the handshaking for a video presentation? Thank you.

Could this mean a hard fork is required to implement the system to steem?

          if( from_balance <= 0 && Db.script_account().toString != from) 
               Fiber.abort( "insufficient balance" )

This code looks strange.
By the way, do we need to check whether from_balance >= a?

The name of Vitalik distracted me from investing any money into Ethereum.
So, I'm sincerely wishing you that Steem would worth at least 100x ether worth :D

but Russians are so trustworthy, how can you say that :).... how about saying Vitalic is the attacker for all the attacks on etherum, and the DAO, that would make a great story someday

Haha we both know that a great story can be made out of anything :D
Actually I hate the name Vitalik since my childhood... But, nevermind lol

Let it go @richman, share it with us :)

On a different note, it's pretty strange that this post earns more than the post about my hamster...
How come? What's wrong with the world today? Your thoughts?

Just being cautious about making a hamster richer than richman, maybe.

Good to hear from you again, I have linked this post on Ethereum's subreddit as I would like to have some feedback from their community on the matter.

Is there a tentative roadmap for this approach? I have a use case involving something relatively similar to Steemit, but with slightly more flexibility (just blocks of text saved on the blockchain, but with a few more modifiers: not simply upvoting / downvoting, something a bit more complex; still user-generated content, not just transactions). I wonder if, in the scenario presented by you (a new breed of smart contracts), this would be feasible. Thank you for the update, really appreciate it.

Scripting will not be ready for months. There is a very large amount of work required to test Scripting. At least now we have a working proof of concept to start refining.

Thank you. If there's anything I can do to support when you start testing, I'd be happy to help.

Thank you for this great and very comprehensive article! I am invested in Ether and still believe it will take off, but I am more convinced that Steemit will be huge! Reading in your article that there are possibilities to have smart contracts running on Steem technology as well is an added bonus! I just powered up 40 ETH to Steem Power. Great work and please keep us updated on this project.

Reading... great article. Informative. You need to post more often; it is always interesting.

4. Programer Intent should read 4. Programmer Intent. Also an instance in that section that also lacks an extra M. Not being nit picky. I know what your "Intent" was but you are one of the faces of Steemit/Steem, so trying to be helpful.

While this is being tested and developed, can we also get some basic user features for Steemit? The value of Steem is supposed to be based on the social media platform built on top of it. But there doesn't appear to be much that is attracting potential users to the site - other than the possibility of making money. With the price continuing to plummet and SBD payouts being halted for now, that incentive is losing its luster. Can we expect some of the desired and needed features any time in the near future?

Your concept for smart contracts looks pretty solid, though. It would be great if it can be implemented. That would definitely give Steem a leg up on the competition.

User-facing features are being actively worked on by 4 full-time engineers. These include a next-generation editor that is fantastic! Image uploading and notifications are also almost done. We should be deploying these features to a public test later today.

Well that's great news! Thank you for sharing that.

You should also look at @dan-atstarlite's blog post today about a virtual marketplace. I know it has been mentioned in the past, but those ideas can be a gateway for non-crypto and non-bloggers to come to Steemit and to familiarize themselves with the site and with cryptocurrencies.

The account link doesn't seem to be valid.

Sorry. Fixed now.

That is great news! I think the drop in Steem prices with no bottom in sight, and SBDs no longer being paid out in rewards for posting really has the morale of the Steemit community pretty low. Are you guys going to comment on that in the near future, or announce any plans that might rectify the current downward spiral which appears to be gaining momentum as it falls?

Supply and demand will reach parity soon. One of the challenges with Steem Power is that those looking to cash out are forced to do it over time. This means they cannot "dump" it all and drive the price down quickly (or prevent it from rising quickly).

At some point the price will fall to a level where the early miners are no longer willing to sell. At that point things will consolidate.

At some point the price will fall to a level where the early miners are no longer willing to sell.

Where do you think that point might be? The Steem price is already well below the lows from early May and late June/early July. Market cap is around $25 million, down from the near $400 million peak. There doesn't appear to be any slowing of the downward momentum.

If most of the large stakeholders are still selling into that, then what would make us believe that they wouldn't just keep selling whatever they have left, despite the price continuing to crash? The return on the initial "investment" has probably been more than sufficient to not really care if they're getting a few hundred dollars per week or a few thousand per week at this point. Those who are witnesses and are actively curating (which is becoming more automated) on the platform aren't even really seeing their stake decrease. So powering down and selling, even if they're only getting $200 for it, is still likely a weekly profit for them. The price could drop to a penny and they would still get hundreds of dollars per week.

Dan, is there any data available to compute the price of Steem for the early miners? This may help determine how much more downside risk remains. The early miners can sell below this figure, but it would help to know.

IMHO, most if not all early miners got their investment (in mining) back a long time ago (even before the pump in July).

This is exciting! A new user interface for steemit.

I think this blog post has the potential to convert ether investors into steem investors, and that's good for the price of steem :)

I was not aware that SBD payments were halted... link?

Also, just to clarify in case there was confusion - payments have not 'stopped'; they are just being paid in STEEM instead of SBD.

There's no link to an announcement. It's just the current state of payouts due to Steem/SBD supplies and market cap.

It doesn't exactly tell us what's happening now, but this is from August 20th, by @dantheman -


@dantheman is busy enough but I want to get into the Larieum token early!

I wonder why you would not use C++ rather than an obscure language like Wren.

Preventing Denial of Service

  1. Local blacklist / whitelist scripts

If PoW does not work out so well, I like this one the best. I think we could build a trusted relay connection table. A relay is a full node that will run the contract and ensure it meets the threshold. It will only relay the contract / transaction if it is good. A recipient will trust that connection based on the accuracy of this calculation (it will of course do the work too). Nodes should seek to get a healthy number of trusted connections. So a network that is attacked should naturally form a larger mesh of trusted connections and therefore increase the number of hops an attacker must get through to reach the witnesses.

Proof of work is fundamentally wasteful. There are a lot of old ASICs out there that can't be used because they are not profitable. Those could be used to pass a large cost on to our network by solving PoW. I doubt this would be a sustained attack, but if it were we might need to run contracts on ASICs to keep up and incur the hardware and electricity costs being used against us. That would really suck for the life of the platform if that got into the blockchain.

I'm excited to hear that Steem will be a leader in the Smart Contract field, as it will be an important tool for commerce in the world of Steem.

Thank you for this post. It's too technical for me, but I keep reading and learning.

I am interested in the use of blockchain technology in science. I was astonished to see Turing machines mentioned here. I have been toying with the idea of using blockchain as a novel approach to the most essential subject of NP-complete problems.

There is a wonderful book written for laymen; most certainly you know about it already. It's about the Traveling Salesman Problem (TSP), a very simple case of the problem that became sort of the flagship of the class of NP problems:


Maybe you can recommend some reading regarding the potential use of blockchain in science, particularly in the area of NP-complete problems and its flagship instance, the TSP.

PS: Besides being an old student of physics, I'm an accountant. I arrived at this post thanks to @belerophon and his gentle indication of readings regarding triple-entry accounting.

Great article; your posts are very valuable for staying updated on the subject. Congratulations.

Nice to see you back @dantheman!

Great read!

I ask only one thing if you want my support for this. Name the compiler stimpy. Please!!!