How to Process 100M Transfers / Second on a Single Blockchain

in blockchain •  2 years ago

In yesterday's article I suggested that blockchains need to be able to process transactions at a rate of 10 million per second (offline) in order to sustain 1000 transactions per second in real time while adding only 1 hour per year to the replay time. Today I would like to present a solution for achieving this level of performance in the specific case of transfers.

Let's assume there are 3000 transfers in a block (blocks are 3 seconds apart). To process these 3000 transfers at a rate of 10M per second means we have 300us (0.3 milliseconds) to apply them. Using the current ChainBase architecture, performing these transfers would take 0.015 seconds (assuming 5us per transfer). We need to accelerate this operation by a factor of 50 to achieve our goals.
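As a sanity check on these figures, here is the pure arithmetic (assuming the 5us-per-transfer ChainBase cost stated above):

```python
# Back-of-the-envelope check: how fast must transfer application be to
# replay 3000-transfer blocks at 10M transfers per second?

TRANSFERS_PER_BLOCK = 3000
TARGET_RATE = 10_000_000              # transfers per second during replay
PER_TRANSFER_COST = 5e-6              # seconds, assumed ChainBase cost

budget = TRANSFERS_PER_BLOCK / TARGET_RATE            # time allowed per block
sequential = TRANSFERS_PER_BLOCK * PER_TRANSFER_COST  # single-threaded cost
speedup_needed = sequential / budget

print(f"budget per block: {budget * 1e6:.0f} us")      # 300 us
print(f"sequential cost:  {sequential * 1e3:.1f} ms")  # 15.0 ms
print(f"speedup needed:   {speedup_needed:.0f}x")      # 50x
```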

Single Thread is not Viable

The act of transferring from one account to another involves subtracting from one balance and adding to another balance. In computational terms it doesn’t get much simpler than this and there is very little room to improve single threaded performance.

Order of Operations Matters

Let's assume an initial condition where Alice has 100 SBD and Bob has 0 SBD. A single block can support Alice transferring 100 SBD to Bob and Bob transferring 100 SBD to Eve, so long as Alice's transfer is applied first.

As things are currently defined it is not possible to perform two transfers in parallel because Bob’s transfer is not valid until after Alice’s transfer completes. Furthermore, if they were operating in parallel there would be a race condition on reading and writing the balances.
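The dependency is easy to see in a minimal sketch (a toy in-memory balance map; the account names come from the example above):

```python
# Why order matters: Bob's transfer to Eve is only valid after
# Alice's transfer to Bob lands.

balances = {"alice": 100, "bob": 0, "eve": 0}

def transfer(frm, to, amount):
    if balances[frm] < amount:
        raise ValueError(f"{frm} has insufficient balance")
    balances[frm] -= amount
    balances[to] += amount

# Valid only in this order:
transfer("alice", "bob", 100)
transfer("bob", "eve", 100)
assert balances == {"alice": 0, "bob": 0, "eve": 100}

# The reversed order would raise: Bob cannot send 100 SBD he does not yet have.
```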

Redefining the Semantics of a Transfer

What if we redefined the requirements for a valid block to require that each account may receive at most one deposit or one withdrawal? Under this more restrictive definition we can apply all 3000 transfers in parallel. Since each transfer takes only 5us, a per-account budget of 15us lets us allow each account up to 3 deposits or withdrawals per block. With 3-second blocks this means that each account has a maximum transaction rate of 1 TPS, but the blockchain as a whole can process 200,000 * THREAD COUNT transfers per second. It is entirely possible to build a workstation that supports 44 cores, which means this blockchain should be able to process 8.8M transfers per second.
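The validity rule can be sketched as a simple pre-check (the data model is illustrative: a transfer is a (from, to, amount) tuple). If no account is touched more than the allowed number of times, no two transfers in the block share a balance, so all of them can be applied concurrently without locks:

```python
# Sketch of the proposed block-validity rule: every account appears in
# at most `limit` balance-changing operations per block.

from collections import Counter

MAX_OPS_PER_ACCOUNT = 1  # the strictest form described above

def block_is_parallelizable(transfers, limit=MAX_OPS_PER_ACCOUNT):
    """transfers: list of (from_account, to_account, amount) tuples."""
    touches = Counter()
    for frm, to, _amount in transfers:
        touches[frm] += 1
        touches[to] += 1
    return all(count <= limit for count in touches.values())

block = [("alice", "bob", 100), ("carol", "dave", 25)]
assert block_is_parallelizable(block)            # disjoint accounts: OK

conflicting = [("alice", "bob", 100), ("bob", "eve", 100)]
assert not block_is_parallelizable(conflicting)  # bob appears twice: reject
```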

Achieving 100M Transfers per Second

With some small tweaks to memory layout and operation ordering it should be easy to make up the difference required to get to 10M transfers per second. With a bit more optimization designed to run the blockchain on a GPU this could scale to 100M transfers per second.

Real World Throughput

Having a CPU that can process 10M transfers per second means we have a blockchain that can sustain 1000 transactions per second with a growth rate of 100 KB/sec (3 TB per year), an rsync rate of 1 hour per year, and a requirement to read the blockchain from disk at 1 GB per second.
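The storage and replay figures check out with the same kind of arithmetic (assuming roughly 100 bytes per transaction, which is what 100 KB/sec at 1000 TPS implies):

```python
# Checking the sustained-throughput figures above.

TPS = 1000
BYTES_PER_TX = 100                        # implied by the 100 KB/s figure
SECONDS_PER_YEAR = 365 * 24 * 3600

growth_per_sec = TPS * BYTES_PER_TX                   # 100 KB/s
growth_per_year = growth_per_sec * SECONDS_PER_YEAR   # ~3.15 TB

DISK_READ = 1_000_000_000                 # 1 GB/s replay read rate
replay_hours = growth_per_year / DISK_READ / 3600     # ~0.9 hours/year

print(f"{growth_per_year / 1e12:.2f} TB per year, "
      f"{replay_hours:.1f} replay-hours per year")
```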

It should be obvious that downloading a 30TB blockchain over a gigabit connection would take days after 10 years of operation at 1000 transactions per second. Keeping track of all blockchain history for all time will be a very expensive undertaking at 1000 transactions per second.

Eliminating the Need to Replay

A large part of our scalability problem is requiring all nodes to replay all transactions to reliably derive the current state. This replay time could be entirely eliminated if a blockchain had a fixed and well defined “current state”. If the only thing the blockchain was concerned with was balance transfers, then every smartphone owner in the world could have an account with a database size of less than 256GB.

Steem has intentionally kept the structure of the blockchain state “undefined” to give us the greatest flexibility to optimize going forward, especially as we keep adding features. A simple currency blockchain has no need to add an unbounded number of features. This means we can define an optimal data-structure for its state. The hash of this data structure could be included in the blockchain from time to time and new users could simply download this state and start applying new blocks.
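A minimal sketch of such a protocol-defined state commitment (the serialization here is illustrative, not Steem's actual format): hash a canonical encoding of the account-to-balance map so every node derives the same digest, commit that digest in a block from time to time, and let new nodes download the snapshot instead of replaying.

```python
# State snapshot hashing: if the state is just a canonical map of
# account -> balance, its hash can be committed into a block.

import hashlib
import json

def state_hash(balances):
    # Canonical encoding: sorted keys, fixed separators, so every node
    # hashing the same state gets the same digest.
    encoded = json.dumps(balances, sort_keys=True,
                         separators=(",", ":")).encode()
    return hashlib.sha256(encoded).hexdigest()

state = {"alice": 0, "bob": 0, "eve": 100}
h = state_hash(state)
# Insertion order must not matter, only the state itself:
assert h == state_hash(dict(reversed(list(state.items()))))
```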

Key to Growth

A blockchain that desires to scale must achieve the following:

  1. perform a small number of well-defined tasks
  2. operate on a protocol-defined state
  3. change minimally in function over time
  4. ensure that all transactions in a block are independent
  5. minimize the number of sequential steps per block

The best way to achieve these properties while maintaining flexibility is to have a robust cross-chain communication protocol and keep all new / changing features on separate chains. You could have one chain for STEEM, one for SBD, one for STEALTH STEEM, and one for STEALTH SBD. Each of these parallel chains would have the potential to process 1000’s of transactions per second and have a fixed, well-defined state. This alone gives the transfer throughput a 4x scalability boost.

Leveraging Ample Real-Time Capacity

There is a clear gap between the 1000 transactions per second being processed in real-time and the 10M transactions per second that get processed during replay. During real time evaluation of the blockchain the CPU is idle 99.9% of the time. We can use this idle time to perform extra computations that are not necessary during replay.

Among these calculations is the scheduling of operations within a block. The protocol can require that all valid blocks keep operations sorted by account. It can even organize operations by memory access pattern and calculate the best execution strategy. These calculations take extra time to perform while building the blockchain, but can be skipped entirely during replay. When replaying the blockchain we are only concerned with deriving the proper state, not verifying that all the steps are authorized and well formed (that is guaranteed by the hash linking of blocks).
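The producer-side scheduling described above can be sketched as a greedy batching pass (illustrative names, not Steem's actual code): operations are grouped into batches whose account sets are disjoint, so each batch can be applied fully in parallel and replaying nodes simply execute the batches without redoing the analysis.

```python
# Greedy scheduler: group a block's operations into batches that touch
# disjoint sets of accounts, so each batch is conflict-free.

def schedule(transfers):
    """transfers: list of (frm, to, amount). Returns a list of batches,
    each batch touching a disjoint set of accounts."""
    batches, batch_accounts = [], []
    for op in transfers:
        frm, to, _amount = op
        for batch, accounts in zip(batches, batch_accounts):
            if frm not in accounts and to not in accounts:
                batch.append(op)
                accounts.update((frm, to))
                break
        else:  # conflicts with every existing batch: start a new one
            batches.append([op])
            batch_accounts.append({frm, to})
    return batches

ops = [("alice", "bob", 100), ("carol", "dave", 25), ("bob", "eve", 100)]
batches = schedule(ops)
assert len(batches) == 2   # bob appears twice, forcing a second batch
assert batches[0] == [("alice", "bob", 100), ("carol", "dave", 25)]
```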

This can be viewed as a compiler translating a program into assembly: once the assembly is generated, the CPU can process it very quickly. Another analogy is a CPU that re-orders instructions to maximize performance. Block producers are responsible for building blocks that properly order transactions for maximum performance. Any block produced with conflicting memory accesses would be rejected.

Conclusion

A poorly defined problem can demand an entirely sequential solution; however, if we add a few small constraints on the problem it can easily become trivially parallel. Block producers can do a little bit of extra work up front to ensure that all blocks can be replayed using parallel operation evaluation. By simplifying the problem and defining the output state format as part of the protocol, we can also eliminate the need to replay altogether.

Comments

What if we redefined the requirements for a valid block to require that each account may receive at most one deposit or one withdrawal?

What would happen if someone spams Alice's account in every block with a zero-fee transfer of 0.0000000001 STEEM? When would Alice get the chance to transfer/withdraw her 100 SBD, since in every block her account would be in deposit mode?

·

Rate limiting would prevent it.

·
·

Also, it would bring extra limitations if Alice NEEDS to receive a big quantity of payments in a short period, for example in the scenario of Alibaba's Nov. 11 event (link).

IMO deposits in parallel should be allowed. And perhaps withdrawals should have higher priority than deposits.

(edit: rate limiting should still apply, to determine which account has higher priority to withdraw, but then it would need to be made part of consensus, and the downside is that it would cause longer replay times if it had to be checked during replay (edit 2: it seems it can neither be made part of consensus nor checked during replay; as suggested in the post, it is a bit more computation while generating a block, but less while validating)).

(edit 3: theoretically it's possible to include one withdrawal and many deposits in one block, if the amount of the withdrawal is less than the initial balance. However, if there is more than one operation on the same balance object in a single block, whether multiple deposits or one withdrawal plus some deposits, that object has to be modified serially, which will have a negative impact on performance)

·
·
·

Also, it would bring extra limitations if Alice NEEDS to receive a big quantity of payments in a short period, for example in the scenario of Alibaba's Nov. 11 event (link).

This! Amazon could have an account, and dozens of people may want to pay at the same time, especially during the Christmas season!

·
·
·
·

They can have multiple accounts.

·
·
·
·
·

That sounds more like a hack than a solution.
Wouldn't it be troublesome for the merchant and the customers to deal with that?

·
·

Let me get this straight:

  • We prevent people from spending more than once a block because of double-spending during parallel evaluation
  • We prevent people from receiving more than once a block because of write-locks to the object in the database

Correct?

·
·
·

The limit is 3x per block.

·
·

But a "strong" account could delay (attack) other accounts' activity until "they give up", I mean, until it does enough damage to the network(?)...

·
·
·

There can be account-to-account rate limits and other protections.

·
·
·
·

Thank you!

·
·
·

We can allow an unlimited number of transfers per account if we assume the block producer "summarizes" the changes. The real time validation of the block may take .15 seconds, but replay should be massively parallel.

·
·
·
·

Hello! I need help, urgently! @dantheman A few days ago I tried to transfer my 29.468 STEEM from my Steemit account to Poloniex, but I got the memo key wrong: I used the Steemit one when it should have been the Poloniex one (which I realized when I checked and the transfer was not done). How can I fix this? How can I recover my 29.468 STEEM? Please, someone help me. I'm new to Steem; I've been on this platform for less than a month. Thank you!

·
·
·
·
·

Sandra, please contact poloniex.com support, they are the ones who have the Steem.

Thanks!

·
·
·
·
·

@dantheman In my wallet, under my transaction history, it shows this:
3 days ago: Transfer 29.468 STEEM to poloniex STM8KQjJQAB8PLTutqWfsA7RoVQVe4qsTgxwQvFyKgdYSejnZcCPY

·
·
·
·

@ned No, Poloniex has nothing to do with it; the error happened here on Steemit, and it was mine: instead of entering my Poloniex memo key, I entered the Steemit memo key.

·
·
·
·

@ned It is as if I had transferred to this Steemit user, because it has the same name (poloniex): https://steemit.com/@poloniex I discovered this by clicking on the name that appears in my transaction history in my wallet.

·
·
·
·
·

Hello @sandra16! I understand, because I was also a victim of this fraud, and since the same thing happened to many people, I decided to make a post, which I invite you to share and spread to the Steemit community to prevent others from picking this user on our platform. We are fighting on our own to get back what they stole from us. Talk to you soon, regards!
Attached is the link to the post where I tell the Steemit community what happened:
https://steemit.com/steem/@donation/attention-stamp-notification-in-steem

The best way to achieve these properties while maintaining flexibility is to have a robust cross-chain communication protocol and keep all new / changing features on separate chains. You could have one chain for STEEM, one for SBD, one for STEALTH STEEM, and one for STEALTH SBD.

If we were to go this route, would each chain need to have its own coin associated with it, along with miners, witnesses, etc. - or would they all somehow be tied together via STEEM?

·

Just one token and set of witnesses

·
·

Cool, that's awesome!

·

OK so is this the same as side-chains on Bitcoin or the Lightning network?

·
·

Definitely not.

Both achieve the effect of scale, but to be frank this is much, much better. Once segwit and lightning kick in, I anticipate some issues in the world of bitcoin, specifically because they more or less move some transactions off of the blockchain. @dantheman's solution does not propose such madness.

·
·
·

OK, cool. I am also wondering if we could take some of Ethereum's thunder with regard to "smart contracts" too. We have the speed and scaling, which they have not yet fixed, and from what I am reading there seems to be a loss of confidence in their blockchain overall. I'm not being mean to Ethereum, but it seems there is an opportunity there right now.

·
·
·
·

I must toss out an alternate view.

I've been a critic of steem-- but I hope, a "known friendly" critic. I say this because I now feel compelled to compare to ethereum. AFAIK, ethereum, and thus "ether" the "gas" that pays for transactions.... both do a grand total of nothing at all. Really.

So here's my reply on that: I do not think that we want any of that thunder. For more on some pretty awesome stuff, have a look into these things, comparatively:

  • Graphene
  • Tendermint
  • Hyperledger

From my limited (but-limited-knowledge-puts-me-in the top 1% of programmers for comparative blockchain knowledge and top .001% of humans for the same) knowledge, I want to say that there are way better things on the horizon. G, TM, and HL all share some common points. There's an incredible discussion occurring here:

https://github.com/cosmos/cosmos/issues/43

Very nice.

The best way to achieve these properties while maintaining flexibility is to have a robust cross-chain communication protocol and keep all new / changing features on separate chains.

Would also give you the ability to create contextual chains. To transfer the perspective of the chain away from transactions and towards the context in which we would like to see the data. You could also build a chain from links of chains. IPFS is the closest I've seen come to this.

That's a genius solution!!
The limitations of one 'transfer' (e.g. balance changing op) per account per block makes sense to enable parallel execution of ops. I like the idea a lot, however, I am not sure if our sister Blockchain BitShares can deal with this limitation as it means that you cannot build trading bots that trade on multiple markets at the same time, but would require operators to run multiple accounts (still not a show-stopper).

Anyways, I like where this is going!

·

Will we ever need this kind of velocity with BitShares? It makes sense for a social platform, but 100,000 Tx/sec seems to be enough even with BitShares eating a large % of the Visa and Mastercard markets. Doesn't it?

·
·

Good point .. but BitShares is not just VISA, but also NYSE on a global scale ..

·
·
·

Good point .. ;p

·
·
·

How to get it there, though?

The potential is there for sure. How to make it go over the top?

If you say so, Dan ;)) I read every word with great interest and much excitement. I understood the words and some of the concepts, but that's it. What I understand for sure is that you and your team are giving blockchains' mainstream suitability, and their potential for acceptance, the best chance in the whole of crypto.

It is about dividing the tasks. One op per graph node per block is not the right solution: every transaction involves two nodes by user, and multiple nodes by token. Maybe one per payer. Synchronization with massive parallelization requires prioritization of propagation, ensuring frequent transactions are clustered but, whenever possible, spread across many nodes by frequency. Geographical aggregation and association aggregation form transient maps that show how to keep sync without total convergence of blockchain state. So long as the nodes always see the current state of the parts they work with (to some degree of immediately visible provenance), it doesn't have to fully converge, or be permanently stored.

There are many approaches to this; I am very interested to see what people turn up.

Resteemed and tweeted

Now you have the reason why @dantheman wasn't at the SteemFest! He's working to fix this world and I love it!

Instead of limiting the transactions per account per block, wouldn't it be enough to simply sort the transactions in a block by account?

Dear CTO,

Resteemed and Upvoted. Excellent post! Thanks!


0.1 Steem was sent to CTO.

<3 Loving this.

Is it possible for us to build a database-free client that runs off the heads of the blocks or something?
Think of something similar to Electrum for Bitcoin.

Since each transfer only takes 5us and we have 15us, we can allow each account up to 3 deposits or withdraws per block. With 3 second blocks this means that each account has a maximum transaction rate of 1 TPS but the blockchain as a whole can process 200,000 * THREAD COUNT transactions per second.

It's a very good deal. Also there is Moore's law for further scaling. ;)

upvoted, followed.

Wow, this is really interesting.

As a small-time mining enthusiast I always wondered whether the extra computing power used to mine a cryptocurrency could be put to some use other than just raising the difficulty to keep the block timer proportional.

Do you think it would be possible to use an algorithm that lets miners "idle" during low transaction output, but if "something big happens" and huge activity causes transactions to increase, then it slowly starts working the GPU/CPU harder to keep up? Or is that maybe a completely different field, and the GPU usage is required for the hashes anyway... I am not so technical, I just like to imagine solutions to problems. :P

So if we want to do 100,000 TPS of real-time microtransactions, what do we do?

Wow! That was so well explained that I understood the first third of it.

I visualize it as the same sort of thing that voice to text does when it is only expecting numbers (phone dialing), they rarely get it wrong because they know what to expect. Because of the constraints, it only has to do a small amount of work to verify its analysis. This frees up the channel for outgoing work.

Then you said the word "replay" and I started getting lost. But that's good, a month ago I would not have understood a thing.

·

Replaying means verifying all existing transactions when a new node downloads the blockchain from scratch.

In the past three-and-a-half years, I learned a lot about crypto by doing exactly what you did here: I kept reading articles I didn't fully understand.

·

Steem_Land tweeted @ 02 Dec 2016 - 04:05 UTC

How to Process 100M Transfers / Second on a Single Blockchain — Steemit

steemit.com/blockchain/@da… / https://t.co/SE0t5wDjlr

@SteemUps @SteemitPosts @steemit @steemiobot

Disclaimer: I am just a bot trying to be helpful.

if we add a few small constraints on the problem it can easily become trivially parallel.

That's incredibly beautiful, and (to me) a bit counterintuitive. Maybe it feels natural to someone who's used to thinking about the nuts and bolts of computation, but it's not what I immediately would have expected. Good post!

I wonder how one would combat internet fraud if everything in the world of finance were done by blockchain...?

·


Dan, you might just save the world. Keep up the work!

·

Candle.

It sounds like a great step forward for Steem and the blockchain technology overall. I don't understand it all, but adding a conditional requirement made more transfers possible at one time by including the same accounts in one block, that I do get. How much is the sorting by account or memory access pattern going to improve the speed?

·

Any step you don't perform while producing a block, you end up performing every block every time you reindex. Even if it only saved .000001 second it would add up to meaningful time after millions of blocks. Sorting 3000 transactions can take a lot more than .000001 seconds.

Great post

This post has been linked to from another place on Steem.

Learn more about and upvote to support linkback bot v0.5. Flag this comment if you don't want the bot to continue posting linkbacks for your posts.

Built by @ontofractal

It is entirely possible to build a workstation that supports 44 cores which means that this blockchain should be able to process 8.8M transfers per second.

Porting to GPU threading might be more practical, although I don't know if there are any obstacles.

A blockchain that desires to scale must achieve the following:
-perform a small number of well defined tasks
-operate on a protocol defined state
-minimal change in function over time
-ensure that all transactions in a block are independent
-minimize the number of sequential steps per block

I'd also add an encoding scheme (in effect, custom-made compression) to minimize the bytes used. The fewer bytes there are, the easier it is to fit them into faster storage media... It may not be a problem now, but over time it will be, if a blockchain processes a lot of data...

Let's make it happen!!!