Seed - Dev. Discussion - Transaction Squashing - Considerations For Jitter (Part 2)

in #blockchain · 7 years ago

SeedTransactionSquashing.png

This writeup is Part 2 of a series of writings on transaction squashing. If you have not yet read about the proposed transaction squashing solution, Part 1 can be found here.


Previously, we discussed my proposed solution for reducing blockchain sizes. I refer to this proposed solution as transaction squashing.

Transaction squashing is the simple concept of merging sequential changes into one single change. Git, the version-control system behind most open-source code contribution, models its commit history as a directed acyclic graph (DAG) and has long proven the ability to squash multiple sequential commits into one merged commit, all while operating on a DAG. Without going into the specifics, transaction squashing would be very different under the hood in its implementation, however the general idea is the same.
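To make the core idea concrete, here is a minimal sketch of squashing sequential changes, using an invented data shape (a transaction as a dict of account balance deltas) rather than the actual Seed protocol structures. Several sequential transactions collapse into one merged change with the same net effect:

```python
# Hypothetical sketch: merge sequential balance-delta transactions into one.
def squash(transactions):
    """Collapse a list of {account: delta} transactions into a single change."""
    merged = {}
    for tx in transactions:
        for account, delta in tx.items():
            merged[account] = merged.get(account, 0) + delta
    # Entries whose net effect is zero add nothing to history, so drop them.
    return {acct: d for acct, d in merged.items() if d != 0}

txs = [
    {"alice": -5, "bob": +5},
    {"bob": -2, "carol": +2},
    {"alice": +1, "bob": -1},
]
print(squash(txs))  # {'alice': -4, 'bob': 2, 'carol': 2}
```

The three transactions above become one merged change, exactly the way a squashed Git commit carries the net effect of the commits it replaced.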

We're now going to analyze the proposed solution. This post is my attempt at poking holes in the mechanism, debating whether certain aspects are flaws or not, and concluding with whether the proposal is worth pursuing.


Jitter in Transaction Squashing

In the transaction squashing proposal, once a validated transaction meets certain criteria, it gets pulled from the directed acyclic graph (DAG) of transactions. This lucky transaction and all of its children are pulled out of the DAG. Thanks to one convenient feature of DAGs, each node and its children can be viewed as a tree, which means these transactions are effectively already arranged in a Merkle tree for us. These transactions get squashed together into their own testament block, a block which represents the summed updates of the transactions it contains.
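The "node and its children form a tree" observation can be sketched as follows. This is a hedged illustration with invented structures (transaction ids as strings, a `children` map for the DAG edges), not the Seed implementation: it recursively hashes a transaction together with its child subtrees, giving one Merkle-style root that a testament block could commit to.

```python
import hashlib

def merkle_root(tx, children):
    """Hash a transaction together with the roots of its child subtrees.

    `children` maps each transaction id to the transactions that were
    validated beneath it; a leaf simply has no entry in the map.
    """
    child_roots = [merkle_root(child, children) for child in children.get(tx, [])]
    payload = tx + "".join(child_roots)
    return hashlib.sha256(payload.encode()).hexdigest()

# A validated a subtree containing B, C and D; one hash now covers all four.
children = {"A": ["B", "C"], "B": ["D"]}
root = merkle_root("A", children)
```

Because every child's hash feeds into its parent's, changing any transaction in the subtree changes the root, which is exactly the tamper-evidence property the testament block needs.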

Because DAGs confirm transactions out of order, transactions go through this pruning process asynchronously and out of order as well. For example, a transaction A may come through which meets the required criteria, then a few seconds later a transaction B may come through which also meets the criteria to start the pruning mechanism. However, transaction B may be confirmed first for a number of reasons, which could cause a testament block to be created from transaction B's pruning before transaction A's.

So, we have to assume that the testament blocks have jitter between them, representing transactions that happened at various points in time.

Is Jitter In Testament Blocks A Problem?

Now that we have established how this transaction squashing mechanism causes jitter in the order of testament blocks, we must question whether this causes a conflict or not. I do not believe it will, because transaction squashing is not the first step in the blockchain system. That is, the entanglement of unconfirmed transactions in the transaction pool is the first step. At this phase, there is no jitter (well, there is jitter in users' propagation times, but not the more severe jitter issue explained above), as transactions join the DAG asynchronously. No mechanism is waiting for certain transactions to get squashed into testament blocks. Users may be waiting for certain transactions to reach a certain level of validity, however an individual's transactions are not directly represented by whether or not they are in a testament block. They are separate mechanisms that do not rely on each other.

By the time a transaction is being merged into a testament block, it has already sat in the DAG of transactions for an adequate amount of time for it and its children to be fully trusted. When a user validates a transaction, it must validate other transactions which are either in the DAG or in a first-generation testament block.


Is Data Recoverable?

One potentially huge flaw is the inability to recreate everything in recorded history, as data gets discarded once a transaction's effects have already been invoked on the network. However, I do not necessarily believe that's a bad thing. When I have fun playing a video game, I don't ever want to go back in history and look at my x-position changes, or at every time I saved a description change on my profile. For important transactions, yes, we do want a recorded history, however not everything needs to be saved in full.

Does that make state recovery impossible? Potentially, unless some well-intentioned people simply store the blocks on their own web servers, from which clients could request the full history. In the same way a blockchain can be loaded by reading in all the blocks in order of creation, this blockchain can as well. We simply designed it so that storage is an implementation detail, as the protocol itself does not rely on the ability to recreate history.
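Replaying blocks in order of creation can be sketched in a few lines. The block shape here (a dict of summed account updates) is invented for illustration; the point is that a fresh client folds each block's net updates into an empty ledger, in order, and arrives at the current state:

```python
# Hypothetical sketch: rebuild ledger state by replaying blocks in creation order.
def replay(blocks):
    """Fold each block's summed updates into a fresh ledger."""
    ledger = {}
    for block in blocks:  # blocks assumed to be ordered by creation
        for account, delta in block["updates"].items():
            ledger[account] = ledger.get(account, 0) + delta
    return ledger

history = [
    {"updates": {"alice": 10}},
    {"updates": {"alice": -4, "bob": 4}},
]
print(replay(history))  # {'alice': 6, 'bob': 4}
```

Whether the full block list comes from the protocol itself or from a volunteer-run web server is exactly the implementation detail described above; the replay logic does not care where the blocks were stored.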


How To Get Caught Up On History?

Because we cannot guarantee that a user has access to the full list of transactions, we must consider the scenario where a user comes online after not using the system for a few days, and new legacy blocks have arrived in the meantime. How do they get caught up on history?

Users would simply have to request the testament blocks and check their validations. Although the contents of transactions are scraped away, the raw validation info, such as signatures, the public keys which signed them, and the data hashes for all transactions and transaction validations, is still stored. Users can prove that a volunteer host has a valid history leading up to the current day by doing a few checks on the legacy blocks and comparing a few hashes.
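One way those "few checks" could look, sketched with invented fields (`tx_hashes` for the stored data hashes, `prev_digest` for each block's commitment to its predecessor) rather than the real testament block format: the client recomputes each block's digest from the hashes it retained and confirms the next block recorded that same digest, walking the chain up to the present.

```python
import hashlib

def block_digest(block):
    """Digest a block from the transaction data hashes it retained."""
    return hashlib.sha256("".join(block["tx_hashes"]).encode()).hexdigest()

def verify_chain(blocks):
    """Check each block's recorded previous-digest against the recomputed one."""
    for prev, cur in zip(blocks, blocks[1:]):
        if cur["prev_digest"] != block_digest(prev):
            return False
    return True

genesis = {"tx_hashes": ["h1", "h2"], "prev_digest": ""}
nxt = {"tx_hashes": ["h3"], "prev_digest": block_digest(genesis)}
print(verify_chain([genesis, nxt]))  # True
```

A host that tampered with any retained hash would break the digest of that block, and every later block's `prev_digest` check would expose it. Signature checks on the validation data would layer on top of this in the same pass.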


What If No User Is Online?

Because our blockchain does not rely on miners and pulls transactions out of the DAG, we inherit the DAG's behaviour in certain scenarios, such as when no user sending transactions is online for a certain amount of time. If no other users are online, your transaction will have no one to validate it and will never go through. In traditional blockchain technologies this cannot happen, as the miners are always listening for transactions, even if only one transaction (yours) ends up occurring during that block time. This is one difference between DAG cryptocurrencies and blockchain ones. IOTA solves this problem with the Coordinator, a program which acts as training wheels for the network. However, I personally do not like that solution, as I feel it heavily centralizes the network early on. I'm personally fine with the developers centralizing the network during the development phase, however I feel that once it reaches the scale where it is being traded as a currency by real users, it should be decentralized or in the immediate process of decentralizing. Therefore, we had to find a different approach to this problem.

For us, a simple solution from a user-ecosystem approach is already built into our protocol. Our goal is to create an abstract networking layer on top of blockchain technology which can not only support MMO video games, but scale with their usage. In an MMO video game such as RuneScape or World of Warcraft, the online user base never drops to zero. It may have at the very start, but once the game has an adequate user base, someone is always playing. From my research, this point hits at a few thousand users, which is not that many in the grand scheme of things. We simply need a video game, or different software requiring a lot of users and high throughput, to be created early in our ecosystem's growth.

It has not been announced yet, however my business partner Jaegar (@jaegar) in my soon-to-be-released Ethereum dApp has his own research project he must complete to finish his degree as well. His research project proposal has not yet been approved, so I cannot say too much, however his proposed project was to create the first dApp on our system which satisfies the requirements stated above. Essentially, instead of a Coordinator, we will create a high-throughput MMO video game to attract users onto the network validating transactions frequently. This will also serve a second purpose: proving our use case of supporting such clients. However, I should not talk about it more until things are finalized.

As another simple solution, we have been internally debating the creation of lean, storageless ping packets. Any user can send a ping, an empty transaction which simply lets a user validate previous transactions. Ping transactions can still be validated like any transaction. The entire transaction body is simply discarded with no ledger changes, increasing the blockchain size by only the minimum validation data.
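A ping's shape could be as minimal as the following sketch. The field names here are hypothetical, invented for illustration; the point is that the transaction carries validations and nothing else, so applying it leaves the ledger untouched:

```python
# Hypothetical sketch: a storageless ping transaction that only validates others.
def make_ping(validated_ids, sender):
    """Build an empty transaction whose only job is validating earlier ones."""
    return {"type": "ping", "validates": validated_ids, "sender": sender, "data": None}

def apply_ping(ledger, tx):
    """A ping changes nothing; only its validation work matters to the DAG."""
    assert tx["type"] == "ping"
    return ledger  # body is discarded, no ledger change, minimal growth

ping = make_ping(["txA", "txB"], sender="alice")
print(apply_ping({"alice": 5}, ping))  # {'alice': 5} — state is unchanged
```

Since only the validation data (signatures, hashes) would survive, the cost per ping stays at the floor described above.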

This allows users themselves to host simple nodes which broadcast this transaction, should they choose to run their own user-driven Coordinator to help push the system along. Relay nodes will be user-hosted as well, and we will be hosting a relay node which utilizes this ping support, which will be open-sourced. Merging this feature with relay nodes lets the ping node conveniently know about all the users, in order to make informed decisions.


I hope you enjoyed my analysis of my transaction squashing proposal! There's more to talk about, but this was a good thought experiment on transaction squashing and the weird side effect it would have on jitter in recorded history. Soon I'll look into other problems it may face, such as a potential synchronization flaw between testament block generation changes and the potential for loosely-connected Sharding-DAGs.

If you enjoyed the writeup, a follow or upvote is always appreciated!
