The bandwidth limitation hasn't been resolved yet; it remains a daily bottleneck for many minnows with low SP. The issue has been discussed among the witnesses for a while now, but no concrete action has been taken. The problem is multidimensional, but there are measures everyone can take to help mitigate it until an acceptable solution is reached. First, though, I'll explain the basics of the problem.
Global bandwidth parameters
The Maximum Virtual Bandwidth represents the number of bytes available for all users to share across their transactions. A transaction can be a vote, a fund transfer, a comment, a post... every little thing done on the blockchain is a transaction signed by the user's private key and validated by the system using the user's public key.
The Maximum Virtual Bandwidth is calculated with this equation:
max_virtual_bandwidth = number_of_blocks_per_week * maximum_block_size * current_reserve_ratio
- number_of_blocks_per_week, a constant calculated as:
20 blocks/min * 60 min/hour * 24 hours/day * 7 days/week = 201600
- maximum_block_size, a parameter set by the witnesses, currently 65536 bytes.
So in theory, with only those two parameters, we can have a maximum of:
201600 blocks * 65536 bytes = 13212057600 bytes (or 12.3 GB) per week
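As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (plain arithmetic only, no chain access needed):

```python
# Steem produces one block every 3 seconds, i.e. 20 blocks per minute.
BLOCKS_PER_WEEK = 20 * 60 * 24 * 7      # 201,600 blocks
MAX_BLOCK_SIZE = 65536                  # bytes, a witness-set parameter

theoretical_weekly_bytes = BLOCKS_PER_WEEK * MAX_BLOCK_SIZE
print(BLOCKS_PER_WEEK)                  # 201600
print(theoretical_weekly_bytes)         # 13212057600 bytes, ~12.3 GiB
```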
However, the third parameter, current_reserve_ratio, acts as an anti-spam mechanism to prevent the maximum block size from being overrun. If a block would contain more than 65536 bytes, the excess transaction(s) spill over into the next block unless they expire first. To prevent this overflow, current_reserve_ratio automatically adjusts downward whenever blocks fill beyond 25% of their maximum size (25% * 65536 = 16384 bytes), thereby decreasing max_virtual_bandwidth for everyone.
Notice that when querying the blockchain, these parameters are returned as raw, scaled integers:
current_reserve_ratio has a maximum raw value of 200,000,000 with a precision of 10^4 (to avoid decimals), so its effective value is 20,000.
max_virtual_bandwidth has a maximum of 264241152000000000000 with a precision of 10^6, so its effective value is 264,241,152,000,000 bytes (see below).
So back to our calculations, the total weekly data allowed by the system is:
201600 blocks * 65536 bytes * 20000 = 264241152000000 bytes or (brace yourselves) 240 Terabytes per week
It's a ridiculously astronomical figure that would make Dr Evil very happy.
But since the reserve ratio kicks in at 25%, we're down to 60 TB per week (still a big number). Fortunately, those are only the potential numbers allowed by the system. Actually transferring 60 TB of data per week would require a sustained volume of transactions saturating the system, which is not the case for now (unless Steem grows to hundreds of millions of users). Furthermore, I don't know why the developers chose 20000 for the maximum reserve ratio; something much smaller (e.g. 2000) would have been more realistic.
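Putting the raw encodings together, a short Python sketch (assuming the precisions quoted above) confirms that the numbers line up:

```python
# Raw value as returned by the API, scaled to avoid decimals (assumed encoding).
RAW_MAX_VIRTUAL_BANDWIDTH = 264241152000000000000   # precision 10^6
BANDWIDTH_PRECISION = 10**6
RESERVE_RATIO = 20000                               # effective maximum

max_virtual_bandwidth = RAW_MAX_VIRTUAL_BANDWIDTH // BANDWIDTH_PRECISION
# Consistency check against blocks_per_week * max_block_size * reserve_ratio:
assert max_virtual_bandwidth == 201600 * 65536 * RESERVE_RATIO

print(max_virtual_bandwidth / 2**40)   # ~240 TiB per week
```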
Individual bandwidth parameters
So far I've covered the global bandwidth parameters handled by the system. The other important consideration is the bandwidth allocation per user. Briefly, the calculation relies on a few parameters:
- vesting_shares, the user's own VESTS
- delegated_vesting_shares, VESTS delegated away by the user (outgoing)
- received_vesting_shares, VESTS received from others (incoming)
- total_vesting_shares, the global total of vesting shares
allocated bandwidth = (vesting_shares + received_vesting_shares - delegated_vesting_shares) / total_vesting_shares * max_virtual_bandwidth / 1000000
(remember that 10^6 I mentioned earlier)
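To make the formula concrete, here's a sketch with made-up numbers; the vesting figures below are purely illustrative, not real chain state:

```python
# All VESTS figures below are hypothetical, for illustration only.
vesting_shares = 60_000.0              # user's own VESTS
received_vesting_shares = 0.0          # incoming delegations
delegated_vesting_shares = 0.0         # outgoing delegations
total_vesting_shares = 390e9           # assumed global total VESTS
raw_max_virtual_bandwidth = 264241152000000000000   # precision 10^6

effective_vests = (vesting_shares + received_vesting_shares
                   - delegated_vesting_shares)
allocated_bandwidth = (effective_vests / total_vesting_shares
                       * raw_max_virtual_bandwidth / 10**6)
print(allocated_bandwidth)             # ~4.07e7 bytes, roughly 40 MB per week
```

The takeaway: allocation scales linearly with effective vests, so doubling SP doubles the weekly byte budget.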
There's another parameter, average_bandwidth (the weekly average bandwidth), which is used to determine a user's remaining bandwidth. For a detailed calculation, see https://steemit.com/utopian-io/@bloodviolet/how-to-calculate-your-remaining-bandwidth-using-steem-python
Hence, the allocated bandwidth depends on the user's total Steem Power and average weekly bandwidth. By increasing the Steem Power and lowering the weekly usage, a user can increase their allocated bandwidth.
Bandwidth limitation solutions
By understanding how the bandwidth works, we can think of a few solutions for all the blockchain actors to remedy the limitation problem.
Witnesses can increase the maximum block size, which is currently 65536 bytes. I'm personally in favor of this measure; however, two concerns have been brought up:
- Spammers may spam more.
- The hardware requirements will increase, especially the RAM, which is a huge setback for scalability at the moment. This is not a problem for a witness or exchange node, but for a full RPC node it's more serious.
AppBase should solve some of the scalability issues. Also, Hardfork 20 promised multi-threading for the steemd daemon, but to my knowledge there hasn't been any mention of RAM optimizations in either AppBase or HF20.
Because the algorithms and the daily usage patterns are complex, we can't predict with certainty the effect of increasing the block size. Nonetheless, we could increase the block size by, say, 10% and monitor the behavior of the reserve ratio to see if there's any improvement, then adjust the parameter accordingly. If modifying the block size doesn't help, at least we gain that information and can move on to other solutions.
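As a rough illustration of that 10% experiment, here is the back-of-the-envelope effect on the key numbers (this is simple arithmetic, not a simulation of real traffic):

```python
# Back-of-the-envelope effect of a hypothetical 10% block-size increase.
BLOCKS_PER_WEEK = 201600
RESERVE_RATIO = 20000           # effective maximum
current_size = 65536            # bytes
proposed_size = int(current_size * 1.10)

print(proposed_size)                      # 72089 bytes
print(int(0.25 * proposed_size))          # reserve-ratio trigger: 16384 -> 18022 bytes
new_ceiling = BLOCKS_PER_WEEK * proposed_size * RESERVE_RATIO
print(new_ceiling / 2**40)                # ~264 TiB per week, ~10% above today's ~240
```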
Users can check their bandwidth status at https://steemd.com/@accountname (replace "accountname" with your own, of course). As explained above, users can easily address their own issues by:
- Reducing the weekly transactions.
- Powering up more STEEM.
- Avoiding the rush hours.
Many bots write a comment when they perform their action (e.g. upvoting a post, leaving a welcome message, etc.). Some of those comments are long and often span multiple paragraphs. Eliminating those automated comments, or at least shortening them, would help. Although not malicious, repetitive comments are spam after all, so do less of it.
One factor that has worsened the bandwidth issue for minnows is that Steemit used to create accounts with a 57,000 VESTS (~29.5 SP) delegation, but has cut that nearly in half to 29,700 VESTS (~14.5 SP). Steemit has also been systematically reducing the old delegations to ~14.5 SP. The reason, of course, is to accommodate the increased sign-ups, but by doing so they unintentionally paved the way for the current bandwidth limitation affecting the minnows.
Fortunately, HF20 promises to address on-boarding by removing the need to delegate to new accounts, burning the account creation fee instead. Until HF20 is released, I see a few options Steemit could pursue:
- Reduce or halt the sign-ups.
- Reinstate the 29.5 SP delegation.
- Remove the delegations from accounts that have accumulated a minimally acceptable SP to transact (e.g. 30 SP).
- Accelerate the release of HF20 (obviously).
- The developers could tweak the algorithms to be more permissive, for example by raising the 25% reserve-ratio threshold to something like 30-35%, especially during rush hours.
- Show more detailed bandwidth information on the profile page, instead of users constantly having to check it on https://steemd.com.
I hope the bandwidth limitation problem improves soon, because it has been frustrating many users for weeks already. It's important for Steemit to provide a comfortable experience. It's one thing to encourage adoption and on-boarding, but it's another to see users slapped in the face when they can't transact and are left in the dark about why it's happening.
How about you? Do you have any solutions you'd like to share? Please comment.
I've mentioned rush hours a couple of times; I will cover that topic in an upcoming post.
Available & Reliable. I am your Witness. I want to represent You.
🗳 If you like what I do, consider voting for me 🗳
If you never voted before, I wrote a detailed guide about Voting for Witnesses.
Go to https://steemit.com/~witnesses. My name is listed in the Top 50. Click once.