
RE: 3 Reasons Steem Price Will Go Up - And One Reason It Will Not Reach The Moon

in #steemit 7 years ago (edited)

Most VPS providers use SAN storage that they label SSD. It is backed by SSDs, but you're going out over their local network to reach the array, which typically means high latency. If you have a physically attached fast SSD, it's acceptable to place the entire shared memory file on it and run without much difficulty.
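One way to see the difference is to time small synchronous writes on the volume where the shared memory file would live. This is a rough probe sketched in Python, not an official benchmark; the latency ranges in the comments are general rules of thumb, not guarantees:

```python
import os
import statistics
import tempfile
import time

def fsync_latency_ms(path=".", samples=50):
    """Time small write+fsync cycles on the given volume.

    Medians of several milliseconds often indicate network-attached
    (SAN) storage; locally attached SSD/NVMe is typically well under
    a millisecond. Treat these thresholds as rough assumptions.
    """
    fd, tmp = tempfile.mkstemp(dir=path)
    times = []
    try:
        for _ in range(samples):
            t0 = time.perf_counter()
            os.write(fd, b"x" * 4096)   # one 4 KiB block
            os.fsync(fd)                # force it to stable storage
            times.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
        os.remove(tmp)
    return statistics.median(times)

print(f"median write+fsync latency: {fsync_latency_ms():.3f} ms")
```

Run it from the directory that would hold the shared memory file, since different mount points on the same VPS can sit on very different storage.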


So what you're saying is that the current latency of SSDs is low enough to keep the shared memory file on one for a witness or RPC node? I've never used a VPS, just bare metal machines.

Yes, a sufficiently fast SSD is enough for the shared memory file even on a full RPC node (not just a witness). However, it will handle fewer requests than one run entirely out of RAM, but that can be mitigated with a caching layer (jussi, for example).
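The idea behind a caching layer like jussi is that repeated identical requests are answered from memory instead of hitting the node (and its disk-backed shared memory file). A minimal read-through sketch of that pattern; the class and the fake RPC function are hypothetical, for illustration only:

```python
import json
import time

class RpcCache:
    """Tiny read-through cache in the spirit of a jussi-style layer:
    identical JSON-RPC requests within the TTL are served from memory
    instead of reaching the node."""

    def __init__(self, fetch, ttl=3.0):
        self.fetch = fetch   # callable that actually queries the node
        self.ttl = ttl
        self._store = {}

    def call(self, method, params):
        key = (method, json.dumps(params, sort_keys=True))
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]                    # cache hit: no node round-trip
        result = self.fetch(method, params)  # cache miss: query the node
        self._store[key] = (time.monotonic(), result)
        return result

# Usage: wrap whatever function performs the real RPC call.
calls = []
def fake_rpc(method, params):
    calls.append(method)
    return {"ok": True}

cache = RpcCache(fake_rpc)
cache.call("get_dynamic_global_properties", [])
cache.call("get_dynamic_global_properties", [])
print(len(calls))  # the node was only queried once
```

A real deployment would also need per-method TTLs and cache invalidation on new blocks, which is exactly the kind of logic jussi handles.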

Here's a test you can do to 'prove' it if you want. I just tried it using a 32GB DigitalOcean droplet. You can spin one up there for a few hours and then kill it. Although those droplets are expensive, they bill hourly, so as long as you don't forget to destroy the droplet, a few hours would probably cost a few dollars or so.
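To sanity-check the cost, hourly billing is roughly the monthly price divided by the ~720 hours in a month. The monthly price below is an illustrative assumption, not DigitalOcean's actual rate:

```python
# Rough cost check for a short-lived droplet.
monthly_price = 160.0                      # hypothetical 32 GB droplet price
hourly_rate = monthly_price / (30 * 24)    # ~720 billable hours per month

for hours in (3, 6, 12):
    print(f"{hours:2d} h -> ${hours * hourly_rate:.2f}")
```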

Install docker from get.docker.com, then run:

    docker run -d --env USE_WAY_TOO_MUCH_RAM=1 --env USE_FULL_WEB_NODE=1 --env USE_PUBLIC_SHARED_MEMORY=1 --env USE_NGINX_FRONTEND=1 -p 2001:2001 -p 8090:8090 --name steemd-full steemit/steem:0.19.1-p2pfix-bumpram

To follow along, use:

    docker logs -f steemd-full

That particular branch has a current state file that can be pulled in at the moment (this changes regularly since it pulls from the dev environment where different branches are tested, so expect it to change or become stale at any given point). NOTE: the reason that option isn't specified in the README is that we don't recommend trusting anyone else's state file for transactions. For any production setup, you should really generate your own state files. It's OK for testing, though, and gets a full node up a little more quickly.

Anyway, that will pull in a ~24GB state file, decompressing it on the fly to shortcut getting the node started. It streams from an S3 bucket in us-east-1 WHILE decompressing, so expect it to take a while (maybe 45 minutes or so). It will most likely take an additional hour or so for the node to 'catch up'. So in 2-3 hours you will have a fully synced full RPC node on a 32GB RAM droplet running just off the SSD. More RAM is better for performance because of OS disk paging, but for this example I wanted to use the minimum for a full node.
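The "pull while decompressing" step is just streaming decompression: bytes are unpacked as they arrive, so there is no separate unpack pass after the download. A sketch of the pattern, using Python's lzma as a stand-in for whatever compression the state file actually uses:

```python
import io
import lzma

def stream_decompress(chunks, out):
    """Decompress a stream chunk-by-chunk, writing output as it arrives.
    This is the pattern used when piping a compressed file straight from
    a remote source into its destination: download and decompression
    overlap instead of running one after the other."""
    d = lzma.LZMADecompressor()
    for chunk in chunks:
        out.write(d.decompress(chunk))

# Demo with an in-memory "download": compress some bytes, then feed them
# back in small chunks as if they were arriving over the network.
payload = b"shared_memory.bin contents " * 1000
compressed = lzma.compress(payload)
chunks = (compressed[i:i + 1024] for i in range(0, len(compressed), 1024))

restored = io.BytesIO()
stream_decompress(chunks, restored)
print(restored.getvalue() == payload)  # True
```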

DigitalOcean droplets still do not have physically attached drives (AFAIK), but they have particularly fast disk I/O for a VPS provider, which is enough for an 'example'. Results will be much better (faster syncing) on bare metal with physically attached disks and their better disk I/O. tl;dr: I would not use a minimal VPS to run a full node, but the example does prove the point :)

Thanks a lot for taking the time to write this; I'll try it next week. As steem.supply is getting more and more traffic, I need to set up a dedicated node for it. I also have a couple of other Steemit projects in the pipeline, so my own RPC node is becoming a necessity. I may not set it up on bare metal before mid-September, but I will spin up a few instances to test the behavior. I appreciate your efforts, guys; I know this is not an easy project.

That's really helpful... I'll probably try that later. How long would you expect that VPS to take to complete a full replay if I asked it to? Would it be several days?
