Running full steemd nodes with reduced RAM requirements
After a handful of naive attempts and hardware configurations, I have converged on a stable setup. It is probably not optimal, and I believe @gtg is doing extensive research in this domain, so unless you're in a hurry, it is worth waiting for his post.
In an effort to start moving towards high availability (HA) while keeping costs reasonable, I've started looking into a hybrid solution for steemd nodes. Rather than keeping the entire shared memory file in RAM, some of it is swapped out to very fast NVMe drives. So far, the concept seems to work well enough for practical purposes.
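On Linux, one way to let cold pages of the shared memory file spill over onto fast storage is to back the system with NVMe swap while keeping swappiness low, so hot pages stay in RAM. A rough sketch of that setup, assuming the NVMe array is mounted at `/mnt/nvme` (the path and sizes are illustrative, not my exact configuration):

```shell
# Create a 64 GB swap file on the NVMe array (run as root; path/size are examples)
fallocate -l 64G /mnt/nvme/swapfile
chmod 600 /mnt/nvme/swapfile
mkswap /mnt/nvme/swapfile
swapon /mnt/nvme/swapfile

# Prefer RAM: only push cold pages out to NVMe
sysctl vm.swappiness=10
```

Alternatively, if your steemd version supports it, the `shared-file-dir` option in config.ini can place the shared memory file on disk directly and let the OS page cache do the balancing.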
I'm currently using OVH's SP-128 servers with NVMe drives:
- CPU: Intel Xeon E5-1650v3, 6c/12t, 3.5 GHz / 3.8 GHz
- RAM: 128 GB DDR4 ECC, 2133 MHz
- Storage: SoftRaid 2x 1.2 TB NVMe
A very basic initial config is available on GitHub. I'm still running everything in Docker for ease of deployment and upgrades.
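For reference, a typical Docker invocation for this kind of node might look like the following; the image tag, ports, and host paths are assumptions for illustration, not the exact setup from the repo:

```shell
# Run steemd in a container, keeping blockchain data (including
# shared_memory.bin) on the host's NVMe volume so it survives upgrades.
# 2001 is the p2p port, 8090 the RPC port; adjust paths to your layout.
docker run -d --name steemd \
  -p 2001:2001 -p 8090:8090 \
  -v /mnt/nvme/steem:/var/lib/steemd \
  steemit/steem
```

Upgrading then amounts to pulling a newer image and recreating the container; because the data lives on the host volume, the shared memory file and block log are preserved across container restarts.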
SteemData is currently being migrated to more powerful hardware in a new datacenter (OVH, Gravelines, France). OVH is a fair bit more expensive; however, it offers excellent connectivity and generous bandwidth between the EU and US, which allows for future geo-expansion. Once the migration is complete, the switch will happen at the DNS level, so no changes are necessary for end users.
An apology for taking so long
Whenever SteemData experiences interruptions, I'm reminded of how many people depend on it.
I expect the SteemData upgrade to be complete by the end of January. I apologize for the slow progress: due to my full-time commitment at view.ly, I can only work on it in the evenings and on weekends.
The price of STEEM has risen significantly in the past two weeks, and as of right now, witness pay is more than sufficient to cover the expansion costs. As such, I strongly feel the right thing to do is pass the funds on to other Steem projects in need.