Updates on steem-python, SteemData, and the node situation
Users of the Python Steem library should upgrade to 0.18.93 as soon as possible. This release includes several fixes, one of which prevents undefined behavior when all steemd nodes fail. The library will now throw a SteemdNoResponse exception after exhausting automated failover and all 200 retries (10 node failovers, 20 retries per node).
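To illustrate the described failover policy, here is a minimal, self-contained sketch. This is not steem-python's actual implementation; the function name, the ConnectionError trigger, and the SteemdNoResponse class below are stand-ins for illustration only.

```python
# Illustrative sketch only -- NOT steem-python's real code.
# It mimics the policy described above: try each node up to 20 times,
# fail over through all 10 nodes, and raise once all 200 attempts fail.

class SteemdNoResponse(Exception):
    """Raised when every node and every retry has been exhausted."""

def call_with_failover(nodes, make_request, retries_per_node=20):
    """Try make_request(node) against each node in turn, retrying on failure."""
    for node in nodes:
        for _ in range(retries_per_node):
            try:
                return make_request(node)
            except ConnectionError:
                continue  # retry the same node before failing over
    raise SteemdNoResponse(
        "all %d attempts across %d nodes failed"
        % (len(nodes) * retries_per_node, len(nodes)))
```

With 10 nodes and the default 20 retries per node, a total outage surfaces as a single SteemdNoResponse after 200 attempts, instead of undefined behavior.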
I have been receiving a fair number of complaints about SteemData's state falling behind.
There are two ways to remedy this issue:
1.) Write a high-performance HTTP-to-WebSocket proxy, and perform internal state reconstruction to avoid querying steemd.
2.) Deploy a dedicated, self-healing steemd cluster.
Needless to say, both solutions are time- and resource-intensive, and unfortunately I am short on both right now.
Big thanks to @gtg for letting me use his personal steemd node(s), which, when online, keep SteemData in sync.
Call for help
Full RPC steemd nodes are a challenge to run. Currently, they require expensive servers with 128GB of RAM and regular babysitting. To my knowledge, there are no open solutions for managing steemd clusters.
As RPC nodes became harder and more expensive to run, community-powered nodes disappeared.
Aside from @gtg's node, I am not aware of any public nodes. As far as I know, two of the best community developers, @jesta and @good-karma, run their own. Steemit's nodes are generally a bit slow, and occasionally unreliable.
If we'd like the Steem ecosystem to support third party apps and developers, we need better infrastructure.
We need to create a node cluster that is:
- worldwide distributed, with geo-location based load balancing
- scalable, openly monitored, and self-healing (through automated remediation workflows)
- capable of running on any server, not locked to a single vendor like AWS
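As a rough sketch of what the load-balancing and failover part of such a cluster could look like, here is a hypothetical nginx configuration. The hostnames, port, and thresholds are placeholders, not real infrastructure; geo-location routing would additionally require geo-aware DNS in front of regional proxies like this one.

```nginx
# Hypothetical example -- hostnames and ports are placeholders.
upstream steemd_cluster {
    # Passive health checks: a node is marked down after 3 failed
    # requests and retried after 30 seconds.
    server steemd-eu.example.com:8090   max_fails=3 fail_timeout=30s;
    server steemd-us.example.com:8090   max_fails=3 fail_timeout=30s;
    # Used only when all primary nodes are down.
    server steemd-asia.example.com:8090 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://steemd_cluster;
    }
}
```

A setup along these lines is vendor-neutral: it runs on any box that can run nginx, which matches the requirement of not being locked to a single provider.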
There are many talented people in our community, and I hope we can tackle this challenge.
Footnotes:
- Steemit is already working on one: jussi.
- Nodes with all plugins/APIs enabled, serving at high throughput.
- Based on my experience with extended usage of their nodes for SteemData and Witness Tools.