Updates to anyx.io Infrastructure (Including Hivemind Support)

in steem •  7 days ago

server-2160321_1920.jpg

A few months ago, I announced anyx.io, a high-performance full-node API. The goal of this infrastructure project is to create a high-throughput, high-availability alternative to Steemit Inc.'s own API offering (api.steemit.com). Furthermore, the idea is to promote and increase the decentralization of Steem: if everyone uses Steemit's servers, we lose the ability to publicly audit the information they provide us, and moreover, succumb to any censorship they might enact.

In this post, I wanted to outline a few of the things I've been working on since the announcement. To summarize, here's what's new:

  1. Hivemind full-node queries are now available at https://hive.anyx.io.
  2. Legacy API Support Continues at https://anyx.io.
  3. HTTP (non-SSL, port 80) support now offered.
  4. Improvements to custom middleware solution (Jussi replacement).
  5. Bugfixes (e.g. large payload issues).

1. Hivemind Support Now Offered

Steemit recently announced the use of Hivemind in production. For application developers, this means that many API offerings have changed, and going forward, certain APIs that were traditionally offered have been deprecated. In this sense, api.steemit.com no longer provides a "full node" as we are used to, but has moved to a different standard.

With the goal of increasing the decentralization of API services, I have added a new stack to my infrastructure that includes a full Hivemind node, accessible at hive.anyx.io. This offers a public alternative with the same semantics that api.steemit.com offers.

How this was done specifically will be explained in part 4, but one important note about Hivemind is that it requires a steemd node to link to in order to build its state. I've noticed that some people offer Hivemind nodes but are not clear about which steemd node provides the back end; indeed, if one uses api.steemit.com to fill in the hive node's information, it's not really auditing the information. In my case, the steemd node it retrieves its information from is part of the anyx.io stack.
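As a quick illustration of what "same semantics" means in practice: a Hivemind-handled call like condenser_api.get_followers can be sent to hive.anyx.io as an ordinary JSON-RPC 2.0 POST. This is a minimal sketch (the method name is from the standard Steem API definitions; the actual network call is commented out so the example stands alone):

```python
import json

def rpc_payload(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body for a Steem/Hivemind API node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# A follow-related call that Hivemind now answers:
body = rpc_payload("condenser_api.get_followers", ["anyx", None, "blog", 10])

# To actually send it (requires network access), something like:
# import urllib.request
# req = urllib.request.Request("https://hive.anyx.io", data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```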

2. Legacy API Support Continues

Since Steemit dropped support for previous "full node" semantics quite quickly, many developers were caught unprepared and have not updated their applications yet. To support these developers, anyx.io will continue to support the legacy "full node" API for as long as it's economical to do so. (Please consider voting for me as a witness!)

In addition to the "full node" semantics, websocket support continues for legacy applications such as the desktop wallet Vessel.

Adding hive.anyx.io is intended to aid developers using my infrastructure to try out their applications with the new API semantics without having to rely on api.steemit.com. Eventually legacy support will likely end, and so developers relying on legacy support should try out their applications sooner rather than later!

3. HTTP Support Added

Previously, anyx.io was only reachable via SSL (https, port 443), due to a limitation of Jussi, Steemit's provided middleware. As I have dropped Jussi and replaced it with my own middleware (see part 4), I have now also opened regular HTTP (port 80) support.

Most users should continue to use https. Honestly, if you don't know why you would want to use http instead, you should certainly continue to use https; only those who understand the trade-offs and ramifications of unencrypted http should consider it.

That being said, for anyone testing latency as opposed to throughput as a performance metric (looking at you @holger80), testing http://anyx.io is preferable, as it is slightly more optimized for latency. https://anyx.io continues to be optimized for throughput.

For those that don't understand the difference: Latency is a metric of how quickly you can retrieve a response back after you request it. Throughput is how many total requests can be served. As an example, if 100 clients can retrieve data every 1 second, the throughput is 100 r/s with a latency of 1s. However, if 500 clients can retrieve data once every 2 seconds, the throughput is 250 r/s with a latency of 2s. At a high level, the entirety of the anyx.io stack is optimized for throughput, as the intent is to offer a public node that gives fairness to many, many concurrent clients.
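The arithmetic above can be written down directly. A tiny helper (hypothetical, purely for illustration) reproduces both examples from the text:

```python
def throughput(clients, latency_s):
    """Requests served per second when each client receives one
    response every `latency_s` seconds."""
    return clients / latency_s

print(throughput(100, 1))  # first example: 100.0 r/s at 1 s latency
print(throughput(500, 2))  # second example: 250.0 r/s at 2 s latency
```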

4. Custom Router Development

As mentioned previously, I have replaced Jussi (Steemit's middleware solution) with my own custom solution. There were several reasons for avoiding Jussi, primarily:

  • Lack of support for port management
  • Poor throughput performance
  • Overzealous caching
  • No support for unix sockets

For the replacement, I built a custom solution in Golang that connects to steemd via unix sockets (this is a feature I added to Steem itself, here, for much better local performance and to avoid the TCP/IP stack where possible) and offers better performance in general, being written in a compiled, statically typed language rather than a dynamic, interpreted one like Python. As an outcome, I've noticed a drastic decrease in timeouts compared to Jussi, as all requests are served concurrently with excellent throughput.

For caching, my solution is less zealous and will attempt to retrieve new information as soon as possible. In general, this will mean more up-to-date results compared to solutions with heavy caching.

Finally, hivemind support was added to the stack. If your request is made to hive.anyx.io instead of anyx.io, the hivemind API calls (which can be found here) are intercepted by this middleware and sent to the hivemind stack, but any other calls will continue to the anyx.io stack as usual.
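That routing rule can be sketched in a few lines. This is a minimal Python sketch of the idea only (the real middleware is written in Go, and the method set here is an illustrative subset, not the full list Hivemind handles):

```python
# Illustrative subset of calls answered by Hivemind; the real list is longer.
HIVE_METHODS = {
    "condenser_api.get_followers",
    "condenser_api.get_following",
    "condenser_api.get_discussions_by_blog",
    "follow_api.get_followers",
}

def route(method, host):
    """Decide which backend serves a JSON-RPC method: Hivemind-handled
    calls sent to hive.anyx.io go to the Hivemind stack; everything else
    continues to the regular steemd stack."""
    if host == "hive.anyx.io" and method in HIVE_METHODS:
        return "hivemind"
    return "steemd"
```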

Notably, this new middleware has come with a few issues -- its semantics do not perfectly match those of Jussi (which many other API nodes run). As such, this is a work in progress, so if you notice any discrepancies between my API node and any others, please feel free to let me know.

5. Bugfixes and Performance Improvements

A side note that's important to mention is that tweaks and improvements are ongoing! Certain issues, like caching returning out-of-date results, have been resolved (opting to be more sensitive to time), and some nginx edge cases, like payload size causing interference, have been fixed (see this hivemind issue). I also recently added support for batched requests (of limited size), since certain application developers require it.
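For context, JSON-RPC batching simply sends an array of request objects in one POST. A sketch of building such a batch with a size cap (the limit value here is made up for illustration; the node enforces its own limit):

```python
import json

MAX_BATCH = 50  # hypothetical cap; check the node's actual limit

def batch_payload(calls):
    """Pack (method, params) pairs into one JSON-RPC 2.0 batch request."""
    if len(calls) > MAX_BATCH:
        raise ValueError(f"batch too large: {len(calls)} > {MAX_BATCH}")
    return json.dumps([
        {"jsonrpc": "2.0", "method": m, "params": p, "id": i}
        for i, (m, p) in enumerate(calls)
    ])

body = batch_payload([
    ("condenser_api.get_block", [1]),
    ("condenser_api.get_block", [2]),
])
```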

If you run into any issues, please feel free to poke me so that I can resolve them! The goal is to provide a feature-complete API alternative to remove dependency and reliance on Steemit Inc., and so any improvements I can make will help me reach that goal.



Like what I'm doing for Steem? You can read more about my witness candidacy here:
https://steemit.com/witness/@anyx/updated-witness-application

Then please consider voting for me as a witness here!
https://steemit.com/~witnesses


Nice work! Do you plan to release your router as open source so others can use and improve upon it?


Yep, definitely. It's not actually that complex, but it was basically designed directly for my stack... so I'll clean it up a little first then throw it on GitHub sometime soon.

Great work! Jussi was a weak point so its great that you've designed a replacement. Looking forward to it being open source.

Cool stuff. We need people like you to keep Steem working as it grows. Thanks for all your work.

Posted using Steeve, an AI-powered Steem interface

Great work!
Just tested your node in SteemWallet, works flawlessly (as expected :D) -- nice that you redirect any non-Hivemind calls to the "regular" full node.


Great! Do you actually use hivemind calls though? If not just use anyx.io :)

Good job. Hivemind seems good, but how does one use it? How are requests made, what are the methods/functions and their parameters to retrieve data? Can you help developers find information in this regard? The github for Hivemind doesn't even demonstrate how this can be used by an app/site, for instance. Thanks!


Indeed, the sparsity of "how to use it" documentation is one of the reasons many developers were left behind when Steemit quickly jumped over.
Basically, it has taken many of the API calls, such as those from here:
https://developers.steem.io/apidefinitions/#apidefinitions-follow-api
and replaced the way they are responded to with a different program, in an attempt to save costs.
The GitHub repo shows which calls were replaced.

Otherwise, it should be the same semantics for using an API node -- it's the back end that a front end like an app talks to.

Awesome, I'll try to catch up just after going back from my trip.

Nice update!

This story was recommended by Steeve to its users and upvoted by one or more of them.

Check @steeveapp to learn more about Steeve, an AI-powered Steem interface.

Throughput is how many total requests can be served.


'Total' is volume. Throughput is volume per unit of time.

Congratulations @anyx!
Your post was mentioned in the Steem Hit Parade in the following category:

  • Pending payout - Ranked 7 with $ 69,66

This post has been included in the latest edition of SoS Daily News - a digest of all you need to know about the State of Steem.

I have a question for @anyx. Is there any danger that my posts will get flag/vote/comment from @cheetah or @steemcleaners if I use @dlike or @share2steem?

Posted using Partiko Android


Cheetah might pop up if you copy and paste a lot of text, that's what she does -- finds copy and paste. She doesn't flag though, just comments.

Steemcleaners would only come after you if you're performing some kind of fraud, like identity theft or stealing other people's content.

Regardless of which platform you use, the above apply.

Your post is good. I love it. I would like to see more posts like this.