Update your STEEM apps! Big changes coming for 3rd party developers

in steemitdev •  24 days ago

We have created a new public jussi endpoint for third party applications to use. Jussi is our custom-built caching layer for use with steemd and various other services (such as SBDS). The jussi endpoint is available now at https://api.steemit.com. Condenser (the front-end application for steemit.com) is already using api.steemit.com today. We encourage all third party developers to begin using the new endpoint right away. We are planning to deprecate the steemd.steemit.com endpoint in favor of api.steemit.com in the near future.

What does this mean for third party developers?

To use our public steemd endpoint at api.steemit.com, apps will need to speak to it over http/jsonrpc instead of websockets. The libraries we maintain will soon be updated to default to api.steemit.com instead of steemd.steemit.com, which will cover a lot of apps that don't set an endpoint and just use the default.
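
To make the change concrete, here is a minimal sketch in Python using the requests library (an illustration only, not taken from any particular steem library); the body is a standard JSON-RPC 2.0 request, and the method shown matches the get_dynamic_global_properties example that appears later in the comments.

import requests

# Standard JSON-RPC 2.0 request body; nothing websocket-specific remains.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "call",
    "params": ["database_api", "get_dynamic_global_properties", []],
}

# A single HTTPS POST to the new jussi endpoint replaces the old
# websocket round trip to steemd.steemit.com.
response = requests.post("https://api.steemit.com", json=payload)
print(response.json()["result"]["head_block_number"])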

JSONRPC has been chosen for all of our infrastructure for a variety of reasons; the two biggest are the ability to more easily load balance and manage connections, and the ease of use for new developers, as JSONRPC is much more common than websockets.

Is it going to be difficult to update my steem apps?

In most cases it will be extremely easy to make this change. The four most popular steem libraries (steem-js, steem-python, radiator, and dsteem) that the majority of steem apps are built with already support http/jsonrpc. Other libraries may as well. All you'll need to do is update the endpoint/url to https://api.steemit.com from the older wss://steemd.steemit.com. If you have a custom-written app that doesn't use one of the popular libraries, you will need to change your transport method from websockets to http/jsonrpc.
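
For example, with steem-python the switch can be as small as pointing the client at the new node. This is a hedged sketch that assumes the library's constructor accepts a nodes list and passes RPC method calls through to the API (details may vary between library versions).

from steem import Steem

# Use the new jussi endpoint instead of the old websocket node.
s = Steem(nodes=["https://api.steemit.com"])

# Calls made through the library are otherwise unchanged.
print(s.get_dynamic_global_properties()["head_block_number"])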

How long do I have?

The timeframe for deprecating steemd.steemit.com is not determined yet, but you should start implementing this change as soon as possible if you are using our public nodes for your STEEM apps. We will announce a final date before the endpoint is taken away.

I'm a user..not a geek
I have no idea what this means.
That said..I REALLY APPRECIATE the update.
It makes me feel a LOT better knowing that the devs
ACTUALLY EXIST
and are working on Steemit.
I'd have REALLY APPRECIATED
similar updates during the last few weeks when Steemit was broken.

I put an oar in here, since we are in the same boat.

Basically, for end users, you can expect steemit.com and all steem-based tools to become more stable and more responsive.

Two imperfect metaphors that describe what happened in two levels of detail:

  1. Have you ever gone to an office to talk with a super expert about something, but they were too busy? But it turns out that their assistant was helpful and solved your problems without you having to wait around.

  2. Whenever you want to work on Steem, you go down to the steem office. There's this one really smart giant, but kinda grumpy, robot who takes your request, goes to work magic on the blockchain, then gets back to you. They tried making his brain bigger and giving Botty tons of coffee to get him to work faster. They even tried giving him more arms. Still, Botty got way too busy and developed a bit of a temper. It's pretty hard to build more like Botty because he's fat, has a ton of custom parts, and fills up the building. He also slurps down the electricity and takes ages to wake up if he goes to sleep.
    But it turns out that most of what Botty does is read stuff from steem. And there are some bots made from high-quality, mass-produced parts that are way less fat, slurp less power, and wake up super quick when summoned. These bots are super good at handling your steem requests and remembering what they read, just in case you or the next customer needs it. They don't like to bother Botty unless they need to. These guys are called Jussi.
    So now, when you go to the steem office you talk with any of the available Jussi bots. If you're just asking to read something, chances are your Jussi bot can pull it from memory. When Jussi can't remember something, or they need to write something into the blockchain (like upvotes, posts, etc.), then they go ask Botty and get back to you at super-fast bot speeds.
    When the steem office gets really busy, it's super easy to open up more counters and have more Jussi bots ready to serve - after all, Jussi bots are made from cheap, mass-produced but high-quality parts. When steem gets less busy, some Jussi counters can be closed to save power.

That's kinda what happened.

I'm going to turn this into a post.

That explains everything.
Thank you.

Like Everitt, I'm an end user, not a developer.
My IT experience is in usability and human factors (aka "developers' most hated person"), not code.
I'm also PR and communications.
I just want things to WORK... and when they don't, I want someone to wave a flag and say "Yeah, shit's broken right now, but we're fixing it..."
Silence when shit breaks is a bad deal. People lose confidence.
So THANK YOU for the update.

These more frequent updates are great. Please keep them coming & thanks for all your hard work.

But I hit some bugs. I updated one of the servers behind the Yehey.org load balancer to use https://api.steemit.com as suggested, instead of wss://steemd.steemit.com, and I'm getting this error.

[ Europe - Yehey.org server ]
<-- POST /api/v1/page_view 2d2bb8e00b64d68bbd85fd3b82
-- /page_view --> ip=2600:8806:600:2c70:ed83:5c3a:4542:a06f uid=2d2bb8e00b64d68bbd85fd3b82 page=/pick_account
<-- POST /api/v1/csp_violation 2d2bb8e00b64d68bbd85fd3b82
-- /csp_violation --> https://yehey.org/pick_account : https://api.steemit.com/ -- Mozilla/5.0 (Windows
NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36
--> POST /api/v1/csp_violation 200 5ms 0B 2d2bb8e00b64d68bbd85fd3b82
--> POST /api/v1/page_view 200 34ms 12B 2d2bb8e00b64d68bbd85fd3b82

Everything is working fine, but when I start on the homepage and click on New or Hot, I have to reload/refresh the screen to see the posts. Going back to the homepage also requires a refresh. This behavior only happens when using https://api.steemit.com.

I will also post these findings on github.com.
Thank you for your hard work.
@Yehey
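
A hedged note on the csp_violation entries in the log above: if the front-end's Content-Security-Policy was written with only the old endpoint whitelisted, browsers will block and report requests to the new host until it is added to the connect-src directive. A hypothetical directive (the actual policy on yehey.org may differ) would look like:

Content-Security-Policy: connect-src 'self' https://api.steemit.com wss://steemd.steemit.com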

For scripts/bots running #radiator, you can switch to api.steemit.com by upgrading the library:

$ bundle update

Also see: How to Update Radiator Apps

Thanks for adding this information, inertia. In your next radiator release, can you make this the default? We will be doing the same with steem-js and steem-python, and I'm sure @almost-digital will be updating dsteem also.

EDIT: I see that this is a bundle update, so you must have already done so - great :)

dsteem doesn't have default nodes, but I'll make sure to change the examples to show api.steemit.com 😁

I am currently implementing an HTTP client for SteemJ - the next version will support it ;)

Good news, glad to hear it!

Yep, it's been the default for almost two weeks. :D

But of course, people have to at least do bundle update. There are additional steps if they aren't using the defaults, which is likely. But most of that can be picked up by recloning the app and installing from scratch, which I go over in How to Update Radiator Apps.

Interesting - I believe we weren't quite production ready on that two weeks ago. It's usually best to wait for a formal announcement to update such things. But thank you for being on top of it anyway, much appreciated.

Hey @justinw - I would totally agree with you, but as a third party developer myself I have to say that this is quite impossible. I need to implement the stuff at the same time you do, because you do not provide a lot of information to us. From my point of view, this is more or less the first time I've heard about a change early enough.

If we talk about Steem updates, it's quite a nightmare - you release a new Steem version, everyone updates their nodes, and depending on the changes, a lot of third party tools are no longer usable (at least parts of them).

I talked to timcliff some time ago about how it would be really great to have a changelog for third party devs some weeks before you release changes.

That would be really really nice <3

This IS the advance notification, and it is why we're not discontinuing hosting a websockets-enabled cluster of steemd's (wss://steemd.steemit.com) - at least not yet. We want to give everyone fair warning and a chance to update their libraries before discontinuing this service. It's never our intention to leave behind 3rd party developers. If you need help updating your applications or steem library, feel free to reach out to me directly on steemit.chat and I'll do my best to answer any questions you may have. This will be a transport change, not an API change - anything that worked over websockets to steemd will work over http/jsonrpc, generally with fairly minimal adjustments. We made a commitment earlier this year to communicate as much and as far in advance as possible for any API changes or anything else that could affect third party developers. We will continue to notify everyone about any upcoming changes in advance. Steem on.
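
As an illustration of the transport-only nature of the change, here is a hedged Python sketch (assuming the third-party websocket-client and requests packages): the JSON-RPC payload is the same, only the way it is delivered changes.

import json
import requests
from websocket import create_connection  # pip install websocket-client

payload = {"jsonrpc": "2.0", "id": 1, "method": "call",
           "params": ["database_api", "get_dynamic_global_properties", []]}

# Old transport: send the payload over a websocket to steemd.
ws = create_connection("wss://steemd.steemit.com")
ws.send(json.dumps(payload))
over_websocket = json.loads(ws.recv())
ws.close()

# New transport: POST the identical payload over HTTPS to jussi.
over_http = requests.post("https://api.steemit.com", json=payload).json()

# Both replies are ordinary JSON-RPC responses with a "result" field.
print(over_websocket["result"]["head_block_number"],
      over_http["result"]["head_block_number"])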

Thanks for taking the time @justinw - yeah, as I said, this is the first time I've seen an announcement early enough, and I love it. If you keep doing this in the future, especially for API changes, I'll be more than glad to hear it <3

Regarding the switch to HTTP I also agree that it is not the most complex task :D

Steem on! :)

Will there be documentation/tutorials if we want to add Jussi in front of our own full nodes? Are there drawbacks? Were the transaction broadcast errors, account-not-found errors, and all the other issues the result of Jussi?

As we have availability to do so, we will expand the existing documentation for jussi, as well as for any other services that are lacking full documentation.

Excellent! Thank you so much for keeping us informed on what's being done to improve the performance of this site we love so much. I'm already starting to see improvements. Things are actually working as expected again!

I like to read the dev blog to keep up to date, although it’s in very general terms since most of this goes over my head... But...
Whatever is going on, to me, Steemit seems much more stable and responsive this evening. And as they say in the tech world, “The proof is in the pudding.” (Just kidding. No one says that in the tech industry, or anywhere else probably... but if I hit “post” and this comment sails right through I’m going to celebrate with some pudding...)

I hope you have plenty of pudding.

"The proof is in the pudding", I'll remember that.

Thanks for the updates. Good to know that devs are working to make the application better!

What about the efficiency of websockets vs. thousands of individual https connections? I'm sure caching will add a lot of efficiency, but a lot is lost by dropping sockets.

When considering millions of connections spread across many servers, short-lived https connections are much easier to load balance and plan for than long-lived websockets. Websockets certainly have their advantages, but in this case https is much more appropriate, and also easier for 3rd party developers to pick up and use.

wss connections are exactly the same as https connections, except that they've been "upgraded". Also, the fact that they are now using keep-alive would negate your point that they're extracting some benefit by using "short-lived https connections".

Except, they do not scale well. Go try to load balance millions of websocket connections and see how that goes :)

Exactly.
We have difficulties with performance already. This approach will only worsen the situation.

I think they're probably using HTTP/1.1 keep-alive for the connections. But nonetheless, while this comparison is a few years old, it does make the point that HTTP/REST still adds a lot of per-request packet overhead...

Link: REST vs WebSocket Comparison and Benchmarks

Perhaps the steem devs at some point will be so kind as to give us a clearer understanding of the logic behind some of their design choices, including using python for jussi versus a C++ or other compiled implementation (although there are some python compilers available as well).

JSONRPC has been chosen .. JSONRPC is much more common than websockets.

Also, this statement really makes no sense, as JSONRPC is just the data format, irrelevant to the underlying transfer protocol (HTTP or websocket, or even TCP or UDP). Most of the requests look exactly the same; you're still sending this request to get back the exact same data:

{id: 16, jsonrpc: "2.0", method: "call", params: ["database_api", "get_dynamic_global_properties", []]}

The difference is that with the new HTTP model, the client also sends this along with every request:

POST / HTTP/1.1
Host: api.steemit.com
Connection: keep-alive
Content-Length: 102
Origin: https://steemit.com
User-Agent: Mozilla/5.0
content-type: text/plain;charset=UTF-8
Accept: */*
Referer: https://steemit.com
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8

Also, it may have been an oversight last time I looked, but they're definitely using keep-alive now.

And let's not forget the additional response headers for each request too!

HTTP/1.1 200 OK
Date: Sat, 28 Oct 2017 00:39:14 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: nginx
x-jussi-cache-hit: steemd.database_api.get_dynamic_global_properties
x-jussi-urn: steemd.database_api.get_dynamic_global_properties
x-jussi-response-id: s2c436ffb-3c50-4589-af7d-cbd7cfa42cd1->38f80383-8931-48d7-b7d7-fdae590582dd
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range
Strict-Transport-Security: max-age=31557600; includeSubDomains; preload
Content-Security-Policy: upgrade-insecure-requests

Granted, the data should finally be compressed now, which should lessen the blow. However, websockets are also capable of compression, and permessage-deflate was included in graphene as well, though my understanding is that it may have been disabled due to some sort of incompatibility.

All of this is irrelevant compared to the huge pain in the ass of load-balancing websocket connections. Wire overhead is nothing compared to infrastructure overhead.

Also, HTTP/2 has basically the same semantics as the stupid jsonrpc-over-websocket-without-framing that steemd uses, but it's well specified.

And, as you know, HTTP/2 does header compression.

Even without HTTP/2, even without keepalive: individual requests are better. The handlers are mostly stateless, and this allows horizontal scaling.

Hey @sneak :) I do not want to argue with you about the decisions already made, but I am more than interested in the problems you faced with load balancing the websocket connections. I am more than sure that you are operating one of the biggest websocket services in the world :D So I guess this would be quite valuable for a lot of people.

Do you prefer that 3p developers use HTTP persistence or not? I did some tests, and with persistence I'm seeing a ~15% performance boost when streaming the blockchain.

So there's a clear advantage to using it, but I can see the advantage of not doing it too, from the service provider perspective - for example, if something needs to re-route, depending on how that's accomplished. With that in mind, I have the client keep track of how many requests it has made and ask for a new session every 1,000 requests or so.
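
A hedged sketch of that pattern in Python with the requests library (the 1,000-request threshold is just the figure mentioned above, not anything the endpoint requires):

import requests

class RotatingSession:
    # Keep-alive HTTP session that is recycled every max_requests calls,
    # so a long-lived connection doesn't pin the client to one backend forever.
    def __init__(self, url="https://api.steemit.com", max_requests=1000):
        self.url = url
        self.max_requests = max_requests
        self.count = 0
        self.session = requests.Session()

    def call(self, api, method, params):
        if self.count >= self.max_requests:
            # Drop the old connection and start a fresh session.
            self.session.close()
            self.session = requests.Session()
            self.count = 0
        self.count += 1
        payload = {"jsonrpc": "2.0", "id": self.count, "method": "call",
                   "params": [api, method, params]}
        return self.session.post(self.url, json=payload).json()["result"]

client = RotatingSession()
props = client.call("database_api", "get_dynamic_global_properties", [])
print(props["head_block_number"])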

Horizontal scaling is one of the reasons I've been such a big fan of hypermedia APIs and, essentially, the benefits of the REST constraints which have enabled the Internet to grow and scale so well over all these years.

Maybe in Lisbon we can chat a bit about the pros/cons of going with a true REST/hypermedia approach over JSONRPC. I attended RESTfest 5 years in a row and drank the REST kool-aid while building out api.foxycart.com. There's a lot of really great benefits that come "for free" when using hypermedia controls. Yes, more payload size, but as you said, that's often not the problem compared to other concerns. When a stateless, multi-layered, cached system is in place with well-defined rules, horizontal scaling becomes much easier. If you can bring in code-on-demand and hypermedia as the engine of application state, it creates some really fun options.

REST only works over HTTP, it's tightly coupled.

JSON-RPC works over any transport that has framing, it is independent of HTTP headers or methods (though it uses POST per the json-rpc spec, it need not even go over HTTP at all).

We didn't want to tightly couple our internal RPC system (which we are using for 100% of our internal services) to a specific transport.

REST only works over HTTP, it's tightly coupled.

I suggest Roy Fielding himself might disagree. It's an architectural style and a series of constraints. It's not HTTP (which happens to follow them).

https://en.wikipedia.org/wiki/Representational_state_transfer

One could fulfill all the constraints and not use HTTP, right? Because Roy was involved in building both in parallel, it's difficult for most to distinguish between the two, but if they can't, it can mean they don't understand what REST is or what the constraints are about.

That said, I agree with you, in a practical sense, that today, for most purposes, REST is done over HTTP. I've talked with people who have used different transport layers with custom formats, but really, what's the point unless you have important requirements you have to meet, like super-compressed binary transport or something.

REST is not the solution for everything by any means. The old REST Discuss Yahoo group was ridiculous in how die-hard they were about REST solving every problem imaginable. I found the API Craft Google group to be far more helpful years ago when I was learning all this stuff, because they recognize what REST is really good for and when it doesn't make sense.

If you've got something that works well for your internal needs, and it's not tightly coupled between the client and the server, then awesome. My personal preference is REST (specifically the hypermedia constraint) because it allows for loose coupling, so the client and server can evolve independently, which is really nice. Internally that may be less important, unless your application code base and the team supporting it grow too quickly, at which point tight coupling can bring you to your knees and require way too much time to test across multiple departments for every code change request.

REST requires HTTP verbs and headers.

Which of the 6 constraints that define REST says that? I linked to the Wikipedia article above for a reason. Sure, REST "applied to web services" typically involves HTTP (verbs, headers, and the rest), but according to the constraints which define REST, it's not a requirement as I understand it. The constraints can be met by other means, if someone chose to do so.

TBH I think REST is a dumb fad started by millennials who read the HTTP spec and were like "OMG http has all these verbs, let's write some ruby libraries to use them!"

The only stuff you get "for free" with RESTful systems is caching or various HTTP stuff that works per-path (like L7 proxies that do intelligent rate limiting or whatnot).

That said, we're not anti-REST for external APIs (jussi could be expanded to support it, in theory, when talking to clients) but 100% of our internal RPCs are json-rpc at present.

Ruby on Rails was one of the worst things that could ever happen to "REST". It made the term useless, which is why I now say "hypermedia APIs" to be clear. If someone claims to be "RESTful" or "REST", the first thing I say is, "Show me the hypermedia." If there's no HATEOAS, there's no REST. Maybe we'll have some time in Lisbon for me to show you some stuff we're doing with our new admin and the Siren format. When the client application is directly driven by server-provided, context-aware hypermedia controls, it's a powerful thing. If you're not getting into the code-on-demand stuff or leveraging forms, you're just barely scratching the surface of what true REST brings to the table. It's not just HTTP verbs, pretty URLs, and uniform resources. It's much more than that. Yes, caching, consistent HTTP status codes, pretty much most of the W3C and web-based RFC specs you get for free (or, at least, you get a pattern every developer and library can agree on, regardless of the domain, ranging from profiles, to accept headers, to custom media formats, etc., etc.)... but again, it's more than that.

It's been a few years since I've given conference talks on hypermedia APIs, but sometimes just showing someone the HAL Browser in action makes it all click. Drupal started using HAL+json after one of the web services guys saw one of my talks. It's fun stuff. Unless you've had deep conversations with experts like Mike Amundsen (who, after 5 years of RESTfest, I would consider a friend), I'd be careful dismissing an entire approach as "a dumb fad started by millennials." Having been to and spoken at a number of API conferences, I find that statement out of touch with the enterprise-level usage of hypermedia controls today.

It could be the greatest thing in the world; we still don't want to tightly couple our RPC format to HTTP transport.

Considering that all of our services will speak it as our standard, implementing a translation layer (as was done for steemd with steemapi by @busy.org) isn't high on our priority list.

You may also be interested to note that jussi supports the JSON-RPC batch format, so that per-call overhead is optional. You can POST a list.
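
For instance, here is a hedged Python sketch of a batch request (per the JSON-RPC 2.0 spec, the POST body is simply a JSON array of request objects, and responses are matched back to requests by id):

import requests

# Several calls in one HTTP round trip.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "call",
     "params": ["database_api", "get_dynamic_global_properties", []]},
    {"jsonrpc": "2.0", "id": 2, "method": "call",
     "params": ["database_api", "get_accounts", [["steemitdev"]]]},
]

responses = requests.post("https://api.steemit.com", json=batch).json()
for reply in responses:
    # Match each reply to its request by id; response order is not guaranteed.
    print(reply["id"], "ok" if "result" in reply else reply.get("error"))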

Good to know jussi supports JSON batching. HTTP/2 is definitely pretty close to websockets, though you have to use SSE (Server-Sent Events) for server push. For me, the closer to TCP/UDP the better. But if full-duplex is one of your issues, I can see how HTTP/2 could probably help with that.

Has there also been thought given to using SSE to push frequent requests such as head_block_number or market data, instead of requiring constant polling (also a problem with steemd)?

Regarding JSON versus REST, I totally agree it's the right choice, as it's easy to read/understand, easy to parse, and works with any transport one can devise. I think the web is likely an order of magnitude more interoperable (at least) for having adopted it. My only real issue with it is verbosity, though for STEEMIT post and block data there's not that much one could gain in efficiency over simply implementing standard compression techniques.

However, for certain requests, you may want to consider a highly optimized data packet stream (e.g. market data via protobuf). ArangoDB's velocypack has some interesting properties in that regard as well.

This is something I also plan to add to the C++ websocket reverse-proxy STEEMIT cache that I've been building (which also includes JSON batching and SSE functionality as a backup to websockets), which I already use to distribute a single stream from a full RPC node to my backup condenser (front- and back-end), as well as to my discord bot and for all my market data pulls.


Finally, I just wanted to throw in these few links, in case there's a tidbit of wisdom in one of them you might find helpful:

Link: How we fine-tuned HAProxy to achieve 2,000,000 concurrent SSL connections

Link: How Discord Scaled Elixir to 5,000,000 Concurrent Users

Link: The Road to 2 Million Websocket Connections in Phoenix

Link: Will WebSocket survive HTTP/2? (just for reference, a good comparison between HTTP/2 & websockets)

Thanks for the info

It's obvious now that your team is working tirelessly to make steemit and steem as good as steemians have dreamed, @steemitdev.

jussi is another great added value, as its name was derived from juicy.

It's a cool custom built caching layer.

When you cook something with steem, it becomes nice and jussi.

I reciprocate that meme, dude - that's awesome 👏

The guy who made it, @layz3r, is a Latin nerd and claims it is Latin for "request".

From what I understand, jussi is the "first-person singular perfect active indicative" of jubeō, jubēre, which is an alternative form of the Latin verb iubeō, iubēre, meaning "to command, to order".
So jussi means "I commanded" or "I ordered".

Does this mean things will be less glitchy?
It would be nice to hear from the devs when the site is having problems...
Why can't the devs do something like the Firefox/Chrome extensions by @armandocat, and implement some way of getting rid of dead followers and hiding resteems, or sorting them like Busy.org? Why are all the cool things coming from add-ons...
It would be nicer if things were more unified, rather than having to switch things through second(?) party apps, extensions, and pages.
I am not a tech, so Idk.
Thank you for the great site! I do understand it is still in the beta stage; I can't wait to see how this grows and expands!
Namaste

Congratulations @steemitdev, this post is the most rewarded post (based on pending payouts) in the last 12 hours written by a User account holder (accounts that hold between 0.1 and 1.0 Mega Vests). The total number of posts by User account holders during this period was 1274 and the total pending payments to posts in this category was $1676.44. To see the full list of highest paid posts across all accounts categories, click here.

If you do not wish to receive these messages in future, please reply stop to this comment.

hi @steemitdev, @ned

I WISH YOU COULD JUST TAKE A GLANCE AT THIS. JUST A GLANCE- CONCERNS OF SOME AFRICANS.

This came up in a #community challenge where people in #GHANA and Africa want to promote #STEEMIT at their various locations.

https://steemit.com/steemit/@richforever/internet-data-is-very-expensive-in-africa-major-hindrance-to-wealth-creation-in-africa

Very good news. I can imagine a scenario where 3rd party developers utilize the new api to bring a number of innovative personal and business services to market. This will in turn help to broaden the appeal of Steem blockchain technology.

I like these changes, but where is Jussi currently drawing data from?

I believe Hivemind isn't ready yet, and I wasn't easily able to understand which of the other sources are being used for what.

I tried running the docker SBDS as this seems very useful to me, but the documentation isn't really there yet, I couldn't easily see the DB schema, and getting the data as checkpoints required Amazon S3 permissions I don't have. Are there any public sources for this checkpoint data?

Sounds good but it’s a language I don’t understand 😂

Dear @ned

I met you very briefly around this time last year at the Indaba Hotel in South Africa. Today we're having the annual lunch for my mom's birthday at the Indaba and my aunt and uncle and I got onto the Steemit discussion (I just joined about a month ago).
Honestly, I don't know if it will reach you or if it will mean anything at all. I just wanted to say - hang in there. Many people can't grasp the "ride the wave" concept very well, but I think you're doing pretty well. From the sounds of things, people have been gunning for you personally. It must be pretty difficult to feel like you're carrying this all, but you seem to have a community behind you that truly believe in you. I owned a small business when I was 27, I can't even begin to think what it might feel like to manage something of the magnitude you're dealing with. You're doing a fantastic job @ned .

Yay! A transparent cache service - that is (I guess) exactly what was needed for these performance issues!

So happy it is back to running smoothly again. Good work @steemitdev

You can find the Russian version of this post HERE

Hope that all developers will read this post and take action.

Well, this is really a huge step towards making the apps faster.

Great, even more apps are coming soon... wish I could have some online training on how to use this...