App Stream update: 4 new search APIs, 2 filter APIs, better functions and error responses


Github repo: https://github.com/peerquery/app-stream.

I noticed some crude async conventions I had initially used in App Stream, so I took some time to update them and add a few new features.

New search APIs

Full body-text search would require the column to be altered for full-text indexing, and it would also be too expensive given the load already on an App Stream DB. For this reason, search is limited to titles, authors and categories, and the maximum number of results is 20.

Since comments/replies do not have titles or "categories", only posts are searched. Searching comments/replies would require a full body-text search.

The 4 new search APIs include:

  • /api/search/title/:text - find posts with text in their titles
  • /api/search/author/:author/:text - find posts by an author with text in their titles
  • /api/search/category/:category/:text - find posts from a category with text in their titles
  • /api/search/category/author/:category/:author/:text - find posts by an author from a category with text in their titles

Example: to search for posts by author @stoodkev in the #dev category whose titles contain 'SteemJS', call /api/search/category/author/dev/stoodkev/SteemJS (see the client sketch after the field list below). The results are returned as JSON objects with the following fields:

  • id
  • block
  • tx_id (Steem transaction ID)
  • author
  • permlink
  • category
  • title
  • body
  • json_metadata
  • timestamp
  • url
  • last_update
  • depth
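For illustration, here is a minimal sketch of calling that endpoint from a client. It assumes a Node 18+ runtime for the built-in fetch, and the host and port are placeholders rather than anything App Stream prescribes:

//minimal sketch of calling the category/author search API from a client
//assumes Node 18+ (built-in fetch); the host and port are placeholders
const base = 'http://localhost:3000';

async function searchCategoryAuthor(category, author, text) {
    const res = await fetch(`${base}/api/search/category/author/${category}/${author}/${text}`);
    if (!res.ok) throw new Error('Search failed with status ' + res.status);
    return res.json(); //array of up to 20 posts with the fields listed above
}

searchCategoryAuthor('dev', 'stoodkev', 'SteemJS')
    .then(posts => posts.forEach(post => console.log(post.title, post.url)))
    .catch(console.error);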


A guide to these endpoints has been included at the search API guide endpoint, localhost/api/search, and the docs have been updated to reflect it.

New filter APIs

Sometimes you might want to return posts from just a particular author or just a particular category. The new filter APIs allow exactly that. The endpoint base with minimalist docs is localhost/api/filter, and as with the search APIs, the default maximum number of returned rows is 20 for performance reasons.

Filter by author:

localhost/api/filter/author/:author, e.g. localhost/api/filter/author/utopian-io returns the 20 most recent posts from @utopian-io. The results use the same JSON fields as the search APIs, listed above.
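For reference, such a filter route on the server can follow the same pattern as the curate handler shown later in this post. The sketch below is illustrative only: the stored procedure name filter_by_author is hypothetical, and the real implementation in the repo may differ.

//illustrative sketch of a filter-by-author route, modeled on the curate
//handler shown later in this post; the stored procedure name
//filter_by_author is hypothetical - see the repo for the real code
app.get('/api/filter/author/:author', async (req, res) => {
    try {
        var author = req.params.author;
        var sql = "CALL filter_by_author(?)"; //hypothetical procedure name
        var results = await pool.query(sql, author);
        res.json(results[0]); //up to 20 recent posts by that author
    }
    catch (err) {
        console.log(err.message);
        res.sendStatus(500); //error response, as described further below
    }
})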



This filter is not necessary if your App Stream's target is posts/comments/replies by author - if you are already curating by author, then everything in your DB is from your target author anyway.

Filter by category:
localhost/api/filter/category/:category, e.g. localhost/api/filter/category/life returns the 20 most recent posts from the life category. The results use the same JSON fields listed above.



Likewise, this filter is not necessary if your App Stream's target is posts/comments/replies by category - if you are already curating by category, then everything in your DB is from your target category anyway.

Cleaner API functions

Initially, I had left try...catch blocks over from what was meant to be a callback-free version, wrapped around the callback version of pool.query(). pool.query had already been promisified with util.promisify, as described by Matt Hagemann.

There is no need to use the callback version of pool.query. And even when using the callback version, the try...catch block is redundant, since the callback already receives errors through its error argument.
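For context, the promisified pool looks roughly like the sketch below. The connection options are placeholders, not App Stream's actual configuration:

//promisify pool.query following the util.promisify approach mentioned above;
//the connection options here are placeholders, not App Stream's actual config
const mysql = require('mysql');
const util = require('util');

const pool = mysql.createPool({
    host: 'localhost',
    user: 'db_user',
    password: 'db_password',
    database: 'app_stream'
});

//swap the callback-based query for a promise-returning one, keeping
//pool as the `this` binding so connections are still drawn from the pool
pool.query = util.promisify(pool.query).bind(pool);

//now `await pool.query(sql, params)` resolves with the results,
//or throws the error that would have been passed to the callback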

I noticed this needless bloat while developing Curator's APIs, so I've updated the pool functions from:

//fetch latest ? posts
app.get('/api/curate/:num', (req, res) => {
    try {
        var num = req.params.num;
        if (num > 100) num = 100;
        var sql = "CALL curate(?)";
        pool.query(sql, num, function (error, results, fields) {
            if (error) { console.log(error.message); return; }
            res.json(results[0]);
            //console.log('Query successful.');
        });
    }
    catch (err) {
        console.log(err.message);
    }
})
 

To the cleaner version:

 

//fetch latest ? posts
app.get('/api/curate/:num', async (req, res) => {
    try {
        var num = req.params.num;
        if (num > 100) num = 100;
        var sql = "CALL curate(?)";
        var results = await pool.query(sql, num);
        res.json(results[0]);
        //console.log('Query successful.');
    }
    catch (err) {
        console.log(err.message);
    }
})
 


It's cleaner, more readable and more async-like now. The former version works and is not buggy, but it is completely needless bloat.

Error responses

If you look at the above code carefully, you will notice that its error handling is incomplete. When the MySQL query runs into an error, the error message is logged to the console - however, nothing is sent back to the requester.

This means that when an error occurs, the API requester is left waiting for a response from the server until the request eventually times out.

I've added error responses to the error handlers of all existing and new DB API functions, so when an error occurs, the requester is sent a 500 (Internal Server Error) response and the connection is terminated immediately.

 

//fetch latest ? posts
app.get('/api/curate/:num', async (req, res) => {
    try {
        var num = req.params.num;
        if (num > 100) num = 100;
        var sql = "CALL curate(?)";
        var results = await pool.query(sql, num);
        res.json(results[0]);
        //console.log('Query successful.');
    }
    catch (err) {
        console.log(err.message);
        res.sendStatus(500); // *** the new error responder ***
    }
})
 


Change guide_api to api_guide

api_guide was the original name for the API documentation module; however, along the way I changed it to guide_api. This broke the parts of the code that still used the old name.

While this did not affect App Stream's core functioning, it caused an undefined value to show for the api_guides option when visiting localhost/api. I've updated all the names back to the original api_guide, so the API guide state now shows correctly as on/off.

Safe and recommended to install

This update to App Stream does not break anything. If you already have your App Stream up, you can deploy this updated version without breaking anything or losing any of the data already generated.

I would also recommend that anyone already using App Stream update to this version. It does not break anything on your server; rather, it frees up server time when there is an error, and it comes with more and better APIs.

The only catch is that, when updating an existing system, you might miss a few posts published during the server update and restart window, depending on your setup.

Links

Github repo: https://github.com/peerquery/app-stream

Docs: https://github.com/peerquery/app-stream/blob/master/README.md

Commits: https://github.com/peerquery/app-stream/commits/master

Author: https://github.com/Dzivenu
 




Thanks for the contribution!

It looks like a really interesting project, and the documentation it already has is really great!

One thing I would recommend for future contributions is that you add a PR, or at least link to the commits that are relevant to the contribution.

Keep up the good work!

[utopian-moderator]

