RE: PREVIEW. Technical thinking about the future of the space. ... [ Word Count: 2.000 ~ 8 PAGES | Revised: 2018.12.21 ]

in #development

[Outline. Will be making a mathematical post later ...]

The theme is one on which papers have been coming out occasionally for years, and I think some thoughts about it in the context of tokens are due.

Fast, large databases, those with very large variety in how information is entered and great flow in and out, are very hot. At least B2B at the moment.

There are a lot of recent, interesting frameworks that have not yet been built into a product, however.

Each of those, so far as I can see, can also be generalized.

[I really wish blogs could render LaTeX and TikZ ... but whatever ...]

A short summary of one approach with, I think, a future.

(1) Cells are smart databases. Which arbitrarily many different services can read and write to concurrently. It's one of the more modern approaches (originated at MIT) to scalable concurrency of services with high data flow and high data heterogeneity. To build high flexibility. Seems interesting for easily (and elegantly) building a high-activity marketplace.

Services have no memory. (For simplicity but also for security.) A cell has various links to some services and not others. These links can change. A cell can receive a message to make a nonoriented link (Cell---Service) oriented (Cell-->Service). The linked service then sees an initial input and starts working. When done processing, it sends output to the cell, possibly changing --> back to ---. All the cells linked to a service are its neighbors. Services are most often not linked to each other directly; they are linked through such mailboxes. It's a net.
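
To pin the shape down, here is a minimal TypeScript sketch of that linking protocol. Everything in it (the Cell and Service names, the orient message, the string orientations) is my own invention for illustration; the idea prescribes behavior, not an API.

```typescript
// Hypothetical names throughout: the text prescribes behavior, not an API.
type Orientation = "---" | "-->";

interface Service {
  id: string;
  // Called when a link to this service becomes oriented: the service
  // sees the cell's current contents as its initial input.
  onInput(cell: Cell, input: unknown[]): Promise<unknown>;
}

class Cell {
  private links = new Map<Service, Orientation>();
  private contents: unknown[] = [];

  link(service: Service): void {
    this.links.set(service, "---"); // start nonoriented: Cell---Service
  }

  write(value: unknown): void {
    this.contents.push(value);
  }

  read(): unknown[] {
    return [...this.contents];
  }

  // The message that flips Cell---Service to Cell-->Service. The linked
  // service starts working asynchronously; when it finishes, its output
  // is written back to the cell and the link flips back to ---.
  orient(service: Service): void {
    if (this.links.get(service) !== "---") return;
    this.links.set(service, "-->");
    void service.onInput(this, this.read()).then((output) => {
      this.write(output);
      this.links.set(service, "---");
    });
  }

  // All services linked to this cell; a service's neighbors are,
  // symmetrically, all cells linked to it.
  neighbors(): Service[] {
    return [...this.links.keys()];
  }
}
```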

So no time management is needed. All services work asynchronously. Concurrently. They don't wait turns, and there is no managing of the waiting of turns. Which eliminates the biggest constraint on rapid and easy development, and later on user-experience flexibility.

Many services can write and read to the same database. Which uses one of possibly several rules for merging inputs [SUS08, RAD09]. Or it stores many inputs as they come in, placing each in a tree according to which situation that input is part of [HEW76]. Which incidentally allows symbolic evaluation.
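
A toy illustration of those two storage disciplines, assuming nothing beyond what the paragraph says: a pluggable merge rule, and a store that keeps every input filed under its situation. The rule shown (prefer the more informative value) is just one plausible example.

```typescript
// Two invented storage disciplines matching the paragraph above.

// Discipline 1: a pluggable rule for merging concurrent inputs.
type MergeRule<T> = (existing: T | undefined, incoming: T) => T;

// Example rule: keep whichever value is more informative
// (here, crudely, the longer string).
const moreInformative: MergeRule<string> = (a, b) =>
  a === undefined || b.length > a.length ? b : a;

// Discipline 2: keep every input, filed under the "situation" it is
// part of, so inputs from different situations never clobber each other.
class SituatedStore<T> {
  private bySituation = new Map<string, T[]>();

  add(situation: string, value: T): void {
    const bucket = this.bySituation.get(situation) ?? [];
    bucket.push(value);
    this.bySituation.set(situation, bucket);
  }

  inSituation(situation: string): T[] {
    return this.bySituation.get(situation) ?? [];
  }
}
```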

Any working service can look into any of its neighboring cells if it needs further information.

If a service gets y and needs to perform f(x,y), it will search the neighboring cells to which it has links for the missing x. If it finds x, it copies it, computes f(x,y), then drops the result in some cell chosen based on f(x,y), or x, or y, or some rule. Otherwise it waits, searching again periodically, meanwhile doing other work.
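
As a sketch, with cells reduced to plain Maps for self-containedness, that search-compute-or-wait loop might look like this (the retry interval and all names are invented):

```typescript
// A service that has y, needs x, and searches its neighbors for it.
async function applyWhenComplete(
  f: (x: number, y: number) => number,
  y: number,
  neighbors: Map<string, number>[], // cells this service has links to
  drop: (result: number) => void,   // rule for where the result goes
  retryMs = 100,
): Promise<void> {
  for (;;) {
    // Search all neighboring cells for the missing x.
    for (const cell of neighbors) {
      const x = cell.get("x");
      if (x !== undefined) {
        drop(f(x, y)); // found: copy it, compute, drop the result
        return;
      }
    }
    // Not found: wait, then search again; other work runs meanwhile,
    // since this loop yields to the event loop while waiting.
    await new Promise((resolve) => setTimeout(resolve, retryMs));
  }
}
```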

Many services can interact in many ways at once, as needed, over many sequences of services and cells.

It's easy for service-providing parties to join this kind of system, or leave it while it runs, with very little coding required. Which reduces the barrier to usage. And the system grows in capabilities, therefore possibly in utility, faster than the number of services in it increases, as that number grows.

(2) Suppose, basically, you are building a variation on the theme of a marketplace, and will reward buyers and sellers with tokens for transactions. To encourage them to have more transactions. Tokens have utility in that they allocate and control services, or can be redeemed for other goods. Large enough prospective user count? I'm thinking ...

i) marketplace with token rewards (default),

ii) marketplace with token rewards and infrastructure for web services type rewards,

iii) more or less complex rewards ... ?

Allow third parties to easily offer web services; tokens are the user interface. But they must be able to integrate conveniently.

Beginning to build a web-services framework that, for example, allows third parties to offer web services for tokens should put a harder, utility-based floor on the value of the utility tokens that are the rewards. And it presents the best trade-off: architecture that is needed, that is also interesting, and that has valuation prospects.

More complex rewards would have to be much more complex to be interesting. Simple rewards would be boring, unless the system reaches so many users that just about any product appears on the marketplace and can be obtained with reward tokens. But directly competing with something like Amazon, without the advantage of some difference, would be too risky.

(3) I like it because I think it will be easy to pitch: it has broad use-case and market applicability, it's easy to add use cases that hedge, it's easy to position, and it's mathematically interesting to me. (Multicategories as a model, for example.)


[More ideas than time. Had another good idea, I think. Will think about it some more later; Steem-as-post-it-note seems to be how I sometimes use this blog.]

There's a project for making responsive front ends, by the Smalltalk guys, in JavaScript: https://harc.ycr.org/project/lively/ (https://github.com/LivelyKernel/)

It's basically done, but the YCR team has made no practical use of it ... yet.

Consider the token rewards for buying and selling in a marketplace with rewards used for web services, which is discussed above.

Thinking about it, it would be interesting to see a developer use Lively to make a demo page of auction or shopping listings.

For example, each one a block: say an image, below it a price, below it a possible token reward. And likewise some web-service-like listings. "Does ABC. Reward tokens needed: X."

A side panel on the right. Which is a desktop-type (Lively) environment.

Users should be allowed to drag listings there and connect them together, and with web-service listings, visually, using the Lively framework. And the user can do something interesting with connected listings. The idea being something like: users get more reward tokens if they connect several things and buy them together, or they get more reward tokens to use on a web service if they indicate in advance which services they will spend the reward tokens on, by connecting some listings with some web services before buying the listings. The greater number of tokens they get, however, can then only be used on those web services.
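
A toy version of that reward rule, to pin the idea down. The 1.5x multiplier and all names are invented; the one real constraint from the text is that the bonus tokens are restricted to the pre-connected services.

```typescript
interface Listing {
  id: string;
  price: number;
  baseReward: number; // tokens for buying this listing alone
}

interface RewardGrant {
  tokens: number;
  restrictedTo: string[] | null; // null = spendable anywhere
}

function rewardFor(
  bundle: Listing[],
  connectedServices: string[], // services wired up in the side panel
): RewardGrant {
  const base = bundle.reduce((sum, l) => sum + l.baseReward, 0);
  if (connectedServices.length === 0) {
    return { tokens: base, restrictedTo: null };
  }
  // Committing to services in advance earns more tokens, but the extra
  // tokens can then only be spent on those services.
  return {
    tokens: Math.round(base * 1.5), // 1.5x is purely illustrative
    restrictedTo: connectedServices,
  };
}
```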

(Then the services-computing-via-cells database framework considered in the previous comment may be useful, even more than before, in the back end, for that kind of front end to run and scale. Cells accept partial computations, unfinished computations, partial results, propagating partial information, which can nonetheless be completed by information in other cells if they are neighbors of the same service. Maybe services should do higher-order operations, like changing which other services are connected to which cells/minidatabases. Arrows between arrows. Cells as smart databases, and the way links are formed, could implement some of that kind of tying nicely. Hmmm.)

Maybe have the content of listings in the demo be posts via a blockchain. Like Steem. Which are parsed by the front end. Such content would be much less mutable, and time-stamped.

The Lively team may be quite happy to suddenly have a use case for Lively with a potentially great number of users. Meanwhile, Lively used in the above manner seems like it may facilitate a shopping and services experience that differentiates itself from the rest.

I want to think more about this.

Rough logic behind a toy example of an approach to a database in the style of [SUS08, RAD09], if this interfaces with one or more blockchains. Suppose the aim is publishing to a blockchain, producing values, but also building a distributed database framework that allows services joining and leaving the system to pass around incomplete operations and complete them as information appears.

Using a lot of web workers (each is its own thread) and publishing just the invariants (values whose result doesn't depend on order) to the blockchain as they become available, via a single-threaded publishing package, may be the way to most quickly build a demo, which, if necessary, can later be refactored.
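
A rough sketch of that layout, assuming a browser-style Worker API and some blockchain client hidden behind a hypothetical publishToChain function (not a real library call):

```typescript
// Stand-in for whatever blockchain client one prefers; not a real API.
declare function publishToChain(value: unknown): Promise<void>;

// Main thread: spawn x workers, each its own thread, and publish only
// invariants (order-independent values) as the workers report them.
function spawnWorkers(x: number, scriptUrl: string): void {
  for (let i = 0; i < x; i++) {
    const worker = new Worker(scriptUrl);
    worker.onmessage = (e: MessageEvent) => {
      if (e.data.invariant) {
        // Because invariants don't depend on arrival order, funneling
        // many concurrent workers through one single-threaded publisher
        // is safe.
        void publishToChain(e.data.value);
      }
    };
    worker.postMessage({ blockId: `block-${i}` }); // invented naming scheme
  }
}
```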

Suppose f.F[x] is one of all possible operations requiring x many variables. Further suppose the operations considered are those which can partially complete when not all variables are present, but will not entirely complete. That is, if x-2 variables are given, the operation produces an output of a type that, if taken back as an input to f when the 2 missing variables are later supplied, approximately continues where it left off until it completes. Similar to futures in some languages.
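
A small sketch of such a partially completing operation in plain TypeScript; here f needs three variables, and supplying fewer returns a resumable continuation, in the spirit of futures. All names are invented.

```typescript
// An operation either completed, or resumable when missing variables arrive.
type PartialOp =
  | { done: true; value: number }
  | { done: false; resume: (...missing: number[]) => PartialOp };

// f requires 3 variables. Given fewer, it returns a continuation that
// "continues where it left off" once the rest are supplied.
function f(...args: number[]): PartialOp {
  if (args.length >= 3) {
    const [a, b, c] = args;
    return { done: true, value: a + b * c };
  }
  return { done: false, resume: (...missing) => f(...args, ...missing) };
}

const partial = f(2);      // only one of three variables known
// ... later, the two missing variables turn up in a neighboring cell ...
const result = partial.done ? partial : partial.resume(3, 4);
// result: { done: true, value: 14 }
```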

Perhaps implement spawn(x)(cell, services(y, A.Script[x+1])), which would produce x many threads, each corresponding to a (procedurally) named block that is visible or not. If visible, it also prints to the chain via your preferred package or framework.

Each of the x many threads sends a message with the name (id) of its block to each of y many services.

These are services running on, say, Heroku, and they listen for some blockchain event. If or when the event occurs, they read all the cells whose names they have (in the toy example, just the current state of the blocks), plus the value provided by the event occurring, run the script, and write to one of the blocks. The script may involve running another spawn(...)(...) type script that waits and listens for additional blockchain events. (The logic then becomes multicategorical.) Initially the blocks are empty and the operation only partly completes.

But now one block stores the result of a begun-but-not-finished operation. Eventually the cells are filled with results, and this allows some operations that were unfinished to actually complete, or to help other operations complete. The threads do the passing and reading and writing in the background, starting and stopping when sent a message by services that know their id. Other services also read these cells, and so unfinished operations propagate from service to service until they complete. When completed, they get posted. And that may or may not be an event for which some service listens.
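
One way the completion step might look, reusing the resumable-operation shape from the sketch above; the event source and block store are stand-ins.

```typescript
// PartialOp as in the earlier sketch.
type PartialOp =
  | { done: true; value: number }
  | { done: false; resume: (...missing: number[]) => PartialOp };

interface Block {
  pending: PartialOp | null; // a begun-but-unfinished operation
  value: number | null;      // the post-able result, once complete
}

// On each chain event, feed the new value to every stalled operation;
// those that complete become results, the rest keep propagating.
function onChainEvent(eventValue: number, blocks: Map<string, Block>): void {
  for (const block of blocks.values()) {
    if (block.pending && !block.pending.done) {
      const next = block.pending.resume(eventValue);
      if (next.done) {
        block.value = next.value; // finished: ready to be posted
        block.pending = null;
      } else {
        block.pending = next;     // still unfinished: waits for more events
      }
    }
  }
}
```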

Regarding something else.

For later. Good topic for a post. Haven't made one in a few days.

Logical, functional programming is really very elegant. It would be nice to make a concurrent implementation with something like Erlang under the hood. Like reimplementing Flat Concurrent Prolog with a few well-known features also included. A combination is more than the sum of its parts.

Here's how we can think about Erlang. The following is theoretical discussion, of primarily academic interest. Let's see if a really concise, natural-language explanation is possible.

It simply already does well what we would want at the lowest level. We want the threads and concurrency and message passing to be real and primary.

Once we have basic actor agents, we build operations like the following. Write simple functions. List them in named libraries for convenience. A.B here means A selected from B, as usual. We can define actor agents like in Flat Concurrent Prolog.

Define some initial inputs, if any: Loads. Define some Exceptions: heuristics that run on Loads and input messages and prune the list of Operations. Operations are run on Loads until a valid output, or a timeout, or a crash-and-restart, or until all Operations fail in some order to produce a valid output. Operations are defined as procedures, functions in the base language.

There may be Notes, for Petri-net-type marks or Holland tags, since actor agents can be treated as wholes, as messages; these implement, anyway, Hewitt actor rules, in that notes tell how to treat future messages. Valid outputs of Operations can be stored for a while as valid Inputs to Destinations, which are stored addresses, and sent out as messages, perhaps with some delay.

Like: spawn(ForAll(A)(ForAll(B)(neuron(message(<>,<>,r).RandomsLibrary, f_initial=1/2).NeuronsLibrary).Operations).Operations).
Here neuron would create a new actor agent for the pair indicated. It is not necessary that pairs be used; tuples or more complex structures, like other nets, can be considered if multiarrows are allowed.
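
A much-reduced sketch of the actor-agent shape (Loads, Exceptions, Operations, Notes, Destinations). The field names come from the text; the mechanics, a single sequential step function, are simplified guesses at what a real concurrent implementation would do per message.

```typescript
// Field names from the text; the mechanics are simplified guesses.
type Operation = (load: unknown) => unknown | null; // null = no valid output

interface ActorAgent {
  loads: unknown[];      // initial inputs plus queued incoming messages
  exceptions: Array<(ops: Operation[], load: unknown) => Operation[]>;
  operations: Operation[];
  notes: string[];       // tags telling how to treat future messages
  destinations: Array<(msg: unknown) => void>; // stored output addresses
}

// One message-handling step; a real implementation would run many such
// agents concurrently (in Erlang, one process per agent).
function step(agent: ActorAgent): void {
  const load = agent.loads.shift();
  if (load === undefined) return;

  // Exceptions are heuristics that prune the candidate operations.
  let ops = agent.operations;
  for (const prune of agent.exceptions) ops = prune(ops, load);

  // Run operations on the load until one yields a valid output.
  for (const op of ops) {
    const out = op(load);
    if (out !== null) {
      for (const send of agent.destinations) send(out); // message out
      return;
    }
  }
  // All operations failed: here a fuller version would time out,
  // crash-and-restart, or record the failure as a note.
}
```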

Different actor agents have different operations. A neuron in a net can be a procedure that removes a load in A, a message A would operate on later (for example, received but in queue while A operates on something else first), and instead sends it to B, which has a different set of operations. Logic by elimination. This redirect may happen with a frequency, initially set at 1/2.

Mundane learning, by tweaking a coefficient when output deviates from a developer-supplied or network-generated output, is one operation among others inside the actor agents of the net. Each does logic. And a log of which operations succeeded and which failed on which load reveals that logic. That log can be a message sent to actors in the net. So the net tweaks itself as it runs on messages sent to it as experience: it performs reasoning, and where reasoning fails it changes its reasoning; and since some messages document its reasoning failures, it can change its reasoning based on its own descriptions of its own reasoning. Because testing different operations, and characterizing which branches give a valid output and which don't, is elimination logic, a means to ordinary formal logic. Experience and logic are mixed.

Neural nets can do logical reasoning, which is not always correct but depends on experience, hence nonmonotonic in the sense of McCarthy. Just usual logical, functional programming where the well-known actors or FCP agents are first-class objects and the primary units.
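
And a toy neuron in the above sense: with frequency f, initially 1/2, it redirects a queued load from A to B, and f is nudged when output deviates from a supplied target. The learning rule and all names are purely illustrative.

```typescript
interface NetNode {
  queue: number[];                // loads waiting to be operated on
  operate: (x: number) => number; // this node's own set of operations
}

function makeNeuron(a: NetNode, b: NetNode, fInitial = 0.5) {
  let f = fInitial; // redirect frequency, initially 1/2 as in the text

  return {
    // With frequency f, remove a queued load from A and send it to B
    // (which has different operations) instead of operating on it here.
    step(): number | undefined {
      const load = a.queue.shift();
      if (load === undefined) return undefined;
      if (Math.random() < f) {
        b.queue.push(load);
        return undefined;
      }
      return a.operate(load);
    },
    // Mundane learning: nudge the coefficient when output deviates from
    // a developer-supplied or network-generated target.
    learn(output: number, target: number, rate = 0.01): void {
      f = Math.min(1, Math.max(0, f + rate * Math.sign(target - output)));
    },
  };
}
```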

This is how things like Flat Concurrent Prolog operate. It makes for very short code. But it is challenging to predict what outputs, if any, the code will produce. Analysis is still required to make good predictions about whether such a net will solve a particular problem or not. An open problem in science.
