
Preview. Next post.


 

 

BLOG

 

Some considerations regarding:

One future direction of blockchain technology.

Controlled experiment model of computation (CEMOC). Based on actor spawning, using, for example, an implementation consisting of actor agents with Loads, Exceptions, Operations, Notes, Inputs, and Destinations fields, and tag tokens, with generic composition in each field.

      Word count: 4.000 ~ 16 PAGES   |   Revised: 2019.3.15

 

 

— 〈  1  〉—

LET'S MAKE IT

 

Will discuss the following. We need a concise, popular way to describe the following kind of system design.

Methods transform inputs. Inputs sit in mailboxes; some inputs are files, for example image files. Each neuron transforms inputs and passes the results on to other neurons, and it has several methods to choose from. It can update its list of methods, backtrack when a method produces an undesired result or fails to produce any result, select the order in which to process inputs according to certain procedures, and effectively split into several neurons, each taking part of the list of inputs to process and part of the methods with which to process them. Neurons can spawn other neurons where that is part of a method, and neurons use heuristics to decide to which other neurons to send their results as inputs for further transformation. Outputs appear in the appropriate mailboxes and are displayed to the correct end users. End users select goals.

Methods also extract data, and this serves to train neurons as if they were part of a different network: the same mechanism, just a different type of input, handled by a different type of method. Not all methods work on all inputs. Different overlapping networks coincide in some neurons, and state affected by one network can more or less affect the behavior of neurons in the other network. Each neuron has its own logs and internal state.
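
As a rough illustration, here is a minimal Erlang-style sketch of one such neuron, assuming inputs arrive as mailbox messages, methods are one-argument functions, and results are forwarded to a randomly chosen downstream neuron; the module name, message shapes, and method representation are my own assumptions, not the system itself.

```erlang
%% neuron_sketch.erl
%% A minimal sketch of a "neuron": inputs arrive in the mailbox, one of
%% several methods transforms them, and the result is forwarded to a
%% downstream neuron (random choice stands in for the heuristics).
-module(neuron_sketch).
-export([start/2, loop/2]).

%% Methods: list of fun(Input) -> {ok, Result} | fail
%% Downstream: list of pids of other neurons
start(Methods, Downstream) ->
    spawn(?MODULE, loop, [Methods, Downstream]).

loop(Methods, Downstream) ->
    receive
        {input, Input} ->
            Method = pick(Methods),                       % choose a method
            case Method(Input) of
                {ok, Result} when Downstream /= [] ->
                    pick(Downstream) ! {input, Result};   % pass the result on
                _ ->
                    ok                                    % failed or no destination:
                                                          % a fuller version backtracks here
            end,
            loop(Methods, Downstream);
        {add_method, M} ->
            loop([M | Methods], Downstream)               % the neuron can update its methods
    end.

pick(List) -> lists:nth(rand:uniform(length(List)), List).
```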

As usual, choice and selection often involve primitive randomness or else a probability distribution. Learning affects the probability distribution.

The next topic is that automation is required to make reasoning about and working with such systems feasible and furthermore productive.

For example, suppose the procedure NewRandomName(NameType, LIST1, LIST2, TEXT), as in NewRandomName(x_Neuron, [TokenT, TokenT1, TokenT2, ...], [TokenN1, TokenN2, ..., TokenT], TEXT), generates a random atom (x_NeuronA57 or x_NeuronBCD9876 or ...) and then writes that in the place of TokenT or TokenT1 or TokenT2 or ... wherever those occur in TEXT.

Every atom which NewRandomName generates and puts in the place of a token differs from (is "consistent" with) the atoms so far generated and put in the place of the tokens TokenN1, TokenN2, ..., and TokenT. NewRandomName never writes the same random atom in the place of TokenT when TokenT occurs in two different places, either, because TokenT was also in LIST2.
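
A minimal sketch of such a procedure, assuming tokens and TEXT are plain strings, names are the prefix plus a random numeric suffix, and the set of already-used names starts empty (a fuller version would seed it with the atoms previously assigned to the tokens in LIST2); all function names below are illustrative assumptions.

```erlang
%% new_random_name.erl
%% Replace tokens from List1 in Text with fresh random names built from
%% Prefix, never reusing a name. If a token is also in List2, every
%% occurrence of it gets its own distinct fresh name.
-module(new_random_name).
-export([new_random_name/4]).

new_random_name(Prefix, List1, List2, Text) ->
    replace_tokens(List1, List2, sets:new(), Text, Prefix).

%% generate Prefix ++ random suffix not yet in Used
fresh(Prefix, Used) ->
    Name = Prefix ++ integer_to_list(rand:uniform(16#FFFFFF)),
    case sets:is_element(Name, Used) of
        true  -> fresh(Prefix, Used);
        false -> Name
    end.

replace_tokens([], _List2, _Used, Text, _Prefix) -> Text;
replace_tokens([Tok | Rest], List2, Used, Text, Prefix) ->
    PerOccurrence = lists:member(Tok, List2),
    {Text1, Used1} = replace_token(Tok, Text, Used, Prefix, PerOccurrence),
    replace_tokens(Rest, List2, Used1, Text1, Prefix).

%% PerOccurrence = true: each occurrence of Tok gets its own fresh name
%% (the TokenT case, because TokenT is also in List2);
%% false: all occurrences of Tok share one fresh name.
replace_token(Tok, Text, Used, Prefix, true) ->
    case string:split(Text, Tok) of
        [_NoMatch]      -> {Text, Used};
        [Before, After] ->
            Name = fresh(Prefix, Used),
            {After1, Used1} =
                replace_token(Tok, After, sets:add_element(Name, Used), Prefix, true),
            {Before ++ Name ++ After1, Used1}
    end;
replace_token(Tok, Text, Used, Prefix, false) ->
    Name = fresh(Prefix, Used),
    {lists:flatten(string:replace(Text, Tok, Name, all)),
     sets:add_element(Name, Used)}.
```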

Now consider constructing a network with a hundred neurons by declaring an initial complex agent/neuron and sending it a plain text file script as an input. It has a method for parsing that script.

NewRandomName(x_Neuron,[TokenT],[TokenT],Repeat(100,AddNeuronToList(ListL,Create(TokenT,[Image1.png,Image2.png],[operation1,operation2,...],[(Send(Result,RandomFrom(ListL)))]))))

In other words, the ability to generate a large number of complex agents/neurons generically should be made very compressed and standardized.

One approach is to use scripts: expand them appropriately, fill in the token placeholders, and parse and process the scripts by pattern matching, mostly according to naming patterns.

Other approaches will be discussed.

An interesting system will be discussed, as an intro to discussing backtracking-based approaches to AI, which have a vast literature. We already discussed this months and months ago under the form of PyLogTalk and in other ways. The point is a standard, typical system in whose terms backtracking-based programming in general can be fruitfully discussed.

i.1) Actor agents (actoragents) will have at least the following fields: Loads (dataset), Exceptions, Operations, Notes, Inputs (to other actor agents, things created by transforming loads by operating on them and temporarily being stored), Destinations.

The dataset and operations are what were being implemented recently.

Each actor agent without data waits. It is not loaded, we will say. But if it is loaded, it will begin working. It will select an operation from a set of operations to transform the smallest part of the dataset that is a load upon it. An operation is selected. If it transforms the selected load such that any result is produced, the load is removed from the dataset of that actor agent. But if not, backtracking occurs. The load is not removed and selection is repeated. In respect to that load, it is the operation which is removed from the operations set. The actor agent waits if it has no operations using which it may try unloading itself. Actor agents progressively unload themselves and wait, or else reach a state where they lack the means by which to unload themselves any further, and wait. Selection from an empty set fails. No special logic: the same logic that determines working determines waiting.
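
A minimal sketch of that loop, assuming loads are arbitrary terms and operations are one-argument funs returning {ok, Result} or fail (my own representation, not the implementation):

```erlang
%% actor_agent.erl
%% Load/operation selection with backtracking over the operations set.
-module(actor_agent).
-export([start/2, loop/2]).

%% Loads: list of terms; Ops: list of fun(Load) -> {ok, Result} | fail.
start(Loads, Ops) ->
    spawn(?MODULE, loop, [Loads, Ops]).

loop([], Ops) ->
    %% not loaded: wait
    receive
        {load, NewLoads} -> loop(NewLoads, Ops)
    end;
loop(Loads, Ops) ->
    Load = pick(Loads),                           % select a load at random
    case try_ops(Load, Ops) of
        {ok, _Result} ->
            loop(lists:delete(Load, Loads), Ops); % a result: the load is removed
        fail ->
            %% simplified: here the agent waits as soon as one load cannot be
            %% unloaded; a fuller version would keep trying the remaining loads
            receive
                {load, NewLoads} -> loop(NewLoads ++ Loads, Ops)
            end
    end.

%% backtracking: on failure, remove the operation (with respect to this
%% load) and select again; selection from an empty set fails.
try_ops(_Load, []) -> fail;
try_ops(Load, Ops) ->
    Op = pick(Ops),
    case Op(Load) of
        {ok, Result} -> {ok, Result};
        fail         -> try_ops(Load, lists:delete(Op, Ops))
    end.

pick(List) -> lists:nth(rand:uniform(length(List)), List).
```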

Typically we simply declare an actor agent when it is classlike. In other words, when it becomes loaded, it will select a load and create a copy of itself using the same function the developer uses in the script to create classes of this sort. The copy will contain a dataset, a loads set, containing that load and only that load. It will contain, however, all the same operations and everything else. (None of them have been eliminated yet.) But this counts as a result. Therefore that load is removed from the dataset of the parent actor agent.

So the classlike actoragent would unload itself by spawning instances which inherit the operationsset but only an appropriate subset of the dataset.
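Continuing the sketch above, and still under the same assumptions, the classlike step can be written as an operation the parent runs on a selected load:

```erlang
%% classlike.erl
%% A sketch of the classlike unloading step, reusing the actor_agent
%% sketch above. instance_op/1 is my own name for the operation: it
%% spawns a copy holding only the selected load but inheriting the full
%% operations set, and returns a result, so the parent's loop removes
%% that load.
-module(classlike).
-export([instance_op/1]).

instance_op(Ops) ->
    fun(Load) ->
        Pid = actor_agent:start([Load], Ops),  % one load, all operations inherited
        {ok, Pid}                              % counts as a result -> parent unloads
    end.
```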

Create and Send operations exist: Create("ActorNameIsHere", [LoadsList], [OperationsList]) and Send(object,"ActorNameIsHere"). They should compose.

For example, Create("ActorNameIsHereA", [LoadsListB], [Create("ActorNameIsHereC", [LoadsListD], [OperationsListE]),Send(F,"ActorNameIsHereG")]) is valid.

The function Send() should not be sensitive to type of object. It sends what it is given to send.
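One way to make Create and Send compose, sketched under the same assumptions as above, is to represent them as terms that an interpreter turns into actor-agent operations; the term shapes and registry use below are illustrative, not the actual implementation.

```erlang
%% ops.erl
%% Composable Create/Send operations as terms, interpreted by run/1.
%% Create("A", Loads, Ops)  ~  {create, "A", Loads, Ops}
%% Send(Object, "A")        ~  {send, Object, "A"}
%% An operations list may itself contain create/send terms, so the two
%% compose, e.g.
%%   {create, "A", LoadsB, [{create, "C", LoadsD, OpsE}, {send, F, "G"}]}
-module(ops).
-export([run/1]).

run({create, Name, Loads, Ops}) ->
    Pid = actor_agent:start(Loads, [to_fun(Op) || Op <- Ops]),
    register(list_to_atom(Name), Pid),          % make the actor reachable by name
    {ok, Pid};
run({send, Object, Name}) ->
    %% Send is not sensitive to the type of Object; it sends whatever it is given
    list_to_atom(Name) ! {load, [Object]},
    {ok, sent}.

%% a nested create/send term becomes an operation; here it ignores the
%% load it is applied to (a simplification of this sketch)
to_fun({create, _, _, _} = Op) -> fun(_Load) -> run(Op) end;
to_fun({send, _, _} = Op)      -> fun(_Load) -> run(Op) end;
to_fun(Fun) when is_function(Fun, 1) -> Fun.
```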

i.2) Suppose, for example, or for testing purposes, that the order in which loads in the dataset (the loads set) are selected, and the order in which operations are selected, is random.

ii) When the end user enters text in the input field in the front end, it is uploaded as the same text in a plain text file. The end user may upload images having various extensions.

Let us distinguish the names of folders by some notation: ϕ*Folder will denote, for example, a folder called "Folder".

There exists a special actor agent named "Initial", such that any file dropped into ϕ*Input in the program directory automatically gets added to the inputs for "Initial". This will be either an image file or a plain text file.

There is an operation called "Terminal". It simply puts the object on which it operated into ϕ*Output. The front end will display it.

ϕ*Input and ϕ*Output are cleared at intervals.
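
A sketch of the ϕ*Input side, assuming a simple polling loop, an actor already registered as 'Initial' (for example via the Create sketch above), and the message shape used in the earlier sketches; the directory handling and interval are assumptions.

```erlang
%% input_watcher.erl
%% Poll the Input folder and forward any file found there to the actor
%% registered as 'Initial', then remove it (the folder is cleared).
-module(input_watcher).
-export([start/1]).

start(InputDir) ->
    spawn(fun() -> watch(InputDir) end).

watch(InputDir) ->
    {ok, Files} = file:list_dir(InputDir),
    [begin
         Path = filename:join(InputDir, F),
         'Initial' ! {load, [Path]},      % add the file to Initial's inputs
         file:delete(Path)
     end || F <- Files],
    timer:sleep(1000),                    % poll once a second
    watch(InputDir).
```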

iii) This particular arrangement is temporary, primarily done to make online and offline testing straightforward in the near future. A system may handle different concurrent end users with notes that are passed along with loads, and only results corresponding to data supplied by each end user will be shown to that end user, even though a single system is operating. Notes can be special loads that pair with other loads.

iv) There is logging.

The user interface takes inputs, displays outputs, and displays to the user what logic the program performed, what the program is doing or did. All of which relies on the log the system produces at this stage. (Later each log is also going to be used for learning and other things.)

So we make an operation: LearnExceptionsOperations. A function that takes a string. It then adds the string it takes at the bottom of the current list of strings in a plain text log file.

This log file can be at most some size. (It checks size before writing. It writes nothing if the log file becomes too large.) First it searches for an appropriately named log file. If that is absent, it creates the log file with the first entry. If that is present, it just writes to it, appending. The log file it looks for is LearnedExceptionsOperations_ReadMeReact.erl when "ReadMeReact" is the actor that runs operation "LearnExceptionsOperations".
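
A sketch of that operation, with the size cap value and the function names as assumptions:

```erlang
%% learn_log.erl
%% Append a string to the actor's log file, creating the file with the
%% first entry if absent, and writing nothing once the file exceeds a
%% size cap (checked before writing).
-module(learn_log).
-export([learn_exceptions_operations/2]).

-include_lib("kernel/include/file.hrl").

-define(MAX_LOG_BYTES, 1024 * 1024).   % assumed 1 MB cap

learn_exceptions_operations(ActorName, String) ->
    LogFile = "LearnedExceptionsOperations_" ++ ActorName ++ ".erl",
    case file:read_file_info(LogFile) of
        {ok, #file_info{size = Size}} when Size >= ?MAX_LOG_BYTES ->
            {ok, skipped};                                  % too large: write nothing
        _ ->                                                % absent or small enough
            ok = file:write_file(LogFile, String ++ ",\n", [append]),
            {ok, written}
    end.
```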

Create("ReadMeReact", ["string1", "string2", ... ], [LearnExceptionsOperations])

ends up running LearnExceptionsOperations on each of the strings in some order and creates a plain text log file inside the same directory as everything else.

The contents of LearnedExceptionsOperations_ReadMeReact.erl:

string1,
string9,
string3,
string2,
...

The writing is concurrent. Once it is written to the log, a string is eliminated from the list of inputs to ReadMeReact. But there are new strings coming in, and yes, operation LearnExceptionsOperations picks inputs, like all operations, when more than one input exists, at random, from the list of inputs at the time it makes the pick.

For example, Create("ReadMeReact", [ ], [LearnExceptionsOperations]) creates the actor, having internal state [],[LearnExceptionsOperations,]. This can be the first actor constructed in the LS (the lowest-level script).

Other actors are created.

It is sent some strings, and its internal state is updated to, for example, ["string1", "string2", "string3", "string4", "string5",],[LearnExceptionsOperations].

ReadMeReact is done writing string1 and string3 to the log file, leaving string2, string4, string5, at which point string6, string7, string8, string9, are sent to it. Then its internal state becomes ["string2", "string4", "string5", "string6", "string7", "string8", "string9"],[LearnExceptionsOperations].

It selects randomly, so even though, for example, string2 was present "before" string8, they have an equal chance of getting selected, and string8 may appear earlier in the log, be written earlier to the log file, than string2.

When the internal state of ReadMeReact becomes [ ], [LearnExceptionsOperations], it goes back to waiting.

For all actors, the basic definition is such that when an actor having name "Name" runs an operation "Operation", the string "(Name, Did, Operation)" is Sent to ReadMeReact. That is, Send("(Name, Did, Operation)", ReadMeReact) is done. It should result in "(Name, Did, Operation)" being added to the list of inputs in the internal state of ReadMeReact. Then pretty soon (Name, Did, Operation) will appear written to the text log file LearnedExceptionsOperations_ReadMeReact.erl.

Three exception cases exist; a sketch of the logging hook follows them.

When an actor runs an operation called LearnExceptionsOperations, the Send(...) does not occur. If it did, LearnedExceptionsOperations_ReadMeReact.erl would be increasingly filled with (ReadMeReact, Did, LearnExceptionsOperations), which is undesirable.

When Send(object, DestinationActor) is the operation run by the actor having name "Name", Send("(Name, Sent, DestinationActor)", ReadMeReact) is done instead.

When "Create" is the operation, "Name" creates an actor called "NameTwo", Send("(Name, Created, NameTwo)", ReadMeReact) is done.

v) We can consider some examples. I will write extensions for clarity, but they will not appear in the code.

For example, Create("DisplayIt", [testA.png, testB.png, testC.jpg], [Terminal.erl,]) in the Lowest-level Script (LS, demo.erl), would create an actor that copies testA.png, testB.png, testC.jpg in some order to ϕ*Output.

The same should occur for data that is not a file, when submitted to a terminal actor agent. We can define a terminal actor agent as any one that contains Terminal.erl as an operation in it.

For example, Create("DisplayIt", [This is a string.,], [Terminal,]) is done, like with the logging, a plain text file containing

This is a string.

would be created.

In all cases, the name of the created file in ϕ*Output should be 23 randomly selected letters of the alphabet with the appropriate file extension.

If the name already exists in the folder, it does NOT overwrite, but generates another random string for the name.
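
A sketch of the Terminal operation under those naming rules, assuming lowercase letters, a configurable Output directory, and a check on whether the load is an existing file or a plain string; the module and function names are assumptions.

```erlang
%% terminal_op.erl
%% Copy a file (or write a string) into the Output folder under a name
%% of 23 random letters plus the appropriate extension, retrying on a
%% name collision rather than overwriting.
-module(terminal_op).
-export([terminal/2]).

%% Load is either a path to an existing file or a plain string.
terminal(OutDir, Load) ->
    case filelib:is_regular(Load) of
        true ->
            Ext  = filename:extension(Load),          % keep the original extension
            Dest = fresh_name(OutDir, Ext),
            {ok, _} = file:copy(Load, Dest),
            {ok, Dest};
        false ->
            Dest = fresh_name(OutDir, ".txt"),        % plain string -> text file
            ok = file:write_file(Dest, Load),
            {ok, Dest}
    end.

%% 23 random (lowercase, here) letters plus the extension; if the name
%% already exists in the folder, do not overwrite: generate another one.
fresh_name(OutDir, Ext) ->
    Name = [rand:uniform(26) + $a - 1 || _ <- lists:seq(1, 23)],
    Path = filename:join(OutDir, Name ++ Ext),
    case filelib:is_file(Path) of
        true  -> fresh_name(OutDir, Ext);
        false -> Path
    end.
```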

Disclaimer: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. No promise to do anything or that anything is done is involved.
