Something more about AI

In a previous post, I wrote a little about artificial intelligence and, generally, where I think it can (or should) go in the future. The mechanical basis for this intelligence began as an attempt to replicate the human brain using neurons: each one amounts to a simple computer program with an input and an output on a small scale, and those are then networked together in large quantities. Hence the name neural networks, or neural nets, since each "neuron" is connected to a whole bunch of other ones. An early name given to these synthetic neurons was "perceptrons", because they perceive things.
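To make that "simple computer program" part concrete, here is a minimal sketch in Python of a single perceptron, and of wiring a few of them together into a layer. All the weights and numbers are invented for illustration; this is the shape of the idea, not how a real library implements it.

```python
# A minimal sketch of a single artificial "neuron" (perceptron): it takes
# numeric inputs, weighs them, and fires (outputs 1) if the weighted sum
# crosses a threshold. All numbers below are purely illustrative.

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs plus bias is positive, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Networking neurons: the outputs of one layer become the inputs of the next.
def layer(inputs, weight_rows, biases):
    return [perceptron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Example: two inputs feeding a layer of three neurons, then one output neuron.
hidden = layer([0.5, -1.2], [[0.8, 0.1], [-0.4, 0.9], [0.3, 0.3]], [0.0, 0.1, -0.2])
output = perceptron(hidden, [1.0, -1.0, 0.5], 0.0)
print(output)
```

In a real network the weights aren't hand-picked like this; they are adjusted automatically from examples, which is what the rest of this post is about.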

The main idea is that the network reaches its own conclusions about things, based on examples instead of hard-coded logic. Think of how a child learns new things: you point to a chair and say "chair", then point to a table and say "table", and so on. Eventually, the kid figures out that a chair has four legs and you sit on it, while a table has four legs and you don't. An adult who "knows" things learns differently, by applying logic to existing known facts: an animal is a living thing, therefore it is wrong to cause it harm; a tree is not 'perceived' as a living thing, therefore it is okay to cut it down and use it to build a house.

Artificial neural networks (ANNs) are more like the child in how they learn, which means they need examples and labels. Show one enough cat pictures labeled 'cat', and enough labeled 'not a cat', and it will figure out the relevant parameters for identifying cats: fur, tails, whiskers, and so on. It will also figure out which variations it can ignore because they don't change the answer: fur color, size, species. We don't really know HOW they learn and reach those conclusions, but mostly we accept the results, because we can measure how often they are right and learn to rely on them when they are correct often enough.
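As a hedged illustration of this learn-from-labeled-examples idea, here is a small sketch using scikit-learn's MLPClassifier (a little neural network). The "features" are invented placeholder numbers standing in for things you might measure in a picture (whiskers, tail, fur); real systems learn from the pixels themselves.

```python
# Sketch of supervised learning: feed labeled examples to a small neural net
# and let it work out the pattern. The feature values are invented stand-ins
# for "has whiskers", "has a tail", "fur texture score".

from sklearn.neural_network import MLPClassifier

# Each row is one example; label 1 = cat, 0 = not a cat. Illustrative data only.
X_train = [
    [1.0, 1.0, 0.9],   # whiskers, tail, furry  -> cat
    [1.0, 1.0, 0.8],
    [0.0, 0.0, 0.1],   # no whiskers, no tail   -> not a cat
    [0.0, 1.0, 0.2],
]
y_train = [1, 1, 0, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# The network now makes its own call on an example it has never seen.
print(net.predict([[0.9, 1.0, 0.85]]))  # hopefully [1], i.e. "cat"
```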

ANNs exist for many purposes, and part of the progress in the field is in how much initial data they require before they can start making useful predictions. One ANN can be about cat recognition, another about earthquakes, another about identifying terrorists or analyzing the stock market. Part of the validation required before we can rely on their predictions is statistical, and there are many validation models for that. The validation requirement assumes that the pattern recognition the ANN uses has some underlying logic to it, even if we can't really understand it with our 'underdeveloped' human brains.
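That statistical validation usually looks something like the sketch below: hold back part of the labeled data, train on the rest, and count how often the model is right on examples it never saw. The dataset here is synthetic, generated just to show the flow.

```python
# Sketch of holdout validation: train on one slice of the data, then measure
# accuracy on a slice the model never saw during training.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic labeled data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Right {accuracy:.0%} of the time on unseen examples")
```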

The next step in this evolution is the data we have to give those machines, which we call the base dataset. In the age of the internet, the internet of things, and the multitude of sensors deployed around the world, we have plenty of information of all sorts to let the machines learn from. For example, an AI can take a dating website with lots of pictures of people who self-identify their sexual orientation and then figure out how to identify the sexual orientation of others from their picture alone. Another example is using brain scans to identify people with suicidal tendencies.

Because of the amounts of data required, there are a few names for this general idea, such as big data, machine learning, and so on. The real-world applications of this technology are almost endless, from healthcare to finance, security to urban design. There is an inherent problem with what it sometimes learns, though, since the underlying mechanism is still correlative in nature: just because we have a disproportionate number of black people in prisons across the United States today doesn't mean all black people are criminals. That means some logical reasoning needs to be applied, perhaps by a human, to the predictions or models used by ANNs, in order to satisfy something like the following:

  1. We understand and can measure the causal factors.
  2. There is a lot of historical data available.
  3. The forecasts do not affect the thing we are trying to forecast.
  4. The future will somewhat resemble the past in a relevant way.

Those four rules of thumb can help mitigate the risk of bad predictions, which may become self-fulfilling prophecies of sorts or carry a human kind of bias within them. Forecasting stock markets is difficult because it violates three of the four (as the crash of May 6, 2010 illustrated, when the market plunged within minutes partly because of competing automated trading systems), while forecasting earthquakes is more feasible since it violates none of them.
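To make the bias worry a little more concrete, here is a hedged sketch of the kind of human sanity check that can sit on top of a model's output: compare how often it is wrong for different groups in the data. The groups, predictions, and numbers are all made up; only the shape of the check matters.

```python
# Sketch of a per-group error check: if a model is wrong much more often for
# one group than another, it may have learned a correlation (like the prison
# statistics above) rather than a cause. All data here is invented.

from collections import defaultdict

# (group, true_label, predicted_label) triples -- purely illustrative.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong count, total count]
for group, truth, prediction in results:
    errors[group][0] += int(truth != prediction)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: wrong on {wrong} of {total} examples")
```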

It does mean we need to proceed with caution into the future, and we need to train the machines the right way for our own future safety. Used properly, however, the machines can give us a better future for all people, assuming we don't restrict their insights to the select few who can afford them.
