Something about AI

Posted in #life · 6 years ago (edited)

The term "artificial intelligence" was coined by John McCarthy, founder of the Stanford Artificial Intelligence Laboratory. It named an effort to figure out how to build a machine that has a mind, as vague as that sounds. Since those days the field of AI has changed and split into multiple subspecialties, pulling its disciplines farther apart and making dialogue between them harder.

The original goal of the field, to discover the fundamental nature of intelligence and reproduce it in electronic form, has given way to elegant algorithms and clever demos. Recent advances are set to make a huge impact on society, but whether we will make a graceful transition or emerge bruised and battered is uncertain.

Artificial intelligence is not the same as human intelligence, but as Edsger Dijkstra put it: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." A machine gets a job done differently than a human would, and the real question is: do we care how it's done, as long as it gets done?
These machines aren't conscious or self-reflective, and they don't exhibit any hint of independent aspirations or personal desires. They are incredibly good at specific tasks, though, and recent advances in robotics, perception, and machine learning, propelled by accelerating improvements in computer technology, lightweight materials, and sophisticated control systems, are enabling a new generation of systems that rival or exceed human capabilities. But I am getting ahead of myself.

So let's start with some basics:

  1. Computers do as they're told - this was the world of sequential programming: the logic of "if this, do that" coded into programs. It relied on people like me, who understood a process humans go through and found a way to code it into the machine. The programs were called "expert systems," the people who built them "knowledge engineers," and both were very task-specific. Think of how your ATM replaced the bank teller's duties: specific tasks in a step-by-step process. Another example is a chess-playing program, where the complexity is still broken down into individual tasks; the difference lies in the computer's processing power, not in the way the machine "thinks".
  2. Computers think differently than humans, and we don't know how - this is the world of deep learning, big data, machine learning, neural networks, and perceptrons. As the saying goes, there is more than one way to skin a cat, and in programming it boils down to teaching a machine to solve a problem by examples, as opposed to the "expert system" that had its path to a solution hard-coded into the program. If you provide the machine enough data, as inputs paired with desired outputs, it will figure out its own way to get from A to B. Most of the time we puny humans have no idea how that road was built: it's impossible for the creators of machine-learning programs to peer into their intricate, evolving structure and explain what they know or how they solve a problem, any more than I can look into your brain to understand what you are thinking. For simplicity's sake, I will sometimes refer to these systems as synthetic beings.
  3. Processing power grows exponentially - but that's hard to grasp, to the point it's almost a cliché. Think about your first smartphone. It might have had 8 GB of memory. Two years later smartphones came with 16 GB, then 32 GB, then 64 GB. That's doubling every two years, which means your phone has 8 times the memory it had just three upgrades ago. If your car's gas mileage doubled every two years, you would have paid more attention (think about driving from California to New York on a single tank of gas, with a side trip to Florida). It also means that in a short 10 years we may be looking at terabytes as the standard, and this applies to processing speeds as well as memory.
  4. Electronics keep shrinking - the first computers took up entire rooms and buildings, the first cell phones were big and bulky, and the first televisions came inside a surrounding cabinet. Today we have devices small enough to be worn on your wrist, packing a multitude of sensors and enough computing capability to rival the first space missions.
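The contrast between items 1 and 2 above fits in a few lines of code. Here is a minimal sketch (plain Python, all names invented for illustration) that solves the same tiny task both ways: a hand-coded rule of the "expert system" kind, and a perceptron that learns the rule purely from labeled examples.

```python
# Two ways to get a machine to compute logical AND.

# 1. The "expert system" way: a human encodes the rule directly.
def and_rule(a, b):
    if a == 1 and b == 1:
        return 1
    return 0

# 2. The machine-learning way: a perceptron discovers the rule
#    from labeled examples (input pair -> desired output).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in examples:
        output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = target - output
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# Both approaches now agree on every input, but only the first can be
# "read" by a human; the second stored its knowledge as bare numbers.
for (x1, x2), _ in examples:
    learned = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
    assert learned == and_rule(x1, x2)
```

The weights and bias at the end are the perceptron's entire "knowledge," and staring at them tells you very little - which is the point made in item 2.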
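The arithmetic behind item 3 is worth spelling out. Assuming (as an illustration) a 64 GB phone today and one doubling every two years:

```python
# Compound doubling: five doublings in ten years.
capacity_gb = 64                 # a typical phone today (an assumption)
years = 10
doublings = years // 2           # one doubling every two years
future_gb = capacity_gb * 2 ** doublings
print(future_gb)                 # 2048 GB, i.e. about 2 terabytes
```

Five doublings is a factor of 32, which is how 64 GB becomes roughly 2 TB in a decade.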

Now that the basics are at least introduced, let's dive in a little more. I will mostly ignore the first item - computers do as they are told - because that's the "old world." It was limited by relatively slow processing power and a lack of actual data. Those machines were largely considered big calculators, used for jobs like computing ballistic targeting tables during World War II. They lacked information, had little capacity to store data for long periods, and didn't talk to each other.
A little over a century ago, Marconi figured out how to use electromagnetic radiation—more commonly called radio waves—to transmit information instantly between distant locations with no evident physical connection. And Thomas Edison figured out how to move energy, in the form of electricity, through wires. Electronics, radio, television, the Internet, computers, and now AI are all steps in the evolutionary process of exploring what can be done with these phenomena. Our biology is not geared toward using them directly; machines are.

Everything in evolution comes in little steps. The iPhone didn't include anything new; it combined a number of existing ideas into one device (with brilliant marketing). The touchscreen existed before, GPS existed before, digital cameras existed before. Microsoft had tablets a decade before the iPad, and the first touchscreens appeared in the late 1960s. Computers are no different when it comes to evolution, and neither is software. Today we have the technology to control robotic arms using a brain-machine interface (BMI), and we have drones that require remote controls - it's only logical for a future step to combine the two into a BMI drone, or a fleet of drones controlled by someone's thoughts.
Robots follow the same logic. A robot is a simple machine designed to accomplish a task using a series of actions. Dishwashers are robots, as are rice cookers, coffee makers, and washing machines, and so is the more complex robot assembling a car on a factory floor. They are designed and built in specific ways, and we fit ourselves to them. The dishes are arranged so the spinning arm can spray water and soap to clean them. It doesn't matter that a human would wash dishes differently than the machine, as long as the result is acceptable. In more complex environments we surround the machine with safety tape and emergency shutoffs, but if the car-welding robot expects a frame and a door to be in a specific spot at a certain time, they had better be there. There's no do-over because a human had to go pee.
The next part is the brains of the operation - the software telling the machine what to do and when. Sci-fi planted the idea in our minds that brains and body need to exist in one place - think C-3PO or R2-D2, or Data from Star Trek. In reality we know today that this is not true (even if we are sometimes unaware of it): the cloud can house the brains, the machines can be networked together, and we can remote-control a drone flying over Yemen from the comfort of a military base in Fort Collins, Colorado.

Performing a task requires four things:

  1. Reasoning - to come up with a plan of action,
  2. Energy - to get the work done,
  3. Awareness - sensing the environment (or the relevant parts of it in regards to the task),
  4. Means - actually doing something, like kicking a ball or picking up a coffee mug with your hands.

Those four things are not required to be connected, at least for machines. For us humans (and other biological beings) they are closely intertwined, which is where machines sometimes have the edge on us. Take awareness: seeing your environment helps you avoid walking into a ditch, but it doesn't tell you there is a ditch three blocks away. Imagine having eyes on every street; you could take a different route to avoid the traffic building up around that ditch three blocks away. Your depth perception and ability to locate sounds would be far better if you could separate your ears and eyes by yards instead of inches - I hope you get the point.
Similarly, there's no reason for the means by which robots engage the world to be in one package. They can be a collection of disconnected and interchangeable actuators, motors, and tools. Think of the switches that route trains onto parallel tracks as they travel cross-country, or traffic lights turning from red to green in the proper sequence to keep traffic flowing.
Reasoning is where AI comes into play, and we need to distinguish general-purpose AI from the task-specific programming of a synthetic being. General purpose is what we think of when C-3PO comes to mind, or Rosie the robot maid from The Jetsons: they can do anything from carrying a conversation to cleaning dishes, without us having to design our environment for them. Without the robot body, general-purpose AI looks more like Siri, Alexa, Google Home, or HAL 9000.
Synthetic intellect already exists in small forms - analyzing stocks, or computer vision: the kind that recognizes your smile before taking a picture, reads aloud a description of a photo to a blind person, or identifies license plates as you drive down the toll road. Provided enough pictures of strawberries, a synthetic agent can identify when they are ripe and ready for picking.

Combining things together, as a step in machine evolution, we can come up with technical solutions to every problem we face today. We have 3D printers that print a house out of concrete. That solution requires the land to be prepared for the printer; in other words, we still need to design the environment for the machine. Now consider a fleet of drones fitted with spray nozzles, fed from a bucket of paint and guided by cameras, that can paint the inside of a house. The software can live in the cloud, and the hardware can be deployed out of a suitcase inside a home. You get pinpoint accuracy, with quality beyond most humans, and either a mural of your choice or plain white walls, chosen from a smartphone app.
That same fleet of drones could just as easily spray crops for bugs with precision, as opposed to a crop-duster dousing an entire field. So a future step will be either generalizing an existing application or spinning off different versions of a single-application synthetic agent.

Self-driving cars are another example of agents better suited to working together than operating apart as single entities disconnected from their environment. Instead of each vehicle carrying its own multitude of sensors and processing power, vehicle-to-vehicle (V2V) communication protocols are being designed, combined with traffic control systems and energy management systems. If a car five miles down the road breaks down in the center lane of the highway, the traffic control system can alert all the other cars long before they reach it, so they can adjust their speed and merge left or right to avoid the disabled vehicle. The processing power required by each car shrinks when cars are just agents of a bigger system. The magic is no more in your car or your phone than TV shows are located in your TV.
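To make the idea concrete, here is a toy sketch of that broadcast pattern (every name and number below is invented for illustration, not any real V2V protocol): a traffic service announces a hazard, and each car, acting as a thin agent, reacts long before its own sensors could see anything.

```python
# A central hazard report, broadcast to every car on the road.
HAZARD = {"mile": 42.0, "lane": "center"}   # disabled vehicle

class Car:
    """A thin agent: position, lane, speed, and a reaction rule."""
    def __init__(self, mile, lane, speed_mph):
        self.mile, self.lane, self.speed_mph = mile, lane, speed_mph

    def on_hazard(self, hazard, warn_radius_miles=5.0):
        # React only if the hazard lies ahead within the warning radius.
        distance = hazard["mile"] - self.mile
        if 0 < distance <= warn_radius_miles:
            if self.lane == hazard["lane"]:
                self.lane = "right"              # merge out of the blocked lane
            self.speed_mph = min(self.speed_mph, 45)  # slow down early

fleet = [Car(37.0, "center", 70),   # 5 miles back, in the blocked lane
         Car(40.5, "left", 65),     # close, but in a clear lane
         Car(50.0, "center", 70)]   # already past the hazard

for car in fleet:
    car.on_hazard(HAZARD)
# The car five miles back merged right and slowed; the car already
# past the hazard changed nothing.
```

The point of the sketch is where the intelligence lives: the hazard knowledge comes from the network, and each car needs only a trivial local rule.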

All this means the evolution of machines has the potential to make our world better. We will find ways to combine technologies into smaller, neater packages and remove them from our view. Just as the iPhone combined the camera, GPS, and phone into one device, eventually we will have a "Swiss Army knife" equivalent of today's robots - the coffee maker, rice cooker, and dishwasher folded into one Rosie-like device. We have already learned to trust these machines and let them make decisions for us: when Waze tells you to get off the highway because traffic is building ahead, you don't doubt it anymore. When Amazon suggests a complementary item for your purchase, there's a greater chance you will accept it. When the bank's AI recommends a stock, you are more likely to buy it.

While our minds are organized to pay attention to things we can point to, the things we can't see can be just as dangerous. Paradoxically, our evolving technologies are proliferating and consolidating at the same time, and we are ill-suited to track, much less predict, the consequences.

