How Neural Networks Think

in #technology · 7 years ago

From MIT News:

General-purpose technique sheds light on inner workings of neural nets trained to process language.

Artificial-intelligence research has been transformed by machine-learning systems called neural networks, which learn how to perform tasks by analyzing huge volumes of training data.

During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.

At the 2017 Conference on Empirical Methods in Natural Language Processing, which begins this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are presenting a new general-purpose technique for making sense of neural networks trained to perform natural-language-processing tasks, in which computers attempt to interpret freeform texts written in ordinary, or “natural,” language (as opposed to a structured language, such as a database-query language).

The technique applies to any system that takes text as input and produces strings of symbols as output, such as an automatic translator. And because its analysis results from varying inputs and examining the effects on outputs, it can work with online natural-language-processing services, without access to the underlying software.

In fact, the technique works with any black-box text-processing system, regardless of its internal machinery. In their experiments, the researchers show that the technique can identify idiosyncrasies in the work of human translators, too.
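The core idea of such black-box analysis can be sketched in a few lines: vary the input, observe how the output changes, and attribute influence accordingly. The snippet below is a minimal illustration of that idea, not the researchers' actual method; the `black_box` function is a hypothetical stand-in for any opaque text-in, text-out system, such as an online translation service.

```python
# Minimal sketch of black-box input-perturbation analysis: delete one input
# word at a time and measure how much the output changes. Because the method
# only compares inputs and outputs, it never needs access to the system's
# internal parameters.

def black_box(text: str) -> str:
    # Hypothetical stand-in for an opaque text-processing system:
    # here it uppercases words and drops very short ones.
    return " ".join(w.upper() for w in text.split() if len(w) > 2)

def token_overlap(a: str, b: str) -> float:
    """Output-similarity score: fraction of shared tokens (Jaccard)."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def influence_by_deletion(text: str):
    """Score each input word by how much removing it perturbs the output."""
    base = black_box(text)
    words = text.split()
    scores = []
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((w, 1.0 - token_overlap(base, black_box(perturbed))))
    return scores

scores = influence_by_deletion("the quick brown fox jumps over a lazy dog")
# Words whose deletion changes the output most get the highest scores.
for word, score in sorted(scores, key=lambda p: -p[1]):
    print(f"{word:6s} {score:.2f}")
```

Because only the input-output behavior is probed, the same loop works unchanged whether the system behind `black_box` is a neural network, a rule-based translator, or, as in the researchers' experiments, a human translator's output.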
