This post was originally written by Manan Shah as a response to a question on Quora.
The field of artificial neural networks is complex and rapidly evolving. In order to understand neural networks and how they process information, it is critical to examine how these networks function and the basic models that underlie them.
What are artificial neural networks?
Artificial neural networks are parallel computational models (unlike our computers, which have a single processor to collect and display information). These networks are commonly made up of multiple simple processors which are able to act in parallel alongside one another to model changing systems. This parallel computing process also enables faster processing and computation of solutions. Neural networks follow a dynamic computational structure, and do not abide by a simple process to derive a desired output.
The basis for these networks originated from the biological neuron and neural structures – every neuron takes in multiple unique inputs and produces one output. Similarly, in neural networks, each input is modified by a weight, a multiplier that scales the original value. The network then combines these weighted inputs with reference to a threshold value and an activation function, and produces the final output value.
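The behavior of a single artificial neuron described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the sigmoid activation and the particular input, weight, and bias values are assumptions chosen for the example.

```python
import math

def neuron_output(inputs, weights, bias):
    # Scale each input by its weight, add the bias (threshold term),
    # then map the combined value through the activation function.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# Three illustrative inputs, each with its own weight.
out = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=-0.2)
print(round(out, 3))
```

The sigmoid squashes the weighted sum into the range (0, 1), which is one common way of mapping the internal value to a bounded output.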
How do neural networks operate?
Artificial neural networks are organized into layers of parallel computing processes. For every processor in a layer, each input is multiplied by an initially assigned weight, resulting in what is called the internal value of the operation. This value is further adjusted by an initially set threshold value and sent to an activation function to map its output. The output of that function is then sent as the input for another layer, or as the final response of the network if the layer is the last. The weights and the threshold values are then adjusted, run after run, to produce increasingly accurate values.
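The layered forward pass described above can be sketched as follows. This is a hedged illustration: the sigmoid activation, the two-layer shape, and the weight and threshold values are all made up for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each processor multiplies the inputs by its own weights, adds its
    # threshold value, and maps the result through the activation function.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def network_forward(inputs, layers):
    # The output of each layer becomes the input of the next layer;
    # the final layer's output is the network's response.
    for weights, biases in layers:
        inputs = layer_forward(inputs, weights, biases)
    return inputs

# Illustrative two-layer network: 2 inputs -> 2 hidden neurons -> 1 output.
layers = [
    ([[0.5, -0.6], [0.3, 0.8]], [0.1, -0.1]),
    ([[1.0, -1.0]], [0.0]),
]
print(network_forward([1.0, 0.5], layers))
```

Changing the weights and biases in `layers` changes the network's response, which is exactly what the training process exploits.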
The learning mechanisms of a neural network
Looking at an analogy may be useful in understanding the mechanisms of a neural network. Learning in a neural network is closely related to how we learn in our regular lives and activities – we perform an action and are either accepted or corrected by a trainer or coach in order to get better at a certain task. Similarly, neural networks require a trainer to describe what should have been produced as a response to the input. Based on the difference between the desired value and the value the network actually produced, an error value is computed and sent back through the system. For each layer of the network, the error value is analyzed and used to adjust the threshold and weights for the next input. In this way, the error grows marginally smaller with each run as the network learns how to analyze values.
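For a single linear neuron, this error-driven adjustment can be sketched with the delta rule. This is a simplified stand-in for full backpropagation, assuming one neuron, a linear output, and a made-up training pair; the learning rate of 0.1 is also an illustrative choice.

```python
def train_step(inputs, weights, bias, target, lr=0.1):
    # Forward pass: compute the neuron's actual (linear) output.
    actual = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The trainer's correction: how far off was the network?
    error = target - actual
    # Nudge each weight and the threshold in the direction that
    # reduces the error on this input.
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias, error

weights, bias = [0.0, 0.0], 0.0
for _ in range(50):
    weights, bias, error = train_step([1.0, 2.0], weights, bias, target=1.0)
print(abs(error))  # the error shrinks with every pass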
The procedure described above is known as backpropagation, and it is applied continuously through a network until the error value is reduced to a minimum. At this point, the neural network no longer requires such a training process and is allowed to run without adjustments. The network may then finally be applied, using the adjusted weights and thresholds as guidelines.
The usage of a neural network while running
When a neural network is actively running, no backpropagation takes place, as there is no way to directly verify the expected response. Instead, outputs are either corrected during a new training session or left as is while the network runs. Many adjustments may need to be made, as the network consists of a great number of parameters that must be precise for the artificial neural network to function.
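Running mode can be sketched as a forward pass with frozen parameters. The trained weight and threshold values below are illustrative placeholders, as if they came from a prior training session.

```python
import math

def predict(inputs, weights, bias):
    # Forward pass only: no error is computed and no parameters
    # are adjusted while the network is running.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Parameters frozen after a (hypothetical) training phase.
frozen_weights, frozen_bias = [0.9, -0.4], 0.05
print(round(predict([1.0, 1.0], frozen_weights, frozen_bias), 3))
```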
A basic example of such a process is teaching a neural network to convert text to speech. One could pick multiple different articles and paragraphs, use them as inputs for the network, and predetermine a desired output before running the test. The training phase would then consist of going through the multiple layers of the network and using backpropagation to adjust the weights and threshold values of the network in order to minimize the error value across all input examples. The network may then be tested on new articles to determine whether it could truly convert text to proper speech.
Networks like these may be viable models for a great array of mathematical and statistical problems, including but not limited to speech synthesis and recognition, face recognition and prediction, nonlinear system modeling and pattern classification.
Neural networks are a relatively new concept whose potential we have just scratched the surface of. They may be applied to a variety of different problems, and they learn through the mechanism of backpropagation and error correction during the training phase. By properly minimizing the error, these multi-layered systems may one day be able to learn and conceptualize ideas on their own, without human correction.
If you’d like to know more about how a neuron functions biologically, check out my answer to What is a neuron?
Most of this answer was from my knowledge through online courses and reading multiple different articles and papers regarding neural networks, but I used the following resources to enhance the quality of this answer: