Neural networks are an artificial intelligence technique inspired by the way the human brain works. They are composed of a set of artificial "neurons" that connect to each other and switch on or off according to their inputs.
When an input is presented to the network, it processes the information and decides which output is most appropriate. It does this by applying a weight to each input and summing the weighted inputs. If the resulting sum exceeds a certain threshold, the neuron fires and sends a signal to the other neurons connected to it.
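The weighted-sum-and-threshold behavior described above can be sketched in a few lines of Python; the function name, weights, and threshold here are illustrative, not from any particular library.

```python
# A minimal sketch of a single artificial neuron with a step
# (threshold) activation. All names and values are illustrative.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Two inputs, each multiplied by its own weight before summing:
print(neuron_fires([1.0, 0.5], [0.8, 0.4], threshold=0.9))  # sum = 1.0 -> True
print(neuron_fires([1.0, 0.5], [0.2, 0.4], threshold=0.9))  # sum = 0.4 -> False
```

In a real network, the firing neuron's output would in turn become one of the weighted inputs of the neurons in the next layer.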
Neural networks are often used for classification tasks, such as pattern recognition, object identification, or spam detection. They can also be used to make predictions, such as the future price of a stock or how long a car trip will take.
History of neural networks: from theory to practice
Neural networks have their roots in the 1940s, when researchers Warren McCulloch and Walter Pitts developed a mathematical model of a biological neuron. This model was one of the first formal representations of how a neuron might function and laid the foundation for the field of artificial intelligence.
In the 1950s and 1960s, various algorithms and models based on neural networks were developed, but progress in the field slowed due to a lack of computing power and adequate data. In the 1980s, however, new training techniques and algorithms emerged that made it possible to obtain more accurate results.
There are several types of neural networks, which can be classified in different ways. Some common classifications are:
According to their architecture:
- Feedforward neural networks: These have a linear structure, with layers of neurons connected in sequence. Inputs enter the first layer and outputs are obtained from the last layer.
- Recurrent neural networks: These have a structure in which some neurons connect to themselves or to neurons in earlier layers. This allows the network to process sequences of data, such as natural language.
- Memory networks: Also known as long short-term memory (LSTM) networks, they have a structure that allows them to store and retrieve information over time.
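The feedforward architecture, the simplest of the three, can be sketched as a sequence of layers where each layer's outputs become the next layer's inputs. This is a bare-bones illustration under the assumption that each layer is just a list of per-neuron weight lists, with no activation function or biases.

```python
# A sketch of a feedforward pass: activations flow strictly forward,
# layer by layer, with no connections back to earlier layers.

def forward(inputs, layers):
    """Propagate inputs through a sequence of layers.

    Each layer is a list of neurons; each neuron is a list of weights,
    one per output of the previous layer.
    """
    activations = inputs
    for layer in layers:
        activations = [
            sum(x * w for x, w in zip(activations, neuron_weights))
            for neuron_weights in layer
        ]
    return activations

# Two inputs -> a hidden layer of two neurons -> one output neuron:
layers = [
    [[0.5, -0.5], [0.25, 0.75]],  # hidden layer weights
    [[1.0, 1.0]],                 # output layer weights
]
print(forward([2.0, 1.0], layers))  # -> [1.75]
```

A recurrent network would differ in that some of these activations would be fed back into earlier layers on the next time step, giving the network a form of state.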
According to their activation function:
- Linear neural networks: They have a linear activation function, which means that the output of the neuron is proportional to the sum of its weighted inputs.
- Nonlinear neural networks: They have a nonlinear activation function, which means that the output of the neuron is not proportional to the sum of its weighted inputs. This allows the network to learn more complex patterns.
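The difference between the two kinds of activation can be seen directly: with a linear activation, scaling the weighted sum scales the output by the same factor, while a nonlinear activation (here the logistic sigmoid, one common choice) breaks that proportionality.

```python
import math

# Two illustrative activation functions applied to a neuron's weighted sum.

def linear(weighted_sum):
    # Output is directly proportional to the weighted sum.
    return weighted_sum

def sigmoid(weighted_sum):
    # Squashes any weighted sum into the range (0, 1) -- nonlinear.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Doubling the input doubles a linear neuron's output...
print(linear(2.0) / linear(1.0))    # -> 2.0
# ...but not a sigmoid neuron's output:
print(sigmoid(2.0) / sigmoid(1.0))  # < 2.0
```

Stacking layers of purely linear neurons still computes a linear function overall, which is why nonlinear activations are what give multilayer networks their extra expressive power.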
According to their number of layers:
- Single-layer neural networks: They have only one layer of neurons.
- Multilayer neural networks: They have two or more layers of neurons. Two-layer networks are known as perceptron networks and are the most common. Neural networks with three or more layers are known as multilevel networks or deep networks.