In the previous article, we were introduced to deep learning and how computers mimic the learning behavior of the human brain. The neuron is the fundamental unit of the human brain: it receives inputs and produces outputs. In this article, we will learn about the neuron in deep learning.
Neuron in Deep Learning:
Just like a neuron in the human brain, a neuron in deep learning takes some inputs and generates an output. Although the full process can be complicated, we will cover the basic concepts behind it. Let us look at a diagrammatic representation.
- x1, x2 and x3, marked in yellow circles, are the input variables.
- w1, w2 and w3 are the weights associated with x1, x2 and x3.
- The big green circle is the neuron, which computes a weighted sum of the input values. In our scenario, the green neuron computes x1w1 + x2w2 + x3w3, i.e. the sum of each input x multiplied by its weight.
- After the green neuron performs this calculation, an activation function is applied, and the output depends on it.
- The y value, marked in a red circle, is the output produced from the input variables and their weights after the activation function is applied.
- All the input and output connections of a neuron in deep learning are termed synapses.
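The steps above can be sketched in a few lines of Python. This is a minimal illustration, not code from the article: the function name `neuron_output` and the choice of sigmoid as the activation are assumptions made for the example.

```python
import math

def neuron_output(inputs, weights):
    # Weighted sum performed by the neuron: x1*w1 + x2*w2 + x3*w3
    z = sum(x * w for x, w in zip(inputs, weights))
    # Apply an activation function to the weighted sum
    # (sigmoid is used here purely as an example choice)
    return 1.0 / (1.0 + math.exp(-z))

# Three inputs x1, x2, x3 with their associated weights w1, w2, w3
y = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1])
```

Here the weighted sum is 1.0×0.5 + 2.0×(−0.25) + 3.0×0.1 = 0.3, and the sigmoid squashes that into a value between 0 and 1.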
The Activation Function:
Activation functions are important in deep learning. Applied to a neuron, an activation function decides whether the neuron should be activated, which in turn determines the output value the neuron sends forward.
Now that we know what an activation function does to a neuron in deep learning, we need not go into much mathematical detail or cover every type of activation function; you can find a full list on Wikipedia. We are concerned with the most commonly used activation functions, listed below:
- Threshold Function
- Sigmoid Function
- Rectifier Function
- Hyperbolic tangent Function
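The four functions listed above can be sketched in Python as follows. This is an illustrative sketch: the threshold at 0 for the threshold function is a common convention, not something fixed by the article.

```python
import math

def threshold(z):
    # Threshold (step) function: outputs 1 if z >= 0, else 0
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    # Sigmoid: squashes z smoothly into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def rectifier(z):
    # Rectifier (ReLU): passes positive values through, clips negatives to 0
    return max(0.0, z)

def hyperbolic_tangent(z):
    # Hyperbolic tangent: squashes z smoothly into the range (-1, 1)
    return math.tanh(z)

# Compare the four activations on the same weighted sum
for f in (threshold, sigmoid, rectifier, hyperbolic_tangent):
    print(f.__name__, f(0.5))
```

Note how the threshold function gives a hard 0-or-1 decision, while the sigmoid and hyperbolic tangent give smooth, bounded outputs and the rectifier is unbounded above.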
In the next article, we will study artificial neural networks.