Understanding a Perceptron, the building block of an Artificial Neural Network.

Jatin Mishra
3 min read · Jun 14, 2020

To make a machine act like our brain, we must first understand how our brain works.

With that idea in mind, to perform the complex tasks involved in Artificial Intelligence we use something called "Artificial Neural Networks." These networks are designed to mimic the working of our brain.

As we know, our brain consists of a network of billions of neurons, and all the processing performed by our brain essentially boils down to the firing of these neurons. When a neuron fires, it simply sends an electrical signal to one or more neurons connected to it via connections called synapses.

More about neurons here: https://en.wikipedia.org/wiki/Neuron

Now, to mimic or recreate the architecture of our brain we use the "Artificial Neural Network," which consists of "Perceptrons," the artificial counterparts of neurons. A perceptron works very much like a neuron.

A neuron receives electrical signals via synapses, processes them, and sends signals on to other neurons via synapses. A perceptron, on the other hand, takes some numbers as inputs, processes them, and gives a numerical output.

The architecture of a perceptron

Image source: https://missinglink.ai/guides/neural-network-concepts/perceptrons-and-multi-layer-perceptrons-the-artificial-neuron-at-the-core-of-deep-learning/

A perceptron consists of the following parts.

1. Inputs — These are the numbers that are fed to the perceptron.

2. Weights — Each input has an associated weight (also a numerical value).

3. Biases — A bias is added to control the point, or threshold, at which the perceptron will trigger.
To learn more about biases, check out this excellent post.

4. Cell Body — This is essentially a function that calculates the weighted sum of the inputs, i.e. the sum of all the inputs multiplied by their individual weights, plus the bias. It then applies an activation function, which decides whether the perceptron will be active or not (if the weighted sum crosses a certain threshold, it outputs 1, otherwise 0). A small code sketch follows after this list.
Activation functions are a big topic in their own right; please check this post for a deeper understanding.

Also, activation functions play a major role when a neural network is trained via backpropagation.
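To make the parts above concrete, here is a minimal sketch in Python (using NumPy; the function names and the AND-gate weights are my own illustrative choices, not something from the article): the inputs are multiplied by their weights and summed, the bias is added, and a step activation decides whether the perceptron fires.

```python
import numpy as np

def step(weighted_sum):
    """Step activation: fire (output 1) if the weighted sum reaches the threshold, else output 0."""
    return 1 if weighted_sum >= 0 else 0

def perceptron(inputs, weights, bias):
    """Cell body: weighted sum of the inputs plus the bias, passed through the activation."""
    weighted_sum = np.dot(inputs, weights) + bias
    return step(weighted_sum)

# Example: weights and bias hand-picked so the perceptron behaves like a logical AND gate.
weights = np.array([1.0, 1.0])
bias = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), weights, bias))
```

Running this prints 1 only for the input (1, 1), because that is the only case where the weighted sum (1 + 1 - 1.5) reaches the threshold of 0.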

Conclusion

This article only gives a layman's understanding of the perceptron, its structure, and why it is used, which should be useful for anyone starting out in Artificial Intelligence, and more precisely in Deep Learning, currently the most exciting subset of AI.

If you enjoyed this post, I recommend the following article, which covers the structure and mathematics of an Artificial Neural Network. It is really great.
https://towardsdatascience.com/neural-networks-everything-you-wanted-to-know-327b78b730ab

Thanks for reading!
