
Building intuition around Neural Networks

Over the last few months I’ve been curious about how Neural Networks fundamentally work. I was pleasantly surprised to discover that they’re really just math equations under the hood. This was really delightful because now I finally have a reason to use calculus irl and not just in an exam lol.

What I find most interesting about Neural Networks and the associated math is that they allow you to train on and solve a wide range of problems.

I like to think of the backpropagation process, where we tweak the weights w and biases b, as similar to tweaking the knobs of a turntable mixer to get the right sound mix.

In the case of backpropagation, we’ll be tweaking the weights and biases (based on the derivatives) to arrive at a lower loss during each training epoch. In mixing a track, the DJ tweaks each knob (e.g. the delay) to fine-tune the track to the desired mix.

The idea is that setting each weight and bias in each layer of the network to the right value will result in better outputs from the neural network.

DJ turning a turntable knob
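
To make the knob-tweaking concrete, here’s a minimal sketch in Python of one weight and one bias being nudged by gradient descent. The numbers, the learning rate, and the squared-error loss are all my own assumptions for illustration, not anything prescribed by the post:

```python
# A minimal sketch of "tweaking the knobs": gradient descent on a single
# weight and bias, assuming a squared-error loss on one training example.
# All values here are made up purely for illustration.

w, b = 0.5, 0.0          # current knob positions
x, target = 2.0, 3.0     # one training example
learning_rate = 0.05

for epoch in range(5):
    prediction = w * x + b             # forward pass: output = w·x + b
    loss = (prediction - target) ** 2  # how far off are we?

    # Derivatives of the loss with respect to each knob
    dloss_dw = 2 * (prediction - target) * x
    dloss_db = 2 * (prediction - target)

    # Turn each knob a little in the direction that lowers the loss
    w -= learning_rate * dloss_dw
    b -= learning_rate * dloss_db

    print(f"epoch {epoch}: loss={loss:.4f}, w={w:.3f}, b={b:.3f}")
```

Each pass through the loop is one training epoch for this tiny example: the derivatives tell us which way to turn each knob, and the loss shrinks as a result.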

I like to think in units, i.e. what is the smallest possible representation of a phenomenon? For matter, you could say it’s an atom. For Neural Networks, it’s a Neuron.

Neural Networks are made up of layers of stacked Neurons, so we can generalize our understanding of one neuron to the rest of the network.

We can express the output of a single-layer, single-neuron Neural Network as:

output = w·x + b

where:

  • w represents the weight
  • x represents the input
  • b represents the bias

The above represents the formula for linear regression.
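
As a quick sketch, here’s that single-layer, single-neuron output in Python. The function name and the values for w, x and b are arbitrary choices of mine, just to show the shape of the computation:

```python
# A single neuron with one input: output = w·x + b
def neuron(x: float, w: float, b: float) -> float:
    """Output of a single-layer, single-neuron network."""
    return w * x + b

print(neuron(x=2.0, w=0.5, b=1.0))  # 0.5 * 2.0 + 1.0 = 2.0
```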

If you’re familiar with linear functions from secondary school algebra/maths, this looks similar to the function below, which is the equation of a straight line:

y = mx + c

Realizing this connection also made me super delighted, because back in secondary school in 2013 I always wondered to myself, “When will I actually ever get to use this piece of information?” 😂