Introduction
Machine learning (ML) can be divided into three broad categories: supervised learning, unsupervised learning, and reinforcement learning.
1. Supervised learning concerns learning from labeled data (for example, a collection of pictures labeled as containing a cat
or not containing a cat). Common supervised learning tasks include classification and regression (see the sketch after this list).
2. Unsupervised learning is
concerned with finding patterns and structure in unlabeled data. Examples of unsupervised learning include clustering,
dimensionality reduction, and generative modeling.
3. Finally, in reinforcement learning an agent learns by interacting with
an environment and adapting its behavior to maximize its reward. For example, a robot can be trained to navigate in
a complex environment by assigning a high reward to actions that help the robot reach a desired destination.
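As a minimal sketch of what "labeled data" means in the supervised setting (the numbers below are made up purely for illustration):

```python
import torch

# Four "images" reduced to 2 features each, with labels: 1 = cat, 0 = not cat.
# The labels are what make this a supervised classification task.
X = torch.tensor([[0.9, 0.1],
                  [0.8, 0.3],
                  [0.2, 0.7],
                  [0.1, 0.9]])   # inputs (features)
y = torch.tensor([1, 1, 0, 0])   # labels (targets)
# A classifier is trained to predict y from X; in regression, y would be
# continuous values instead of class labels.
```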
Link: Fundamentals of Deep Learning – Activation Functions and When to Use Them?
Link: Learn from Scratch – AnalyticsVidhya
Simple intuition behind neural networks
A neural network takes several inputs, processes them through
multiple neurons in multiple hidden layers, and returns the result using an output layer.
This result-estimation process is technically known as “Forward Propagation”.
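A minimal sketch of forward propagation in PyTorch (the layer sizes here are arbitrary assumptions, chosen only for illustration):

```python
import torch
import torch.nn as nn

# A tiny network: 3 inputs -> one hidden layer of 4 neurons -> 1 output.
net = nn.Sequential(
    nn.Linear(3, 4),   # hidden layer
    nn.Sigmoid(),      # activation introduces non-linearity
    nn.Linear(4, 1),   # output layer
)

x = torch.randn(1, 3)   # one sample with 3 input features
y_hat = net(x)          # forward propagation: inputs flow through the layers
```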
Next, we compare the result with the actual output. The task is to make the output of the
neural network as close as possible to the actual (desired) output. Each of these neurons contributes
some error to the final output. How do you reduce the error?
We adjust the weights of the neurons that contribute more to the error, and
this happens while traveling back through the neurons of the neural network to find where the
error lies. This process is known as “Backward Propagation”.
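A minimal sketch of backward propagation using PyTorch's autograd (the network and data are stand-ins, assumed only for illustration):

```python
import torch
import torch.nn as nn

net = torch.nn.Linear(3, 1)        # stand-in for any small network
x = torch.randn(1, 3)              # one input sample
y = torch.randn(1, 1)              # the actual (desired) output

loss = nn.MSELoss()(net(x), y)     # compare the result with the actual output
loss.backward()                    # backward propagation: trace the error back
print(net.weight.grad)             # each weight's contribution to the error
```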
To reduce the number of iterations needed to minimize the error, neural networks
use a common algorithm known as “Gradient Descent”, which helps optimize the task quickly
and efficiently.
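A minimal sketch of one gradient descent step on a toy loss (the learning rate of 0.1 is an assumed value):

```python
import torch

w = torch.tensor(2.0, requires_grad=True)   # a single weight
lr = 0.1                                    # learning rate (assumed)

loss = (w - 5.0) ** 2     # toy loss, minimized at w = 5
loss.backward()           # compute d(loss)/dw
with torch.no_grad():
    w -= lr * w.grad      # step against the gradient to reduce the loss
    w.grad.zero_()        # clear the gradient before the next iteration
```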
That’s it – this is how neural networks work! I know this is a very simple representation,
but it should help you understand things in a simple manner.
Link: FeedForward Example (PyTorch)
Link: Basics of ML Application
Link: Linear Regression models
Terminology and useful parameters
Autograd: An automatic differentiation package provided by PyTorch.
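A minimal sketch of autograd in action (values chosen only for illustration):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x    # autograd records these operations as they run
y.backward()          # differentiate y with respect to x
print(x.grad)         # dy/dx = 2x + 2 = 8.0
```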
Perceptron Layer: A single-layer perceptron takes data as input; the inputs are multiplied by weights
and summed, then an activation function is applied before the result is sent to the output layer.
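A minimal sketch of that computation (the weights and bias are assumed values):

```python
import torch

x = torch.tensor([1.0, 0.5, -0.2])    # input data
w = torch.tensor([0.4, -0.1, 0.6])    # one weight per input (assumed values)
b = torch.tensor(0.1)                 # bias

z = torch.dot(w, x) + b               # weighted sum of the inputs
out = torch.sigmoid(z)                # activation applied before the output
```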
Activation Layer: The activation functions in a neural network introduce non-linearity into the linear output. An activation function defines the output of a layer given the data,
meaning it sets the threshold for deciding whether or not to pass the information on.
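For example, two common activation functions applied to the same linear outputs:

```python
import torch

z = torch.tensor([-2.0, 0.0, 2.0])   # linear outputs from a layer
print(torch.relu(z))                 # tensor([0., 0., 2.]): negative signals are blocked
print(torch.sigmoid(z))              # squashes each value into (0, 1), a soft threshold
```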
Feedforward Neural Network: Feedforward models have hidden layers between the input and the output layers.
After every hidden layer, an activation function is applied to introduce non-linearity.
Multi-layer Perceptron: When a model has more than two hidden layers,
it is also called a deep/multilayer feedforward model or multilayer perceptron model (MLP); a minimal sketch follows.
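A sketch of such a model in PyTorch (the layer widths are arbitrary assumptions):

```python
import torch.nn as nn

# A multilayer perceptron: hidden layers between input and output,
# each followed by an activation to introduce non-linearity.
mlp = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),   # hidden layer 1
    nn.Linear(32, 16), nn.ReLU(),   # hidden layer 2
    nn.Linear(16, 1),               # output layer
)
```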
Forward Propagation: Passing the inputs through the network, layer by layer, to produce the output (as described above).
Back Propagation: Here we update the weights and biases based on the error, propagating it backward through the network.
Model: The network structure together with its learned parameters (weights and biases); it defines how inputs are mapped to outputs.
Criterion/Loss Function: A method of evaluating how well a specific algorithm models the given data. If predictions deviate too much from the actual results,
the loss function coughs up a very large number. Gradually, with the help of an optimization function, the model learns to reduce the error in its predictions.
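A minimal sketch using PyTorch's built-in mean-squared-error criterion (values made up for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()         # mean squared error
pred = torch.tensor([2.5])       # the model's prediction
actual = torch.tensor([5.0])     # the actual result
print(criterion(pred, actual))   # (5.0 - 2.5)^2 = 6.25: large deviation, large loss
```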
Optimizer: Optimizers are algorithms or methods used to change the attributes of your neural network, such as the weights and the learning rate, in order to reduce the losses.
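A minimal sketch of creating an optimizer in PyTorch (the learning rate is an assumed value):

```python
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(3, 1)                              # stand-in network
optimizer = optim.SGD(net.parameters(), lr=0.01)   # stochastic gradient descent
# After loss.backward(), calling optimizer.step() updates the weights
# to reduce the loss, and optimizer.zero_grad() clears old gradients.
```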
Train: Training a model simply means learning (determining) good values for all the weights and biases from labeled examples.
Epoch: A cycle through the full training dataset.
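Tying the terminology together, a minimal sketch of a training loop in which each epoch is one pass over the full dataset (the data, model size, and hyperparameters are all assumed for illustration):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy labeled data: learn y = 2x.
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2 * X

net = nn.Linear(1, 1)                              # model
criterion = nn.MSELoss()                           # loss function
optimizer = optim.SGD(net.parameters(), lr=0.05)   # optimizer

for epoch in range(100):           # each epoch: one pass over the full dataset
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = criterion(net(X), y)    # forward propagation + loss
    loss.backward()                # backward propagation
    optimizer.step()               # gradient descent update
```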