Perceptron made easy!

What is a perceptron?

A perceptron is a linear classifier.

Where can it be used?

A perceptron can be used for binary classification problems where the two classes are linearly separable. Consider the following example: students getting accepted or rejected at a university.

Acceptance of a student depends on their GRE and TOEFL scores. Students scoring high on both get admitted, while students performing poorly on both are rejected. We assign the label 1 to students who are accepted and 0 to those who are rejected. Picture a scatter plot of these scores, with accepted students shown in blue and rejected ones in red. The line separating these two classes is what we obtain from the perceptron.

How does a perceptron work?

The boundary line is given by:

w1x1 + w2x2 + b = 0

In vector form, this can be written as follows:

Wx + b = 0, where W = (w1, w2) is the weight vector and x = (x1, x2) is the feature vector

y is the label: 0 or 1 (we are considering binary classification here)

Prediction is given by:

y = 1 if Wx + b ≥ 0

y = 0 if Wx + b < 0
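This rule is easy to state directly in code. Below is a minimal sketch in Python; the weights, bias, and score values are made up for illustration and are not learned parameters:

```python
# Minimal sketch of the 2D prediction rule: y = 1 if Wx + b >= 0, else y = 0.
# The weights, bias, and scores below are made-up illustrative values.

def predict(w1, w2, b, x1, x2):
    """Return 1 if (x1, x2) lies on or above the boundary line, else 0."""
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

# Hypothetical boundary: 2*GRE + 1*TOEFL - 18 = 0 (scores rescaled to small numbers)
print(predict(2.0, 1.0, -18.0, 8.0, 7.0))  # 2*8 + 1*7 - 18 = 5  >= 0 -> 1 (accepted)
print(predict(2.0, 1.0, -18.0, 3.0, 4.0))  # 2*3 + 1*4 - 18 = -8 < 0  -> 0 (rejected)
```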

How do we extend it to higher dimensions?

When the data has more than two features, it comes under this category. For example, suppose acceptance or rejection of a student depends on GRE score, TOEFL score, class rank, extra-curricular activities, and internships. Unlike the example discussed previously, this data cannot be visualized on a 2D plane. It is still a binary classification problem, as we have the same two possibilities: acceptance and rejection. However, for this problem the boundary between the two classes is a plane (more generally, a hyperplane) rather than a line, given by the equation:

w1x1 + w2x2 + w3x3 + … + wnxn + b = 0

Prediction:

y = 1 if Wx + b ≥ 0 and y = 0 if Wx + b < 0, where x = (x1, x2, x3, …, xn) and W = (w1, w2, w3, …, wn)

For n features of a data point, the boundary is an (n − 1)-dimensional hyperplane given by:

Wx + b = 0
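The prediction rule carries over unchanged; only the dot product grows to n terms. Here is a hedged sketch using NumPy, where W, b, and the sample student x are made-up values rather than learned ones:

```python
import numpy as np

# Vectorized prediction for n features: y = 1 if Wx + b >= 0, else y = 0.
# W, b, and x are made-up illustrative values, not learned parameters.

def predict(W, b, x):
    """W and x are length-n vectors; returns the predicted label 0 or 1."""
    return 1 if np.dot(W, x) + b >= 0 else 0

W = np.array([0.4, 0.3, -0.2, 0.1, 0.5])  # GRE, TOEFL, rank, activities, internships
b = -1.0
x = np.array([3.0, 2.0, 1.0, 0.0, 1.0])   # one hypothetical student
print(predict(W, b, x))  # 1.2 + 0.6 - 0.2 + 0.0 + 0.5 - 1.0 = 1.1 >= 0 -> 1
```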

Perceptron representation

There are two ways of representing a perceptron.

1. Bias as an input

Here, we fix one input to the perceptron as 1, and the bias acts as the weight for this input.

2. Bias without an input

Here, the bias is kept inside the unit itself and is added directly to the weighted sum of the inputs.
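The two representations compute exactly the same quantity. A small sketch contrasting them (all values are illustrative):

```python
import numpy as np

# Representation 1: bias as an input. Append a constant input of 1 to x and
# treat the bias as just another weight.
W = np.array([0.4, 0.3])
b = -1.0
x = np.array([3.0, 2.0])

W_with_bias = np.append(W, b)    # (0.4, 0.3, -1.0)
x_with_bias = np.append(x, 1.0)  # (3.0, 2.0, 1.0)
score1 = np.dot(W_with_bias, x_with_bias)

# Representation 2: bias without an input, added after the weighted sum.
score2 = np.dot(W, x) + b

print(score1, score2)  # both print 0.8: the two representations are equivalent
```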

Perceptrons are a mathematical imitation of the neurons that constitute our nervous system. Each unit receives inputs and produces an output as a function of those inputs. Perceptrons stacked together form a single layer of a network, and several of these layers make a deep neural network. In the upcoming blogs, I’ll be discussing deep learning in detail.
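As a closing sketch of the stacking idea: a single layer of perceptrons is just a weight matrix applied to one input vector, with each row acting as one perceptron. The sizes and values below are made up purely for illustration:

```python
import numpy as np

# One layer of perceptrons: each row of W is one perceptron's weight vector,
# so the whole layer is a matrix-vector product followed by the step rule.

def layer(W, b, x):
    """W: (units, features) weights, b: (units,) biases, x: (features,) input."""
    return (W @ x + b >= 0).astype(int)

W = np.array([[0.4, 0.3],
              [-0.5, 0.2],
              [0.1, -0.1]])         # a layer of 3 perceptrons over 2 features
b = np.array([-1.0, 0.5, 0.0])
x = np.array([3.0, 2.0])
print(layer(W, b, x))  # one 0/1 output per perceptron: [1 0 1]
```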
