Scikit Learn (Beginners) — Part 2

This is part two of the Scikit-learn series, which is as follows:

  • Part 1 — Introduction
  • Part 2 — Supervised Learning in Scikit-Learn (this article)
  • Part 3 — Unsupervised Learning in Scikit-Learn

Link to part one: https://medium.com/@deepanshugaur1998/scikit-learn-part-1-introduction-fa05b19b76f1

Link to part three: https://medium.com/@deepanshugaur1998/scikit-learn-beginners-part-3-6fb05798acb1

Supervised Learning In Scikit-Learn

Hello again!

Recap of Supervised Learning:

Q. What is supervised learning?

Supervised learning is a type of machine learning in which both the input data and the desired output data are provided. The inputs and outputs are labelled, which gives the model a learning basis from which to predict the outputs for future, unseen data.

In the previous part of this series we already saw an overview of what scikit-learn offers in terms of supervised learning; in this part we will see how to get started with this powerful library.

Getting Started…

Let’s consider the example of a simple linear regression model:

The mathematical aim of this model is to minimize the residual sum of squares between the observed responses in the dataset and the responses predicted by the linear approximation.
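As a quick illustration, here is a minimal sketch of fitting an ordinary least squares model in scikit-learn (the tiny dataset below is made up purely for demonstration):

# Import the ordinary least squares linear model
from sklearn.linear_model import LinearRegression

# Toy data: X holds the input features, y the observed responses
X = [[0], [1], [2], [3]]
y = [0, 1, 2, 3]

# Fitting the model minimizes the residual sum of squares
# sum_i (y_i - y_pred_i)^2 over the training data
reg = LinearRegression()
reg.fit(X, y)

# Inspect the learned line and predict for a new input
print(reg.coef_, reg.intercept_)
print(reg.predict([[4]]))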

As you can see, just a few lines of code are enough to get you started with this amazing algorithm. Isn’t it amazing? You can even try prediction on the testing set by using the ‘.predict()’ method.
For a more in-depth understanding of this linear model, try working through an example yourself.
An easy example can be found here:

Support Vector Machines In Sklearn

Follow the code below to get started with SVMs in scikit-learn:
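A minimal sketch (the toy data and the parameter values here are only placeholders):

# Support vector classifier from scikit-learn
from sklearn import svm

# Toy training data: two features per sample, binary labels
X = [[0, 0], [1, 1], [2, 2], [3, 3]]
y = [0, 0, 1, 1]

# C, kernel and gamma are the parameters you will usually tune
clf = svm.SVC(C=1.0, kernel='rbf', gamma='scale')
clf.fit(X, y)

# Predict the class of an unseen sample
print(clf.predict([[2.5, 2.5]]))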

The parameters you see in the brackets can be changed according to the dataset you are working with.
Once you are comfortable writing the code above, experiment by tweaking the parameters yourself.

Stochastic Gradient Descent In Sklearn

Stochastic Gradient Descent (SGD) is a simple algorithm used for the discriminative learning of linear classifiers, and it scales easily to large datasets.

Code:
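Here is a minimal sketch (the toy data is made up for illustration; swap in your own dataset):

# Linear classifier trained with stochastic gradient descent
from sklearn.linear_model import SGDClassifier

# Toy training data: two features per sample, binary labels
X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
y = [0, 0, 1, 1]

# loss='hinge' gives a linear SVM; other loss functions give other
# linear models, and max_iter controls the number of passes over the data
clf = SGDClassifier(loss='hinge', penalty='l2', max_iter=1000)
clf.fit(X, y)

# Predict the class of an unseen sample
print(clf.predict([[2.5, 2.5]]))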

Naive Bayes In Sklearn

A Naive Bayes classifier calculates the probability of each outcome given the features and then selects the outcome with the highest probability.
This classifier assumes the features are independent of one another, which is why the word ‘naive’ is used.
It is one of the most common algorithms in machine learning.

Code:
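A minimal sketch using scikit-learn’s built-in iris dataset (chosen here only as a convenient example):

# Gaussian Naive Bayes on the iris dataset
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Load the data and hold out a test set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# GaussianNB treats the features as independent, normally distributed variables
gnb = GaussianNB()
gnb.fit(X_train, y_train)

# Accuracy on the held-out test set
print(gnb.score(X_test, y_test))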

Decision Tree Regression In Sklearn

A decision tree is another type of supervised machine learning algorithm in which the data is repeatedly split according to a certain parameter.
In general, the more data you have, the better the model tends to perform.
Decision trees are among the most widely used supervised learning algorithms and find huge applications in industry.

Code:
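A minimal sketch (one made-up feature and a continuous target, just to show the API):

# Decision tree for regression
from sklearn.tree import DecisionTreeRegressor

# Toy regression data: one feature, continuous target
X = [[1], [2], [3], [4], [5]]
y = [1.2, 1.9, 3.1, 3.9, 5.2]

# max_depth limits how many times the data can be split;
# deeper trees fit the training data more closely but may overfit
regr = DecisionTreeRegressor(max_depth=2)
regr.fit(X, y)

# Predict the target for a new input
print(regr.predict([[3.5]]))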

Ensemble Methods In Sklearn

Scikit-learn’s ensemble module contains bagging methods and random forests.

Random Forests

Random forest is another powerful machine learning algorithm that produces great results even without hyper-parameter tuning.
It is also one of the most used algorithms because of its simplicity and the fact that it can be used for both classification and regression tasks.

Code:
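A minimal sketch, again using the iris dataset purely for illustration:

# Random forest classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the data and hold out a test set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators is the number of trees in the forest
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on the held-out test set
print(clf.score(X_test, y_test))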

What Have We Learned?

By now we have learned how to implement each of these supervised algorithms using scikit-learn.
Still, each algorithm has many more features in scikit-learn that can only be mastered through practice.
So stop wasting your time and head straight to the official scikit-learn documentation on supervised learning, and make sure you understand each algorithm both mathematically and by practicing on different datasets.
Link :
http://scikit-learn.org/stable/supervised_learning.html

Note: The next part of this series will be on unsupervised learning, so make sure you don’t miss it.

