SVM is a simple yet powerful supervised machine learning algorithm that can be used for classification as well as regression, though it is most popularly used for classification. SVMs perform really well on small to medium sized datasets and are relatively easy to tune.
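As a taste of how little setup an SVM needs, here is a minimal sketch (assuming scikit-learn is installed) that fits an SVM classifier on a small dataset; the dataset choice and parameter values are illustrative, not from this article:

```python
# Minimal sketch: an SVM classifier on a small dataset (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# C (regularization strength) and the kernel are the main knobs to tune.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

Both the regularization parameter `C` and the kernel appear again later in the post, so this snippet is just a preview of the pieces we will unpack.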

In this blog post we will build our intuition of support vector machines and see some of the math behind them. We will first understand what large margin classifiers are, then look at the loss function and the cost function of this algorithm. Next we will see how regularization works for SVMs and what governs the bias/variance trade-off. Finally, we will learn about the coolest feature of SVMs: the kernel trick.

You should have some prerequisite knowledge of how linear regression and logistic regression work in order to easily grasp these concepts. I suggest taking notes while reading to make the most of this article; it is going to be a long and interesting journey. So, without further ado, let's dive in.
