A tour of machine learning algorithms

It is useful to tour the main algorithms in the field to get a feeling of what methods are available. There are so many algorithms available that it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit. I want to give you two ways to think about and categorize the algorithms you may come across in the field.

The first is a grouping of algorithms by the learning style.
The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together).

Both approaches are useful, but we will focus in on the grouping of algorithms by similarity and go on a tour of a variety of different algorithm types. After reading this post, you will have a much better understanding of the most popular machine learning algorithms for supervised learning and how they are related.

Figure: an ensemble of lines of best fit; the weak members are grey, the combined prediction is red. Plot from Wikipedia, licensed under public domain.


Algorithms Grouped by Learning Style


There are different ways an algorithm can model a problem based on its interaction with the experience or environment or whatever we want to call the input data. It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt. There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.

This taxonomy or way of organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process and select the one that is most appropriate for your problem in order to get the best result. Let’s take a look at three different learning styles in machine learning algorithms:


Supervised Learning


Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a point in time.

A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.

Example problems are classification and regression.

Example algorithms include Logistic Regression and the Back Propagation Neural Network.
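
To make this concrete, here is a minimal sketch of that train-and-correct loop using logistic regression, assuming scikit-learn is available (the synthetic dataset and settings are purely illustrative):

```python
# Minimal supervised-learning sketch: fit on labeled data, predict on new data.
# Assumes scikit-learn; the synthetic dataset is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # training: predictions are compared to known labels
print(model.score(X_test, y_test))   # accuracy on held-out labeled data
```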



Unsupervised Learning


Input data is not labeled and does not have a known result.

A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity.

Example problems are clustering, dimensionality reduction and association rule learning.

Example algorithms include: the Apriori algorithm and k-Means.
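
As a small sketch of learning without labels, assuming scikit-learn, k-Means can be asked to deduce group structure from unlabeled points (the blob data below is synthetic and illustrative):

```python
# Minimal unsupervised-learning sketch: no labels, k-Means deduces group structure.
# Assumes scikit-learn; the blob data is synthetic and purely illustrative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # true labels are discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment for the first few points
print(kmeans.cluster_centers_)    # learned centroids
```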



Semi-Supervised Learning


Input data is a mixture of labeled and unlabeled examples.

There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.

Example problems are classification and regression.

Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.
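
Since the post does not name a specific algorithm here, the sketch below uses label spreading from scikit-learn as one common choice: unlabeled examples are marked with -1 and the model propagates labels through the data's structure (the dataset and masking fraction are illustrative):

```python
# Semi-supervised sketch using label spreading (one common choice; the post
# does not prescribe a specific algorithm). Unlabeled points are marked -1.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.7] = -1    # hide roughly 70% of the labels

model = LabelSpreading().fit(X, y_partial)  # learns structure from all points
print((model.transduction_ == y).mean())    # agreement with the true labels
```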



Overview


When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods. A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labeled examples.


Algorithms Grouped By Similarity


Algorithms are often grouped by similarity in terms of their function (how they work). For example, tree-based methods, and neural network inspired methods. I think this is the most useful way to group algorithms and it is the approach we will use here.

This is a useful grouping method, but it is not perfect. There are still algorithms that could just as easily fit into multiple categories like Learning Vector Quantization that is both a neural network inspired method and an instance-based method. There are also categories that have the same name that describe the problem and the class of algorithm such as Regression and Clustering.

We could handle these cases by listing algorithms twice or by selecting the group that subjectively is the “best” fit. I like this latter approach of not duplicating algorithms to keep things simple.

In this section, I list many of the popular machine learning algorithms grouped the way I think is the most intuitive. The list is not exhaustive in either the groups or the algorithms, but I think it is representative and will be useful to you to get an idea of the lay of the land.

Please Note: There is a strong bias towards algorithms used for classification and regression, the two most prevalent supervised machine learning problems you will encounter.


Regression Algorithms


Regression is concerned with modeling the relationship between variables that is iteratively refined using a measure of error in the predictions made by the model.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the class of algorithm. Really, regression is a process.

The most popular regression algorithms are:
Ordinary Least Squares Regression (OLSR)
Linear Regression
Logistic Regression
Stepwise Regression
Multivariate Adaptive Regression Splines (MARS)
Locally Estimated Scatterplot Smoothing (LOESS)
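
As a minimal sketch of the regression process, assuming scikit-learn, an ordinary least squares model can be fit and used to predict (the synthetic data is illustrative):

```python
# Linear regression sketch (ordinary least squares), assuming scikit-learn.
# The synthetic data is purely illustrative.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=100, n_features=3, noise=10.0, random_state=0)

ols = LinearRegression().fit(X, y)
print(ols.coef_, ols.intercept_)   # fitted coefficients
print(ols.predict(X[:5]))          # predictions for the first few rows
```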



Instance-based Algorithms


Instance-based learning models frame a decision problem in terms of instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and similarity measures used between instances.

The most popular instance-based algorithms are:
k-Nearest Neighbor (kNN)
Learning Vector Quantization (LVQ)
Self-Organizing Map (SOM)
Locally Weighted Learning (LWL)
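
A minimal k-Nearest Neighbor sketch, assuming scikit-learn, shows the instance-based idea: the "model" is essentially the stored training data plus a similarity measure (the dataset and k are illustrative):

```python
# k-Nearest Neighbor sketch: new points are compared to stored training
# instances with a distance measure. Assumes scikit-learn; data is illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # the "database" is simply X_train
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```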



Regularization Algorithms


Regularization methods are extensions made to other methods (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing.

I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods.

The most popular regularization algorithms are:
Ridge Regression
Least Absolute Shrinkage and Selection Operator (LASSO)
Elastic Net
Least-Angle Regression (LARS)
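
A small sketch, assuming scikit-learn, of how ridge (L2) and LASSO (L1) penalties modify plain linear regression; the alpha values are illustrative rather than tuned:

```python
# Regularization sketch: ridge (L2) and LASSO (L1) penalties added to linear
# regression. Assumes scikit-learn; alpha values are illustrative, not tuned.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)   # shrinks all coefficients
lasso = Lasso(alpha=1.0).fit(X, y)   # drives some coefficients exactly to zero
print((lasso.coef_ == 0).sum(), "coefficients zeroed by the LASSO penalty")
```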



Decision Tree Algorithms


Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

The most popular decision tree algorithms are:
Classification and Regression Tree (CART)
Iterative Dichotomiser 3 (ID3)
C4.5 and C5.0 (different versions of a powerful approach)
Chi-squared Automatic Interaction Detection (CHAID)
Decision Stump
M5
Conditional Decision Trees
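
A minimal decision tree sketch using scikit-learn's CART-style implementation; the depth limit and dataset are illustrative:

```python
# Decision tree sketch (CART-style, as implemented by scikit-learn's
# DecisionTreeClassifier); the max_depth setting is purely illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))   # the learned decision forks, as plain text
```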



Bayesian Algorithms


Bayesian methods are those that explicitly apply Bayes’ Theorem for problems such as classification and regression.

The most popular Bayesian algorithms are:
Naive Bayes
Gaussian Naive Bayes
Multinomial Naive Bayes
Averaged One-Dependence Estimators (AODE)
Bayesian Belief Network (BBN)
Bayesian Network (BN)
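
A minimal Gaussian Naive Bayes sketch, assuming scikit-learn; the dataset is illustrative:

```python
# Gaussian Naive Bayes sketch: class-conditional Gaussians plus Bayes' Theorem.
# Assumes scikit-learn; the dataset is just for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print(nb.predict_proba(X_test[:3]))   # posterior class probabilities
print(nb.score(X_test, y_test))
```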



Clustering Algorithms


Clustering, like regression, describes the class of problem and the class of methods.

Clustering methods are typically organized by their modeling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality.

The most popular clustering algorithms are:
k-Means
k-Medians
Expectation Maximisation (EM)
Hierarchical Clustering
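
To complement the k-Means example earlier, here is a small hierarchical (agglomerative) clustering sketch, assuming scikit-learn; the data and number of clusters are illustrative:

```python
# Hierarchical (agglomerative) clustering sketch, to complement the k-Means
# example above. Assumes scikit-learn; the blob data is illustrative.
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print(labels[:20])   # cluster membership for the first few points
```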



Association Rule Learning Algorithms


Association rule learning methods extract rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organization.

The most popular association rule learning algorithms are:
Apriori algorithm
Eclat algorithm
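
As a toy sketch of the core Apriori idea (counting itemsets that clear a minimum support threshold), the snippet below only goes as far as frequent pairs; a full implementation iterates to larger itemsets and then derives rules. The baskets and threshold are made up for illustration:

```python
# Toy sketch of the core Apriori idea: count itemsets that clear a minimum
# support threshold. This only goes up to pairs; a full Apriori implementation
# iterates to larger itemsets and then derives association rules.
from collections import Counter
from itertools import combinations

transactions = [                      # hypothetical shopping baskets
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"butter", "milk"},
]
min_support = 2                       # minimum number of transactions

pair_counts = Counter(
    pair for t in transactions for pair in combinations(sorted(t), 2)
)
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)                 # e.g. ('bread', 'milk') appears twice
```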



Artificial Neural Network Algorithms


Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprised of hundreds of algorithms and variations for all manner of problem types.

Note that I have separated out Deep Learning from neural networks because of the massive growth and popularity in the field. Here we are concerned with the more classical methods.

The most popular artificial neural network algorithms are:
Perceptron
Back-Propagation
Hopfield Network
Radial Basis Function Network (RBFN)
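
A from-scratch perceptron sketch in NumPy, learning the logical AND function; the learning rate and number of passes are illustrative:

```python
# Minimal perceptron sketch trained on the AND function: weights are nudged
# whenever a prediction is wrong. Purely illustrative, written with NumPy.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])            # logical AND

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                   # a few passes over the data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)    # step activation
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([int(w @ xi + b > 0) for xi in X])   # expected: [0, 0, 0, 1]
```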



Deep Learning Algorithms


Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks and, as commented on above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labeled data.

The most popular deep learning algorithms are:
Deep Boltzmann Machine (DBM)
Deep Belief Networks (DBN)
Convolutional Neural Network (CNN)
Stacked Auto-Encoders
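
A sketch of a small convolutional network, assuming TensorFlow/Keras is installed; the architecture is illustrative and no data is loaded or trained here:

```python
# Sketch of a small convolutional neural network, assuming TensorFlow/Keras.
# The architecture is illustrative; no training data is loaded here.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # e.g. small grayscale images
    layers.Conv2D(32, 3, activation="relu"),  # learned local feature detectors
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # 10-way classification
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5) would train it, given labeled images.
```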



Dimensionality Reduction Algorithms


Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe data using less information.

This can be useful to visualize high-dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

The most popular dimensionality reduction algorithms are:
Principal Component Analysis (PCA)
Principal Component Regression (PCR)
Partial Least Squares Regression (PLSR)
Sammon Mapping
Multidimensional Scaling (MDS)
Projection Pursuit
Linear Discriminant Analysis (LDA)
Mixture Discriminant Analysis (MDA)
Quadratic Discriminant Analysis (QDA)
Flexible Discriminant Analysis (FDA)
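
A minimal PCA sketch, assuming scikit-learn; projecting the four iris features onto two components is purely illustrative:

```python
# PCA sketch: project the data onto the directions of greatest variance.
# Assumes scikit-learn; the iris data and 2 components are illustrative.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)             # 4 features summarized as 2
print(pca.explained_variance_ratio_)    # how much variance each component keeps
print(X_2d[:3])
```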



Ensemble Algorithms


Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

The most popular ensemble algorithms are:
Boosting
Bootstrapped Aggregation (Bagging)
AdaBoost
Stacked Generalization (blending)
Gradient Boosting Machines (GBM)
Gradient Boosted Regression Trees (GBRT)
Random Forest
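
A minimal ensemble sketch, assuming scikit-learn: a random forest bags many independently trained trees and combines their votes (the dataset and settings are illustrative):

```python
# Ensemble sketch: a random forest combines many independently trained trees
# (bagging) and averages their votes. Assumes scikit-learn; settings illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))     # the combined prediction of 100 trees
```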



Other Algorithms


Many algorithms were not covered. I did not cover algorithms from specialty tasks in the process of machine learning, such as:
Feature selection algorithms
Algorithm accuracy evaluation
Performance measures

I also did not cover algorithms from specialty subfields of machine learning, such as:
Computational intelligence (evolutionary algorithms, etc.)
Computer Vision (CV)
Natural Language Processing (NLP)
Recommender Systems
Reinforcement Learning
Graphical Models
And more…


Other Lists of Algorithms


There are other great lists of algorithms out there if you’re interested. Below are a few hand-selected examples.

List of Machine Learning Algorithms: On Wikipedia. Although extensive, I do not find this list or the organization of the algorithms particularly useful.

Machine Learning Algorithms Category: Also on Wikipedia, slightly more useful than Wikipedia’s great list above. It organizes algorithms alphabetically.

CRAN Task View: Machine Learning & Statistical Learning: A list of all the packages and all the algorithms supported by each machine learning package in R. Gives you a grounded feeling of what’s out there and what people are using for analysis day-to-day.

Top 10 Algorithms in Data Mining: Published article on the most popular algorithms for data mining. Another grounded and less overwhelming take on methods that you could go off and learn deeply.



About the nature of learning


Recommended reading: Reinforcement Learning: An Introduction

Deep learning


Machine learning vs. artificial intelligence


People tend to use machine learning and artificial intelligence as if they’re interchangeable, but they’re not: rather, machine learning (ML) is one very successful approach to the broader field of artificial intelligence (AI).

Artificial intelligence is, essentially, the simulation of intelligence in computers. As such, it’s something that people have been working on for decades; the field of AI research was founded at a conference at Dartmouth in 1956. It encompasses a range of approaches, which range from the simple to the incredibly complex. If you encode a set of simple rules that let a computer never lose at tic-tac-toe, that’s a basic form of artificial intelligence.


How do machines learn?


There are a number of different ways to get machines to learn for themselves. I’m going to focus on one  -  deep learning  - because it’s led to a lot of big, recent breakthroughs (it’s what Google’s DeepMind used in AlphaGo, for instance) and, as such, it’s probably the method people are most excited about right now.

At deep learning’s core are things called artificial neural networks. The thinking behind these neural networks (I’m dropping the artificial for brevity) is, broadly, this:

By far the best learning system we’ve ever encountered is the human brain, so let’s try to get computers to mimic the way the human brain learns.

Neural networks are this attempt  - essentially, they’re meant to be mathematical representations of the way the human brain operates. And they’re pretty amazingly effective.


Artificial neural networks


A neural network is essentially a series of units (modelled after the neurons in the human brain) and the connections between them (modelled after synapses). You can set up the units in the first layer, the input layer, to represent some input you want to pass to the neural network. For an image, you make each unit in the input layer correspond to a pixel, with 'on' and 'off' representing, say, a dark or a light pixel. Each of the connections has some weighting, with some stronger than others. Each of the units in the next layer triggers its own connections if the combined weightings of all the triggered connections coming into that unit cross a certain threshold. What this means is that two things - (i) the image the network is shown and (ii) what exactly all the weightings between its units are set to - create a complex chain reaction of triggered connections that works its way through the network (from left to right) and that directly affects which connections into the final layer are triggered.

The final layer is a single unit: the output layer. Like the units in the middle layer, it takes the combined weightings of all the triggered connections coming into it and - in our case - if these are above a certain threshold, we treat this as an output of ‘yes’ from the network (with a ‘no’ given by a sum below that threshold). In this way, the network gives us an answer to the question, ‘Does this image contain a cat?’

The first time this whole process takes place, that answer is likely to be wrong. The weightings of the connections between the layers are essentially random, which means the output is essentially random; the network can’t successfully tell you whether or not the image features a cat.

If you tell the network whether it was right or wrong, it can go and change the weightings of the connections between its units (using a technique called backpropagation), in an attempt to get closer to the right answer. You do this enough times  - have the network output an answer, tell it whether it was correct and have it alter its weightings  -  and, gradually, it gets better and better at the task at hand. In this case, it learns the ability to tell you whether or not an image you show it has a cat in it. And this same technique can be used for a whole range of tasks, from translating speech to composing music.
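
To make that update loop concrete, here is a toy NumPy sketch of a single logistic unit adjusting its weights from the error signal; real networks backpropagate this error through many layers, and the "images" and labels below are made up purely for illustration:

```python
# Toy illustration of the update loop described above, using NumPy: a single
# logistic unit adjusts its weights from the error signal. Real networks
# backpropagate this error through many layers, but the principle is the same.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))                      # stand-in "images" with 4 features
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # stand-in "contains a cat" labels

w, b, lr = rng.normal(size=4), 0.0, 0.5       # start from essentially random weights
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # network's current 'yes' probability
    error = p - y                             # right or wrong, and by how much
    w -= lr * X.T @ error / len(y)            # nudge weights to reduce the error
    b -= lr * error.mean()

print(((p > 0.5) == y).mean())                # accuracy improves as weights adapt
```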

Why is it called ‘deep’ learning?
‘Deep’ learning systems are really just neural networks in which there are lots of layers between the input layer and the output layer.


Why is this important?


Machine learning  - and AI in general  - has been around for a while. But it’s recently started accelerating at a rate that’s surprised a lot of people.

As recently as 2014, most experts thought it would be 10 years before a machine beat the world’s best players at Go. DeepMind proved them wrong. It’s becoming increasingly apparent that many tasks we once thought would be the domain of humans alone for the foreseeable future  -  if not forever  -  will be accomplished by machine learning systems much sooner than expected.

This is going to have a huge effect on politics, the economy, and society as a whole. Entire industries will be automated, leaving millions of people out of work and unable to retrain fast enough to stay ahead of ever-faster technological improvements.

This, in turn, is likely to lead to discontent on a scale never seen before. The political upheavals of 2016 are an early sign of this  -  the populist movements that have circled the globe, which tend towards isolationism and a rejection of the huge societal changes of the last few decades, have their roots in the communities most immediately threatened by mass job automation.

This dissatisfaction is only going to increase as machine learning systems become more and more capable. And if you think there’s a limit to what these machine learning systems can accomplish, bear in mind that the majority of AI experts think AI will be able to accomplish any intellectual task humans can perform by 2050.

It’s worth saying that AI is - if handled right - going to bring huge benefits to humanity as a whole. A world of machines that can work tirelessly, innovate, and improve themselves is a world in which advances in economic efficiency make everything that came before look like the Dark Ages.

But the path to this revolution in economic efficiency will be strewn with defunct industries and discarded drafts  -  full of underestimates  - of what we as a species think is technologically possible. Machine learning  -  the system described above, and others like it  -  is going to change the ways humans operate more than any technology that’s ever existed.