Hierarchical Clustering
What is Hierarchical Clustering?
Hierarchical clustering is a method of cluster analysis that seeks to build a hierarchy of clusters. It is particularly useful when the number of clusters is unknown, as it provides a dendrogram, a tree-like diagram that shows the arrangement of the clusters formed by the algorithm.
Types of Hierarchical Clustering
Agglomerative (Bottom-Up) Clustering:
Start with each data point as its own cluster.
At each step, merge the two closest clusters.
Continue until all data points are merged into a single cluster.
Divisive (Top-Down) Clustering:
Start with all data points in a single cluster.
At each step, split the cluster into two.
Continue until each data point is in its own cluster.
It is a type of connectivity-based clustering, built on the idea that data points that are close to each other in the data space are more similar than data points lying far apart.
As the name suggests, hierarchical clustering forms a hierarchy of clusters that can be studied by visualising a dendrogram.
How to measure closeness of points?
Hierarchical Clustering - Agglomerative
We will be looking at a clustering technique, which is Agglomerative Hierarchical Clustering.
Centroid Distance: Euclidean distance between the centroids (means) of the data points in the two clusters
Minimum Distance: Euclidean distance between the two data points, one from each cluster, that are closest to each other
Maximum Distance: Euclidean distance between the two data points, one from each cluster, that are farthest from each other
Understanding the Algorithm
Let there be N data points. Firstly, these N data points are assigned to N different clusters with one data point in each cluster.
Then, the two data points with the minimum Euclidean distance between them are merged into a single cluster.
After that, at each step, the two clusters with the minimum centroid distance between them are merged into a single cluster.
This process is repeated until we are left with a single cluster, thus forming a hierarchy of clusters.
a) Agglomerative clustering
One of the simplest and most easily understood algorithms used to perform agglomerative clustering is single linkage. In this algorithm, we start by considering each data point as a subcluster. We define a metric to measure the distance between all pairs of subclusters, and at each step we merge the two nearest subclusters. We repeat this procedure until there is only one cluster in the system.
b) Divisive clustering
One of the algorithms used to perform divisive clustering is recursive k-means. As the name suggests, you recursively perform k-means on each intermediate cluster until every data sample sits in its own cluster, or until each cluster reaches the minimum number of data samples you want it to contain. At each step of this algorithm, you have to decide how many clusters you would like to create next.
Explanation: Steps to Perform Agglomerative Hierarchical Clustering
linkage method: Defines how the distance between clusters is computed. The ward method minimizes the total variance within clusters. (We can experiment with different linkage methods: single, complete, average, etc.)
dendrogram: Visualizes the merging of clusters. The y-axis represents the distance (or dissimilarity) at which clusters are merged.
fcluster: Forms flat clusters by cutting the dendrogram at a specific height.
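As a quick illustration of how these three pieces fit together, here is a minimal, self-contained sketch on a tiny toy array (the data values and the cutting height are arbitrary choices, not part of the exercise):

```python
# Minimal end-to-end sketch: linkage -> dendrogram -> fcluster on a toy array.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.array([[1, 2], [1, 4], [5, 8], [6, 8], [9, 1]])  # arbitrary toy points

Z = linkage(X, method='ward')   # ward minimizes total within-cluster variance
dendrogram(Z)                   # y-axis: distance at which clusters are merged
plt.show()

labels = fcluster(Z, t=5, criterion='distance')  # cut the dendrogram at height 5
print(labels)
```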
How many Clusters to form:
Visualising the dendrogram: A good choice for the number of clusters is the number of vertical lines that can be cut by a horizontal line which traverses the maximum vertical distance without intersecting any cluster merge. For example, in the case below, the best choice for the number of clusters would be 4.
Intuition and prior knowledge of the data set.
Good Cluster Analysis
Data points within the same cluster share a similar profile: Statistically, check the standard deviation of each input variable in each cluster. Perfect separation is rarely achieved in cluster analysis, so even a one-standard-deviation distance between two cluster means is considered a good separation.
A well-spread proportion of data points among clusters: There is no standard for this requirement, but a minimum of 5% and a maximum of 35% of the total population can be assumed as a safe range for each cluster.
Generating Random Data
We will be generating a set of data using the make_blobs function. Input these parameters into make_blobs:
- n_samples: The total number of points equally divided among clusters.
- Choose a number from 10-1500
- centers: The number of centers to generate, or the fixed center locations.
- Choose arrays of x,y coordinates for generating the centers. Have 1-10 centers (ex. centers=[[1,1], [2,5]])
- cluster_std: The standard deviation of the clusters. The larger the number, the more spread out (and more overlapping) the clusters are
- Choose a number between 0.5-1.5
Save the result to X1 and y1.
Plot the scatter plot of the randomly generated data
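A minimal sketch of this step might look as follows; the specific parameter values are just one possible choice from the ranges suggested above:

```python
# Generate blobs and plot them; parameter values are one choice from the suggested ranges.
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

X1, y1 = make_blobs(n_samples=50, centers=[[4, 4], [-2, -1], [1, 1], [10, 4]],
                    cluster_std=0.9)

plt.scatter(X1[:, 0], X1[:, 1], marker='o')
plt.show()
```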
Agglomerative Clustering
We will start by clustering the random data points we just created. The AgglomerativeClustering class will require two inputs:
- n_clusters: The number of clusters to form as well as the number of centroids to generate.
- Value will be: 4
- linkage: Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observations. The algorithm will merge the pairs of clusters that minimize this criterion.
- Value will be: 'complete'
- Note: It is recommended you try everything with 'average' as well
Save the result to a variable called agglom
Fit the model with X1 and y1 from the generated data above.
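A sketch of these two steps, assuming X1 and y1 from the generated data above:

```python
# Create the model with 4 clusters and complete linkage, then fit it.
from sklearn.cluster import AgglomerativeClustering

agglom = AgglomerativeClustering(n_clusters=4, linkage='complete')
agglom.fit(X1, y1)   # y1 is accepted but ignored by the estimator
```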
Run the following code to show the clustering!
Remember to read the code and comments to gain more understanding on how the plotting works.
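A simplified plotting sketch that colours each point by its assigned cluster label:

```python
# Colour each point by the cluster label assigned by the fitted model.
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 4))
plt.scatter(X1[:, 0], X1[:, 1], c=agglom.labels_, cmap='tab10', marker='o')
plt.title('Agglomerative Clustering (complete linkage)')
plt.show()
```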
Dendrogram Associated for the Agglomerative Hierarchical Clustering
Remember that a distance matrix contains the distance from each point to every other point of a dataset. Use the function distance_matrix, which requires two inputs. Use the feature matrix X1 as both inputs and save the distance matrix to a variable called dist_matrix.
Remember that the distance values are symmetric, with a diagonal of 0's. This is one way of making sure your matrix is correct.
(print out dist_matrix to make sure it's correct)
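A minimal sketch of this step, assuming X1 from the generated data above:

```python
# Compute the full pairwise distance matrix of X1 against itself.
from scipy.spatial import distance_matrix

dist_matrix = distance_matrix(X1, X1)
print(dist_matrix)
```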
Using the linkage function from hierarchy, pass in the parameters:
- The distance matrix
- 'complete' for complete linkage
Save the result to a variable called Z
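A sketch of this step; note that linkage formally expects a condensed distance matrix, so squareform is used here to convert the square matrix computed above:

```python
# Complete-linkage clustering from the distance matrix computed above.
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform

Z = hierarchy.linkage(squareform(dist_matrix), 'complete')
```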
A hierarchical clustering is typically visualized as a dendrogram, as shown in the following cell. Each merge is represented by a horizontal line. The y-coordinate of the horizontal line is the distance at which the two clusters were merged, where individual data points are viewed as singleton clusters. By moving up from the bottom layer to the top node, a dendrogram allows us to reconstruct the history of merges that resulted in the depicted clustering.
Next, we will save the dendrogram to a variable called dendro. In doing this, the dendrogram will also be displayed. Using the dendrogram function from hierarchy, pass in the parameter:
- Z
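A sketch of this step:

```python
# Draw the dendrogram for the linkage matrix Z and keep the returned dictionary.
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy

dendro = hierarchy.dendrogram(Z)
plt.show()
```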
Practice
We used complete linkage for our case; change it to average linkage to see how the dendrogram changes.
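One possible solution sketch, reusing dist_matrix from above (the variable names Z_avg and dendro_avg are just illustrative):

```python
# Rebuild the linkage with average linkage and re-plot the dendrogram.
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform

Z_avg = hierarchy.linkage(squareform(dist_matrix), 'average')
dendro_avg = hierarchy.dendrogram(Z_avg)
plt.show()
```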
Clustering on Vehicle dataset
Imagine that an automobile manufacturer has developed prototypes for a new vehicle. Before introducing the new model into its range, the manufacturer wants to determine which existing vehicles on the market are most like the prototypes--that is, how vehicles can be grouped, which group is the most similar to the model, and therefore which models it will be competing against.
Our objective here is to use clustering methods to find the most distinctive clusters of vehicles. This will summarize the existing vehicles and help the manufacturer make decisions about the supply of new models.
Read data
Let's read the dataset to see what features the manufacturer has collected about the existing models.
The feature sets include price in thousands (price), engine size (engine_s), horsepower (horsepow), wheelbase (wheelbas), width (width), length (length), curb weight (curb_wgt), fuel capacity (fuel_cap) and fuel efficiency (mpg).
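A reading sketch with pandas; the filename and the variable name pdf are assumptions used for illustration:

```python
# Load the vehicle dataset into a DataFrame (the filename is a placeholder).
import pandas as pd

pdf = pd.read_csv('cars_clus.csv')
print('Shape of dataset:', pdf.shape)
pdf.head()
```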
Data Cleaning
Let's simply clean the dataset by dropping the rows that have null values:
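A minimal cleaning sketch, assuming the pdf DataFrame from the reading step:

```python
# Drop rows with missing values and reset the index.
pdf = pdf.dropna()
pdf = pdf.reset_index(drop=True)
print('Shape after cleaning:', pdf.shape)
```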
Feature selection
Let's select our feature set:
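A selection sketch, assuming the dataset uses the short column names listed above (price and type are deliberately left out of the feature set):

```python
# Keep only the numeric features used for clustering.
featureset = pdf[['engine_s', 'horsepow', 'wheelbas', 'width', 'length',
                  'curb_wgt', 'fuel_cap', 'mpg']]
```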
Normalization
Now we can normalize the feature set. MinMaxScaler transforms features by scaling each feature to a given range, which is (0, 1) by default. That is, this estimator scales and translates each feature individually so that it lies between zero and one.
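A normalization sketch on the selected features:

```python
# Scale every feature to the default (0, 1) range.
from sklearn.preprocessing import MinMaxScaler

x = featureset.values
feature_mtx = MinMaxScaler().fit_transform(x)
feature_mtx[0:5]
```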
Clustering using Scipy
In this part we use the SciPy package to cluster the dataset. First, we calculate the distance matrix. In agglomerative clustering, at each iteration the algorithm must update the distance matrix to reflect the distance of the newly formed cluster from the remaining clusters in the forest. The following methods are supported in SciPy for calculating the distance between the newly formed cluster and each remaining one:
- single
- complete
- average
- weighted
- centroid
We use complete for our case, but feel free to change it to see how the results change.
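A sketch of this step, assuming the feature_mtx array from the normalization sketch above:

```python
# Pairwise Euclidean distances, then complete linkage on the condensed form.
from scipy.cluster import hierarchy
from scipy.spatial.distance import pdist, squareform

condensed = pdist(feature_mtx)        # condensed pairwise distances
dist_matrix = squareform(condensed)   # full square matrix, useful for inspection
Z = hierarchy.linkage(condensed, 'complete')
```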
Essentially, hierarchical clustering does not require a pre-specified number of clusters. However, in some applications we want a partition of disjoint clusters, just as in flat clustering. So you can use a cutting line:
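For example, cutting at an arbitrary height (the threshold max_d below is just a placeholder value):

```python
# Cut the dendrogram at a chosen height to obtain flat cluster labels.
from scipy.cluster.hierarchy import fcluster

max_d = 3
clusters = fcluster(Z, max_d, criterion='distance')
clusters
```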
Also, you can determine the number of clusters directly:
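For example (k below is an arbitrary choice):

```python
# Ask for a fixed number of flat clusters instead of a height threshold.
from scipy.cluster.hierarchy import fcluster

k = 5
clusters = fcluster(Z, k, criterion='maxclust')
clusters
```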
Now, plot the dendrogram:
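A plotting sketch; labelling the leaves with make and model assumes columns named manufact and model exist in pdf (these column names are illustrative):

```python
# Draw a large dendrogram with one labelled leaf per vehicle.
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy

plt.figure(figsize=(18, 50))

def llf(idx):
    # Label each leaf with the vehicle's make and model (assumed columns).
    return '[%s %s]' % (pdf['manufact'][idx], pdf['model'][idx])

dendro = hierarchy.dendrogram(Z, leaf_label_func=llf, leaf_rotation=0,
                              leaf_font_size=12, orientation='right')
plt.show()
```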
Clustering using scikit-learn
Let's redo it again, but this time using the scikit-learn package. Now we can use the AgglomerativeClustering class from the scikit-learn library to cluster the dataset. AgglomerativeClustering performs a hierarchical clustering using a bottom-up approach. The linkage criterion determines the metric used for the merge strategy:
Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function, but tackled with an agglomerative hierarchical approach.
Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
Average linkage minimizes the average of the distances between all observations of pairs of clusters.
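A sketch of the scikit-learn step, assuming feature_mtx from the normalization sketch above (the choice of 6 clusters is just one possibility):

```python
# Agglomerative clustering with complete linkage on the normalized features.
from sklearn.cluster import AgglomerativeClustering

agglom = AgglomerativeClustering(n_clusters=6, linkage='complete')
agglom.fit(feature_mtx)
agglom.labels_
```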
And, we can add a new field to our dataframe to show the cluster of each row:
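For example (the column name cluster_ is illustrative):

```python
# Store each row's cluster label in a new column of the DataFrame.
pdf['cluster_'] = agglom.labels_
pdf.head()
```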
As you can see, the scatter plot shows the distribution of each cluster, but it is not very clear where the centroid of each cluster is. Moreover, there are 2 types of vehicles in our dataset, "truck" (value of 0 in the type column) and "car" (value of 1 in the type column). So, we use them to distinguish the classes and summarize the clusters. First we count the number of cases in each group:
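A counting sketch, assuming the cluster_ column added above and a type column in the dataset:

```python
# Number of vehicles in each (cluster, type) combination.
agg_count = pdf.groupby(['cluster_', 'type'])['cluster_'].count()
agg_count
```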
Now we can look at the characteristics of each cluster:
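A profiling sketch, averaging a few key features per (cluster, type) group:

```python
# Average horsepower, engine size, mpg and price within each (cluster, type) group.
agg_cars = pdf.groupby(['cluster_', 'type'])[['horsepow', 'engine_s', 'mpg', 'price']].mean()
agg_cars
```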
It is obvious that we have 3 main clusters with the majority of vehicles in those.
Cars:
Cluster 1: fairly high mpg and low horsepower.
Cluster 2: good mpg and horsepower, but the highest price.
Cluster 3: good mpg, average horsepower, and average price.
Cluster 4: low mpg, low horsepower, and the lowest price.
Trucks:
Cluster 1: nearly the highest mpg among trucks, and the lowest horsepower and price.
Cluster 2: fairly low mpg and medium horsepower, but average price.
Cluster 3: good mpg and horsepower, and a low price.
Please notice that we did not use the type or price of the cars in the clustering process, but hierarchical clustering could still form the clusters and discriminate them with quite high accuracy.