Blog

  • Machine Learning – HDBSCAN Clustering

    Working of HDBSCAN Clustering

    HDBSCAN builds a hierarchy of clusters from a mutual-reachability graph, in which each data point is a node and each pair of points is connected by an edge weighted by their mutual reachability distance.

    The mutual reachability distance between two points is the maximum of three quantities: the distance between the two points and the core distance of each point, where the core distance of a point is its distance to its k-th nearest neighbor (k being the min_samples parameter). This construction pushes points in sparse regions further apart, which makes the resulting hierarchy robust to noise.

    A minimum spanning tree (MST) of the mutual-reachability graph is then computed, and the cluster hierarchy is obtained by removing the MST edges in decreasing order of weight. In the resulting dendrogram, the leaves correspond to individual data points, while the internal nodes correspond to candidate clusters of varying sizes and shapes.

    The HDBSCAN algorithm then condenses this hierarchy: splits that produce a child with fewer than min_cluster_size points are treated as points "falling out" of a cluster rather than as new clusters, which yields a much smaller condensed tree. Finally, clusters are selected from the condensed tree based on their stability, i.e. how long they persist as the density threshold varies, rather than by cutting the tree at a single level.
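
    The following is a minimal sketch of how the mutual reachability distance described above can be computed for a small dataset. It is purely illustrative and is not the internal implementation of the hdbscan library; the min_samples values are assumptions chosen for the example −

    import numpy as np
    from scipy.spatial.distance import cdist

    # Core distance of a point = distance to its min_samples-th nearest neighbor.
    # Mutual reachability distance of two points = max of their two core distances
    # and the actual distance between them.
    def mutual_reachability(X, min_samples=5):
       dist = cdist(X, X)                             # pairwise Euclidean distances
       core = np.sort(dist, axis=1)[:, min_samples]   # k-th nearest neighbor distance (column 0 is the point itself)
       return np.maximum(np.maximum.outer(core, core), dist)

    # Example usage on a tiny random dataset
    X = np.random.rand(10, 2)
    print(mutual_reachability(X, min_samples=3))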

    Implementation in Python

    HDBSCAN is available as a Python library that can be installed using pip. The library provides an implementation of the HDBSCAN algorithm along with several useful functions for data preprocessing and visualization.

    Installation

    To install HDBSCAN, open a terminal window and type the following command −

    pip install hdbscan
    

    Usage

    To use HDBSCAN, first import the hdbscan library along with the other libraries used in this example −

    import hdbscan
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_blobs

    Next, we generate a sample dataset using the make_blobs() function from scikit-learn −

    # generate random dataset with 1000 samples and 3 clusters
    X, y = make_blobs(n_samples=1000, centers=3, random_state=42)

    Now, create an instance of the HDBSCAN class and fit it to the data −

    clusterer = hdbscan.HDBSCAN(min_cluster_size=10, metric='euclidean')

    # fit the data to the clusterer
    clusterer.fit(X)

    This will apply HDBSCAN to the dataset and assign each point to a cluster. To visualize the clustering results, you can plot the data, coloring each point according to its cluster label −

    # get the cluster labels
    labels = clusterer.labels_
    
    # create a colormap for the clusters
    colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk'])
    colors = np.hstack([colors] * 20)

    # plot the data with each point colored according to its cluster label
    plt.figure(figsize=(7.5, 3.5))
    plt.scatter(X[:, 0], X[:, 1], c=colors[labels])
    plt.show()

    This code will produce a scatter plot of the data with each point colored according to its cluster label as follows −

    hdbscan

    HDBSCAN also provides several parameters that can be adjusted to fine-tune the clustering results, as shown in the usage sketch after this list −

    • min_cluster_size − The smallest number of points a grouping must contain to be considered a cluster; smaller groupings are treated as noise.
    • min_samples − The number of samples in a neighborhood for a point to be considered a core point; larger values make the clustering more conservative.
    • cluster_selection_epsilon − A distance threshold below which clusters are merged, similar to running DBSCAN at that epsilon.
    • metric − The distance metric used to measure the dissimilarity between points.
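
    As a usage sketch, these parameters can be passed directly to the HDBSCAN constructor. The parameter values below are illustrative only and should be tuned for your own data −

    import hdbscan
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=1000, centers=3, random_state=42)

    clusterer = hdbscan.HDBSCAN(
       min_cluster_size=15,             # smallest grouping considered a cluster
       min_samples=5,                   # larger values give a more conservative clustering
       cluster_selection_epsilon=0.5,   # merge clusters closer than this distance
       metric='euclidean'               # distance metric for the mutual-reachability graph
    )
    labels = clusterer.fit_predict(X)   # noise points receive the label -1
    print("Clusters found:", len(set(labels)) - (1 if -1 in labels else 0))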

    Advantages of HDBSCAN Clustering

    HDBSCAN has several advantages over other clustering algorithms −

    • Better handling of clusters of varying densities − HDBSCAN can identify clusters of different densities, which is a common problem in many datasets.
    • Ability to detect clusters of different shapes and sizes − HDBSCAN can identify clusters that are not necessarily spherical, which is another common problem in many datasets.
    • No need to specify the number of clusters − HDBSCAN does not require the user to specify the number of clusters, which can be difficult to determine a priori.
    • Robust to noise − HDBSCAN is robust to noisy data and can identify outliers as noise points.
  • Machine Learning – OPTICS Clustering

    OPTICS (Ordering Points To Identify the Clustering Structure) is, like DBSCAN (Density-Based Spatial Clustering of Applications with Noise), a popular density-based clustering algorithm. However, OPTICS has several advantages over DBSCAN, including the ability to identify clusters of varying densities, handle noise, and produce a hierarchical clustering structure.

    Implementation of OPTICS in Python

    To implement OPTICS clustering in Python, we can use the scikit-learn library. The scikit-learn library provides a class called OPTICS that implements the OPTICS algorithm.

    Here’s an example of how to use the OPTICS class in scikit-learn to cluster a dataset −

    Example

    from sklearn.cluster import OPTICS
    from sklearn.datasets import make_blobs
    import matplotlib.pyplot as plt
    
    # Generate sample data
    X, y = make_blobs(n_samples=2000, centers=4, cluster_std=0.60, random_state=0)

    # Cluster the data using OPTICS
    optics = OPTICS(min_samples=50, xi=.05)
    optics.fit(X)

    # Plot the results
    labels = optics.labels_
    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], c=labels, cmap='turbo')
    plt.show()

    In this example, we first generate a sample dataset using the make_blobs function from scikit-learn. We then instantiate an OPTICS object with the min_samples parameter set to 50 and the xi parameter set to 0.05. The min_samples parameter specifies the number of samples in a neighborhood for a point to be considered a core point, and the xi parameter determines the minimum steepness on the reachability plot that constitutes a cluster boundary. We then fit the OPTICS object to the dataset using the fit method. Finally, we plot the results using a scatter plot, where each data point is colored according to its cluster label.

    Output

    When you execute this program, it will produce the following plot as the output −

    optics
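
    In addition to the cluster labels, the fitted OPTICS object exposes the ordering of the points (ordering_) and their reachability distances (reachability_). Plotting the reachability distances in cluster order gives the reachability plot, in which clusters appear as valleys separated by peaks. The sketch below reuses the optics object and dataset X fitted in the example above −

    import numpy as np

    # Reachability plot: bar height = reachability distance, points in cluster order.
    # Valleys correspond to clusters; high peaks separate them.
    reachability = optics.reachability_[optics.ordering_]
    finite = np.isfinite(reachability)   # the first point in the ordering has infinite reachability

    plt.figure(figsize=(7.5, 3.5))
    plt.bar(np.arange(len(reachability))[finite], reachability[finite])
    plt.xlabel("Points (cluster order)")
    plt.ylabel("Reachability distance")
    plt.show()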

    Advantages of OPTICS Clustering

    Following are the advantages of using OPTICS clustering −

    • Ability to handle clusters of varying densities − OPTICS can handle clusters that have varying densities, unlike some other clustering algorithms that require clusters to have uniform densities.
    • Ability to handle noise − OPTICS can identify noise data points that do not belong to any cluster, which is useful for removing outliers from the dataset.
    • Hierarchical clustering structure − OPTICS produces a hierarchical clustering structure that can be useful for analyzing the dataset at different levels of granularity.

    Disadvantages of OPTICS Clustering

    Following are some of the disadvantages of using OPTICS clustering.

    • Sensitivity to parameters − OPTICS requires careful tuning of its parameters, such as the min_samples and xi parameters, which can be challenging.
    • Computational complexity − OPTICS can be computationally expensive for large datasets, especially when using a high min_samples value.
  • Machine Learning – DBSCAN Clustering

    The DBSCAN Clustering algorithm works as follows −

    • Randomly select a data point that has not been visited.
    • If the data point has at least minPts neighbors within distance eps, create a new cluster and add the data point and its neighbors to the cluster. Then repeat the check for every newly added point, so that the cluster grows to include all density-reachable points.
    • If the data point does not have at least minPts neighbors within distance eps, mark it as noise and continue to the next data point (a noise point may later be re-assigned as a border point of a cluster).
    • Repeat the above steps until all data points have been visited (a minimal from-scratch sketch of these steps follows this list).
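
    Below is a minimal from-scratch sketch of these steps. It is meant only to illustrate the algorithm; the scikit-learn implementation used in the rest of this chapter is far more efficient, and the eps and min_pts values are illustrative −

    import numpy as np
    from sklearn.datasets import make_moons

    def dbscan(X, eps=0.2, min_pts=5):
       # Returns one label per point; -1 marks noise.
       n = len(X)
       labels = np.full(n, -1)
       visited = np.zeros(n, dtype=bool)
       dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
       neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]

       cluster_id = 0
       for i in range(n):
          if visited[i]:
             continue
          visited[i] = True
          if len(neighbors[i]) < min_pts:
             continue                        # not a core point: stays noise unless reached later
          labels[i] = cluster_id             # core point: start a new cluster
          queue = list(neighbors[i])
          while queue:                       # expand the cluster to all density-reachable points
             j = queue.pop()
             if labels[j] == -1:
                labels[j] = cluster_id       # border or core point joins the cluster
             if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                   queue.extend(neighbors[j])   # another core point: keep expanding
          cluster_id += 1
       return labels

    X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
    print(np.unique(dbscan(X)))   # expect the labels 0 and 1 (and possibly -1 for noise)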

    Implementation in Python

    We can implement the DBSCAN algorithm in Python using the scikit-learn library. Here are the steps to do so −

    Load the dataset

    The first step is to load the dataset. We will use the make_moons function from the scikit-learn library to generate a toy dataset with two moons.

    from sklearn.datasets import make_moons
    X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

    Perform DBSCAN clustering

    The next step is to perform DBSCAN clustering on the dataset. We will use the DBSCAN class from the scikit-learn library. We will set the min_samples parameter (minPts) to 5 and the eps parameter to 0.2.

    from sklearn.cluster import DBSCAN
    clustering = DBSCAN(eps=0.2, min_samples=5)
    clustering.fit(X)

    Visualize the results

    The final step is to visualize the results of the clustering. We will use the Matplotlib library to create a scatter plot of the dataset colored by the cluster assignments.

    import matplotlib.pyplot as plt
    plt.scatter(X[:,0], X[:,1], c=clustering.labels_, cmap='rainbow')
    plt.show()

    Example

    Here is the complete implementation of DBSCAN clustering in Python −

    from sklearn.datasets import make_moons
    from sklearn.cluster import DBSCAN
    import matplotlib.pyplot as plt

    X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

    clustering = DBSCAN(eps=0.2, min_samples=5)
    clustering.fit(X)
    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], c=clustering.labels_, cmap='rainbow')
    
    plt.show()

    Output

    The resulting scatter plot should show two distinct clusters, each corresponding to one of the moons in the dataset. Noise points are assigned the label -1 and appear in a separate color.

    DBSCAN clustering

    Advantages of DBSCAN

    Following are the advantages of using DBSCAN clustering −

    • DBSCAN can handle clusters of arbitrary shape, unlike k-means, which assumes that clusters are spherical.
    • It does not require prior knowledge of the number of clusters in the dataset, unlike k-means.
    • It can detect outliers, which are points that do not belong to any cluster. This is because DBSCAN defines clusters as dense regions of points, and points that are far from any dense region are considered outliers.
    • It does not involve random initialization, so, unlike k-means, its results do not depend on the initial placement of centroids.
    • It is scalable to large datasets, as it only needs to compute pairwise distances between neighboring points, rather than all pairs of points.

    Disadvantages of DBSCAN

    Following are the disadvantages of using DBSCAN clustering −

    • It can be sensitive to the choice of the epsilon and min_samples parameters. If these parameters are not chosen carefully, DBSCAN may fail to identify clusters or merge them incorrectly.
    • It may not work well on datasets with varying densities, as it assumes that all clusters have the same density.
    • Border points that are within eps of more than one cluster may be assigned differently depending on the order in which points are processed, so results can vary slightly between runs.
    • It may be computationally expensive for high-dimensional datasets, as the distance computations become more expensive as the number of dimensions increases.
    • It may not work well on datasets with noise or outliers if the density of the noise or outliers is too high. In such cases, the noise or outliers may be wrongly assigned to clusters.
  • Machine Learning – Density-Based Clustering

    Density-based clustering is based on the idea that clusters are regions of high density separated by regions of low density.

    • The algorithm works by first identifying “core” data points, which are data points that have a minimum number of neighbors within a specified distance. These core data points form the center of a cluster.
    • Next, the algorithm identifies “border” data points, which are data points that are not core data points but have at least one core data point as a neighbor.
    • Finally, the algorithm identifies “noise” data points, which are data points that are not core data points or border data points.

    Popular Density-based Clustering Algorithms

    Here are the most common density-based clustering algorithms −

    DBSCAN Clustering

    The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is one of the most common density-based clustering algorithms. The DBSCAN algorithm requires two parameters: the minimum number of points required to form a dense region (minPts) and the maximum distance between two points for them to be considered neighbors (eps).

    OPTICS Clustering

    OPTICS (Ordering Points To Identify the Clustering Structure) is a density-based clustering algorithm that processes the dataset to produce an ordering of the points together with a reachability distance for each point. Plotting the reachability distances in this order gives a reachability plot, in which clusters appear as valleys separated by peaks. Clusters can then be extracted from this plot at different density levels, which yields a hierarchical clustering structure.

    HDBSCAN Clustering

    HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) is a clustering algorithm that is based on density clustering. It is a newer algorithm that builds upon the popular DBSCAN algorithm and offers several advantages over it, such as better handling of clusters of varying densities and the ability to detect clusters of different shapes and sizes.

    In the next three chapters, we will discuss all the three density-based clustering algorithms in detail along with their implementation in Python.

  • Hierarchical Clustering in Machine Learning

    Hierarchical Clustering

    Hierarchical clustering is an unsupervised learning algorithm that is used to group together unlabeled data points having similar characteristics. Hierarchical clustering algorithms fall into the following two categories −

    • Agglomerative hierarchical algorithms − In agglomerative hierarchical algorithms, each data point is initially treated as a single cluster, and pairs of clusters are then successively merged or agglomerated (bottom-up approach). The hierarchy of the clusters is represented as a dendrogram or tree structure.
    • Divisive hierarchical algorithms − On the other hand, in divisive hierarchical algorithms, all the data points start as one big cluster, and clustering involves dividing (top-down approach) that one big cluster into various smaller clusters.

    Steps to Perform Agglomerative Hierarchical Clustering

    We are going to explain the most widely used and important type of hierarchical clustering, i.e. agglomerative clustering. The steps to perform it are as follows −

    • Step 1 − Treat each data point as a single cluster. Hence, we start with, say, K clusters, where K is the number of data points.
    • Step 2 − Now, in this step, form a bigger cluster by joining the two closest data points. This results in a total of K-1 clusters.
    • Step 3 − Next, join the two closest clusters to form more clusters. This results in a total of K-2 clusters.
    • Step 4 − Repeat the above steps until all the data points are merged, i.e. until only one big cluster remains.
    • Step 5 − Finally, after one single big cluster has been formed, the dendrogram is used to divide it into multiple clusters, depending upon the problem.

    Role of Dendrograms in Agglomerative Hierarchical Clustering

    As discussed in the last step, the role of the dendrogram starts once the big cluster has been formed. The dendrogram is used to split the cluster into multiple clusters of related data points, depending upon our problem. It can be understood with the help of the following example −

    Example 1

    To understand, let’s start with importing the required libraries as follows −

    %matplotlib inline
    import matplotlib.pyplot as plt
    import numpy as np
    

    Next, we will be plotting the datapoints we have taken for this example −

    X = np.array([[7,8],[12,20],[17,19],[26,15],[32,37],[87,75],[73,85],[62,80],[73,60],[87,96]])
    labels = range(1, 11)
    plt.figure(figsize=(10, 7))
    plt.subplots_adjust(bottom=0.1)
    plt.scatter(X[:,0], X[:,1], label='True Position')
    for label, x, y in zip(labels, X[:,0], X[:,1]):
       plt.annotate(label, xy=(x, y), xytext=(-3, 3), textcoords='offset points', ha='right', va='bottom')
    plt.show()

    Output

    When you execute this code, it will produce the following plot as the output −

    Hierarchical Clustering

    From the above diagram, it is very easy to see we have two clusters in our datapoints but in real-world data, there can be thousands of clusters.

    Next, we will plot the dendrogram of our data points by using the SciPy library −

    from scipy.cluster.hierarchy import dendrogram, linkage
    from matplotlib import pyplot as plt
    linked = linkage(X, 'single')
    labelList = list(range(1, 11))

    plt.figure(figsize=(10, 7))
    dendrogram(linked, orientation='top', labels=labelList,
       distance_sort='descending', show_leaf_counts=True)
    
    plt.show()

    It will produce the following plot −

    Longest Vertical Distance

    Now, once the dendrogram has been formed, we select the longest vertical distance that is not intersected by any horizontal (merge) line and draw a horizontal line through it, as shown in the following diagram. Since this horizontal line crosses two vertical lines, the number of clusters is two.

    Longest Vertical_Distance Selected
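
    Instead of reading the cut off the dendrogram by eye, the cut can also be performed programmatically with SciPy's fcluster function. The sketch below reuses the linked matrix computed above and asks for two flat clusters −

    from scipy.cluster.hierarchy import fcluster

    # Cut the dendrogram so that exactly two flat clusters remain
    flat_labels = fcluster(linked, t=2, criterion='maxclust')
    print(flat_labels)   # one cluster id (1 or 2) per data point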

    Next, we need to import the class for clustering and call its fit_predict method to predict the clusters. We import the AgglomerativeClustering class from the sklearn.cluster module −

    from sklearn.cluster import AgglomerativeClustering
    
    # note: in newer versions of scikit-learn, the 'affinity' argument has been renamed to 'metric'
    cluster = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='ward')
    
    cluster.fit_predict(X)

    Next, plot the cluster with the help of following code −

    plt.scatter(X[:,0],X[:,1], c=cluster.labels_, cmap='rainbow')

    The following diagram shows the two clusters from our datapoints.

    Two Clusters Datapoints

    Example 2

    As we understood the concept of dendrograms from the simple example above, let's move on to another example in which we create clusters of the data points in the Pima Indians Diabetes dataset by using hierarchical clustering −

    import matplotlib.pyplot as plt
    import pandas as pd
    %matplotlib inline
    import numpy as np
    
    from pandas import read_csv
    path = r"C:\pima-indians-diabetes.csv"
    headernames = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
    data = read_csv(path, names=headernames)
    array = data.values
    X = array[:,0:8]
    Y = array[:,8]

    patient_data = data.iloc[:, 3:5].values

    import scipy.cluster.hierarchy as shc
    plt.figure(figsize=(10, 7))
    plt.title("Patient Dendrograms")
    dend = shc.dendrogram(shc.linkage(data, method='ward'))

    from sklearn.cluster import AgglomerativeClustering
    cluster = AgglomerativeClustering(n_clusters=4, affinity='euclidean', linkage='ward')
    cluster.fit_predict(patient_data)

    plt.figure(figsize=(7.2, 5.5))
    plt.scatter(patient_data[:,0], patient_data[:,1], c=cluster.labels_, cmap='rainbow')

    Output

    When you run this code, it will produce the following two plots as the output −

    Hierarchical Clustering1
    Hierarchical Clustering2
  • Mean-Shift Clustering Algorithm in Machine Learning

    Mean-Shift Clustering Algorithm

    The Mean-Shift clustering algorithm is a non-parametric clustering algorithm that works by iteratively shifting the mean of a data point towards the densest area of the data. The densest area of the data is determined by the kernel function, which is a function that assigns weights to the data points based on their distance from the mean. The kernel function used in Mean-Shift clustering is usually a Gaussian function.

    The Mean-Shift clustering algorithm is a powerful clustering algorithm used in unsupervised learning. Unlike K-means clustering, it does not make any assumptions; hence it is a non-parametric algorithm.

    The difference between the K-Means algorithm and Mean-Shift is that the latter does not require the number of clusters to be specified in advance, because the number of clusters is determined by the algorithm from the data.

    Working of Mean-Shift Algorithm

    We can understand the working of Mean-Shift clustering algorithm with the help of following steps −

    • Step 1 − First, start with each data point assigned to a cluster of its own.
    • Step 2 − Next, the algorithm computes the centroids.
    • Step 3 − In this step, the location of the new centroids is updated.
    • Step 4 − Now the process is iterated, and the centroids move towards regions of higher density.
    • Step 5 − Finally, the algorithm stops once the centroids reach a position from which they cannot move any further.

    The Mean-Shift clustering algorithm is a density-based clustering algorithm, which means that it identifies clusters based on the density of the data points rather than the distance between them. In other words, the algorithm identifies clusters based on the areas where the density of the data points is highest.
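
    To make the "shift towards the densest area" concrete, here is a minimal sketch of a single Mean-Shift update using a Gaussian kernel. It is illustrative only and not the scikit-learn implementation; the bandwidth values are assumptions −

    import numpy as np

    # One mean-shift update: move a point to the kernel-weighted mean of all data points.
    def mean_shift_step(point, X, bandwidth=1.0):
       sq_dist = np.sum((X - point) ** 2, axis=1)
       weights = np.exp(-sq_dist / (2 * bandwidth ** 2))   # Gaussian kernel: nearby points weigh more
       return (weights[:, None] * X).sum(axis=0) / weights.sum()

    # Repeatedly applying the update moves a point towards a local density maximum (a mode)
    X = np.random.randn(500, 2)
    p = X[0].copy()
    for _ in range(20):
       p = mean_shift_step(p, X, bandwidth=0.8)
    print("Point converged near a mode:", p)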

    Implementation of Mean-Shift Clustering in Python

    The Mean-Shift clustering algorithm can be implemented in Python programming language using the scikit-learn library. The scikit-learn library is a popular machine learning library in Python that provides various tools for data analysis and machine learning. The following steps are involved in implementing the Mean-Shift clustering algorithm in Python using the scikit-learn library −

    Step 1 − Import the necessary libraries

    The numpy library is used for scientific computing in Python, while the matplotlib library is used for data visualization. The sklearn.cluster library contains the MeanShift class, which is used for implementing the Mean-Shift clustering algorithm in Python.

    The estimate_bandwidth function is used to estimate the bandwidth of the kernel function, which is an important parameter in the Mean-Shift clustering algorithm.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import MeanShift, estimate_bandwidth
    

    Step 2 − Generate the data

    In this step, we generate a random dataset with 500 data points and 2 features. We use the numpy.random.randn function to generate the data.

    # Generate the data
    X = np.random.randn(500,2)

    Step 3 − Estimate the bandwidth of the kernel function

    In this step, we estimate the bandwidth of the kernel function using the estimate_bandwidth function. The bandwidth is an important parameter in the Mean-Shift clustering algorithm, which determines the width of the kernel function.

    # Estimate the bandwidth
    bandwidth = estimate_bandwidth(X, quantile=0.1, n_samples=100)

    Step 4 − Initialize the Mean-Shift clustering algorithm

    In this step, we initialize the Mean-Shift clustering algorithm using the MeanShift class. We pass the bandwidth parameter to the class to set the width of the kernel function.

    # Initialize the Mean-Shift algorithm
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)

    Step 5 − Train the model

    In this step, we train the Mean-Shift clustering algorithm on the dataset using the fit method of the MeanShift class.

    # Train the model
    ms.fit(X)

    Step 6 − Visualize the results

    # Visualize the results
    labels = ms.labels_
    cluster_centers = ms.cluster_centers_
    n_clusters_ = len(np.unique(labels))
    print("Number of estimated clusters:", n_clusters_)

    # Plot the data points and the centroids
    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], c=labels, cmap='viridis')
    plt.scatter(cluster_centers[:,0], cluster_centers[:,1], marker='*', s=300, c='r')
    plt.show()

    In this step, we visualize the results of the Mean-Shift clustering algorithm. We extract the cluster labels and the cluster centers from the trained model. We then print the number of estimated clusters. Finally, we plot the data points and the centroids using the matplotlib library.

    Complete Example

    Here is the complete implementation example of Mean-Shift Clustering Algorithm in python −

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import MeanShift, estimate_bandwidth
    
    # Generate the data
    X = np.random.randn(500, 2)

    # Estimate the bandwidth
    bandwidth = estimate_bandwidth(X, quantile=0.1, n_samples=100)

    # Initialize the Mean-Shift algorithm
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)

    # Train the model
    ms.fit(X)

    # Visualize the results
    labels = ms.labels_
    cluster_centers = ms.cluster_centers_
    n_clusters_ = len(np.unique(labels))
    print("Number of estimated clusters:", n_clusters_)

    # Plot the data points and the centroids
    plt.figure(figsize=(7.5, 3.5))
    plt.scatter(X[:,0], X[:,1], c=labels, cmap='summer')
    plt.scatter(cluster_centers[:,0], cluster_centers[:,1], marker='*', s=200, c='r')
    plt.show()

    Output

    When you execute the program, it will produce the following plot as the output −

    Mean Shift Clustering

    Example

    It is a simple example to understand how the Mean-Shift algorithm works. In this example, we first generate a dataset containing three blobs and then apply the Mean-Shift algorithm to see the result.

    %matplotlib inline
    import numpy as np
    from sklearn.cluster import MeanShift
    import matplotlib.pyplot as plt
    from matplotlib import style
    style.use("ggplot")

    from sklearn.datasets import make_blobs

    centers = [[3,3,3], [4,5,5], [3,10,10]]
    X, _ = make_blobs(n_samples=700, centers=centers, cluster_std=0.5)

    plt.scatter(X[:,0], X[:,1])
    plt.show()

    Output

    2D data points with 3 blobs

    ms = MeanShift()
    ms.fit(X)
    labels = ms.labels_
    cluster_centers = ms.cluster_centers_
    print(cluster_centers)

    n_clusters_ = len(np.unique(labels))
    print("Estimated clusters:", n_clusters_)

    colors = 10 * ['r.', 'g.', 'b.', 'c.', 'k.', 'y.', 'm.']
    for i in range(len(X)):
       plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize=3)
    plt.scatter(cluster_centers[:,0], cluster_centers[:,1],
       marker=".", color='k', s=20, linewidths=5, zorder=10)
    plt.show()

    Output

    [[ 4.03457771  5.03063843  4.92928409]
     [ 3.01124859  2.9957586   2.981767  ]
     [ 2.94969928 10.00712673 10.01575558]]
    Estimated clusters: 3
    
    Visualizing Clusters

    Applications of Mean-Shift Clustering

    The Mean-Shift clustering algorithm has several applications in various fields. Some of the applications of Mean-Shift clustering are as follows −

    • Computer vision − Mean-Shift clustering is widely used in computer vision for object tracking, image segmentation, and feature extraction.
    • Image processing − Mean-Shift clustering is used for image segmentation, which is the process of dividing an image into multiple segments based on the similarity of the pixels.
    • Anomaly detection − Mean-Shift clustering can be used for detecting anomalies in data by identifying the areas with low density.
    • Customer segmentation − Mean-Shift clustering can be used for customer segmentation in marketing by identifying groups of customers with similar behavior and preferences.
    • Social network analysis − Mean-Shift clustering can be used for clustering users in social networks based on their interests and interactions.

    Advantages and Disadvantages

    Let’s discuss some advantages and disadvantages of the Mean-Shift clustering algorithm.

    Advantages

    The following are some advantages of Mean-Shift clustering algorithm −

    • It does not need to make any model assumptions, as K-means or Gaussian mixture models do.
    • It can also model complex clusters with non-convex shapes.
    • It needs only one parameter, the bandwidth, and the number of clusters is determined automatically.
    • There is no issue of local minima, as there is in K-means.
    • It is robust to outliers.

    Disadvantages

    The following are some disadvantages of Mean-Shift clustering algorithm −

    • The Mean-Shift algorithm does not work well in high dimensions, where the number of clusters can change abruptly.
    • We have no direct control over the number of clusters, but in some applications we need a specific number of clusters.
    • It cannot differentiate between meaningful and meaningless modes.
  • Machine Learning – K-Medoids Clustering

    K-Medoids Clustering – Algorithm

    The K-medoids clustering algorithm can be summarized as follows −

    • Initialize k medoids − Select k random data points from the dataset as the initial medoids.
    • Assign data points to medoids − Assign each data point to the nearest medoid.
    • Update medoids − For each cluster, select the data point that minimizes the sum of distances to all the other data points in the cluster, and set it as the new medoid.
    • Repeat steps 2 and 3 until convergence or a maximum number of iterations is reached (a minimal sketch of these steps is shown below).
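
    A minimal from-scratch sketch of these steps is shown below. It uses Euclidean distances and a small random dataset for illustration only; it is not the library implementation used later in this chapter −

    import numpy as np

    def k_medoids(X, k=3, max_iter=100, random_state=42):
       rng = np.random.default_rng(random_state)
       dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
       medoids = rng.choice(len(X), size=k, replace=False)             # step 1: k random medoids
       for _ in range(max_iter):
          labels = np.argmin(dist[:, medoids], axis=1)                 # step 2: assign to nearest medoid
          new_medoids = medoids.copy()
          for c in range(k):
             members = np.where(labels == c)[0]
             # step 3: the member minimizing the total distance to the others becomes the new medoid
             new_medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
          if np.array_equal(new_medoids, medoids):                     # step 4: stop on convergence
             break
          medoids = new_medoids
       labels = np.argmin(dist[:, medoids], axis=1)
       return medoids, labels

    X = np.random.rand(200, 2)
    medoid_idx, labels = k_medoids(X, k=3)
    print("Medoid indices:", medoid_idx)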

    Implementation in Python

    To implement K-medoids clustering in Python, we can use the scikit-learn-extra library, which extends scikit-learn. It provides the KMedoids class, which can be used to perform K-medoids clustering on a dataset.

    First, we need to import the required libraries −

    from sklearn_extra.cluster import KMedoids
    from sklearn.datasets import make_blobs
    import matplotlib.pyplot as plt
    

    Next, we generate a sample dataset using the make_blobs() function from scikit-learn −

    X, y = make_blobs(n_samples=500, centers=3, random_state=42)

    Here, we generate a dataset with 500 data points and 3 clusters.

    Next, we initialize the KMedoids class and fit the data −

    kmedoids = KMedoids(n_clusters=3, random_state=42)
    kmedoids.fit(X)

    Here, we set the number of clusters to 3 and use the random_state parameter to ensure reproducibility.

    Finally, we can visualize the clustering results using a scatter plot −

    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], c=kmedoids.labels_, cmap='viridis')
    plt.scatter(kmedoids.cluster_centers_[:,0], kmedoids.cluster_centers_[:,1],
       marker='x', color='red')
    plt.show()

    Example

    Here is the complete implementation in Python −

    from sklearn_extra.cluster import KMedoids
    from sklearn.datasets import make_blobs
    import matplotlib.pyplot as plt
    
    # Generate sample data
    X, y = make_blobs(n_samples=500, centers=3, random_state=42)

    # Cluster the data using KMedoids
    kmedoids = KMedoids(n_clusters=3, random_state=42)
    kmedoids.fit(X)

    # Plot the results
    plt.figure(figsize=(7.5, 3.5))
    plt.scatter(X[:,0], X[:,1], c=kmedoids.labels_, cmap='viridis')
    plt.scatter(kmedoids.cluster_centers_[:,0], kmedoids.cluster_centers_[:,1],
       marker='x', color='red')
    plt.show()

    Output

    Here, we plot the data points as a scatter plot and color them based on their cluster labels. We also plot the medoids as red crosses.

    medoids

    K-Medoids Clustering – Advantages

    Here are the advantages of using K-medoids clustering −

    • Robust to outliers and noise − K-medoids clustering is more robust to outliers and noise than K-means clustering because it uses a representative data point, called a medoid, to represent the center of the cluster.
    • Can handle non-Euclidean distance metrics − K-medoids clustering can be used with any distance metric, including non-Euclidean distance metrics, such as Manhattan distance and cosine similarity.
    • Interpretable cluster centers − The cluster centers are medoids, i.e. actual data points from the dataset, which makes the resulting clusters easier to interpret than the synthetic centroids produced by K-means.

    K-Medoids Clustering – Disadvantages

    The disadvantages of using K-medoids clustering are as follows −

    • Sensitive to the choice of k − The performance of K-medoids clustering can be sensitive to the choice of k, the number of clusters.
    • Not suitable for high-dimensional data − K-medoids clustering may not perform well on high-dimensional data because the medoid selection process becomes computationally expensive.
  • K-Means Clustering Algorithm in Machine Learning

    K-Means Clustering Algorithm

    The K-means clustering algorithm computes the centroids and iterates until it finds the optimal centroids. It assumes that the number of clusters is already known. It is also called a flat clustering algorithm. The number of clusters identified from the data by the algorithm is represented by ‘K’ in K-means.

    In this algorithm, the data points are assigned to clusters in such a manner that the sum of the squared distances between the data points and the centroids is as small as possible. It is to be understood that less variation within a cluster means that the data points within it are more similar to each other.

    Working of K-Means Algorithm

    We can understand the working of K-Means clustering algorithm with the help of following steps −

    • Step 1 − First, we need to specify the number of clusters, K, to be generated by this algorithm.
    • Step 2 − Next, randomly select K data points as the initial centroids and assign each data point to its nearest centroid. In simple words, classify the data based on the chosen number of centroids.
    • Step 3 − Now the algorithm computes the cluster centroids.
    • Step 4 − Next, keep iterating the following until we find the optimal centroids, i.e. until the assignment of data points to clusters no longer changes −
       4.1 − Compute the sum of squared distances between the data points and the centroids.
       4.2 − Assign each data point to the cluster whose centroid is closest.
       4.3 − Recompute the centroid of each cluster by taking the average of all data points in that cluster.

    K-means follows Expectation-Maximization approach to solve the problem. The Expectation-step is used for assigning the data points to the closest cluster and the Maximization-step is used for computing the centroid of each cluster.
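
    The following is a minimal from-scratch sketch of these two alternating steps (the E-step assigns points to the nearest centroid, the M-step recomputes the centroids). It is illustrative only, not the scikit-learn implementation used later in this chapter −

    import numpy as np

    def k_means(X, k=3, max_iter=100, random_state=0):
       rng = np.random.default_rng(random_state)
       centroids = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids
       for _ in range(max_iter):
          # E-step: assign each point to the nearest centroid
          labels = np.argmin(np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1), axis=1)
          # M-step: recompute each centroid as the mean of its assigned points
          # (empty-cluster handling is omitted for brevity)
          new_centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
          if np.allclose(new_centroids, centroids):                # stop when the centroids no longer move
             break
          centroids = new_centroids
       return centroids, labels

    X = np.random.rand(300, 2)
    centroids, labels = k_means(X, k=3)
    print(centroids)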

    While working with K-means algorithm we need to take care of the following things −

    • While working with clustering algorithms including K-Means, it is recommended to standardize the data because such algorithms use distance-based measurement to determine the similarity between data points.
    • Due to the iterative nature of K-Means and the random initialization of centroids, K-Means may get stuck in a local optimum and may not converge to the global optimum. That is why it is recommended to use different initializations of the centroids.

    The K-Means algorithm is a straightforward and efficient algorithm, and it can handle large datasets. However, it has some limitations, such as its sensitivity to the initial centroids, its tendency to converge to local optima, and its assumption of equal variance for all clusters.
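
    Putting the two recommendations above into practice, the short sketch below standardizes the features and lets K-means run several random initializations, keeping the best result. The dataset and parameter values are illustrative −

    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=500, centers=3, random_state=42)

    # Standardize the features so that no single feature dominates the distance computation
    X_scaled = StandardScaler().fit_transform(X)

    # n_init=10 runs K-means with 10 different centroid initializations and keeps the best run
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X_scaled)
    print("Inertia of the best run:", kmeans.inertia_)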

    Objective of K-means Clustering

    The main goals of cluster analysis are −

    • To get a meaningful intuition from the data we are working with.
    • Cluster-then-predict, where different models are built for different subgroups of the data.

    Implementation of K-Means Algorithm Using Python

    Python has several libraries that provide implementations of various machine learning algorithms, including K-Means clustering. Let’s see how to implement the K-Means algorithm in Python using the scikit-learn library.

    Example 1

    It is a simple example to understand how K-means works. In this example, we generate 300 random data points with two features and apply the K-means algorithm to group them into clusters.

    Step 1 − Import Required Libraries

    To implement the K-Means algorithm in Python, we first need to import the required libraries. We will use the numpy and matplotlib libraries for data processing and visualization, respectively, and the scikit-learn library for the K-Means algorithm.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    

    Step 2 − Generate Data

    To test the K-Means algorithm, we need to generate some sample data. In this example, we will generate 300 random data points with two features. We will visualize the data also.

    X = np.random.rand(300,2)
    
    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], s=20)
    plt.show()

    Output

    K-Means Clustering

    Step 3 − Initialize K-Means

    Next, we need to initialize the K-Means algorithm by specifying the number of clusters (K) and the maximum number of iterations.

    kmeans = KMeans(n_clusters=3, max_iter=100)

    Step 4 − Train the Model

    After initializing the K-Means algorithm, we can train the model by fitting the data to the algorithm.

    kmeans.fit(X)

    Step 5 − Visualize the Clusters

    To visualize the clusters, we can plot the data points and color them based on their assigned cluster.

    plt.figure(figsize=(7.5,3.5))
    plt.scatter(X[:,0], X[:,1], c=kmeans.labels_, s=20, cmap='summer')
    plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1],
       marker='x', c='r', s=50, alpha=0.9)
    plt.show()

    Output

    The output of the above code will be a plot with the data points colored based on their assigned cluster, and the centroids marked with an ‘x’ symbol in red color.

    K-Means Clustering Plot

    Example 2

    In this example, we will first generate a 2D dataset containing four different blobs and then apply the k-means algorithm to see the result.

    First, we will start by importing the necessary packages −

    %matplotlib inline
    import matplotlib.pyplot as plt
    import seaborn as sns; sns.set()
    import numpy as np
    from sklearn.cluster import KMeans
    

    The following code will generate the 2D dataset containing four blobs −

    from sklearn.datasets import make_blobs
    X, y_true = make_blobs(n_samples=400, centers=4, cluster_std=0.60, random_state=0)

    Next, the following code will help us to visualize the dataset −

    plt.scatter(X[:,0], X[:,1], s=20)
    plt.show()

    Visualizing 2D Blobs

    Next, create a KMeans object, providing the number of clusters, train the model, and make predictions as follows −

    kmeans = KMeans(n_clusters=4)
    kmeans.fit(X)
    y_kmeans = kmeans.predict(X)

    Now, with the help of the following code, we can plot and visualize the cluster centers picked by the k-means estimator −

    plt.scatter(X[:,0], X[:,1], c=y_kmeans, s=20, cmap='summer')
    centers = kmeans.cluster_centers_
    plt.scatter(centers[:,0], centers[:,1], c='blue', s=100, alpha=0.9)
    plt.show()

    Visualizing Cluster Centers

    Example 3

    Let us move to another example in which we are going to apply K-means clustering on simple digits dataset. K-means will try to identify similar digits without using the original label information.

    First, we will start by importing the necessary packages −

    %matplotlib inline
    import matplotlib.pyplot as plt
    import seaborn as sns; sns.set()
    import numpy as np
    from sklearn.cluster import KMeans
    

    Next, load the digits dataset from sklearn and create an object for it. We can also find the number of rows and columns in this dataset as follows −

    from sklearn.datasets import load_digits
    digits = load_digits()
    digits.data.shape
    

    Output

    (1797, 64)
    

    The above output shows that this dataset has 1797 samples with 64 features.

    We can perform the clustering as we did in Example 1 above −

    kmeans = KMeans(n_clusters=10, random_state=0)
    clusters = kmeans.fit_predict(digits.data)
    kmeans.cluster_centers_.shape
    

    Output

    (10, 64)
    

    The above output shows that K-means created 10 cluster centers, each with 64 features.

    fig, ax = plt.subplots(2, 5, figsize=(8, 3))
    centers = kmeans.cluster_centers_.reshape(10, 8, 8)
    for axi, center in zip(ax.flat, centers):
       axi.set(xticks=[], yticks=[])
       axi.imshow(center, interpolation='nearest', cmap=plt.cm.binary)

    Output

    As output, we will get the following image showing the cluster centers learned by k-means.

    Visualizing Digits Clusters Centers

    The following lines of code will match each learned cluster label with the most common true label found within it −

    from scipy.stats import mode

    labels = np.zeros_like(clusters)
    for i in range(10):
       mask = (clusters == i)
       labels[mask] = mode(digits.target[mask])[0]

    Next, we can check the accuracy as follows −

    from sklearn.metrics import accuracy_score
    accuracy_score(digits.target, labels)

    Output

    0.7935447968836951
    

    The above output shows that the accuracy is around 80%.

    Advantages of K-Means Clustering Algorithm

    The following are some advantages of K-Means clustering algorithms −

    • It is very easy to understand and implement.
    • If we have a large number of variables, then K-means is faster than hierarchical clustering.
    • When the centroids are recomputed, an instance can change its cluster.
    • Tighter clusters are formed with K-means as compared to hierarchical clustering.

    Disadvantages of K-Means Clustering Algorithm

    The following are some disadvantages of K-Means clustering algorithms −

    • It is a bit difficult to predict the number of clusters, i.e. the value of K.
    • The output is strongly impacted by the initial inputs, such as the number of clusters (the value of K).
    • The order of the data can have a strong impact on the final output.
    • It is very sensitive to rescaling. If we rescale our data by means of normalization or standardization, the output will change completely.
    • It does not do a good clustering job if the clusters have a complicated geometric shape.

    Applications of K-Means Clustering

    K-Means clustering is a versatile algorithm with various applications in several fields. Here we have highlighted some of the important applications −

    Image Segmentation

    K-Means clustering can be used to segment an image into different regions based on the color or texture of the pixels. This technique is widely used in computer vision applications, such as object recognition, image retrieval, and medical imaging.

    Customer Segmentation

    K-Means clustering can be used to segment customers into different groups based on their purchasing behavior or demographic characteristics. This technique is widely used in marketing applications, such as customer retention, loyalty programs, and targeted advertising.

    Anomaly Detection

    K-Means clustering can be used to detect anomalies in a dataset by identifying data points that do not belong to any cluster. This technique is widely used in fraud detection, network intrusion detection, and predictive maintenance.

    Genomic Data Analysis

    K-Means clustering can be used to analyze gene expression data to identify different groups of genes that are co-regulated or co-expressed. This technique is widely used in bioinformatics applications, such as drug discovery, disease diagnosis, and personalized medicine.

  • Machine Learning – Centroid-Based Clustering

    Centroid-based clustering is a class of machine learning algorithms that aims to partition a dataset into groups or clusters based on the proximity of data points to the centroid of each cluster.

    The centroid of a cluster is the arithmetic mean of all the data points in that cluster and serves as a representative point for that cluster.

    The two most popular centroid-based clustering algorithms are −

    K-means Clustering

    K-Means clustering is a popular unsupervised machine learning algorithm used for clustering data. It is a simple and efficient algorithm that can group data points into K clusters based on their similarity. The algorithm works by first randomly selecting K centroids, which are the initial centers of each cluster. Each data point is then assigned to the cluster whose centroid is closest to it. The centroids are then updated by taking the mean of all the data points in the cluster. This process is repeated until the centroids no longer move or the maximum number of iterations is reached.

    K-Medoids Clustering

    K-medoids clustering is a partition-based clustering algorithm that is used to cluster a set of data points into “k” clusters. Unlike K-means clustering, which uses the mean value of the data points to represent the center of the cluster, K-medoids clustering uses a representative data point, called a medoid, to represent the center of the cluster. The medoid is the data point that minimizes the sum of the distances between it and all the other data points in the cluster. This makes K-medoids clustering more robust to outliers and noise than K-means clustering.

    We will discuss these two clustering methods in the next two chapters.

  • Clustering Algorithms in Machine Learning

    Clustering algorithms are among the most useful unsupervised machine learning methods. They are used to find similarity and relationship patterns among data samples, and then to cluster those samples into groups that are similar with respect to their features.

    Clustering is important because it determines the intrinsic grouping within unlabeled data. Each algorithm makes some assumptions about what constitutes similarity between data points, and each set of assumptions leads to different but equally valid clusters.

    For example, the diagram below shows how a clustering system groups similar kinds of data into different clusters −

    clustering system grouped

    Cluster Formation Methods

    It is not necessary that clusters are formed in a spherical shape. The following are some other cluster formation methods −

    Density-based

    In these methods, the clusters are formed as dense regions. The advantage of these methods is that they have good accuracy as well as a good ability to merge two clusters. Ex. Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Ordering Points To Identify the Clustering Structure (OPTICS), etc.

    Hierarchical-based

    In these methods, the clusters are formed as a tree-type structure based on the hierarchy. They have two categories, namely Agglomerative (bottom-up approach) and Divisive (top-down approach). Ex. Clustering Using Representatives (CURE), Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), etc.

    Partitioning

    In these methods, the clusters are formed by partitioning the objects into k clusters. The number of clusters is equal to the number of partitions. Ex. K-means, Clustering Large Applications based upon Randomized Search (CLARANS).

    Grid

    In these methods, the clusters are formed as a grid-like structure. The advantage of these methods is that all the clustering operations performed on these grids are fast and independent of the number of data objects. Ex. Statistical Information Grid (STING), Clustering in Quest (CLIQUE).

    Clustering Algorithms in Machine Learning

    The following are the most important and useful machine learning clustering algorithms −

    • K-Means Clustering
    • K-Medoids Clustering
    • Mean-Shift Clustering
    • DBSCAN Clustering
    • OPTICS Clustering
    • HDBSCAN Clustering
    • BIRCH algorithm
    • Affinity Propagation Clustering
    • Agglomerative Clustering
    • Gaussian Mixture Model

    K-Means Clustering

    The K-Means clustering algorithm computes the centroids and iterates until it finds the optimal centroids. It assumes that the number of clusters is already known. It is also called a flat clustering algorithm. The number of clusters identified from the data by the algorithm is represented by ‘K’ in K-means.

    K-Medoids Clustering

    K-Medoids clustering is a variant of the K-means clustering algorithm that is more robust to outliers. It works as follows −

    • Select k random data points from the dataset as the initial medoids.
    • Assign each data point to the nearest medoid.
    • For each cluster, select the data point that minimizes the sum of distances to all the other data points in the cluster, and set it as the new medoid.
    • Repeat steps 2 and 3 until convergence or a maximum number of iterations is reached.

    Mean-Shift Clustering

    Mean-Shift clustering is another powerful clustering algorithm used in unsupervised learning. Unlike K-means clustering, it does not make any assumptions about the number or shape of the clusters; hence it is a non-parametric algorithm.

    DBSCAN Clustering

    The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is one of the most common density-based clustering algorithms. The DBSCAN algorithm requires two parameters: the minimum number of points required to form a dense region (minPts) and the maximum distance between two points for them to be considered neighbors (eps).

    OPTICS Clustering

    OPTICS (Ordering Points to Identify the Clustering Structure) is like DBSCAN, another popular density-based clustering algorithm. However, OPTICS has several advantages over DBSCAN, including the ability to identify clusters of varying densities, the ability to handle noise, and the ability to produce a hierarchical clustering structure.

    HDBSCAN Clustering

    HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) is a clustering algorithm that is based on density clustering. It is a newer algorithm that builds upon the popular DBSCAN algorithm and offers several advantages over it, such as better handling of clusters of varying densities and the ability to detect clusters of different shapes and sizes.

    BIRCH algorithm

    BIRCH (Balanced Iterative Reducing and Clustering hierarchies) is a hierarchical clustering algorithm that is designed to handle large datasets efficiently. The algorithm builds a treelike structure of clusters by recursively partitioning the data into subclusters until a stopping criterion is met.

    Affinity Propagation Clustering

    Affinity Propagation is a clustering algorithm that identifies “exemplars” in a dataset and assigns each data point to one of these exemplars. It is a type of clustering algorithm that does not require a pre-specified number of clusters, making it a useful tool for exploratory data analysis. Affinity Propagation was introduced by Frey and Dueck in 2007 and has since been widely used in many fields such as biology, computer vision, and social network analysis.

    Agglomerative Clustering

    Agglomerative clustering is a hierarchical clustering algorithm that starts with each data point as its own cluster and iteratively merges the closest clusters until a stopping criterion is reached. It is a bottom-up approach that produces a dendrogram, which is a tree-like diagram that shows the hierarchical relationship between the clusters. The algorithm can be implemented using the scikit-learn library in Python.

    Gaussian Mixture Model

    Gaussian Mixture Models (GMM) is a popular clustering algorithm used in machine learning that assumes that the data is generated from a mixture of Gaussian distributions. In other words, GMM tries to fit a set of Gaussian distributions to the data, where each Gaussian distribution represents a cluster in the data.

    Measuring Clustering Performance

    One of the most important considerations regarding any ML model is assessing its performance, or you can say the model's quality. In the case of supervised learning algorithms, assessing the quality of a model is easy because we already have labels for every example.

    In the case of unsupervised learning algorithms, we are not that fortunate because we deal with unlabeled data. But we still have some metrics that give the practitioner insight into how the clusters change depending on the algorithm.

    Before we dive deep into such metrics, we must understand that these metrics only evaluate the comparative performance of models against each other rather than measuring the validity of a model's predictions. The following are some of the metrics that we can apply to clustering algorithms to measure the quality of the model −

    1. Silhouette Analysis
    2. Davies-Bouldin Index
    3. Dunn Index

    1. Silhouette Analysis

    Silhouette analysis is used to check the quality of a clustering model by measuring the separation between the clusters. It basically provides us a way to assess parameters such as the number of clusters with the help of the Silhouette score. This score measures how close each point in one cluster is to the points in the neighboring clusters.

    Analysis of Silhouette Score

    The range of Silhouette score is [-1, 1]. Its analysis is as follows −

    • +1 Score − A Silhouette score near +1 indicates that the sample is far away from its neighboring cluster.
    • 0 Score − A Silhouette score of 0 indicates that the sample is on or very close to the decision boundary separating two neighboring clusters.
    • -1 Score − A Silhouette score near -1 indicates that the sample has been assigned to the wrong cluster.

    The Silhouette score can be calculated by using the following formula −

    $S = \frac{b - a}{\max(a, b)}$

    Here, $b$ is the mean distance to the points in the nearest cluster that the sample is not a part of, and $a$ is the mean intra-cluster distance, i.e. the mean distance to all the other points in the same cluster.
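
    In practice, the Silhouette score can be computed with scikit-learn's silhouette_score function; the dataset below is illustrative −

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Mean Silhouette coefficient over all samples, in the range [-1, 1]
    print("Silhouette score:", silhouette_score(X, labels))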

    2. Davies-Bouldin Index

    The Davies-Bouldin (DB) index is another good metric for analyzing clustering algorithms. With the help of the DB index, we can understand the following points about a clustering model −

    • Whether the clusters are well-spaced from each other or not.
    • How dense the clusters are.

    We can calculate the DB index with the help of the following formula −

    $DB = \frac{1}{n}\sum_{i=1}^{n}\max_{j \neq i}\left(\frac{\sigma_i + \sigma_j}{d(c_i, c_j)}\right)$

    Here, $n$ is the number of clusters, $\sigma_i$ is the average distance of all points in cluster $i$ from the cluster centroid $c_i$, and $d(c_i, c_j)$ is the distance between the centroids of clusters $i$ and $j$.

    The lower the DB index, the better the clustering model.
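
    scikit-learn implements this metric as davies_bouldin_score; a brief sketch on an illustrative dataset −

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import davies_bouldin_score

    X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Lower values indicate denser, better-separated clusters
    print("Davies-Bouldin index:", davies_bouldin_score(X, labels))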

    3. Dunn Index

    The Dunn index works toward the same goal as the DB index, but the two differ in the following ways −

    • The Dunn index considers only the worst case, i.e. the clusters that are closest together, while the DB index considers the dispersion and separation of all clusters in the clustering model.
    • The Dunn index increases as the clustering performance improves, while the DB index decreases as the clusters become well-spaced and dense.

    We can calculate the Dunn index with the help of the following formula −

    $D = \frac{\min_{1 \le i < j \le n} p(i, j)}{\max_{1 \le k \le n} q(k)}$

    Here, $i$, $j$ and $k$ are indices of clusters, $p(i, j)$ is the inter-cluster distance between clusters $i$ and $j$, and $q(k)$ is the intra-cluster distance of cluster $k$.
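
    scikit-learn does not ship a Dunn index function, so the minimal sketch below computes it directly, taking the smallest distance between points of different clusters divided by the largest intra-cluster diameter; the dataset is illustrative −

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    def dunn_index(X, labels):
       dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
       clusters = np.unique(labels)
       # largest distance between two points of the same cluster (intra-cluster diameter)
       max_diameter = max(dist[np.ix_(labels == c, labels == c)].max() for c in clusters)
       # smallest distance between two points of different clusters (inter-cluster distance)
       min_separation = min(dist[np.ix_(labels == a, labels == b)].min()
                            for a in clusters for b in clusters if a != b)
       return min_separation / max_diameter

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print("Dunn index:", dunn_index(X, labels))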

    Applications of Clustering

    We can find clustering useful in the following areas −

    Data summarization and compression − Clustering is widely used in the areas where we require data summarization, compression and reduction as well. The examples are image processing and vector quantization.

    Collaborative systems and customer segmentation − Since clustering can be used to find similar products or same kind of users, it can be used in the area of collaborative systems and customer segmentation.

    Serve as a key intermediate step for other data mining tasks − Cluster analysis can generate a compact summary of data for classification, testing, hypothesis generation; hence, it serves as a key intermediate step for other data mining tasks also.

    Trend detection in dynamic data − Clustering can also be used for trend detection in dynamic data by making various clusters of similar trends.

    Social network analysis − Clustering can be used in social network analysis, for example to detect communities of users with similar interests or interaction patterns.

    Biological data analysis − Clustering can be used to group genes with similar expression patterns or similar biological sequences; hence it can successfully be used in biological data analysis.