Clustering Algorithms in Data Mining

This article attempts to systematize and give a holistic view of the latest achievements in the development of effective approaches to data clustering. Its purpose is not to describe every clustering algorithm in detail. Rather, its structure as a review, and the issues it raises, will help you navigate the large number of clustering algorithms.

Introduction

Clustering, the process of combining similar objects into groups, is one of the fundamental tasks in data analysis and data mining. Its range of applications is wide: image segmentation, marketing, anti-fraud procedures, impact analysis, text analysis, and so on. Today, clustering is often the first step in data analysis: after grouping, other methods are applied and a separate model is built for each group.

The clustering problem has been described in one form or another in such scientific fields as statistics, pattern recognition, optimization, and machine learning. This explains the wide range of synonyms for the concept of a cluster, such as class and taxon.

At present, the number of methods for splitting a set of objects into clusters is quite large: several dozen algorithms and even more modifications of them. Here, however, we review clustering algorithms from the point of view of their application in data mining.

Clustering in Data Mining

Clustering in data mining becomes valuable when it acts as one stage of data analysis within a complete analytical solution. It is often easier for an analyst to identify groups of similar objects, study their features, and build a separate model for each group than to create a single general model for all the data. This technique is constantly used in marketing to identify groups of customers, buyers, and products, and to develop a separate strategy for each of them.

Very often, the data used in data mining has the following important features:

  • high dimensionality (thousands of fields) and large volume (hundreds of thousands and millions of records) of database tables and data stores (ultra-large databases);
  • the data sets contain a large number of numeric and categorical attributes.

All attributes or features of objects are divided into numeric and categorical. Numeric attributes are those whose values can be arranged in order in some space; categorical attributes are those whose values cannot. For example, the "age" attribute is numeric, and the "color" attribute is categorical. Assigning values to attributes happens during measurement with a selected type of scale, which is, generally speaking, a separate task.

Most clustering algorithms compare objects with each other based on some measure of proximity (similarity): a bounded value that increases as the objects become more similar. Similarity measures are constructed according to special rules, and the choice of a specific measure depends on the task as well as on the measurement scale. Euclidean distance, calculated using the formula below, is often used as a proximity measure for numeric attributes:

D(x, y) = \sqrt{\sum_{i}{(x_i - y_i)^2}}

The Czekanowski-Sørensen and Jaccard similarity coefficients are commonly used for categorical attributes; the Jaccard coefficient, for example, is \mid t_1 \cap t_2 \mid / \mid t_1 \cup t_2 \mid.
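Both measures can be sketched in a few lines of Python (a minimal illustration, not tied to any particular library):

```python
import math

def euclidean(x, y):
    # Distance between two numeric attribute vectors of equal length
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def jaccard(t1, t2):
    # |t1 ∩ t2| / |t1 ∪ t2| for two sets of categorical values
    t1, t2 = set(t1), set(t2)
    return len(t1 & t2) / len(t1 | t2)

euclidean([1, 2], [4, 6])                         # 5.0
jaccard({"red", "blue"}, {"blue", "green", "red"})  # 2/3
```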

The need to process large amounts of data in data mining has led to requirements that, where applicable, a clustering algorithm should meet. Let's consider them:

  1. The minimum possible number of passes through the database;
  2. Operation with a limited amount of RAM;
  3. Ability to interrupt the algorithm and save the intermediate results so that you can continue the calculations later;
  4. The algorithm should operate when the objects from the database can be retrieved only in the unidirectional cursor mode (i.e., in the record navigation mode).

An algorithm that meets these requirements (especially the second one) is called scalable. Scalability is the most important property of an algorithm and depends on its computational complexity and software implementation. There is also a more rigorous definition: an algorithm is scalable if, with a fixed amount of RAM, its running time grows linearly with the number of records in the database.

However, it is not always necessary to process extremely large data sets, so at the dawn of cluster analysis almost no attention was paid to the scalability of algorithms. It was assumed that all the data to be processed would fit into RAM; the main emphasis was always on improving the quality of clustering.

It is difficult to maintain a balance between high-quality clustering and scalability. Ideally, therefore, the data mining arsenal should include both efficient algorithms for small data sets and scalable algorithms for processing large databases.

Clustering Algorithms: Brilliance and Misery

So we can already classify clustering algorithms into scalable and non-scalable. Let's broaden the classification.

By the method of splitting into clusters, algorithms are of two types: hierarchical and non-hierarchical. Classical hierarchical algorithms work only with categorical attributes, building a complete tree of nested clusters. Agglomerative methods of building cluster hierarchies are common here: they sequentially merge the source objects, correspondingly reducing the number of clusters. Hierarchical algorithms provide relatively high-quality clustering and do not require prior specification of the number of clusters. Most of them have a complexity of O(n^2).
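The agglomerative scheme can be sketched as follows. This naive single-linkage version is an illustration only (it is even costlier than O(n^2); real implementations cache distances), but it shows where the pairwise distance computations come from:

```python
def agglomerate(points, k, dist):
    # Naive agglomerative clustering: start with singleton clusters and
    # repeatedly merge the two closest ones (single linkage) until k remain.
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the two closest clusters
    return clusters

# One-dimensional toy example: three tight groups
pts = [1.0, 1.2, 5.0, 5.1, 9.0]
groups = agglomerate(pts, 3, lambda a, b: abs(a - b))
```

Stopping at k clusters is a convenience of this sketch; a full hierarchical algorithm would continue merging and record the whole tree of nested clusters.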

Non-hierarchical algorithms are based on optimizing some objective function that determines the optimal, in a certain sense, partitioning of a set of objects into clusters. Popular in this group are the algorithms of the k-means family (k-means, fuzzy c-means, Gustafson-Kessel), which use the sum of squared weighted deviations of object coordinates from the centers of the desired clusters as the objective function.

The clusters sought are spherical or ellipsoidal in shape. In the canonical implementation, the function is minimized with the method of Lagrange multipliers, which finds only the nearest local minimum. Using global search methods (genetic algorithms) would significantly increase the computational complexity of the algorithm.
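The alternating optimization behind the k-means family can be sketched minimally. The optional `centers` argument for fixed initialization is an addition of this sketch; production implementations handle initialization and empty clusters more carefully:

```python
import random

def kmeans(points, k, iters=100, centers=None):
    # Lloyd's algorithm: alternately assign each object to its nearest center
    # and recompute the centers; one iteration costs O(k*m*n).
    if centers is None:
        centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        new = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:  # assignments are stable: a local minimum
            break
        centers = new
    return centers, clusters

# Two compact blobs; seeding one center in each blob converges to their means
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2, centers=[(0, 0), (10, 10)])
```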

Figure 1. Histograms of Two Partitions

Among the non-hierarchical algorithms that are not based on distance, special mention should go to the EM algorithm (Expectation-Maximization). Instead of cluster centers, it assumes that each cluster has a probability density function with a corresponding expected value and variance. In the mixture of distributions (Fig. 2), the parameters (means and standard deviations) are sought using the maximum likelihood principle, and the EM algorithm is one implementation of such a search. The problem is that before the algorithm starts, a hypothesis about the type of the distributions must be put forward, and this type is difficult to assess from the total data set.

Figure 2. Distributions and Their Mixture
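The parameter search described above can be sketched for a mixture of two one-dimensional Gaussians. Initializing the means from the data range is an assumption of this sketch, as is the small variance floor that keeps the M-step numerically stable:

```python
import math

def em_1d(data, iters=50):
    # EM for a mixture of two 1-D Gaussians.
    w = [0.5, 0.5]               # mixture weights
    mu = [min(data), max(data)]  # crude initialization from the data range
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior probability that each point came from each component
        resp = []
        for x in data:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(p) or 1e-300  # guard against underflow
            resp.append([pk / s for pk in p])
        # M-step: maximum-likelihood updates given the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return w, mu, var

# Two groups around 0 and 5; the estimated means approach those values
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
weights, means, variances = em_1d(data)
```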

Another problem occurs when the attributes of an object are mixed: one part is numeric and the other part is categorical. For example, suppose you want to calculate the distance between the following objects with attributes (Age, Gender, Education):

{23, male, higher}, (1)

{25, female, secondary}, (2)

The first attribute is numeric, the others are categorical. If we want to use a classical hierarchical algorithm with some similarity measure, we will have to discretize the "Age" attribute somehow. For example, as follows:

{under 30, male, higher}, (1)

{under 30, female, secondary}, (2)

At the same time, we will certainly lose some information. If instead we determine the distance in Euclidean space, there will be issues with the categorical attributes. Clearly, the similarity between the "Gender" values "male" and "female" is 0, because this attribute is measured on a name (nominal) scale. The "Education" attribute can be measured either on a name scale or on an order scale, by assigning a certain score to each value.

Which option should we choose? What if the categorical attributes are more important than the numeric ones? Resolving these issues falls on the shoulders of the analyst. In addition, with the k-means algorithm and its relatives, it is difficult to define cluster centers for categorical attributes, as well as to set the number of clusters a priori.

The optimization of the objective function in distance-based non-hierarchical algorithms is iterative, and each iteration requires calculating the distance matrix between objects. With a large number of objects this is inefficient and requires significant computing power. The computational complexity of one iteration of k-means is O(kmn), where k, m, and n are the numbers of clusters, attributes, and objects, respectively. But there can be many iterations! You will have to make many passes through the data set.

The very approach of searching for spherical or ellipsoidal clusters gives the k-means family many of its disadvantages. It works well when the data form compact clusters in space that are clearly distinguishable from one another. If the data have a nested form, no algorithm of the k-means family will cope with the task. The algorithm also works poorly when one cluster is significantly larger than the others and they are close to each other: the effect of "splitting" the large cluster is observed (Fig. 3).

Figure 3. Splitting of a Large Cluster

However, research on improving clustering algorithms is ongoing. Interesting extensions of k-means for categorical attributes (k-modes) and mixed attributes (k-prototypes) have been developed. In k-prototypes, for example, the distances between objects are calculated differently depending on the attribute type.
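The k-prototypes idea of type-dependent distances can be sketched roughly as follows, using the (Age, Gender, Education) example from above. The `gamma` weight that balances the numeric and categorical parts is an assumption of this sketch and must be tuned per data set:

```python
def mixed_distance(x, y, num_idx, cat_idx, gamma=1.0):
    # k-prototypes-style dissimilarity: squared Euclidean distance on the
    # numeric attributes plus gamma times the number of categorical mismatches.
    d_num = sum((x[i] - y[i]) ** 2 for i in num_idx)
    d_cat = sum(x[i] != y[i] for i in cat_idx)
    return d_num + gamma * d_cat

a = (23, "male", "higher")
b = (25, "female", "secondary")
mixed_distance(a, b, num_idx=[0], cat_idx=[1, 2], gamma=2.0)  # 4 + 2*2 = 8.0
```

Note how gamma answers the question posed earlier: raising it makes the categorical attributes matter more than the numeric ones, and that choice stays with the analyst.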

Among scalable clustering algorithms, the main struggle is to shave off every "extra" pass through the data set while the algorithm runs. Scalable analogs of k-means and EM (scalable k-means and scalable EM) and scalable agglomerative methods (CURE, CACTUS) have been developed. These modern algorithms require only a few (two to ten) database scans before the final clustering.

The development of scalable algorithms rests on the idea of abandoning the local optimization function. Pairwise comparison of objects in k-means is nothing but local optimization: at each iteration the distance from each cluster center to each object must be calculated, which drives up the computing power required.

With a global optimization function, adding a new point to a cluster does not require many calculations: the new value is computed from the old value, the new object, and the so-called cluster features. The specific cluster features depend on the particular algorithm. This is how BIRCH, LargeItem, CLOPE, and many other algorithms appeared.
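The cluster-features idea can be illustrated with a BIRCH-style sketch: the triple (N, LS, SS), that is, the number of points, their linear sum, and their sum of squares, is enough to add points incrementally and to recover the centroid and radius without revisiting the raw data. This is a simplified illustration, not the full CF-tree:

```python
import math

class ClusterFeature:
    # BIRCH-style cluster feature: (N, LS, SS) summarizes a cluster compactly.
    def __init__(self, dim):
        self.n = 0               # N: number of points
        self.ls = [0.0] * dim    # LS: per-coordinate linear sum
        self.ss = 0.0            # SS: sum of squared norms

    def add(self, point):
        # Updating the feature uses only the old values and the new point --
        # no pass over previously seen data is needed
        self.n += 1
        self.ls = [a + b for a, b in zip(self.ls, point)]
        self.ss += sum(v * v for v in point)

    def centroid(self):
        return [v / self.n for v in self.ls]

    def radius(self):
        # Root mean squared distance of the points from the centroid,
        # computed from (N, LS, SS) alone
        c = self.centroid()
        return math.sqrt(max(self.ss / self.n - sum(v * v for v in c), 0.0))

cf = ClusterFeature(2)
for p in [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]:
    cf.add(p)
```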

Thus, there is no single universal clustering algorithm. When using any algorithm, it is important to understand its advantages and disadvantages, take into account the nature of the data it works with best, and its scalability.

Reference list

  1. Bradley, P., Fayyad, U., Reina, C. Scaling Clustering Algorithms to Large Databases. Proc. 4th Int'l Conf. Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, Calif., 1998.
  2. Zhang, T., Ramakrishnan, R., Livny, M. BIRCH: An Efficient Data Clustering Method for Very Large Databases. Proc. ACM SIGMOD Int'l Conf. Management of Data, ACM Press, New York, 1996.
  3. Bradley, P., Fayyad, U., Reina, C. Scaling EM (Expectation-Maximization) Clustering to Large Databases. Microsoft Research, 1999.
  4. Huang, Z. Clustering Large Data Sets with Mixed Numeric and Categorical Values. Proc. First Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1997.
  5. Milenova, B., Campos, M. Clustering Large Databases with Numeric and Nominal Values Using Orthogonal Projections. Oracle Data Mining Technologies, 2002.
  6. Huang, Z. A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining. Research Issues on Data Mining and KDD, 1997.
  7. Wang, K., Xu, C., Liu, B. Clustering Transactions Using Large Items. Proc. CIKM'99, Kansas City, Missouri, 1999.
  8. Guha, S., Rastogi, R., Shim, K. CURE: An Efficient Clustering Algorithm for Large Databases. Proc. ACM SIGMOD Int'l Conf. Management of Data, ACM Press, New York, 1998.
  9. Ganti, V., Gehrke, J., Ramakrishnan, R. CACTUS: Clustering Categorical Data Using Summaries. Proc. KDD'99, 1999.
  10. Bilmes, J. A Gentle Tutorial on the EM Algorithm and Its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Tech. Report ICSI-TR-97-021, 1997.
