k-means++^{[1]}^{[2]} is an algorithm for choosing the initial values for k-means clustering in statistics and machine learning. It was proposed in 2007 by David Arthur and Sergei Vassilvitskii as an approximation algorithm for the NP-hard k-means problem, a way of avoiding the sometimes poor clusterings found by the standard k-means algorithm.
The k-means problem is to find cluster centers that minimize the sum of squared distances from each data point being clustered to its cluster center (the center that is closest to it). Although finding an exact solution to the k-means problem for arbitrary input is NP-hard,^{[3]} the standard approach to finding an approximate solution (often called Lloyd's algorithm or the k-means algorithm) is used widely and frequently finds reasonable solutions quickly.
However, the k-means algorithm has at least two major theoretical shortcomings:

- First, the worst-case running time of the algorithm is super-polynomial in the input size.
- Second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering.
In a nutshell, k-means++ addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard k-means optimization iterations. With the k-means++ initialization, the algorithm is guaranteed to find a solution that is O(log k) competitive to the optimal k-means solution.
To illustrate the potential of the k-means algorithm to perform arbitrarily poorly with respect to the objective function of minimizing the sum of squared distances of points to their assigned cluster centers, consider the example of four points in the plane that form an axis-aligned rectangle whose width is somewhat larger than its height.
If k = 2 and the two initial cluster centers lie at the midpoints of the top and bottom line segments of the rectangle formed by the four data points, the k-means algorithm will converge without moving these cluster centers. Consequently, the two bottom data points are clustered together and the two data points forming the top of the rectangle are clustered together, a suboptimal clustering because the width of the rectangle is greater than its height.
Now, consider stretching the rectangle horizontally to an arbitrary width. The standard k-means algorithm will still cluster the points suboptimally, and by increasing the horizontal distance between the two data points in each cluster, we can make the algorithm perform arbitrarily poorly with respect to the k-means objective function.
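The pathology above can be checked numerically. The following sketch is illustrative, not from the original paper: the helper name `kmeans_cost` and the particular width W = 100 are assumptions. With the bad (but stable) centers at the midpoints of the top and bottom edges, the cost is W², while the optimal centers at the midpoints of the left and right edges cost 1, so the ratio grows without bound as the rectangle stretches.

```python
def kmeans_cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(
        min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
        for px, py in points
    )

W = 100.0  # rectangle width (illustrative); the height is fixed at 1
points = [(0.0, 0.0), (W, 0.0), (0.0, 1.0), (W, 1.0)]

# Stable but suboptimal centers: midpoints of the bottom and top edges.
# Each is already the centroid of the two points assigned to it, so
# Lloyd's algorithm converges immediately without moving them.
bad_centers = [(W / 2, 0.0), (W / 2, 1.0)]

# Optimal centers: midpoints of the left and right edges.
opt_centers = [(0.0, 0.5), (W, 0.5)]

bad_cost = kmeans_cost(points, bad_centers)  # 4 * (W/2)^2 = W^2
opt_cost = kmeans_cost(points, opt_centers)  # 4 * (1/2)^2 = 1
print(bad_cost / opt_cost)                   # grows as W^2
```

Increasing `W` makes the ratio between the converged cost and the optimal cost as large as desired, which is exactly the sense in which standard k-means can do arbitrarily poorly.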
With the intuition of spreading the k initial cluster centers away from each other, the first cluster center is chosen uniformly at random from the data points being clustered, after which each subsequent cluster center is chosen from the remaining data points with probability proportional to its squared distance from the point's closest existing cluster center.
The exact algorithm is as follows:

1. Choose one center uniformly at random among the data points.
2. For each data point x not chosen yet, compute D(x), the distance between x and the nearest center that has already been chosen.
3. Choose one new data point at random as a new center, using a weighted probability distribution where a point x is chosen with probability proportional to D(x)².
4. Repeat steps 2 and 3 until k centers have been chosen.
5. Now that the initial centers have been chosen, proceed using standard k-means clustering.
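The seeding procedure can be sketched in a few lines of Python. This is a minimal illustration, not the authors' reference implementation; the function name and the 2-D tuple representation of points are assumptions.

```python
import random

def kmeans_pp_seed(points, k, rng=random):
    """k-means++ seeding: pick the first center uniformly at random,
    then pick each subsequent center with probability proportional to
    its squared distance D(x)^2 to the nearest already-chosen center."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        # D(x)^2 for every point: squared distance to the closest center.
        # Already-chosen points get weight 0 and so are never re-picked
        # (unless the data contains duplicates).
        d2 = [
            min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
            for px, py in points
        ]
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers

# Example: two well-separated pairs of points; the seeding strongly
# favors picking the second center from the far pair.
pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)]
seeds = kmeans_pp_seed(pts, 2, rng=random.Random(0))
print(seeds)
```

The returned centers would then be passed as the initialization of an ordinary Lloyd's-algorithm loop.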
This seeding method yields considerable improvement in the final error of k-means. Although the initial selection takes extra time, the k-means iterations themselves converge very quickly after this seeding, so the algorithm actually lowers the overall computation time as well. The authors tested their method on real and synthetic datasets and obtained typically two-fold improvements in speed, and for certain datasets close to 1000-fold improvements in error. In their tests the new method was almost always at least as good as vanilla k-means in both speed and error.
Additionally, the authors calculate an approximation ratio for their algorithm. The k-means++ algorithm guarantees an approximation ratio of O(log k), where k is the number of clusters used. This is in contrast to vanilla k-means, which can generate clusterings arbitrarily worse than the optimum.^{[5]}
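Concretely, Arthur and Vassilvitskii prove that if \(\phi\) denotes the k-means potential (the sum of squared distances from each point to its nearest chosen center) after the seeding step alone, its expectation is within a logarithmic factor of the optimal potential \(\phi_{\mathrm{OPT}}\):

```latex
\mathbb{E}[\phi] \le 8(\ln k + 2)\,\phi_{\mathrm{OPT}}
```

Since subsequent Lloyd iterations can only decrease the potential, the same O(log k) guarantee carries over to the full algorithm.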
The k-means++ approach has been applied since its initial proposal. In a review^{[6]} covering many types of clustering algorithms, the method is said to successfully overcome some of the problems associated with other ways of defining initial cluster centres for k-means clustering. Lee et al.^{[7]} report an application of k-means++ to create geographical clusters of photographs based on the latitude and longitude information attached to the photos. An application to financial diversification is reported by Howard and Johansen.^{[8]} Jaiswal^{[9]} describes k-means++ as "very useful". Other support for the method and ongoing discussion is also available online.^{[10]}
