Wolfram Language & System Documentation Center
FindClusters
FindClusters[{e1,e2,…}]

partitions the ei into clusters of similar elements.

FindClusters[{e1v1,e2v2,…}]

returns the vi corresponding to the ei in each cluster.

FindClusters[data,n]

partitions data into n clusters.

Details and Options

  • FindClusters partitions a list into sublists (clusters) of similar elements. The number and composition of the clusters are influenced by the input data, the method and the evaluation criterion used. The elements can belong to a variety of data types, including numerical, textual and image, as well as dates and times.
  • Clustering is typically used to find classes of elements such as customer types, animal taxonomies, document topics, etc. in an unsupervised way. For supervised classification, see Classify.
  • Labels for the input examples ei can be given in the following formats:
  • {e1,e2,…}                      use the ei themselves
    {e1->v1,e2->v2,…}              a list of rules between the element ei and the label vi
    {e1,e2,…}->{v1,v2,…}           a rule between all the elements and all the labels
    <|label1->e1,label2->e2,…|>    the labels as Association keys
  • The number of clusters can be specified in the following ways:
  • Automatic      find the number of clusters automatically
    n              find exactly n clusters
    UpTo[n]        find at most n clusters
  • The following options can be given (see the usage sketch at the end of this section):
  • CriterionFunction        Automatic    criterion for selecting a method
    DistanceFunction         Automatic    the distance function to use
    FeatureExtractor         Identity     how to extract features from which to learn
    FeatureNames             Automatic    feature names to assign for input data
    FeatureTypes             Automatic    feature types to assume for input data
    Method                   Automatic    what method to use
    MissingValueSynthesis    Automatic    how to synthesize missing values
    PerformanceGoal          Automatic    aspect of performance to optimize
    RandomSeeding            1234         what seeding of pseudorandom generators should be done internally
    Weights                  Automatic    what weight to give to each example
  • By default, FindClusters will preprocess the data automatically unless a DistanceFunction is specified.
  • The setting for DistanceFunction can be any distance or dissimilarity function, or a function f defining a distance between two values.
  • Possible settings for PerformanceGoal include:
  • Automatic      automatic tradeoff among speed, accuracy, and memory
    "Quality"      maximize the accuracy of the classifier
    "Speed"        maximize the speed of the classifier
  • Possible settings for Method include:
  • Automatic                    automatically select a method
    "Agglomerate"                single-linkage clustering algorithm
    "DBSCAN"                     density-based spatial clustering of applications with noise
    "GaussianMixture"            variational Gaussian mixture algorithm
    "JarvisPatrick"              Jarvis–Patrick clustering algorithm
    "KMeans"                     k-means clustering algorithm
    "KMedoids"                   partitioning around medoids
    "MeanShift"                  mean-shift clustering algorithm
    "NeighborhoodContraction"    shift data points toward high-density regions
    "SpanningTree"               minimum spanning tree-based clustering algorithm
    "Spectral"                   spectral clustering algorithm
  • The methods "KMeans" and "KMedoids" can only be used when the number of clusters is specified.
  • The methods "DBSCAN", "GaussianMixture", "JarvisPatrick", "MeanShift" and "NeighborhoodContraction" can only be used when the number of clusters is Automatic.
  • [Plots comparing the results of common methods on toy datasets accompany this point in the original documentation.]
  • Possible settings for CriterionFunction include:
  • "StandardDeviation"    root-mean-square standard deviation
    "RSquared"             R-squared
    "Dunn"                 Dunn index
    "CalinskiHarabasz"     Calinski–Harabasz index
    "DaviesBouldin"        Davies–Bouldin index
    "Silhouette"           Silhouette score
    Automatic              internal index
  • Possible settings for RandomSeeding include:
  • Automatic      automatically reseed every time the function is called
    Inherited      use externally seeded random numbers
    seed           use an explicit integer or string as a seed
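
As a rough usage sketch (not taken from the original documentation; the dataset and specific settings are illustrative), the options above are passed as ordinary option rules:

    (* illustrative 2D data *)
    data = RandomReal[{0, 10}, {100, 2}];

    (* let FindClusters choose the number of clusters and the method *)
    FindClusters[data]

    (* exactly 3 clusters with the k-means method and explicit seeding *)
    FindClusters[data, 3, Method -> "KMeans", RandomSeeding -> 1234]

    (* a custom distance function; automatic preprocessing is then skipped *)
    FindClusters[data, DistanceFunction -> ManhattanDistance]

    (* a density-based method that determines the number of clusters itself *)
    FindClusters[data, Method -> "DBSCAN"]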

Examples


Basic Examples  (4)

Find clusters of nearby values:

Find exactly four clusters:

Represent clustered elements with the right-hand sides of each rule:

Represent clustered elements with the keys of the association:
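
Minimal sketches for the four prompts above (the input values are illustrative, not the ones from the original notebook):

    (* find clusters of nearby values *)
    FindClusters[{1, 2, 3, 10, 12, 13, 25, 27}]

    (* find exactly four clusters *)
    FindClusters[{1, 2, 3, 10, 12, 13, 25, 27, 40, 41}, 4]

    (* represent clustered elements with the right-hand sides of each rule *)
    FindClusters[{1 -> "a", 2 -> "b", 3 -> "c", 10 -> "x", 12 -> "y", 13 -> "z"}]

    (* represent clustered elements with the keys of the association *)
    FindClusters[<|"a" -> 1, "b" -> 2, "c" -> 3, "x" -> 10, "y" -> 12, "z" -> 13|>]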

Scope  (6)

Cluster vectors of real values:

Cluster data of any precision:

Cluster Boolean True, False data:

Cluster colors:

Cluster images:

Clustering of 3D images:
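
Sketches for the scope prompts above, using small illustrative inputs rather than the original notebook data:

    (* vectors of real values *)
    FindClusters[RandomReal[1, {20, 3}]]

    (* data of any precision *)
    FindClusters[N[{1, 2, 10, 12, 3, 13}, 30]]

    (* Boolean True/False data *)
    FindClusters[{{True, False}, {True, True}, {False, False}, {False, True}}]

    (* colors *)
    FindClusters[{Red, Pink, Darker[Red], Blue, Cyan, Darker[Blue]}]

    (* images (dark vs. light squares) *)
    FindClusters[{Image[ConstantArray[0.1, {8, 8}]], Image[ConstantArray[0.15, {8, 8}]],
      Image[ConstantArray[0.9, {8, 8}]], Image[ConstantArray[0.85, {8, 8}]]}]

    (* 3D images *)
    FindClusters[{Image3D[ConstantArray[0.1, {4, 4, 4}]], Image3D[ConstantArray[0.9, {4, 4, 4}]]}]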

Options  (15)

CriterionFunction  (1)

Generate some separated data and visualize it:

Cluster the data using different settings for CriterionFunction:

Compare the two clusterings of the data:
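
A sketch of the comparison described above, using made-up well-separated data and two CriterionFunction settings taken from the table in Details:

    (* generate separated data and visualize it *)
    data = Join[RandomVariate[MultinormalDistribution[{0, 0}, 0.1 IdentityMatrix[2]], 50],
       RandomVariate[MultinormalDistribution[{5, 5}, 0.1 IdentityMatrix[2]], 50]];
    ListPlot[data]

    (* cluster with different settings for CriterionFunction *)
    c1 = FindClusters[data, CriterionFunction -> "Silhouette"];
    c2 = FindClusters[data, CriterionFunction -> "CalinskiHarabasz"];

    (* compare the two clusterings *)
    GraphicsRow[ListPlot /@ {c1, c2}]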

DistanceFunction  (4)

Use CanberraDistance as the measure of distance for continuous data:

Clusters obtained with the default SquaredEuclideanDistance:

Use DiceDissimilarity as the measure of distance for Boolean data:

Use MatchingDissimilarity as the measure of distance for Boolean data:

Use HammingDistance as the measure of distance for string data:

Define a distance function as a pure function:
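
Sketches for the DistanceFunction prompts above, on small illustrative inputs:

    (* continuous data: CanberraDistance vs. the default *)
    vecs = RandomReal[1, {20, 3}];
    FindClusters[vecs, DistanceFunction -> CanberraDistance]
    FindClusters[vecs]

    (* Boolean data with DiceDissimilarity or MatchingDissimilarity *)
    bools = RandomChoice[{True, False}, {12, 5}];
    FindClusters[bools, DistanceFunction -> DiceDissimilarity]
    FindClusters[bools, DistanceFunction -> MatchingDissimilarity]

    (* string data with HammingDistance *)
    FindClusters[{"ABCA", "ABCB", "XYZX", "XYZZ"}, DistanceFunction -> HammingDistance]

    (* a pure function defining the distance between two values *)
    FindClusters[vecs, DistanceFunction -> (Norm[#1 - #2, 1] &)]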

FeatureExtractor  (1)

Find clusters for a list of images:

Create a custom FeatureExtractor to extract features:
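
A sketch of the idea, using synthetic dark/light images and a deliberately simple extractor (the mean pixel value); the choice of extractor is illustrative only:

    imgs = Join[Table[Image[RandomReal[{0, 0.2}, {8, 8}]], 3],
       Table[Image[RandomReal[{0.8, 1}, {8, 8}]], 3]];

    (* clusters found from the default image features *)
    FindClusters[imgs]

    (* clusters found from a custom feature: the mean pixel value *)
    FindClusters[imgs, FeatureExtractor -> (Mean[Flatten[ImageData[#]]] &)]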

FeatureNames  (1)

Use FeatureNames to name features, and refer to their names in further specifications:
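
A sketch under the assumption that the assigned names can then be referenced in other option values (here FeatureTypes given as an association keyed by those names); the data and names are made up:

    data = {{1, "A"}, {2, "A"}, {10, "B"}, {12, "B"}};
    FindClusters[data, FeatureNames -> {"x", "group"},
      FeatureTypes -> <|"group" -> "Nominal"|>]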

FeatureTypes  (1)

Use FeatureTypes to enforce the interpretation of the features:

Compare it to the result obtained by assuming nominal features:
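
A sketch of forcing a feature type and comparing with the default interpretation; the data is illustrative:

    data = {{1, 2}, {2, 2}, {10, 5}, {12, 5}};

    (* treat the second feature as nominal *)
    FindClusters[data, FeatureTypes -> {"Numerical", "Nominal"}]

    (* compare with the default interpretation of both features as numerical *)
    FindClusters[data]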

Method  (4)

Cluster the data hierarchically:

Clusters obtained with the default method:

Generate normally distributed data and visualize it:

Cluster the data in 4 clusters by using the k-means method:

Cluster the data using the "GaussianMixture" method without specifying the number of clusters:

Generate some uniformly distributed data:

Cluster the data in 2 clusters by using the k-means method:

Cluster the data using the "DBSCAN" method without specifying the number of clusters:

Generate a list of colors:

Cluster the colors in 5 clusters using the k-medoids method:

Cluster the colors without specifying the number of clusters using the "MeanShift" method:

Cluster the colors without specifying the number of clusters using the "NeighborhoodContraction" method:

Cluster the colors using the "NeighborhoodContraction" method and its suboptions:
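
Sketches of the method settings exercised above, on made-up data; the "NeighborhoodRadius" suboption name is an assumption, not taken from this page:

    data = RandomReal[{0, 10}, {100, 2}];

    (* hierarchical (single-linkage) clustering vs. the default method *)
    FindClusters[data, Method -> "Agglomerate"]
    FindClusters[data]

    (* k-means and k-medoids require the number of clusters *)
    FindClusters[data, 4, Method -> "KMeans"]
    FindClusters[data, 5, Method -> "KMedoids"]

    (* these methods determine the number of clusters themselves *)
    FindClusters[data, Method -> "GaussianMixture"]
    FindClusters[data, Method -> "DBSCAN"]
    FindClusters[data, Method -> "MeanShift"]
    FindClusters[data, Method -> "NeighborhoodContraction"]

    (* method suboptions are given in a list; the suboption name here is assumed *)
    FindClusters[data, Method -> {"NeighborhoodContraction", "NeighborhoodRadius" -> 0.5}]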

PerformanceGoal  (1)

Generate 500 random numerical vectors of length 1000:

Compute their clustering and benchmark the operation:

Perform the same operation with PerformanceGoal set to "Speed":
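
A sketch of the benchmark described above; exact timings will vary by machine:

    vecs = RandomReal[1, {500, 1000}];

    (* default setting, then PerformanceGoal -> "Speed" *)
    AbsoluteTiming[FindClusters[vecs];]
    AbsoluteTiming[FindClusters[vecs, PerformanceGoal -> "Speed"];]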

RandomSeeding  (1)

Generate 500 random numerical vectors in two dimensions:

Compute their clustering several times and compare the results:

Compute their clustering several times by changing the RandomSeeding option, and compare the results:
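
A sketch of the comparison; here only the number of clusters found is compared across runs, which simplifies the original example:

    pts = RandomReal[1, {500, 2}];

    (* with the default fixed seeding, repeated runs agree *)
    Table[Length[FindClusters[pts]], 3]

    (* reseeding on every call can produce different clusterings *)
    Table[Length[FindClusters[pts, RandomSeeding -> Automatic]], 3]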

Weights  (1)

Obtain cluster assignment for some numerical data:

Look at the cluster assignment when changing the weight given to each number:
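
A sketch with made-up numbers, giving one element a much larger weight to show how the assignment can shift:

    nums = {1, 2, 3, 10, 12, 13};
    FindClusters[nums, 2]
    FindClusters[nums, 2, Weights -> {1, 1, 1, 1, 1, 100}]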

Applications  (3)

Find and visualize clusters in bivariate data:

Find clusters in five‐dimensional vectors:

Cluster genomic sequences based on the number of element‐wise differences:
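
Sketches for the three applications above; the bivariate and five-dimensional data are random, and the "genomic" sequences are short made-up strings compared with HammingDistance (the count of element-wise differences):

    (* find and visualize clusters in bivariate data *)
    data2d = RandomReal[{0, 10}, {200, 2}];
    ListPlot[FindClusters[data2d]]

    (* clusters in five-dimensional vectors *)
    FindClusters[RandomReal[1, {50, 5}]]

    (* cluster sequences by the number of element-wise differences *)
    seqs = {"ACGTACGT", "ACGTACGA", "TTGTACGT", "GGCCGGCC", "GGCCGGCA"};
    FindClusters[seqs, DistanceFunction -> HammingDistance]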

Properties & Relations  (2)

FindClusters returns the list of clusters, while ClusteringComponents gives an array of cluster indices:

FindClusters groups data, while Nearest gives the elements closest to a given value:
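
A sketch contrasting the three functions on the same small list:

    vals = {1, 2, 10, 12, 3, 13};

    (* FindClusters returns the clusters; ClusteringComponents returns one cluster index per element *)
    FindClusters[vals]
    ClusteringComponents[vals]

    (* Nearest gives the elements closest to a given value *)
    Nearest[vals, 11, 3]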

Neat Examples  (2)

Divide a square into n segments by clustering uniformly distributed random points:

Cluster words beginning with "agg" in the English dictionary:
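
Sketches of the two neat examples; the number of segments and the point count are arbitrary choices:

    (* divide the unit square into segments by clustering uniform random points *)
    pts = RandomReal[1, {2000, 2}];
    ListPlot[FindClusters[pts, 6], AspectRatio -> 1]

    (* cluster dictionary words beginning with "agg" *)
    FindClusters[DictionaryLookup["agg" ~~ ___]]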

See Also

ClusteringMeasurements  ClusteringComponents  ClusterClassify  Classify  Partition  Split  Gather  Nearest  DistanceTransform  MeanShift

Function Repository: PrincipalAxisClustering  NewickDendrogram

Tech Notes

  • Partitioning Data into Clusters

Related Guides

  • Cluster Analysis
  • Machine Learning
  • Logic & Boolean Algebra
  • Boolean Computation
  • Natural Language Processing
  • Signal Processing
  • Text Analysis
  • Scientific Data Analysis
  • Sequence Alignment & Comparison
  • Statistical Data Analysis
  • Distance and Similarity Measures
  • Handling Arrays of Data
  • Computational Geometry
  • Machine Learning Methods
  • Numerical Data
  • Image Computation for Microscopy
  • Unsupervised Machine Learning
  • Audio Analysis
  • Tabular Modeling
  • Tabular Processing Overview

Related Links

  • An Elementary Introduction to the Wolfram Language: Machine Learning

History

Introduced in 2007 (6.0) | Updated in 2016 (11.0) ▪ 2017 (11.1) ▪ 2017 (11.2) ▪ 2018 (11.3) ▪ 2020 (12.1)

Cite this Page

Text

Wolfram Research (2007), FindClusters, Wolfram Language function, https://reference.wolfram.com/language/ref/FindClusters.html (updated 2020).

CMS

Wolfram Language. 2007. "FindClusters." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2020. https://reference.wolfram.com/language/ref/FindClusters.html.

APA

Wolfram Language. (2007). FindClusters. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/FindClusters.html

BibTeX

@misc{reference.wolfram_2025_findclusters, author="Wolfram Research", title="{FindClusters}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/FindClusters.html}", note="[Accessed: 04-February-2026]"}

BibLaTeX

@online{reference.wolfram_2025_findclusters, organization={Wolfram Research}, title={FindClusters}, year={2020}, url={https://reference.wolfram.com/language/ref/FindClusters.html}, note={[Accessed: 04-February-2026]}}
