D. Arthur, S. Vassilvitskii: k-means++: the advantages of careful seeding. In: Proceedings of the eighteenth annual ACM-SIAM Symposium on Discrete Algorithms, 1027-1035, 2007.
[[["เข้าใจง่าย","easyToUnderstand","thumb-up"],["แก้ปัญหาของฉันได้","solvedMyProblem","thumb-up"],["อื่นๆ","otherUp","thumb-up"]],[["ไม่มีข้อมูลที่ฉันต้องการ","missingTheInformationINeed","thumb-down"],["ซับซ้อนเกินไป/มีหลายขั้นตอนมากเกินไป","tooComplicatedTooManySteps","thumb-down"],["ล้าสมัย","outOfDate","thumb-down"],["ปัญหาเกี่ยวกับการแปล","translationIssue","thumb-down"],["ตัวอย่าง/ปัญหาเกี่ยวกับโค้ด","samplesCodeIssue","thumb-down"],["อื่นๆ","otherDown","thumb-down"]],["อัปเดตล่าสุด 2025-07-26 UTC"],[],["The k-means algorithm clusters data using either Euclidean or Manhattan distance. Manhattan distance uses component-wise median for centroids, while Euclidean uses the mean. Initialization methods include random, k-means++, canopy, and farthest first. Canopies can be used to optimize distance calculations. Parameters control the number of clusters, pruning frequency, density thresholds, and distance settings. Additional options include limiting iterations, preserving data order, and using a fast distance calculation mode.\n"]]