Clustering performance is highly sensitive to preprocessing. Scaling, normalization, and even projections like PCA can drastically change cluster shapes.
Rather than hand-crafting preprocessing steps, TPOT-Clustering automatically searches over pipelines to optimize clustering performance, here using the silhouette score; a short self-contained sketch after the outline below shows how much preprocessing alone can move that score.
Below, we compare:
– Raw KMeans (no preprocessing)
– TPOT-optimized pipeline (with preprocessing + clustering)
We visualize:
1. The original Dermatology dataset (colored by ground-truth class labels)
2. Clustering results (without and with preprocessing)
3. How TPOT transforms the data to improve clustering quality
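Before loading the Dermatology data, here is a minimal, self-contained sketch of that sensitivity. It uses scikit-learn's built-in wine dataset purely for illustration (an assumption, not part of this tutorial's data) and shows that standardizing the features alone changes the silhouette score KMeans achieves:
# Illustration only: standardization alone shifts the KMeans silhouette score.
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X_demo = load_wine().data
for name, X_variant in [("raw", X_demo), ("standardized", StandardScaler().fit_transform(X_demo))]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_variant)
    # Score in the same feature space the clusterer saw, mirroring how TPOT-Clustering
    # evaluates candidate pipelines with the silhouette score.
    print(name, round(silhouette_score(X_variant, labels), 3))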
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
import seaborn as sns
from tpotclustering import TPOTClustering

# Load the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/dermatology/dermatology.data"
column_names = [
"erythema", "scaling", "definite_borders", "itching", "koebner_phenomenon",
"polygonal_papules", "follicular_papules", "oral_mucosal_involvement", "knee_and_elbow_involvement",
"scalp_involvement", "family_history", "melanin_incontinence", "eosinophils_in_the_infiltrate",
"PNL_infiltrate", "fibrosis_of_the_papillary_dermis", "exocytosis", "acanthosis",
"hyperkeratosis", "parakeratosis", "clubbing_of_the_rete_ridges", "elongation_of_the_rete_ridges",
"thinning_of_the_suprapapillary_epidermis", "spongiform_pustule", "munro_microabcess",
"focal_hypergranulosis", "disappearance_of_the_granular_layer", "vacuolisation_and_damage_of_basal_layer",
"spongiosis", "saw_tooth_appearance_of_retes", "follicular_horn_plug", "perifollicular_parakeratosis",
"inflammatory_monoluclear_inflitrate", "band_like_infiltrate", "Age", "Class"
]
df = pd.read_csv(url, names=column_names, index_col=False)
df.replace('?', np.nan, inplace=True)
df["Age"] = pd.to_numeric(df["Age"], errors='coerce')
df["Age"].fillna(df["Age"].median(), inplace=True)
# Separate features and labels
X_raw = df.drop(columns=["Class"])
y_true = df["Class"]
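With the features and labels separated, the raw KMeans baseline from the outline above can already be scored. This is a minimal sketch; n_clusters=6 (one cluster per diagnosis class) and the fixed random_state are assumptions chosen for illustration:
# Baseline: plain KMeans on the raw, unscaled features, scored by silhouette.
kmeans_raw = KMeans(n_clusters=6, n_init=10, random_state=42)
raw_labels = kmeans_raw.fit_predict(X_raw)
print("Silhouette (raw KMeans, no preprocessing):", round(silhouette_score(X_raw, raw_labels), 3))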
Why Preprocessing Matters
Before clustering, it’s important to examine the raw dataset. Below:
– The PCA scatter plot shows the projection of the unscaled data. Since PCA is sensitive to feature variance, large-scale features dominate the result, hiding structure in lower-variance dimensions.
– The boxplots show that the features sit on very different scales. Clustering algorithms like KMeans are distance-based and will be biased toward features with large numeric ranges unless the data is standardized.
This motivates the need for automatic preprocessing, one of the strengths of TPOTClustering.
# PCA on unscaled data
pca = PCA(n_components=2)
X_pca_unscaled = pca.fit_transform(X_raw)

# Plot setup
fig, axs = plt.subplots(1, 2, figsize=(16, 5))
# 1. PCA plot on unscaled data
sns.scatterplot(x=X_pca_unscaled[:, 0], y=X_pca_unscaled[:, 1], hue=y_true, palette="Set1", ax=axs[0], legend='full')
axs[0].set_title("PCA on Original (Unscaled) Features")
axs[0].set_xlabel("PCA Component 1")
axs[0].set_ylabel("PCA Component 2")
# 2. Boxplot of feature distributions
sns.boxplot(data=X_raw, orient="h", ax=axs[1])
axs[1].set_title("Feature Value Distributions (Raw)")
axs[1].set_xlabel("Value")
plt.tight_layout()
plt.show()
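The two plots can also be quantified. The sketch below is illustrative and not part of the TPOT search itself: it prints how much of the unscaled variance the first principal component absorbs, then hand-builds one candidate pipeline (StandardScaler followed by KMeans, with the same assumed n_clusters=6) of the kind TPOTClustering explores automatically, scoring it with the silhouette in the transformed space so it can be compared with the raw baseline printed earlier:
# How dominant is the first principal component on the unscaled data?
print("Explained variance ratio (unscaled PCA):", np.round(pca.explained_variance_ratio_, 3))

# One hand-built candidate of the kind TPOTClustering searches over automatically:
# standardize, then cluster, and evaluate with the silhouette score in the scaled space.
from sklearn.preprocessing import StandardScaler

scaled_pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("kmeans", KMeans(n_clusters=6, n_init=10, random_state=42)),
])
scaled_labels = scaled_pipeline.fit_predict(X_raw)
X_scaled = scaled_pipeline.named_steps["scaler"].transform(X_raw)
print("Silhouette (StandardScaler + KMeans):", round(silhouette_score(X_scaled, scaled_labels), 3))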