
From Accuracy to Insight: Demystifying Machine Learning with PDPs and ICE Curves

Learn how Partial Dependence Plots and ICE curves reveal your model’s logic, uncover feature effects, and build trust in predictions.


I have been building machine learning models for a while now. The process often feels like this: you gather data, you train a model, and you get a number—an accuracy score, an R-squared value. You celebrate if it’s high. But recently, I found myself staring at a very good model with a nagging question: “Yes, but why?” How does it actually decide? This isn’t just academic curiosity. If you can’t explain a model’s reasoning to a stakeholder, a regulator, or even to yourself, that high score is built on shaky ground. This is what led me to spend time with two of the most powerful tools for peering inside the black box: Partial Dependence Plots and their companions, ICE curves.

Think of a complex model as a room full of levers, each representing a feature like “income” or “house age.” The model’s prediction is the final sound the machine makes after pulling all the levers in some intricate combination. A Partial Dependence Plot (PDP) asks a simple, powerful question: what happens to the average prediction if we move just one lever through its entire range, while leaving every other lever wherever it already sits in each row of our real data? It shows you the main, average effect of that single feature.
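Written out, the idea is this: if x_S is the feature of interest and x_C stands for all the other features, the estimated partial dependence is simply an average of the model’s predictions over the rows of the data (this is the standard definition, not tied to any particular library):

\hat{f}_S(x_S) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}\left(x_S, x_C^{(i)}\right)

where x_C^{(i)} holds row i’s observed values of the other features.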

Let’s get practical. We’ll use a classic dataset about California housing prices. Our goal is to predict a home’s value. We have features like median income, average house age, and average number of rooms. After preparing the data, we can train a model. I often use a Gradient Boosting machine for tasks like this; it’s powerful and captures complex patterns.

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import fetch_california_housing

# Load data
california = fetch_california_housing()
X, y = california.data, california.target
feature_names = california.feature_names

# Train a model
model = GradientBoostingRegressor(n_estimators=100, random_state=42)
model.fit(X, y)

Now, we have a predictor. But what does it think about income? To build a PDP for the ‘MedInc’ feature, we follow that core idea. We take all the houses in our dataset. For each house, we replace its actual income value with a specific test value, say 3.0. We let the model make a prediction for each of these new, artificial data points, and then we average all those predictions. We repeat this for many test values across the income range. The resulting line is the Partial Dependence Plot.

import numpy as np

def partial_dependence_1d(model, X, feature_index, grid_points=50):
    """A simple manual calculation for a 1D PDP."""
    # Create the grid of values for the feature of interest
    feature_values = np.linspace(X[:, feature_index].min(),
                                 X[:, feature_index].max(),
                                 grid_points)
    predictions = []
    
    for value in feature_values:
        # Create a temporary copy of the data
        X_temp = X.copy()
        # Set the feature of interest to the current grid value for all samples
        X_temp[:, feature_index] = value
        # Predict and average
        preds = model.predict(X_temp)
        predictions.append(preds.mean())
    
    return feature_values, np.array(predictions)

# Calculate PDP for MedInc (feature index 0)
grid, pdp_values = partial_dependence_1d(model, X, 0)
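Before we interpret it, let’s actually look at the curve we just computed. A minimal plot of those values (the styling here is just a suggestion) could look like this:

import matplotlib.pyplot as plt

# Plot the manually computed partial dependence curve for MedInc
plt.figure(figsize=(8, 5))
plt.plot(grid, pdp_values, color='crimson', linewidth=2)
plt.xlabel('Median Income')
plt.ylabel('Average Predicted House Value')
plt.title('Partial Dependence of Predicted Value on Median Income')
plt.show()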

What does this plot tell us? For our housing model, we’d likely see a line that rises steadily. This makes intuitive sense: as median income in a block increases, the model predicts higher house prices on average. The PDP confirms the model learned this basic, global trend. But what if the story isn’t the same for every house?

This is where ICE curves come in. While a PDP shows the average effect, an ICE curve shows you the journey for a single, specific house. It answers: “For this particular home, how does the prediction change as we vary income?” When you plot many ICE curves together, you see the variation that the PDP average hides.

import matplotlib.pyplot as plt

def ice_curves(model, X, feature_index, n_samples=30, grid_points=50):
    """Calculate ICE curves for a random sample of instances."""
    grid = np.linspace(X[:, feature_index].min(),
                       X[:, feature_index].max(),
                       grid_points)

    # Sample some instances
    sample_idx = np.random.choice(X.shape[0], n_samples, replace=False)
    ice_vals = []

    for i in sample_idx:
        # Repeat this instance across the grid, varying only the feature of interest
        X_repeated = np.tile(X[i], (grid_points, 1))
        X_repeated[:, feature_index] = grid
        # One batched predict call per instance instead of one call per grid point
        ice_vals.append(model.predict(X_repeated))

    return grid, np.array(ice_vals), sample_idx

# Get ICE curves
ice_grid, ice_data, sampled_idx = ice_curves(model, X, 0)

# Plot
plt.figure(figsize=(10, 6))
for i in range(len(ice_data)):
    plt.plot(ice_grid, ice_data[i], color='steelblue', alpha=0.3, linewidth=0.8)
# Overlay the PDP (the average)
plt.plot(grid, pdp_values, color='crimson', linewidth=3, label='PDP (Average)')
plt.xlabel('Median Income')
plt.ylabel('Predicted House Value')
plt.legend()
plt.title('ICE Curves and Partial Dependence for Median Income')
plt.show()

Looking at this plot, you might ask: Do all these lines rise in the same way? Or do some flatten out at higher incomes, suggesting the model thinks other factors become more important for expensive homes? ICE curves can reveal these subgroups and heterogeneities. They tell you if the average story is the only story.
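One trick I find helpful for spotting this heterogeneity is to center every ICE curve at the left edge of the grid, so each line shows a home’s change relative to its own starting point (often called centered ICE, or c-ICE). A quick sketch, reusing the ice_grid and ice_data arrays from above:

# Center each ICE curve at its prediction for the lowest income grid point (c-ICE)
ice_centered = ice_data - ice_data[:, [0]]

plt.figure(figsize=(10, 6))
for curve in ice_centered:
    plt.plot(ice_grid, curve, color='steelblue', alpha=0.3, linewidth=0.8)
plt.plot(ice_grid, ice_centered.mean(axis=0), color='crimson', linewidth=3,
         label='Centered average')
plt.xlabel('Median Income')
plt.ylabel('Change in Prediction vs. Lowest Income')
plt.legend()
plt.title('Centered ICE Curves for Median Income')
plt.show()

Curves that fan out from zero at very different rates are exactly the kind of heterogeneity the plain PDP averages away.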

So far, we’ve looked at features in isolation. But what about when two features work together? Does the effect of having more rooms depend on the age of the house? To see this, we need a two-way Partial Dependence Plot. Instead of a line, this creates a surface or a heatmap.

from sklearn.inspection import PartialDependenceDisplay

# Using scikit-learn's built-in functionality for a 2D plot
fig, ax = plt.subplots(figsize=(8, 6))
# We'll look at the interaction between 'AveRooms' (index 2) and 'HouseAge' (index 1)
display = PartialDependenceDisplay.from_estimator(model, X, [(2, 1)],
                                                  feature_names=feature_names, ax=ax)
ax.set_title("2D PDP: Rooms vs. House Age")
plt.show()

A heatmap might show that for newer houses, adding rooms dramatically increases predicted value, but for very old houses, the effect is muted. This is a feature interaction, and spotting it is crucial for true understanding. It can guide feature engineering or validate business logic.
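If you want the numbers behind that surface rather than just the picture, scikit-learn’s partial_dependence function returns the grid and the averaged predictions so you can inspect them directly. A rough sketch follows; note that the grid key is named grid_values in recent scikit-learn releases (values in older ones), and the “interaction check” at the end is only a crude illustration, not a formal test:

from sklearn.inspection import partial_dependence

# Raw 2D partial dependence grid for 'AveRooms' (index 2) and 'HouseAge' (index 1)
pd_result = partial_dependence(model, X, features=[(2, 1)], grid_resolution=20)

avg = pd_result['average'][0]                    # shape: (rooms grid, age grid)
rooms_grid, age_grid = pd_result['grid_values']  # use pd_result['values'] on older versions

# Crude interaction check: how much does going from the fewest to the most rooms
# change the prediction for the newest houses versus the oldest ones?
rooms_effect_new = avg[-1, 0] - avg[0, 0]
rooms_effect_old = avg[-1, -1] - avg[0, -1]
print(f"Effect of rooms (newest houses): {rooms_effect_new:.3f}")
print(f"Effect of rooms (oldest houses): {rooms_effect_old:.3f}")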

You might wonder, with tools like SHAP available, why start with PDPs and ICE? They serve different purposes. SHAP is fantastic for explaining individual predictions. PDPs and ICE are about understanding the global structure of the model itself—its internal logic across the entire dataset. They help you debug the model: did it learn a nonsensical, unexpected relationship? They help you validate it: does the trend match domain expertise? They help you communicate: you can show a clear graph of “when income goes up, our model says price goes up, like this.”

However, they require careful use. Be cautious with features that are strongly correlated. The PDP shows you what happens when you force income to a low value while every other feature, some of which would normally be low as well, keeps its original value. This can create combinations of feature values that never occur in the real data, and averaging predictions over those unrealistic points can be misleading. Also, computing PDPs can be slow for large datasets or fine grids.
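One simple mitigation for the speed problem is to compute the plot on a random subset of rows with a coarser grid, trading a little precision for a lot of time. A sketch along those lines; the 2,000-row sample and the grid resolution of 20 are arbitrary choices, not recommendations:

# Compute the PDP on a random subset of rows and a coarser grid to save time
rng = np.random.default_rng(42)
subset = rng.choice(X.shape[0], size=2000, replace=False)

fig, ax = plt.subplots(figsize=(8, 5))
PartialDependenceDisplay.from_estimator(model, X[subset], features=[0],
                                        grid_resolution=20, ax=ax)
fig.suptitle('PDP for MedInc on a 2,000-row sample')
plt.show()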

In practice, I use these plots as part of a model review checklist. After training, before deployment, I generate PDPs for key features and 2D plots for suspected interactions. It’s a reality check. It transforms the model from a mysterious number-generator into a system whose reasoning I can interrogate and explain.
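As a concrete version of that checklist step, a single call can draw PDPs with ICE overlays for several features at once. Which features count as “key” is problem-specific; the three below are just the first few in this dataset:

# One review-pass figure: PDP plus ICE curves for a handful of features
fig, ax = plt.subplots(figsize=(12, 4))
PartialDependenceDisplay.from_estimator(
    model, X, features=[0, 1, 2],   # MedInc, HouseAge, AveRooms
    kind='both',                    # average curve plus individual ICE lines
    subsample=50,                   # draw only 50 ICE lines to keep the plot readable
    feature_names=feature_names,
    ax=ax,
)
plt.show()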

With your next model, I encourage you not to stop at the metric. Ask it “why?” Plot its partial dependence. Look at the individual curves. Search for the interactions. You’ll be surprised at what you learn, and you’ll build not just accurate models, but trustworthy and intelligible ones. If this journey from black box to clarity resonates with you, please share your thoughts and experiences in the comments below. Let’s keep the conversation about responsible, understandable AI going.





