Get Intermediate Data State In Scikit-learn Pipeline

Given the following example:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
import pandas

Solution 1:

As @Vivek Kumar suggested in the comments, and as I answered here, I find a debug step useful that prints information or writes intermediate dataframes to CSV:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
import pandas as pd
from sklearn.base import TransformerMixin, BaseEstimator


class Debug(BaseEstimator, TransformerMixin):

    def transform(self, X):
        print(X.shape)
        self.shape = X.shape
        # save whatever other output you want to inspect here
        return X

    def fit(self, X, y=None, **fit_params):
        return self

pipe = Pipeline([
    ("tf_idf", TfidfVectorizer()),
    ("debug", Debug()),
    ("nmf", NMF())
])

data = pd.DataFrame([["Salut comment tu vas", "Hey how are you today", "I am okay and you ?"]]).T
data.columns = ["test"]

pipe.fit_transform(data.test)

Edit

I have now added state to the debug transformer, so you can access the shape, as in the answer by @datasailor, with:

pipe.named_steps["debug"].shape
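The same idea works independently of scikit-learn. As a minimal stdlib-only sketch (the `Debug` step and the `data` here are hypothetical stand-ins, not the sklearn objects above), a passthrough step records state during `transform` that can be read back afterwards:

```python
# A passthrough step that records what flows through it, so it can be
# inspected after the run (stdlib-only stand-in for the sklearn version).
class Debug:
    def fit(self, X, y=None):
        return self  # nothing to learn

    def transform(self, X):
        self.shape = (len(X),)  # record state for later inspection
        return X                # pass the data through unchanged

data = ["Salut comment tu vas", "Hey how are you today", "I am okay and you ?"]
debug = Debug()
out = debug.fit(data).transform(data)

print(debug.shape)  # (3,) -- the recorded state survives the call
```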

Solution 2:

As far as I understand, you want the transformed training data. The pipeline is already fitted, so the vectorizer in pipe.named_steps["tf_idf"] is fitted too; just use it to transform the training data again:

pipe.named_steps["tf_idf"].transform(data.test)
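The key point is that `transform` reuses the state learned in `fit` rather than refitting. A toy stand-in (the `ToyVectorizer` below is hypothetical, not sklearn's `TfidfVectorizer`) makes this explicit:

```python
# fit() learns a vocabulary once; transform() only reuses it, so calling
# transform() on training data again does not refit anything.
class ToyVectorizer:
    def fit(self, docs):
        vocab = sorted({w for d in docs for w in d.split()})
        self.vocabulary_ = {w: i for i, w in enumerate(vocab)}
        return self

    def transform(self, docs):
        # look up each token in the vocabulary learned by fit()
        return [[self.vocabulary_[w] for w in d.split() if w in self.vocabulary_]
                for d in docs]

vec = ToyVectorizer().fit(["a b", "b c"])
print(vec.transform(["a b"]))  # [[0, 1]] -- fitted vocabulary, no refit
```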

Solution 3:

I've created a gist for this. Essentially, starting with Python 3.2, using a context manager, the code below lets you retrieve intermediate results into a dict keyed by the names of the pipeline transformers.

with intermediate_transforms(pipe):
    Xt = pipe.transform(X)
    intermediate_results = pipe.intermediate_results__

This is accomplished via the function below, but see my gist for more documentation.

import contextlib
from functools import partial

from sklearn.pipeline import Pipeline

@contextlib.contextmanager
def intermediate_transforms(pipe: Pipeline):
    # Our temporary overload of Pipeline._transform() method.
    # https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/pipeline.py
    def _pipe_transform(self, X):
        Xt = X
        for _, name, transform in self._iter():
            Xt = transform.transform(Xt)
            self.intermediate_results__[name] = Xt
        return Xt

    if not isinstance(pipe, Pipeline):
        raise ValueError(f'"{pipe}" must be a Pipeline.')

    pipe.intermediate_results__ = {}  # transformer outputs, keyed by step name
    _transform_before = pipe._transform
    pipe._transform = partial(_pipe_transform, pipe)  # Monkey-patch our _pipe_transform method.
    yield pipe  # Release our patched object to the context
    
    # Restore
    pipe._transform = _transform_before
    delattr(pipe, 'intermediate_results__')
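The monkey-patch-and-restore pattern itself needs nothing from sklearn. A stdlib-only sketch (the `Runner` class is a hypothetical stand-in for `Pipeline`, and `capture_calls` for `intermediate_transforms`) shows the mechanics:

```python
import contextlib

class Runner:
    """Hypothetical stand-in for a Pipeline with a method we want to wrap."""
    def run(self, x):
        return x * 2

@contextlib.contextmanager
def capture_calls(obj):
    obj.captured__ = []   # temporary attribute, like intermediate_results__
    original = obj.run    # the class method, bound to this instance

    def patched(x):
        result = original(x)
        obj.captured__.append(result)  # record every intermediate result
        return result

    obj.run = patched     # monkey-patch on the instance only
    try:
        yield obj         # release the patched object to the context
    finally:
        # restore the original state, as the gist does
        del obj.run
        del obj.captured__

r = Runner()
with capture_calls(r):
    r.run(3)
    r.run(4)
    results = list(r.captured__)  # copy before the attribute is removed

print(results)  # [6, 8]
```

Using `try`/`finally` around the `yield` (which the gist omits) guarantees the patch is undone even if the body of the `with` block raises.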

Solution 4:

I'm not sure exactly what your use case is, but one simple solution is this:

# get feature values by transforming x for each step, except the classifier 

x_intermediate = data.train

for step in pipe.steps[:-1]:
    x_intermediate = step[1].transform(x_intermediate)

print(x_intermediate)
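With stdlib-only stand-ins (the `Scale`, `Shift`, and `Classifier` steps below are hypothetical, not sklearn estimators), the loop amounts to:

```python
# Run the data through every step except the last (the classifier),
# keeping the intermediate value at each stage.
class Scale:
    def transform(self, xs):
        return [x * 10 for x in xs]

class Shift:
    def transform(self, xs):
        return [x + 1 for x in xs]

class Classifier:
    def predict(self, xs):
        return [x > 0 for x in xs]

steps = [("scale", Scale()), ("shift", Shift()), ("clf", Classifier())]

x_intermediate = [1, 2, 3]
for name, step in steps[:-1]:  # skip the final estimator
    x_intermediate = step.transform(x_intermediate)

print(x_intermediate)  # [11, 21, 31]
```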

Good luck-
Tony

