
How To Apply The Describe Function After Grouping A Pyspark Dataframe?

I want to find the cleanest way to apply the describe function to a grouped DataFrame (this question could also grow into applying any DataFrame function to a grouped DataFrame). I tested grouped aggreg…

Solution 1:

Try this:

df.groupby("id").agg(F.count('v').alias('count'), F.mean('v').alias('mean'), F.stddev('v').alias('std'), F.min('v').alias('min'), F.expr('percentile(v, array(0.25))')[0].alias('%25'),  F.expr('percentile(v, array(0.5))')[0].alias('%50'), F.expr('percentile(v, array(0.75))')[0].alias('%75'), F.max('v').alias('max')).show()

Output:

+---+-----+----+------------------+---+----+---+----+----+
| id|count|mean|               std|min| %25|%50| %75| max|
+---+-----+----+------------------+---+----+---+----+----+
|  1|    2| 1.5|0.7071067811865476|1.0|1.25|1.5|1.75| 2.0|
|  2|    3| 6.0| 3.605551275463989|3.0| 4.0|5.0| 7.5|10.0|
+---+-----+----+------------------+---+----+---+----+----+
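For reference, a minimal sketch of input data that reproduces this output (the sample values are inferred from the table above; they are not given in the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed sample data: id 1 -> [1.0, 2.0], id 2 -> [3.0, 5.0, 10.0]
df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ["id", "v"],
)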

Solution 2:

If you have a utility-function module, you could put something like this in it and then call it as a one-liner.

import pyspark.sql.functions as F

def groupby_apply_describe(df, groupby_col, stat_col):
    """Provide the describe-style statistics of stat_col,
    computed per key of groupby_col.

    Parameters
    ----------
    df : Spark DataFrame
    groupby_col : str
        Column to group by.
    stat_col : str
        Column to compute the statistics on.
    """
    output = df.groupby(groupby_col).agg(
        F.count(stat_col).alias("count"),
        F.mean(stat_col).alias("mean"),
        F.stddev(stat_col).alias("std"),
        F.min(stat_col).alias("min"),
        F.expr(f"percentile({stat_col}, array(0.25))")[0].alias("%25"),
        F.expr(f"percentile({stat_col}, array(0.5))")[0].alias("%50"),
        F.expr(f"percentile({stat_col}, array(0.75))")[0].alias("%75"),
        F.max(stat_col).alias("max"),
    )
    output.orderBy(groupby_col).show()
    return output

In your case you would call groupby_apply_describe(df, 'id', 'v'). The output should match your requirements.
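A minimal usage sketch, assuming the same df as in Solution 1 (columns id and v):

stats = groupby_apply_describe(df, "id", "v")
# stats holds one row per id with count, mean, std, min,
# %25, %50, %75 and max columns, so it can be reused downstream.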

Solution 3:

Describe multiple columns at once.

Inspired by the previous answer, but tested on Spark 3.0.1.

import pyspark.sql.functions as F
from functools import reduce

group_column = 'id'
metric_columns = ['v','v1','v2']

# You will have a DataFrame in the df variable
def spark_describe(group_col, stat_col):
    return df.groupby(group_col).agg(
        F.count(stat_col).alias(f"{stat_col}_count"),
        F.mean(stat_col).alias(f"{stat_col}_mean"),
        F.stddev(stat_col).alias(f"{stat_col}_std"),
        F.min(stat_col).alias(f"{stat_col}_min"),
        F.max(stat_col).alias("{stat_col}_max"),
        F.expr(f"percentile({stat_col}, array(0.25))")[0].alias(f"{stat_col}_25pct"),
        F.expr(f"percentile({stat_col}, array(0.5))")[0].alias(f"{stat_col}_50pct"),
        F.expr(f"percentile({stat_col}, array(0.75))")[0].alias(f"{stat_col}_75pct"),   
    )

_join = lambda a, b: a.join(b, group_column, 'inner')
dff = reduce(_join, [spark_describe(group_column, c) for c in metric_columns])
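A hedged usage sketch; the input df with columns id, v, v1 and v2 is assumed, since it is not shown in the answer:

# Assumed input, e.g.:
# df = spark.createDataFrame([(1, 1.0, 2.0, 3.0), (2, 4.0, 5.0, 6.0)],
#                            ["id", "v", "v1", "v2"])
dff.orderBy(group_column).show()
# dff has one row per id and, for every metric column, the
# <col>_count, <col>_mean, <col>_std, <col>_min, <col>_max and
# <col>_25pct/_50pct/_75pct columns produced by spark_describe.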

Solution 4:

You would run this:

df.groupby("id").describe('uniform', 'normal').show()

This is the most concise form. Note, however, that in plain PySpark describe is a DataFrame method and is not exposed on the grouped object, so verify that your environment supports this call; otherwise fall back to one of the agg-based solutions above.
