
The LinkedIn Fairness Toolkit (LiFT)


📣 We've moved from Bintray to Artifactory!

As of version 0.2.2, we are only publishing versions to LinkedIn's Artifactory instance rather than Bintray, which is approaching end of life.

Introduction

The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness and the mitigation of bias in large-scale machine learning workflows. The measurement module includes measuring biases in training data, evaluating fairness metrics for ML models, and detecting statistically significant differences in their performance across different subgroups. It can also be used for ad-hoc fairness analysis. The mitigation part includes a post-processing method for transforming model scores to ensure the so-called equality of opportunity for rankings (in the presence/absence of position bias). This method can be directly applied to the model-generated scores without changing the existing model training pipeline.

This library was created by Sriram Vasudevan and Krishnaram Kenthapadi (work done while at LinkedIn).

Additional Contributors:

  1. Preetam Nandy

Copyright

Copyright 2020 LinkedIn Corporation All Rights Reserved.

Licensed under the BSD 2-Clause License (the "License"). See License in the project root for license information.

Features

LiFT provides a configuration-driven Spark job for scheduled deployments, with support for custom metrics through User Defined Functions (UDFs). APIs at various levels are also exposed to enable users to build upon the library's capabilities as they see fit. One can thus opt for a plug-and-play approach or deploy a customized job that uses LiFT. As a result, the library can be easily integrated into ML pipelines. It can also be utilized in Jupyter notebooks for more exploratory fairness analyses.

LiFT leverages Apache Spark to load input data into in-memory, fault-tolerant and scalable data structures. It strategically caches datasets and any pre-computation performed. Distributed computation is balanced with single system execution to obtain a good mix of scalability and speed. For example, distance, distribution and divergence related metrics are computed on the entire dataset in a distributed manner, while benefit vectors and permutation tests (for model performance) are computed on scored dataset samples that can be collected to the driver.

The LinkedIn Fairness Toolkit (LiFT) provides the following capabilities:

  1. Measuring Fairness Metrics on Training Data
  2. Measuring Fairness Metrics for Model Performance
  3. Achieving Equality of Opportunity

As part of the model performance metrics, it also contains the implementation of a new permutation testing framework that detects statistically significant differences in model performance (as measured by an arbitrary performance metric) across different subgroups.
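As a rough illustration of that idea (plain Python rather than LiFT's Scala/Spark implementation; the metric and data here are invented), a two-subgroup permutation test looks like this:

```python
import random

def permutation_test(metric, scores_a, scores_b, num_trials=1000, seed=7):
    """Approximate p-value for the observed difference in a performance
    metric between two subgroups, via random relabeling of the pooled scores."""
    rng = random.Random(seed)
    observed = abs(metric(scores_a) - metric(scores_b))
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    extreme = 0
    for _ in range(num_trials):
        rng.shuffle(pooled)  # random reassignment of scores to the two subgroups
        diff = abs(metric(pooled[:n_a]) - metric(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / num_trials

def mean(xs):
    return sum(xs) / len(xs)

# Subgroup B's scores are systematically lower, so the observed gap is
# unlikely under random relabeling and the p-value comes out small.
p = permutation_test(mean, [0.9, 0.8, 0.85, 0.95], [0.4, 0.5, 0.45, 0.55])
```

A small p-value suggests the per-subgroup difference in the metric is unlikely to arise from random assignment alone.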

High-level details about the parameters, metrics supported and usage are described below. More details about the metrics themselves are provided in the links above.

A list of automatically downloaded direct dependencies is provided here.

Usage

Building the Library

It is recommended to use Scala 2.11.8 and Spark 2.3.0. To build, run the following:

./gradlew build

This will produce a JAR file in the ./lift/build/libs/ directory.

If you want to use the library with Spark 2.4 (and the Scala 2.11.8 default), you can specify this when running the build command.

./gradlew build -PsparkVersion=2.4.3

You can also build an artifact with Spark 2.4 and Scala 2.12.

./gradlew build -PsparkVersion=2.4.3 -PscalaVersion=2.12.11

Tests typically run with the test task. If you want to force-run all tests, you can use:

./gradlew cleanTest test --no-build-cache

To force rebuild the library, you can use:

./gradlew clean build --no-build-cache

Add a LiFT Dependency to Your Project

Please check Artifactory for the latest artifact versions.

Gradle Example

The artifacts are available in LinkedIn's Artifactory instance and in Maven Central, so you can specify either repository in the top-level build.gradle file.

repositories {
    mavenCentral()
    maven {
        url "https://linkedin.jfrog.io/artifactory/open-source/"
    }
}

Add the LiFT dependency to the module-level build.gradle file. Here are some examples for multiple recent Spark/Scala version combinations:

dependencies {
    compile 'com.linkedin.lift:lift_2.3.0_2.11:0.1.4'
}
dependencies {
    compile 'com.linkedin.lift:lift_2.4.3_2.11:0.1.4'
}
dependencies {
    compile 'com.linkedin.lift:lift_2.4.3_2.12:0.1.4'
}

Using the JAR File

Depending on the mode of usage, the built JAR can be deployed as part of an offline data pipeline, depended upon to build jobs using its APIs, or added to the classpath of a Spark Jupyter notebook or a Spark Shell instance. For example:

$SPARK_HOME/bin/spark-shell --jars target/lift_2.3.0_2.11_0.1.4.jar

Usage Examples

Measuring Dataset Fairness Metrics using the provided Spark job

LiFT provides a Spark job for measuring fairness metrics for training data, as well as for the validation or test dataset:

com.linkedin.fairness.eval.jobs.MeasureDatasetFairnessMetrics

This job can be configured using various parameters to compute fairness metrics on the dataset of interest:

1. datasetPath: Input data path
2. protectedDatasetPath: Input path to the protected dataset (optional). If not provided, the library attempts to use the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, like a memberId field. It acts as the join key for the primary dataset.
6. labelField: The label field
7. protectedAttributeField: The protected attribute field
8. uidProtectedAttributeField: The uid field (join key) for the protected attribute dataset
9. outputPath: Output data path
10. referenceDistribution: A reference distribution to compare against (optional). The only currently accepted value is UNIFORM.
11. distanceMetrics: Distance and divergence metrics like SKEWS, INF_NORM_DIST, TOTAL_VAR_DIST, JS_DIVERGENCE, KL_DIVERGENCE and DEMOGRAPHIC_PARITY (optional).
12. overallMetrics: Aggregate metrics like GENERALIZED_ENTROPY_INDEX, ATKINSONS_INDEX, THEIL_L_INDEX, THEIL_T_INDEX and COEFFICIENT_OF_VARIATION, along with their corresponding parameters.
13. benefitMetrics: The distance/divergence metrics to use as the benefit vector when computing the overall metrics. Acceptable values are SKEWS and DEMOGRAPHIC_PARITY.

The most up-to-date information on these parameters can always be found here.
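To make a couple of the metric names above concrete, here is a toy, language-agnostic sketch (plain Python, not LiFT's API; the data is invented) of a demographic parity gap and a KL divergence against a uniform reference:

```python
from math import log

def positive_rate(labels):
    """Fraction of positive (1) labels."""
    return sum(labels) / len(labels)

def kl_divergence(p, q):
    """KL divergence between two discrete distributions p and q."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical labels per protected-attribute value
group_labels = {"A": [1, 0, 1, 1, 0], "B": [0, 0, 1]}

# DEMOGRAPHIC_PARITY-style gap: difference in positive-label rates
dp_gap = abs(positive_rate(group_labels["A"]) - positive_rate(group_labels["B"]))

# KL_DIVERGENCE of the observed group-size distribution vs. a UNIFORM reference
counts = [len(v) for v in group_labels.values()]
observed = [c / sum(counts) for c in counts]
uniform = [1 / len(counts)] * len(counts)
kl = kl_divergence(observed, uniform)
```

LiFT computes these kinds of quantities in a distributed fashion over the full dataset; the sketch only shows what the numbers mean.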

The Spark job performs no preprocessing of the input data, and it assumes, for example, that the unique ID field (the join key) is stored in the same format in both the input data and the protected attribute data. If this is not the case for your dataset, you can create your own Spark job similar to the provided example (described below).

Measuring Model Fairness Metrics using the provided Spark job

LiFT provides a Spark job for measuring fairness metrics for model performance, based on the labels and scores of the test or validation data:

com.linkedin.fairness.eval.jobs.MeasureModelFairnessMetrics

This job can be configured using various parameters to compute fairness metrics on the dataset of interest:

1. datasetPath: Input data path
2. protectedDatasetPath: Input path to the protected dataset (optional). If not provided, the library attempts to use the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, like a memberId field. It acts as the join key for the primary dataset.
6. labelField: The label field
7. scoreField: The score field
8. scoreType: Whether the scores are raw scores or probabilities. Accepted values are RAW or PROB.
9. protectedAttributeField: The protected attribute field
10. uidProtectedAttributeField: The uid field (join key) for the protected attribute dataset.
11. groupIdField: An optional field to be used for grouping, in case of ranking metrics
12. outputPath: Output data path
13. referenceDistribution: A reference distribution to compare against (optional). The only currently accepted value is UNIFORM.
14. approxRows: The approximate number of rows to sample from the input data when computing model metrics. The final sampled value is min(numRowsInDataset, approxRows).
15. labelZeroPercentage: The percentage of the sampled data that must be negatively labeled. This is useful in case the input data is highly skewed and you believe that stratified sampling will not obtain a sufficient number of examples of a certain label.
16. thresholdOpt: An optional threshold used to generate hard binary classifications. If not provided and you request metrics that depend on explicit label predictions (e.g. precision), the scoreType information is used to convert the scores into probabilities of predicting positives, which are then used to compute expected positive prediction counts.
17. numTrials: The number of trials to run the permutation test for. More trials yield results with lower variance in the computed p-value, but take more time.
18. seed: The random seed
19. distanceMetrics: Distance and divergence metrics to be computed, such as Demographic Parity and Equalized Odds.
20. permutationMetrics: The metrics to use for permutation testing
21. distanceBenefitMetrics: The model metrics to be used for computing benefit vectors, one for each distance metric specified.
22. performanceBenefitMetrics: The model metrics to be used for computing benefit vectors, one for each model performance metric specified.
23. overallMetrics: The aggregate metrics to be computed on each of the benefit vectors generated.

The most up-to-date information on these parameters can always be found here.
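The interaction between scoreType and thresholdOpt can be sketched as follows (plain Python with hypothetical helpers, not LiFT's code): with a threshold you get hard 0/1 predictions; without one, RAW scores are squashed into probabilities that can feed expected positive-prediction counts. The sigmoid here is one plausible RAW-to-probability mapping, assumed for illustration.

```python
from math import exp

def to_probability(score, score_type):
    """Map a model score to a probability of predicting positive."""
    if score_type == "PROB":
        return score
    if score_type == "RAW":
        return 1.0 / (1.0 + exp(-score))  # sigmoid: an assumed mapping
    raise ValueError("score_type must be RAW or PROB")

def predictions(scores, score_type, threshold=None):
    """Hard 0/1 predictions when a threshold is given, else probabilities."""
    probs = [to_probability(s, score_type) for s in scores]
    if threshold is None:
        return probs
    return [1 if p >= threshold else 0 for p in probs]

hard = predictions([0.2, 0.9], "PROB", threshold=0.5)  # [0, 1]
soft = predictions([0.0, 2.0], "RAW")                  # probabilities
```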

The Spark job performs no preprocessing of the input data, and it assumes, for example, that the unique ID field (the join key) is stored in the same format in both the input data and the protected attribute data. If this is not the case for your dataset, you can create your own Spark job similar to the provided example (described below).

Learning and Applying Equality of Opportunity (EOpp) on Local Datasets

An example of applying the EOpp transformation to local datasets is provided in EOppUtilsTest. We provide two simulated datasets, TrainingData.csv and ValidationData.csv, each containing 1M samples. The workflow is implemented as a test function, eOppTransformationTest(), consisting of the following steps:

  1. Learning position bias corrected EOpp transformation using the training data
  2. Applying the EOpp transformation on the validation data
  3. Checking EOpp in the transformed validation data with position bias
  4. Checking the (optional) score distribution preserving property of the EOpp transformation
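LiFT's EOpp method transforms the model scores themselves (see the referenced paper). As a much-simplified stand-in for the underlying idea, the sketch below (plain Python, a hypothetical helper, not the library's API) picks per-group score thresholds so that each group's true positive rate hits the same target:

```python
def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """Per-group score thresholds such that roughly the top target_tpr
    fraction of each group's positively labeled members clears the bar."""
    thresholds = {}
    for g in set(groups):
        # Scores of positively labeled members of this group, ascending
        pos = sorted(s for s, y, gg in zip(scores, labels, groups)
                     if y == 1 and gg == g)
        # Threshold near the (1 - target_tpr) quantile of positive scores
        k = round((1 - target_tpr) * len(pos))
        thresholds[g] = pos[min(k, len(pos) - 1)]
    return thresholds
```

Equalizing the true positive rate across groups is the "equality of opportunity" criterion; LiFT's actual method additionally accounts for position bias and can preserve the score distribution.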

Custom Spark jobs built on LiFT

If you are implementing your own driver program to measure dataset metrics, here's how you can make use of LiFT:

object MeasureDatasetFairnessMetrics { 
  def main(progArgs: Array[String]): Unit = { 
    // Get spark session
    val spark = SparkSession 
      .builder() 
      .appName(getClass.getSimpleName) 
      .getOrCreate() 
 
    // Parse args
    val args = MeasureDatasetFairnessMetricsCmdLineArgs.parseArgs(progArgs) 
 
    // Load and preprocess data
    val df = spark.read.format(args.dataFormat)
      .load(args.datasetPath)
      .select(args.uidField, args.labelField)
 
    // Load protected data and join
    val joinedDF = ...
    joinedDF.persist 

    // Obtain reference distribution (optional). This can be used to provide a
    // custom distribution to compare the dataset against.
    val referenceDistrOpt = ...

    // Compute the dataset's observed distribution from the joined data
    val distribution = ...
 
    // Passing in the appropriate parameters to this API computes and writes 
    // out the fairness metrics 
    FairnessMetricsUtils.computeAndWriteDatasetMetrics(distribution,
      referenceDistrOpt, args) 
  } 
}

A complete example for the above can be found here.

In the case of measuring model metrics, a similar Spark job can be implemented:

object MeasureModelFairnessMetrics { 
  def main(progArgs: Array[String]): Unit = { 
    // Get spark session
    val spark = SparkSession 
      .builder() 
      .appName(getClass.getSimpleName) 
      .getOrCreate() 
 
    // Parse args
    val args = MeasureModelFairnessMetricsCmdLineArgs.parseArgs(progArgs) 
 
    // Load and preprocess data
    val df = spark.read.format(args.dataFormat)
      .load(args.datasetPath)
      .select(args.uidField, args.labelField)
 
    // Load protected data and join
    val joinedDF = ...
    joinedDF.persist 

    // Obtain reference distribution (optional). This can be used to provide a
    // custom distribution to compare the dataset against.
    val referenceDistrOpt = ...
 
    // Passing in the appropriate parameters to this API computes and writes 
    // out the fairness metrics 
    FairnessMetricsUtils.computeAndWriteModelMetrics(
      joinedDF, referenceDistrOpt, args) 
  } 
}

A complete example for the above can be found here.

Contributions

If you would like to contribute to this project, please review the instructions here.

Acknowledgments

Implementations of some methods in LiFT were inspired by other open-source libraries. LiFT also contains the implementation of a new permutation testing framework. Discussions with several LinkedIn employees influenced aspects of this library. A full list of acknowledgements can be found here.

Citations

If you publish material that references the LinkedIn Fairness Toolkit (LiFT), you can use the following citations:

@inproceedings{vasudevan20lift,
    author       = {Vasudevan, Sriram and Kenthapadi, Krishnaram},
    title        = {{LiFT}: A Scalable Framework for Measuring Fairness in ML Applications},
    booktitle    = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management},
    series       = {CIKM '20},
    year         = {2020},
    pages        = {},
    numpages     = {8}
}

@misc{lift,
    author       = {Vasudevan, Sriram and Kenthapadi, Krishnaram},
    title        = {The LinkedIn Fairness Toolkit ({LiFT})},
    howpublished = {\url{https://github.com/linkedin/lift}},
    month        = aug,
    year         = 2020
}

If you publish material that references the permutation testing methodology that is available as part of LiFT, you can use the following citation:

@inproceedings{diciccio20evaluating,
    author       = {DiCiccio, Cyrus and Vasudevan, Sriram and Basu, Kinjal and Kenthapadi, Krishnaram and Agarwal, Deepak},
    title        = {Evaluating Fairness Using Permutation Tests},
    booktitle    = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
    series       = {KDD '20},
    year         = {2020},
    pages        = {},
    numpages     = {11}
}

If you publish material that references the equality of opportunity methodology that is available as part of LiFT, you can use the following citation:

@misc{nandy21mitigation,
   author        = {Preetam Nandy and Cyrus Diciccio and Divya Venugopalan and Heloise Logan and Kinjal Basu and Noureddine El Karoui},
   title         = {Achieving Fairness via Post-Processing in Web-Scale Recommender Systems}, 
   year          = {2021},
   eprint        = {2006.11350},
   archivePrefix = {arXiv}
}
