
ml-ease

ADMM based large scale logistic regression

Table of Contents

  • What is ml-ease?
  • Copyright
  • Open Source Software included in ml-ease
  • What is ADMM?
  • Quick Start
  • Code structure
  • Input Data Format
  • Output Models
  • Detailed Instructions
  • Supporting Team

What is ml-ease?

ml-ease is LinkedIn's open-source large-scale machine learning library; it currently provides ADMM-based large-scale logistic regression.

Copyright

Copyright 2014 LinkedIn Corporation. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. The license information on third-party code is included in NOTICE.

Open Source Software included in ml-ease

What is ADMM?

ADMM stands for Alternating Direction Method of Multipliers (Boyd et al. 2011). The basic idea of ADMM is as follows: ADMM treats large-scale logistic regression model fitting as a convex optimization problem with constraints, and the algorithm is guaranteed to converge. While minimizing the user-defined loss function, it enforces an extra constraint that the coefficients from all partitions have to be equal. To solve this optimization problem, ADMM uses an iterative process. In each iteration it partitions the big data into many small partitions and fits an independent logistic regression on each partition. Then it aggregates the coefficients collected from all partitions, learns the consensus coefficients, and sends them back to all partitions for retraining. After 10-20 iterations, it ends up with a converged solution that is theoretically close to what you would have obtained if you had trained on a single machine. The update equations are sketched below.
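
For reference, here is a minimal sketch of the standard consensus-ADMM updates from Boyd et al. (2011); the notation is ours, not the code's: \ell_k is the logistic loss on partition k, z the consensus coefficients, u_k the scaled dual variables, \rho the penalty parameter, and \lambda the L2 penalty.

  \begin{aligned}
  \beta_k^{t+1} &= \arg\min_{\beta_k}\; \ell_k(\beta_k) + \tfrac{\rho}{2}\,\lVert \beta_k - z^{t} + u_k^{t} \rVert_2^2 && \text{(independent fit per partition)} \\
  z^{t+1} &= \arg\min_{z}\; \lambda\,\lVert z \rVert_2^2 + \tfrac{\rho}{2}\sum_{k=1}^{K} \lVert \beta_k^{t+1} - z + u_k^{t} \rVert_2^2 && \text{(consensus step)} \\
  u_k^{t+1} &= u_k^{t} + \beta_k^{t+1} - z^{t+1} && \text{(dual update)}
  \end{aligned}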

Quick Start

  • Installation: ml-ease uses Maven to compile. Command: "mvn clean install"
  • Run ADMM:
    1. Copy the jar ./target/ml-ease-1.0-jar-with-dependencies.jar to a Hadoop gateway, and then set up a config file like ./examples/sample-config.job.
    2. To run ADMM: hadoop jar ml-ease-1.0-jar-with-dependencies.jar com.linkedin.mlease.regression.jobs.Regression sample-config.job
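
For orientation, below is a hypothetical config sketch assembled from the parameter names documented under Detailed Instructions; the exact keys and syntax of ./examples/sample-config.job may differ, and the paths and values here are made up.

  # Hypothetical sample-config.job sketch; see ./examples/sample-config.job.
  input.paths=/data/train            # training data path
  output.path=/models/ml-ease        # ROOT output path of the model directory
  test.path=/data/test               # test data, for per-iteration loglik
  num.blocks=100                     # number of ADMM partitions
  num.iters=20                       # number of ADMM iterations
  lambda=1,10,100                    # L2 penalty parameters to try in one run
  has.intercept=true
  binary.feature=true
  num.click.replicates=5             # should be less than num.blocks
  epsilon=0.0001                     # convergence parameter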

Code structure

  • You can see that under src/main/java there are two directories: avro and bjson. Naturally, the avro directory has the code for data in Avro format.
  • Inside the avro directory you will see 10-15 Java classes. It might look a little overwhelming at first, but the two most important classes are:
    1. AdmmPrepare.java is the job you need to call before you run ADMM (i.e. AdmmTrain.java). This job performs sanity checks on the input data format and assigns a partition id to each sample.
    2. AdmmTrain.java is the job you need to call for training logistic regression with L2 regularization. It allows you to specify multiple L2 penalty parameters (i.e. lambda) in one run, and if you specify test data, it will grab the first 1M samples or the first partition in your test data path and output the log-likelihood for each iteration and each lambda value.
  • Other useful hadoop jobs include:
    1. AdmmTest.java is a job for testing the model trained by AdmmTrain.java on test data.
    2. AdmmTestLoglik.java is a job for computing loglikelihood given the output from AdmmTest.java.
    3. NaiveTrain.java is a job for training independent logistic regressions per key (this is very useful when you want to train, say, a per-item model on Hadoop while the data for each item is quite small).
  • A standard flow for training and testing with ADMM is: AdmmPrepare -> AdmmTrain -> AdmmTest -> AdmmTestLoglik. A command sketch follows this list.
  • A sample job flow in src/main/batch/avro: AdmmPrepare-avro -> AdmmTrain-avro -> AdmmTest-avro -> AdmmTestLoglik-avro
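
Assuming each of the jobs above is launched the same way as the Regression driver in Quick Start, the flow would look roughly like the sketch below. The class names come from the list above, but the fully qualified package path and the per-job invocation pattern are assumptions, not confirmed by this README.

  # Hypothetical command sketch for the standard ADMM flow.
  JAR=ml-ease-1.0-jar-with-dependencies.jar
  hadoop jar $JAR com.linkedin.mlease.regression.jobs.AdmmPrepare    prepare-config.job
  hadoop jar $JAR com.linkedin.mlease.regression.jobs.AdmmTrain      train-config.job
  hadoop jar $JAR com.linkedin.mlease.regression.jobs.AdmmTest       test-config.job
  hadoop jar $JAR com.linkedin.mlease.regression.jobs.AdmmTestLoglik loglik-config.job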

Input Data Format

  • The data must have two fields:
    1. response:int,
    2. features:[{name:string, term:string, value:float}]
  • All the other fields are optional.
  • We define a feature string to be represented as name + term. For example, suppose you have a feature called age=[10,20], i.e. age between 10 and 20 years old. The feature can be represented by:
  name="age"
  term="[10,20]"
  value=1.0
  • If you don't want to use two fields to represent one feature, feel free to set term to the empty string. Note that name should never be an empty string.
  • Training and test data should have the same format. If not, it is probably fine as long as both have the fields "response" and "features".
  • The intercept does not need to be included in the training data.
  • Below is a sample of training/test data:
Record 1:
  {
    "response" : 0,
    "features" : [ {
      "name" : "7",
      "term" : "33",
      "value" : 1.0
    }, {
      "name" : "8",
      "term" : "151",
      "value" : 1.0
    }, {
      "name" : "3",
      "term" : "0",
      "value" : 1.0
    }, {
      "name" : "12",
      "term" : "132",
      "value" : 1.0
    } ],
    "weight" : 1.0,
    "offset" : 0.0,
    "foo" : "whatever"
  }
  • "weight" is an optional field that specifies the weight of the observation. Default is 1.0. If you feel some observations are stronger than others, feel free to use this field, e.g. by giving the weak ones 0.5.
  • "offset" is an optional field. Default is 0.0. When it is non-zero, the model will learn the coefficients beta through x'beta + offset instead of x'beta.
  • "foo" is an extra field included to show that ADMM allows you to put extra fields in the data.
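
Since the input is Avro, here is a hedged sketch of what a matching Avro schema could look like. The field names and types come from the format above; the record names and the optional-field defaults are illustrative guesses, and the actual schema used by ml-ease may differ.

  {
    "type" : "record",
    "name" : "TrainingExample",
    "fields" : [
      { "name" : "response", "type" : "int" },
      { "name" : "features", "type" : { "type" : "array", "items" : {
          "type" : "record", "name" : "Feature", "fields" : [
            { "name" : "name",  "type" : "string" },
            { "name" : "term",  "type" : "string" },
            { "name" : "value", "type" : "float" } ] } } },
      { "name" : "weight", "type" : "float", "default" : 1.0 },
      { "name" : "offset", "type" : "float", "default" : 0.0 }
    ]
  }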

Output Models

  • Output Directory Structure
    1. "best-model" saves the best model based on the sample test data over all iterations for all lambdas. Use it WISELY because it may not mean the real "best-model". For example, if your sample test data is too small, the variance of the model performance will become high so that the best model doesn't mean anything. In that case use models from "final-model" instead.
    2. "final-model" saves the models for EACH lambda after the last iteration. This is always a safe choice and it is strongly recommended.
    3. "sample-test-loglik" saves the sample test loglikelihood over all iterations for all lambdas. This is simply for you to study the convergence.
  • Output Model Format: A model output record has the following format:
 key:string,
 model:[{name:string, term:string, value:float}]

"key" column saves the lambda value. "model" saves the model output for that lambda. "name" + "term" again represents a feature string, and "value" is the learned coefficient. Note that the first element of the model is always "(INTERCEPT)", which means the intercept. Below is a sample of the learned model for lambda = 1.0 and 2.0:

Record 1:
  {
    "key" : "1.0",
    "model" : [ {
      "name" : "(INTERCEPT)",
      "term" : "",
      "value" : -2.5
    }, {
      "name" : "7",
      "term" : "33",
      "value" : 0.98
    }, {
      "name" : "8",
      "term" : "151",
      "value" : 0.34
    }, {
      "name" : "3",
      "term" : "0",
      "value" : -0.4
    }, {
      "name" : "12",
      "term" : "132",
      "value" : -0.3
    } ]
  }
Record 2:
  {
    "key" : "2.0",
    "model" : [ {
      "name" : "(INTERCEPT)",
      "term" : "",
      "value" : -2.5
    }, {
      "name" : "7",
      "term" : "33",
      "value" : 0.83
    }, {
      "name" : "8",
      "term" : "151",
      "value" : 0.32
    }, {
      "name" : "3",
      "term" : "0",
      "value" : -0.3
    }, {
      "name" : "12",
      "term" : "132",
      "value" : -0.1
    } ]
  }
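
To make the model format concrete, here is a hedged Java sketch of scoring one example with a learned model; this is not code from ml-ease, and the class name, method names, and feature-key separator are our own choices. It treats name + term as the feature key and computes P(response = 1) = sigmoid(x'beta + offset), matching the formulas above.

  import java.util.HashMap;
  import java.util.Map;

  /** Hypothetical scorer for a model record parsed into a coefficient map. */
  public final class ModelScorer {
    // A feature string is name + term; "\u0001" is an arbitrary separator.
    private static String key(String name, String term) {
      return name + "\u0001" + term;
    }

    /** P(response = 1) = sigmoid(intercept + offset + sum_j beta_j * x_j). */
    public static double score(Map<String, Double> coefficients, double intercept,
                               Map<String, Double> example, double offset) {
      double xBeta = intercept + offset;
      for (Map.Entry<String, Double> feature : example.entrySet()) {
        Double beta = coefficients.get(feature.getKey());
        if (beta != null) {
          xBeta += beta * feature.getValue();
        }
      }
      return 1.0 / (1.0 + Math.exp(-xBeta));
    }

    public static void main(String[] args) {
      Map<String, Double> model = new HashMap<>();
      model.put(key("7", "33"), 0.98);   // coefficients from Record 1 above
      model.put(key("8", "151"), 0.34);
      Map<String, Double> example = new HashMap<>();
      example.put(key("7", "33"), 1.0);
      example.put(key("8", "151"), 1.0);
      // intercept -2.5 from the "(INTERCEPT)" entry; prints roughly 0.235
      System.out.println(score(model, -2.5, example, 0.0));
    }
  }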

Detailed Instructions

AdmmPrepare Job

  • input.paths
    • The input path of training data
  • output.path
    • The ROOT output path of the model directory
  • num.blocks
    • Number of partitions for ADMM; choose a value large enough that your memory doesn't blow up.
    • But note: the smaller this value is, the better the convergence becomes.
  • binary.feature
    • Are all the features in this data binary? true/false
  • map.key
    • The field name used for partitioning the data. This is only useful for training per-item models using NaiveTrain.java.
  • num.click.replicates
    • This is a trick: for sparse data sets where positives are rare, we replicate the clicks into N copies and give each copy weight 1/N. This helps convergence, but also makes the data larger. Note that it should be less than the number of partitions.
    • Example value: 1 when num.blocks < 10.
    • Example value: 5 when num.blocks > 10.

AdmmTrain Job

  • It shares parameters with the AdmmPrepare job, e.g. num.blocks, binary.feature, num.click.replicates. Please make sure they are the same as those specified in the AdmmPrepare job.
  • input.paths:
    • Output path of the AdmmPrepare job
    • Example: ADMM-Prepare.output.path/tmp-data
  • output.model.path
    • The ROOT output path of the model directory
    • Example: ADMM-Prepare.output.path
  • test.path
    • The test data path
  • has.intercept
    • Whether the model has an intercept or not; if not, the intercept will be 0
  • lambda
    • L2 Penalty parameters
    • Example values: 1,10,100
  • num.iters
    • Number of ADMM iterations
    • Example value: 20
  • remove.tmp.dir
    • Whether to remove the tmp directories or not
  • epsilon
    • Convergence parameter of ADMM
    • Example value: 0.0001
  • lambda.map
    • Location of the lambda map on HDFS. This is for specifying different L2 penalty parameters for different coefficients. There is no need to use it in most cases.
  • short.feature.index
    • How many features do you have? If the number is less than Short.MAX_VALUE (32767), set this to true; otherwise false.
  • test.loglik.per.iter
    • Output the test log-likelihood per iteration? Usually setting this to true is good.

AdmmTest Job

  • input.paths
    • The test data path
  • output.base.path
    • The ROOT path of output for test results
  • model.base.path
    • The ROOT path of the model output

AdmmTestLoglik Job

  • This job will put a /_loglik subdir inside each test prediction directory.
  • input.base.paths
    • The ROOT path of output for test results
  • output.base.path
    • The ROOT path of output for test-loglik results
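
For reference, the quantity reported here is presumably the standard weighted log-likelihood of logistic regression on the test data; the exact aggregation inside AdmmTestLoglik.java is an assumption. With response y_i in {0, 1}, weight w_i, and offset as in Input Data Format:

  \log L \;=\; \sum_{i} w_i \left[\, y_i \log p_i + (1 - y_i) \log (1 - p_i) \,\right],
  \qquad p_i \;=\; \frac{1}{1 + e^{-(x_i^{\top}\beta \,+\, \mathrm{offset}_i)}}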

NaiveTrain job

  • This job is mainly for training per-item models, i.e. for each item (such as campaign_id or creative_id) it trains an independent regression model.
  • It can also be used for training one logistic regression model on large-scale data. That's why it is called "naive" train: it splits the data into partitions, trains an independent regression model on each partition, and then takes the average of the coefficients (sketched after this list).
  • The job parameters are very similar to AdmmTrain.java, except:
  • compute.model.mean: Whether to compute the mean of the coefficients learned from each partition. Set it to true only if you are training one regression model on large data using the naive method. true/false
  • data.size.threshold: For per-item models, ignore an item if its data size is smaller than this threshold.
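
A one-line sketch of the naive averaging, with notation that is ours: given K partitions with independently fitted coefficients \hat{\beta}_k, the returned model is

  \bar{\beta} \;=\; \frac{1}{K} \sum_{k=1}^{K} \hat{\beta}_k

Unlike ADMM, there is no consensus constraint tying the partitions together, which is why ADMM's iterative scheme generally gets closer to the single-machine fit.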

Supporting Team

This tool was developed by the Applied Relevance Science team at LinkedIn. People who contributed to this tool include:

  • Deepak Agarwal
  • Bee-Chung Chen
  • Bo Long
  • Liang Zhang
