AutoMLPipeline

A package that makes it trivial to create and evaluate machine learning pipeline architectures.

AutoMLPipeline (AMLP) is a package that makes it trivial to create complex ML pipeline structures using simple expressions. It leverages the built-in macro programming features of Julia to symbolically process and manipulate pipeline expressions, making it easy to discover optimal structures for machine learning regression and classification.

To illustrate, here is a pipeline expression and its evaluation for a typical machine learning workflow that extracts numerical features (numf) for ica (Independent Component Analysis) and pca (Principal Component Analysis) transformations, concatenates them with the hot-bit (one-hot) encoding (ohe) of the categorical features (catf) of a given dataset, and feeds everything to rf (Random Forest) modeling:

model = (catf |> ohe) + (numf |> pca) + (numf |> ica) |> rf  # compose the pipeline
fit!(model,Xtrain,Ytrain)                                    # train on the training set
prediction = transform!(model,Xtest)                         # predict on the test set
score(:accuracy,prediction,Ytest)                            # evaluate prediction accuracy
crossvalidate(model,X,Y,"balanced_accuracy_score")           # cross-validate

Take note that + has higher precedence than |>, so if you are not sure, enclose the operations inside parentheses.

### these two expressions are the same
a |> b + c; a |> (b + c)

### these two expressions are the same
a + b |> c; (a + b) |> c

Please read this AutoMLPipeline Paper for benchmark comparisons.

  • JuliaCon Proceedings: DOI

More examples can be found in the examples folder, including pipelines optimized via multi-threading or distributed computing.

Motivations

The typical workflow in machine learning classification or prediction requires some combination of the following preprocessing steps together with modeling:

  • feature extraction (e.g. ica, pca, svd)
  • feature transformation (e.g. normalization, scaling, ohe)
  • feature selection (e.g. anova, correlation)
  • modeling (rf, adaboost, xgboost, lm, svm, mlp)

Each step has several choices of functions to use, together with their corresponding parameters. Optimizing the performance of the entire pipeline is a combinatorial search over the proper order and combination of preprocessing steps, the optimization of their corresponding parameters, and the search for the optimal model and its hyper-parameters.

Because of close dependencies among the various steps, we can consider the entire process to be a pipeline optimization problem (POP). POP requires simultaneous optimization of pipeline structure and parameter adaptation of its elements. As a consequence, having an elegant way to express pipeline structure can help lessen the complexity in the management and analysis of the wide array of optimization choices.

Future work will target implementations of different pipeline optimization algorithms, ranging from evolutionary approaches and integer programming (discrete choices of POP elements) to tree/graph search and hyper-parameter search.

Package Features

  • Symbolic pipeline API for easy expression and high-level description of complex pipeline structures and processing workflow
  • Common API wrappers for ML libraries including scikit-learn, DecisionTree, etc.
  • Easily extensible architecture by overloading just two main interfaces: fit! and transform!
  • Meta-ensembles that allow composition of ensembles of ensembles (recursively if needed) for robust prediction routines
  • Categorical and numerical feature selectors for specialized preprocessing routines based on types

Installation

AutoMLPipeline is in the official Julia package registry. The latest release can be installed with Julia's package manager, which is triggered by pressing ] at the julia prompt:

julia> ]
pkg> update
pkg> add AutoMLPipeline
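
Equivalently, you can install from a script without entering Pkg mode by using the Pkg API (the same approach this README uses later to install AbstractTrees):

using Pkg
Pkg.update()
Pkg.add("AutoMLPipeline")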

Sample Usage

Below outlines a typical way to preprocess and model a dataset.

1. Load Data, Extract Input (X) and Target (Y)
# Make sure that the input feature is a dataframe and the target output is a 1-D vector.
using AutoMLPipeline
profbdata = getprofb()
X = profbdata[:,2:end] 
Y = profbdata[:,1] |> Vector;
head(x)=first(x,5)
head(profbdata)
5×7 DataFrame. Omitted printing of 1 columns
│ Row │ Home.Away │ Favorite_Points │ Underdog_Points │ Pointspread │ Favorite_Name │ Underdog_name │
│     │ String    │ Int64           │ Int64           │ Float64     │ String        │ String        │
├─────┼───────────┼─────────────────┼─────────────────┼─────────────┼───────────────┼───────────────┤
│ 1   │ away      │ 27              │ 24              │ 4.0         │ BUF           │ MIA           │
│ 2   │ at_home   │ 17              │ 14              │ 3.0         │ CHI           │ CIN           │
│ 3   │ away      │ 51              │ 0               │ 2.5         │ CLE           │ PIT           │
│ 4   │ at_home   │ 28              │ 0               │ 5.5         │ NO            │ DAL           │
│ 5   │ at_home   │ 38              │ 7               │ 5.5         │ MIN           │ HOU           │

2. Load Filters, Transformers, and Learners

using AutoMLPipeline

#### Decomposition
pca = skoperator("PCA")
fa  = skoperator("FactorAnalysis")
ica = skoperator("FastICA")

#### Scaler 
rb   = skoperator("RobustScaler")
pt   = skoperator("PowerTransformer")
norm = skoperator("Normalizer")
mx   = skoperator("MinMaxScaler")
std  = skoperator("StandardScaler")

#### categorical preprocessing
ohe = OneHotEncoder()

#### Column selector
catf = CatFeatureSelector()
numf = NumFeatureSelector()
disc = CatNumDiscriminator()

#### Learners
rf       = skoperator("RandomForestClassifier")
gb       = skoperator("GradientBoostingClassifier")
lsvc     = skoperator("LinearSVC")
svc      = skoperator("SVC")
mlp      = skoperator("MLPClassifier")
ada      = skoperator("AdaBoostClassifier")
sgd      = skoperator("SGDClassifier")
skrf_reg = skoperator("RandomForestRegressor")
skgb_reg = skoperator("GradientBoostingRegressor")
jrf      = RandomForest()
tree     = PrunedTree()
vote     = VoteEnsemble()
stack    = StackEnsemble()
best     = BestLearner()

Note: You can get a listing of available Preprocessors and Learners by invoking the function:

  • skoperator()
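
For example (a quick sketch; the exact listing depends on the installed version):

skoperator()              # with no arguments, prints the names of available preprocessors and learners
pca = skoperator("PCA")   # with a name, returns an instance of that operator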

3. Filter categories and hot-encode them

pohe = catf |> ohe
tr = fit_transform!(pohe,X,Y)
head(tr)
5×56 DataFrame. Omitted printing of 47 columns
│ Row │ x1      │ x2      │ x3      │ x4      │ x5      │ x6      │ x7      │ x8      │ x9      │
│     │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ 1   │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 2   │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 3   │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 4   │ 0.0     │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 5   │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │

4. Numerical Feature Extraction Example

4.1 Filter numeric features, compute ica and pca features, and combine both features
pdec = (numf |> pca) + (numf |> ica)
tr = fit_transform!(pdec,X,Y)
head(tr)
5×8 DataFrame
│ Row │ x1       │ x2       │ x3       │ x4       │ x1_1       │ x2_1       │ x3_1       │ x4_1       │
│     │ Float64  │ Float64  │ Float64  │ Float64  │ Float64    │ Float64    │ Float64    │ Float64    │
├─────┼──────────┼──────────┼──────────┼──────────┼────────────┼────────────┼────────────┼────────────┤
│ 1   │ 2.47477  │ 7.87074  │ -1.10495 │ 0.902431 │ 0.0168432  │ 0.00319873 │ -0.0467633 │ 0.026742   │
│ 2   │ -5.47113 │ -3.82946 │ -2.08342 │ 1.00524  │ -0.0327947 │ -0.0217808 │ -0.0451314 │ 0.00702006 │
│ 3   │ 30.4068  │ -10.8073 │ -6.12339 │ 0.883938 │ -0.0734292 │ 0.115776   │ -0.0425357 │ 0.0497831  │
│ 4   │ 8.18372  │ -15.507  │ -1.43203 │ 1.08255  │ -0.0656664 │ 0.0368666  │ -0.0457154 │ -0.0192752 │
│ 5   │ 16.6176  │ -6.68636 │ -1.66597 │ 0.978243 │ -0.0338749 │ 0.0643065  │ -0.0461703 │ 0.00671696 │
4.2 Filter numeric features, apply robust scaling followed by ica and power-transform scaling followed by pca, and combine both
ppt = (numf |> rb |> ica) + (numf |> pt |> pca)
tr = fit_transform!(ppt,X,Y)
head(tr)
5×8 DataFrame
│ Row │ x1          │ x2          │ x3         │ x4        │ x1_1      │ x2_1     │ x3_1       │ x4_1      │
│     │ Float64     │ Float64     │ Float64    │ Float64   │ Float64   │ Float64  │ Float64    │ Float64   │
├─────┼─────────────┼─────────────┼────────────┼───────────┼───────────┼──────────┼────────────┼───────────┤
│ 1   │ -0.00308891 │ -0.0269009  │ -0.0166298 │ 0.0467559 │ -0.64552  │ 1.40289  │ -0.0284468 │ 0.111773  │
│ 2   │ 0.0217799   │ -0.00699717 │ 0.0329868  │ 0.0449952 │ -0.832404 │ 0.475629 │ -1.14881   │ -0.01702  │
│ 3   │ -0.115577   │ -0.0503802  │ 0.0736173  │ 0.0420466 │ 1.54491   │ 1.65258  │ -1.35967   │ -2.57866  │
│ 4   │ -0.0370057  │ 0.0190459   │ 0.065814   │ 0.0454864 │ 1.32065   │ 0.563565 │ -2.05839   │ -0.74898  │
│ 5   │ -0.0643088  │ -0.00711682 │ 0.0340452  │ 0.0459816 │ 1.1223    │ 1.45555  │ -0.88864   │ -0.776195 │

5. A Pipeline for the Voting Ensemble Classification

# take all categorical columns and hot-bit encode each, 
# concatenate them to the numerical features,
# and feed them to the voting ensemble
using AutoMLPipeline.Utils
pvote = (catf |> ohe) + (numf) |> vote
pred = fit_transform!(pvote,X,Y)
sc=score(:accuracy,pred,Y)
println(sc)
crossvalidate(pvote,X,Y,"accuracy_score")
fold: 1, 0.5373134328358209
fold: 2, 0.7014925373134329
fold: 3, 0.5294117647058824
fold: 4, 0.6716417910447762
fold: 5, 0.6716417910447762
fold: 6, 0.6119402985074627
fold: 7, 0.5074626865671642
fold: 8, 0.6323529411764706
fold: 9, 0.6268656716417911
fold: 10, 0.5671641791044776
errors: 0
(mean = 0.6057287093942055, std = 0.06724940684190235, folds = 10, errors = 0)

Note: crossvalidate() supports the following scikit-learn performance metrics (see the usage example after the lists):

classification:

  • accuracy_score, balanced_accuracy_score, cohen_kappa_score
  • jaccard_score, matthews_corrcoef, hamming_loss, zero_one_loss
  • f1_score, precision_score, recall_score

regression:

  • mean_squared_error, mean_squared_log_error
  • mean_absolute_error, median_absolute_error
  • r2_score, max_error, mean_poisson_deviance
  • mean_gamma_deviance, mean_tweedie_deviance
  • explained_variance_score
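
For instance, any metric name from the lists above can be passed as an argument; reusing the pvote pipeline from step 5 (a sketch of the call, not a recorded run):

crossvalidate(pvote,X,Y,"balanced_accuracy_score")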

6. Use @pipelinex instead of @pipeline to print the function calls corresponding to the pipeline in step 5

julia> @pipelinex (catf |> ohe) + (numf) |> vote
:(Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote))

# another way is to use @macroexpand with @pipeline
julia> @macroexpand @pipeline (catf |> ohe) + (numf) |> vote
:(Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote))

7. A Pipeline for the Random Forest (RF) Classification

# compute the pca, ica, fa of the numerical columns,
# combine them with the hot-bit encoded categorical features
# and feed all to the random forest classifier
prf = (numf |> rb |> pca) + (numf |> rb |> ica) + (numf |> rb |> fa) + (catf |> ohe) |> rf
pred = fit_transform!(prf,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(prf,X,Y,"accuracy_score")
fold: 1, 0.6119402985074627
fold: 2, 0.7611940298507462
fold: 3, 0.6764705882352942
fold: 4, 0.6716417910447762
fold: 5, 0.6716417910447762
fold: 6, 0.6567164179104478
fold: 7, 0.6268656716417911
fold: 8, 0.7058823529411765
fold: 9, 0.6417910447761194
fold: 10, 0.6865671641791045
errors: 0
(mean = 0.6710711150131694, std = 0.04231869797446545, folds = 10, errors = 0)

8. A Pipeline for the Linear Support Vector Classifier (LSVC)

plsvc = ((numf |> rb |> pca)+(numf |> rb |> fa)+(numf |> rb |> ica)+(catf |> ohe )) |> lsvc
pred = fit_transform!(plsvc,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(plsvc,X,Y,"accuracy_score")
fold: 1, 0.6567164179104478
fold: 2, 0.7164179104477612
fold: 3, 0.8235294117647058
fold: 4, 0.7164179104477612
fold: 5, 0.7313432835820896
fold: 6, 0.6567164179104478
fold: 7, 0.7164179104477612
fold: 8, 0.7352941176470589
fold: 9, 0.746268656716418
fold: 10, 0.6865671641791045
errors: 0
(mean = 0.7185689201053556, std = 0.04820829087095355, folds = 10, errors = 0)

9. A Pipeline for Random Forest Regression

iris = getiris()
Xreg = iris[:,1:3]
Yreg = iris[:,4] |> Vector
pskrfreg = (catf |> ohe) + (numf) |> skrf_reg
res=crossvalidate(pskrfreg,Xreg,Yreg,"mean_absolute_error",10)
fold: 1, 0.1827433333333334
fold: 2, 0.18350888888888886
fold: 3, 0.11627222222222248
fold: 4, 0.1254152380952376
fold: 5, 0.16502333333333377
fold: 6, 0.10900222222222226
fold: 7, 0.12561111111111076
fold: 8, 0.14243000000000025
fold: 9, 0.12130555555555576
fold: 10, 0.18811111111111098
errors: 0
(mean = 0.1459423015873016, std = 0.030924217263958102, folds = 10, errors = 0)

Note: More examples can be found in the test directory of the package. Since the code is written in Julia, you are highly encouraged to read the source code and to extend or adapt the package to your problem. Please feel free to submit PRs to improve the package's features.

10. Performance Comparison of Several Learners

10.1 Sequential Processing
using Random
using DataFrames

Random.seed!(1)
jrf  = RandomForest()
tree = PrunedTree()
disc = CatNumDiscriminator()
ada  = skoperator("AdaBoostClassifier")
sgd  = skoperator("SGDClassifier")
std  = skoperator("StandardScaler")
lsvc = skoperator("LinearSVC")

learners = DataFrame()
for learner in [jrf,ada,sgd,tree,lsvc]
   pcmc = @pipeline disc |> ((catf |> ohe) + (numf |> std)) |> learner
   println(learner.name[1:end-4])
   mean,sd,_ = crossvalidate(pcmc,X,Y,"accuracy_score",10)
   global learners = vcat(learners,DataFrame(name=learner.name[1:end-4],mean=mean,sd=sd))
end;
@show learners;
learners = 5×3 DataFrame
│ Row │ name               │ mean     │ sd        │
│     │ String             │ Float64  │ Float64   │
├─────┼────────────────────┼──────────┼───────────┤
│ 1   │ rf                 │ 0.653424 │ 0.0754433 │
│ 2   │ AdaBoostClassifier │ 0.69504  │ 0.0514792 │
│ 3   │ SGDClassifier      │ 0.694908 │ 0.0641564 │
│ 4   │ prunetree          │ 0.621927 │ 0.0578242 │
│ 5   │ LinearSVC          │ 0.726097 │ 0.0498317 │
10.2 Parallel Processing
using Random
using DataFrames
using Distributed

nprocs() == 1 && addprocs()
@everywhere using DataFrames
@everywhere using AutoMLPipeline

@everywhere profbdata = getprofb()
@everywhere X = profbdata[:,2:end] 
@everywhere Y = profbdata[:,1] |> Vector;

@everywhere jrf  = RandomForest()
@everywhere ohe  = OneHotEncoder()
@everywhere catf = CatFeatureSelector()
@everywhere numf = NumFeatureSelector()
@everywhere tree = PrunedTree()
@everywhere disc = CatNumDiscriminator()
@everywhere ada  = skoperator("AdaBoostClassifier")
@everywhere sgd  = skoperator("SGDClassifier")
@everywhere std  = skoperator("StandardScaler")
@everywhere lsvc = skoperator("LinearSVC")

learners = @sync @distributed (vcat) for learner in [jrf,ada,sgd,tree,lsvc]
   pcmc = disc |> ((catf |> ohe) + (numf |> std)) |> learner
   println(learner.name[1:end-4])
   mean,sd,_ = crossvalidate(pcmc,X,Y,"accuracy_score",10)
   DataFrame(name=learner.name[1:end-4],mean=mean,sd=sd)
end
@show learners;
      From worker 3:    AdaBoostClassifier
      From worker 4:    SGDClassifier
      From worker 5:    prunetree
      From worker 2:    rf
      From worker 6:    LinearSVC
      From worker 4:    fold: 1, 0.6716417910447762
      From worker 5:    fold: 1, 0.6567164179104478
      From worker 6:    fold: 1, 0.6865671641791045
      From worker 2:    fold: 1, 0.7164179104477612
      From worker 4:    fold: 2, 0.7164179104477612
      From worker 5:    fold: 2, 0.6119402985074627
      From worker 6:    fold: 2, 0.8059701492537313
      From worker 2:    fold: 2, 0.6716417910447762
      From worker 4:    fold: 3, 0.6764705882352942
      ....

learners = 5×3 DataFrame
│ Row │ name               │ mean     │ sd        │
│     │ String             │ Float64  │ Float64   │
├─────┼────────────────────┼──────────┼───────────┤
│ 1   │ rf                 │ 0.647388 │ 0.0764844 │
│ 2   │ AdaBoostClassifier │ 0.712862 │ 0.0471003 │
│ 3   │ SGDClassifier      │ 0.710009 │ 0.05173   │
│ 4   │ prunetree          │ 0.60428  │ 0.0403121 │
│ 5   │ LinearSVC          │ 0.726383 │ 0.0467506 │

11. Automatic Selection of Best Learner

You can use the * operation as a selector that outputs the result of the best learner. If we use the same preprocessing pipeline as in step 10, we expect the average performance of the best learner, which is lsvc, to be around 73%.

Random.seed!(1)
pcmc = disc |> ((catf |> ohe) + (numf |> std)) |> (jrf * ada * sgd * tree * lsvc)
crossvalidate(pcmc,X,Y,"accuracy_score",10)
fold: 1, 0.7164179104477612
fold: 2, 0.7910447761194029
fold: 3, 0.6911764705882353
fold: 4, 0.7761194029850746
fold: 5, 0.6567164179104478
fold: 6, 0.7014925373134329
fold: 7, 0.6417910447761194
fold: 8, 0.7058823529411765
fold: 9, 0.746268656716418
fold: 10, 0.835820895522388
errors: 0
(mean = 0.7262730465320456, std = 0.060932268798867976, folds = 10, errors = 0)

12. Learners as Transformers

It is also possible to use learners in the middle of an expression, where they act as transformers whose outputs become inputs to the final learner, as illustrated below.

expr = ( 
             ((numf |> rb)+(catf |> ohe) |> gb) + 
             ((numf |> rb)+(catf |> ohe) |> rf) 
       ) |> ohe |> ada;                
crossvalidate(expr,X,Y,"accuracy_score")
fold: 1, 0.6567164179104478
fold: 2, 0.5522388059701493
fold: 3, 0.7205882352941176
fold: 4, 0.7313432835820896
fold: 5, 0.6567164179104478
fold: 6, 0.6119402985074627
fold: 7, 0.6119402985074627
fold: 8, 0.6470588235294118
fold: 9, 0.6716417910447762
fold: 10, 0.6119402985074627
errors: 0
(mean = 0.6472124670763829, std = 0.053739947087648336, folds = 10, errors = 0)

One can even include a selector function as part of the transformer preprocessing routine:

pjrf = disc |> ((catf |> ohe) + (numf |> std)) |> 
         ((jrf * ada ) + (sgd * tree * lsvc)) |> ohe |> ada
crossvalidate(pjrf,X,Y,"accuracy_score")
fold: 1, 0.7164179104477612
fold: 2, 0.7164179104477612
fold: 3, 0.7941176470588235
fold: 4, 0.7761194029850746
fold: 5, 0.6268656716417911
fold: 6, 0.6716417910447762
fold: 7, 0.7611940298507462
fold: 8, 0.7352941176470589
fold: 9, 0.7761194029850746
fold: 10, 0.6865671641791045
errors: 0
(mean = 0.7260755048287972, std = 0.0532393731318768, folds = 10, errors = 0)

Note: The ohe is necessary in both examples because the outputs of the learners and of the selector function are categorical values that need to be hot-bit encoded before being fed to the final ada learner.

13. Tree Visualization of the Pipeline Structure

You can visualize the pipeline by using the AbstractTrees Julia package.

# package installation 
using Pkg
Pkg.update()
Pkg.add("AbstractTrees") 

# load the packages
using AbstractTrees
using AutoMLPipeline

expr = @pipelinex (catf |> ohe) + (numf |> pca) + (numf |> ica) |> rf
:(Pipeline(ComboPipeline(Pipeline(catf, ohe), Pipeline(numf, pca), Pipeline(numf, ica)), rf))

print_tree(stdout, expr)
:(Pipeline(ComboPipeline(Pipeline(catf, ohe), Pipeline(numf, pca), Pipeline(numf, ica)), rf))
├─ :Pipeline
├─ :(ComboPipeline(Pipeline(catf, ohe), Pipeline(numf, pca), Pipeline(numf, ica)))
│  ├─ :ComboPipeline
│  ├─ :(Pipeline(catf, ohe))
│  │  ├─ :Pipeline
│  │  ├─ :catf
│  │  └─ :ohe
│  ├─ :(Pipeline(numf, pca))
│  │  ├─ :Pipeline
│  │  ├─ :numf
│  │  └─ :pca
│  └─ :(Pipeline(numf, ica))
│     ├─ :Pipeline
│     ├─ :numf
│     └─ :ica
└─ :rf

Extending AutoMLPipeline

If you want to add your own filter, transformer, or learner, take note that filters and transformers process the input features but ignore the output argument, whereas learners process both their input and output arguments during fit!; transform! expects one input argument in all cases. The first step is to import the abstract types and define your own mutable structure as a subtype of either Learner or Transformer. Next, import the fit! and transform! functions so that you can overload them. You must also load the DataFrames package, because the DataFrame is the main format for data processing. Finally, implement your own fit! and transform! and export them.

using DataFrames
using AutoMLPipeline.AbsTypes

# import functions for overloading
import AutoMLPipeline.AbsTypes: fit!, transform!   

# export the new definitions for dynamic dispatch
export fit!, transform!, MyFilter

# define your filter structure
mutable struct MyFilter <: Transformer
  name::String
  model::Dict
  args::Dict
  function MyFilter(args::Dict=Dict())
      ....
  end
end

# define your fit! function. 
function fit!(fl::MyFilter, inputfeatures::DataFrame, target::Vector=Vector())
     ....
end

# define your transform! function
function transform!(fl::MyFilter, inputfeatures::DataFrame)::DataFrame
     ....
end
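
To make this concrete, below is a minimal, self-contained sketch of a custom transformer. The name ColumnLogger and its behavior (remember the column names during fit!, pass the data through unchanged during transform!) are hypothetical illustrations, not part of the package:

using DataFrames
using AutoMLPipeline.AbsTypes
import AutoMLPipeline.AbsTypes: fit!, transform!
export fit!, transform!, ColumnLogger

# hypothetical transformer: records column names during fit! and
# returns its input unchanged during transform!
mutable struct ColumnLogger <: Transformer
  name::String
  model::Dict
  args::Dict
  function ColumnLogger(args::Dict=Dict())
      new("columnlogger", Dict(), args)
  end
end

function fit!(fl::ColumnLogger, features::DataFrame, target::Vector=Vector())
  fl.model[:columns] = names(features)  # remember the training columns
  nothing
end

function transform!(fl::ColumnLogger, features::DataFrame)::DataFrame
  return features  # must return a DataFrame
end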

Note that the main format for exchanging data is the DataFrame, which requires the transform! output to be a DataFrame. The features passed as input to fit! and transform! shall be in DataFrame format too. This is necessary so that the pipeline passes the DataFrame format consistently to its corresponding filters/transformers/learners. Once you have this transformer, you can use it as part of the pipeline together with the other learners and transformers.
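
For example, a hypothetical composition that uses the ColumnLogger sketch above alongside the built-in pipeline elements:

pcustom = (catf |> ohe) + (numf |> ColumnLogger()) |> rf
crossvalidate(pcustom,X,Y,"accuracy_score")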

Feature Requests and Contributions

We welcome contributions, feature requests, and suggestions. Here is the link to open an issue for any problems you encounter. If you want to contribute, please follow the guidelines on the contributors page.

Usage Help

Usage questions can be posted in:
