Use the TPC-DS benchmark to test Spark SQL performance

Explore Spark SQL and its performance using TPC-DS workload

Data Science Experience is now Watson Studio. Although some images in this code pattern may show the service as Data Science Experience, the steps and processes will still work.

Apache Spark is a popular distributed data processing engine built around speed, ease of use, and sophisticated analytics, with APIs in Java, Scala, Python, R, and SQL. Like other data processing engines, Spark has a unified optimization engine that computes the optimal way to execute a workload, with the main purpose of reducing disk I/O and CPU usage.

We can evaluate and measure the performance of Spark SQL using the TPC-DS benchmark. TPC-DS is a widely used industry-standard decision support benchmark for evaluating the performance of data processing engines. Because TPC-DS exercises some key data warehouse features, running it successfully reflects Spark's readiness to address the needs of a data warehouse application. Apache Spark v2.0 supports all ninety-nine decision support queries that are part of the TPC-DS benchmark.

This Code Pattern is aimed at helping Spark developers quickly set up and run the TPC-DS benchmark in their own development setup.

When the reader has completed this Code Pattern, they will understand the following:

  • How to set up the TPC-DS toolkit
  • How to generate TPC-DS datasets at different scale factors
  • How to create Spark database artifacts
  • How to run TPC-DS benchmark queries on Spark in local mode and see the results
  • Things to consider when increasing the data scale and running against a Spark cluster

Architecture diagram

Flow

  • Commandline
    1. Create the Spark tables from the pre-generated dataset.
    2. Run the entire query set, or a subset of queries, and monitor the results.
  • Notebook
    1. Create the Spark tables from the pre-generated dataset.
    2. Run the entire query set or an individual query.
    3. View the query results or performance summary.
    4. View the performance graph.

Included components

  • Apache Spark: An open-source, fast and general-purpose cluster computing system
  • Jupyter Notebook: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.

Featured technologies

  • Data Science: Systems and scientific methods to analyze structured and unstructured data in order to extract knowledge and insights.
  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Python: Python is a programming language that lets you work more quickly and integrate your systems more effectively.

Steps

There are two modes of exercising this Code Pattern:

Run locally

  1. Clone the repository
  2. Set up development tools (Optional)
  3. Install Spark
  4. Run the script

1. Clone the repository

Clone the spark-tpc-ds-performance-test repo locally. In a terminal, run:

$ git clone https://github.com/IBM/spark-tpc-ds-performance-test 

2. Set up development tools (Optional)

Due to licensing restrictions, the TPC-DS toolkit is not included as part of this code pattern. Instead, a pre-generated data set with a 1GB scale factor is included. If you want to work with a data set at a larger scale factor, or explore the full life cycle of setting up TPC-DS, you can download the toolkit from TPC-DS and compile it in your development environment.

Make sure the required development tools are installed on your platform. This Code Pattern is supported on Mac and Linux platforms only. Depending on your platform, run the following command to install the necessary development tools:

  • Ubuntu:
    $ sudo apt-get install gcc make flex bison byacc git
  • CentOS/RHEL:
    $ sudo yum install gcc make flex bison byacc git
  • MacOS:
    $ xcode-select --install

To compile the toolkit, run the following:

unzip <downloaded-tpc-ds-zipfile>
cd <tpc-ds-toolkit-version>/tools
make clean
make OS=<platform>
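# (platform can be 'macos' or 'linux')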

3. Install Spark

To successfully run the TPC-DS tests, Spark must be installed and pre-configured to work with an Apache Hive metastore.

Complete one or more of the following options to ensure that Spark is installed and configured correctly. Once done, modify bin/tpcdsenv.sh to set SPARK_HOME to your Spark installation directory, as sketched below.
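A minimal sketch of the relevant line in bin/tpcdsenv.sh (the path shown is illustrative; point it at your own installation directory):

# in bin/tpcdsenv.sh
export SPARK_HOME=/usr/local/spark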

Option 1 - If you already have Spark installed, complete the following steps to ensure your Spark version is properly configured:

$ cd $SPARK_HOME
$ bin/spark-shell

  // Enter the following command at the scala prompt
  scala> spark.conf
  scala> spark.conf.get("spark.sql.catalogImplementation")
  res5: String = hive
  scala> <ctrl-c>

Note: You must exit the spark-shell process or you will encounter errors when performing the TPC-DS tests.

If the command returns String = hive, then your installation is properly configured.

Option 2 - If you don't have Spark installed, or your current installation is not properly configured, we suggest pulling down version 2.2.0 from the Spark downloads page. This version should be configured to work with Apache Hive, but please run the test in the previous option to make sure.
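For example, one way to fetch and unpack a prebuilt 2.2.0 package (the Hadoop 2.7 build shown here is an assumption; any prebuilt 2.2.0 package from the downloads page should work):

$ wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz
$ tar -xzf spark-2.2.0-bin-hadoop2.7.tgz
$ export SPARK_HOME=$PWD/spark-2.2.0-bin-hadoop2.7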

Option 3 - The last option is to download and build Spark yourself. The first step is to clone the Spark repo:

$ git clone https://github.com/apache/spark.git

Then build it using these instructions. Make sure to build Spark with Hive support by following the Building With Hive and JDBC Support section; a hedged example invocation follows.
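As a sketch, a Maven invocation along the lines of the Spark 2.x build documentation (the exact profiles vary by Spark version, so treat this as an assumption and verify against the docs for your checkout):

$ cd spark
$ ./build/mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package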

4. Run the script

Note: Verify that the bin/tpcdsenv.sh script has SPARK_HOME setup correctly.

Now that we have Spark set up and the TPC-DS scripts downloaded, we are ready to set up and run the TPC-DS queries using the bin/tpcdsspark.sh utility script. This driver script lets you compile the TPC-DS toolkit to produce the data and queries, and then run them to collect results.

Perform the following steps to complete the execution of the script:

 $ cd spark-tpc-ds-performance-test
 $ bin/tpcdsspark.sh 

==============================================
TPC-DS On Spark Menu
----------------------------------------------
SETUP
 (1) Create spark tables
RUN
 (2) Run a subset of TPC-DS queries
 (3) Run All (99) TPC-DS Queries
CLEANUP
 (4) Cleanup
 (Q) Quit
----------------------------------------------
Please enter your choice followed by [ENTER]: 

Setup Option: "(1) - Create Spark Tables"

This option creates the tables in the database whose name is specified by TPCDS_DBNAME, defined in bin/tpcdsenv.sh. The default name is TPCDS, but it can be changed if needed, as sketched below. The created tables are based on the pre-generated data.
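If you do change it, a minimal sketch of the override in bin/tpcdsenv.sh (the name MYTPCDS is illustrative):

# in bin/tpcdsenv.sh
export TPCDS_DBNAME=MYTPCDS   # defaults to TPCDS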

The SQL statements to create the tables can be found in src/ddl/individual; the tables are created in Parquet format for efficient processing.
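Once this option completes, one way to spot-check the result from the Spark SQL CLI (this assumes the default database name TPCDS; store_sales is one of the standard TPC-DS fact tables):

$ $SPARK_HOME/bin/spark-sql -e "SHOW TABLES IN TPCDS"
$ $SPARK_HOME/bin/spark-sql -e "DESCRIBE FORMATTED TPCDS.store_sales"
# the DESCRIBE output should report parquet as the table provider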

As noted in the setup steps, the TPC-DS toolkit is not included in this code pattern due to licensing restrictions; instead, a pre-generated data set with a 1GB scale factor ships with the pattern. If you want to work with a data set at a larger scale factor, or explore the full life cycle of setting up TPC-DS, you can download the toolkit from TPC-DS and compile it in your development environment. The following instructions describe how to compile the toolkit and generate data.

  1. Compile the toolkit

    unzip <downloaded-tpc-ds-zipfile>
    cd <tpc-ds-toolkit-version>/tools
    make clean
    make OS=<platform>
    # (platform can be 'macos' or 'linux').
    
  2. Generate the data.

    cd <tpc-ds-toolkit-version>/tools
    ./dsdgen -dir <data_gen_dir> -scale <scale_factor> -verbose y -terminate n
    # data_gen_dir => the output directory where the data will be generated
    # scale_factor => the scale factor of the data
    
    
  3. Generate the queries.

    The dsqgen utility in the TPC-DS toolkit may be used to generate the queries, with appropriate options passed to it. A typical template of its usage is shown here; a fully concrete example appears after this template.

    cd <tpc-ds-toolkit-version>/tools
    ./dsqgen -VERBOSE Y -DIALECT <dialectname> -DIRECTORY <query-template-dir> -SCALE <scale-factor> -OUTPUT_DIR <output-dir>
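
Putting the three steps together, here is a concrete example under stated assumptions: the toolkit is unpacked at ~/tpcds-kit (a hypothetical path), the platform is Linux, the scale factor is 10, and the netezza dialect template shipped with the kit is used (pick whichever dialect best matches your target engine):

cd ~/tpcds-kit/tools
make clean
make OS=linux
./dsdgen -dir /tmp/tpcds-data -scale 10 -verbose y -terminate n
./dsqgen -VERBOSE Y -DIALECT netezza -DIRECTORY ../query_templates -SCALE 10 -OUTPUT_DIR /tmp/tpcds-queries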
    

Below is example output for when this option is chosen.

==============================================
TPC-DS On Spark Menu
----------------------------------------------
SETUP
 (1) Create spark tables
RUN
 (2) Run a subset of TPC-DS queries
 (3) Run All (99) TPC-DS Queries
CLEANUP
 (4) Cleanup
 (Q) Quit
----------------------------------------------
Please enter your choice followed by [ENTER]: 1
----------------------------------------------

INFO: Creating tables. Will take a few minutes ...
INFO: Progress : [########################################] 100%
INFO: Spark tables created successfully..
Press any key to continue

Run Option: "(2) - Run a subset of TPC-DS queries"

A comma-separated list of queries can be specified with this option. The result of each query in the supplied list is written to TPCDS_WORK_DIR, with a default directory location of work. The result file is named query<number>.res.

A summary file named run_summary.txt is also generated. It contains the query number, execution time, and number of rows returned for each query.

Note: The query number in the file name is a two-digit number, so for query 1 the results will be in query01.res.

Note: If you are debugging and running queries using this option, make sure to save run_summary.txt after each of your runs.
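For example, after running queries 1 and 2 with the defaults, the output can be inspected like this (paths assume the default TPCDS_WORK_DIR of work):

$ cat work/query01.res
$ cat work/query02.res
$ cat work/run_summary.txt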

==============================================
TPC-DS On Spark Menu
----------------------------------------------
SETUP
 (1) Create spark tables
RUN
 (2) Run a subset of TPC-DS queries
 (3) Run All (99) TPC-DS Queries
CLEANUP
 (4) Cleanup toolkit
 (Q) Quit
----------------------------------------------
Please enter your choice followed by [ENTER]: 2
----------------------------------------------

Enter a comma separated list of queries to run (ex: 1, 2), followed by [ENTER]:
1,2
INFO: Checking pre-reqs for running TPC-DS queries. May take a few seconds..
INFO: Checking pre-reqs for running TPC-DS queries is successful.
INFO: Running TPCDS queries. Will take a few minutes depending upon the number of queries specified.. 
INFO: Progress : [########################################] 100%
INFO: TPCDS queries ran successfully. Below are the result details
INFO: Individual result files: spark-tpc-ds-performance-test/work/query<number>.res
INFO: Summary file: spark-tpc-ds-performance-test/work/run_summary.txt
Press any key to continue

Run Option: "(3) - Run all (99) TPC-DS queries"

The only difference between this and option (2) is that all 99 TPC-DS queries are run instead of a subset.

Note: If you are running this on your laptop, it can take a few hours to run all 99 TPC-DS queries.

==============================================
TPC-DS On Spark Menu
----------------------------------------------
SETUP
 (1) Create spark tables
RUN
 (2) Run a subset of TPC-DS queries
 (3) Run All (99) TPC-DS Queries
CLEANUP
 (4) Cleanup toolkit
 (Q) Quit
----------------------------------------------
Please enter your choice followed by [ENTER]: 3
----------------------------------------------
INFO: Checking pre-reqs for running TPC-DS queries. May take a few seconds..
INFO: Checking pre-reqs for running TPC-DS queries is successful.
INFO: Running TPCDS queries. Will take a few minutes depending upon the number of queries specified.. 
INFO: Progress : [########################################] 100%
INFO: TPCDS queries ran successfully. Below are the result details
INFO: Individual result files: spark-tpc-ds-performance-test/work/query<number>.res
INFO: Summary file: spark-tpc-ds-performance-test/work/run_summary.txt
Press any key to continue

Cleanup option: "(4) - Cleanup"

This will clean up all of the files generated during options 1, 2, and 3. If you use this option, make sure to run the setup step (1) again before running queries using options 2 and 3.

Cleanup option: "(Q) - Quit"

This will exit the script.

Run using a Jupyter notebook in Watson Studio

  1. Sign up for Watson Studio
  2. Create the notebook
  3. Run the notebook
  4. Save and Share

1. Sign up for Watson Studio

Sign up for IBM's Watson Studio. When you create a project in Watson Studio, a free-tier Object Storage service is created in your IBM Cloud account.

Note: When creating your Object Storage service, select the Free storage type in order to avoid having to pay an upgrade fee.

Take note of your service names as you will need to select them in the following steps.

2. Create the notebook

3. Run the notebook

When a notebook is executed, what actually happens is that each code cell in the notebook is executed, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  • A blank indicates that the cell has never been executed.
  • A number represents the relative order in which the cell was executed.
  • A * indicates that the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  • One cell at a time.
    • Select the cell, and then press the Play button in the toolbar.
  • Batch mode, in sequential order.
    • From the Cell menu bar, there are several options available. For example, you can Run All cells in your notebook, or you can Run All Below, which starts executing from the first cell under the currently selected cell and then continues executing all cells that follow.
  • At a scheduled time.
    • Press the Schedule button located in the top right section of your notebook panel. Here you can schedule your notebook to be executed once at some future time, or repeatedly at your specified interval.

4. Save and Share

How to save your work:

Under the File menu, there are several ways to save your notebook:

  • Save will simply save the current state of your notebook, without any version information.
  • Save Version will save the current state of your notebook with a version tag that contains a date and time stamp. Up to 10 versions of your notebook can be saved, each one retrievable by selecting the Revert To Version menu item.

How to share your work:

You can share your notebook by selecting the “Share” button located in the top right section of your notebook panel. The end result of this action is a URL link that displays a “read-only” version of your notebook. You have several options to specify exactly what you want shared from your notebook:

  • Only text and output: will remove all code cells from the notebook view.
  • All content excluding sensitive code cells: will remove any code cells that contain a sensitive tag. For example, # @hidden_cell is used to protect your dashDB credentials from being shared.
  • All content, including code: displays the notebook as is.
  • A variety of download as options are also available in the menu.

Considerations when increasing the scale factor

This Code Pattern walks through the steps needed to run the TPC-DS benchmark at the qualification scale factor (1GB). Since this is a performance benchmark, we typically need to run it at varying scale factors to gauge the throughput of the underlying data processing engine. The section below briefly touches on things to consider when increasing the data size and running the workload against a production cluster.

  • Generation of the data at a larger scale factor: To increase the scale, follow the section titled "Scaling and Database Population" in the benchmark spec.
  • Movement of data to the distributed file system: After generating the data, copy or move it to the underlying distributed file system (typically HDFS) that your Spark cluster is configured to work with.
  • Creation of Spark tables: Modify the create-table DDL scripts to point at the new data location after the copy step. Additionally, consider partitioning the fact tables for better performance.
  • Tuning Spark configuration: Several Spark configuration settings need to be tuned for optimal performance; a hedged sketch follows this list, and the Spark tuning guide discusses these settings in depth.
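As a rough sketch of the data-movement and tuning bullets above, under stated assumptions (the HDFS paths and config values are illustrative, not recommendations; dsdgen writes pipe-delimited .dat files):

# copy the generated data to HDFS
$ hdfs dfs -mkdir -p /tpcds/data
$ hdfs dfs -put <data_gen_dir>/*.dat /tpcds/data

# point the create-table DDL at the new location, then run with tuned settings, e.g.:
$ $SPARK_HOME/bin/spark-sql --master yarn \
    --conf spark.sql.shuffle.partitions=400 \
    --conf spark.executor.memory=8g \
    --conf spark.executor.cores=4 \
    -f <query-file>.sql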

Learn more

  • Data Analytics Code Patterns: Enjoyed this Code Pattern? Check out our other Data Analytics Code Patterns
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
  • Watson Studio: Master the art of data science with IBM's Watson Studio
  • Spark on IBM Cloud: Need a Spark cluster? Create up to 30 Spark executors on IBM Cloud with our Spark service

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ
