
Repository Details

This repository supports contributions of tools for the Project CodeNet dataset, which is hosted in DAX.

Project CodeNet


The goal of Project CodeNet is to provide the AI-for-Code research community with a large scale, diverse, and high quality curated dataset to drive innovation in AI techniques.


Introduction

A decade ago, Marc Andreessen famously wrote that "software is eating the world." Software now permeates every part of our existence; Google services combine for 2 billion lines of code, and a modern vehicle contains around 100 million lines of code. It's a monumental challenge to create, debug, maintain, and update these complex software systems. Recently, a fast-growing discipline known as AI for Code aims to help software developers improve their productivity by automating the software engineering process. AI for Code researchers have been leveraging technologies like NLP and augmenting them with code analysis and compilation techniques to perform a myriad of practical tasks, such as code search, summarization, and completion, as well as code-to-code translation. The discipline isn't limited to academic research either: Ruchir Puri, IBM Research's chief research scientist, discussed in a recent podcast how technologies from AI for Code are being used to modernize legacy software by helping migrate monolithic applications to microservices for IBM's enterprise clients.

AI for Code is poised to transition from proof-of-concept to widespread adoption. To provide a catalyst for such a tipping point, researchers at IBM Research have introduced Project CodeNet, a large-scale dataset for benchmarking and experimentation. Project CodeNet has many characteristics (large scale, diversity, etc.) similar to ImageNet, a huge dataset for imagery that had a dramatic impact on the field of computer vision research. Project CodeNet is a large-scale dataset with approximately 14 million code samples, each of which is an intended solution to one of 4000 coding problems. Project CodeNet aims to do for AI for Code what ImageNet did for computer vision.

Differentiation

There are a few differentiating features of Project CodeNet when compared to other similar efforts. In addition to the size of the dataset, the code samples are written in over 50 programming languages, though the dominant languages are C++, C, Python, and Java. The code samples in Project CodeNet are annotated with a rich set of information, such as the code size, memory footprint, CPU run time, and status, which indicates acceptance or error types. Over 90% of the problems come with their respective problem description, which contains a concise problem statement and specifications of the input and output formats. When available, we also extracted sample input and output from the problem description and provide them as part of the dataset. Users can execute the accepted code samples (over 50% of the submissions are accepted) to extract additional metadata and to verify outputs from generative AI models for correctness.

Another area that Project CodeNet addresses is the quality of the data samples. From a paper by Allamanis, we learned that a large number of frequently used AI-for-Code datasets contain duplicate or near-duplicate code samples, which can inflate performance metrics by as much as 100%. In addition, we found that problem-submission style datasets from online judging systems can contain clusters of identical problems, which will certainly skew performance metrics. One example is POJ-104, in which problems 26 and 62 are identical. We therefore identified the near-duplicates and the identical problem clusters in Project CodeNet and provide this information for the benefit of the users.

Benchmarks

In light of these issues, we have extracted several benchmark datasets from CodeNet for users to perform code classification and code similarity experiments. They have been filtered to remove identical problem clusters and near-duplicate code samples, so that performance metrics can be measured on training and test data samples with the appropriate statistics. There are two C++ benchmark datasets that are similar to the popular POJ-104 but approximately ten times its size. We felt that the size increase is necessary, since 98% accuracy has already been achieved in code classification on POJ-104. An order-of-magnitude larger dataset leaves ample room to advance the state of the art with more complex neural networks and algorithms. The other two benchmark datasets are in Python and Java, which provide a different flavor because of the frequent use of library functions.

Potential use cases

The rich metadata and diversity open Project CodeNet to a plethora of use cases. The problem-submission relationship in Project CodeNet corresponds to type-4 similarity and can be used for code search and clone detection. The code samples in Project CodeNet are labeled with their acceptance status, so we can explore AI techniques to distinguish correct code from problematic code. Project CodeNet's metadata also enables tracking how a submission evolves from problematic to accepted, which could be used to explore automatic code correction. Each code sample is labeled with CPU run time and memory footprint, which can be used for regression studies and prediction. Given its wealth of programs written in a multitude of languages, Project CodeNet may serve as a valuable benchmark dataset for source-to-source translation.

Usability

To facilitate the creation of customized benchmarks and datasets, we provide a set of productivity tools to aggregate code samples based on user criteria. We are also releasing pre-processing tools to transform code samples into token sequences, simplified parse trees, and other code graphs.

Models and experiments

We have performed numerous experiments on the CodeNet dataset. The goal of these experiments is to produce a set of baseline models and results against which users of the CodeNet dataset can gauge their research. The run scripts and training scripts are available in the model-experiments directory. The classification and similarity experiments use the benchmark datasets we extracted from CodeNet as training and test datasets. In addition to experiments based on token sequences, we also have experiments leveraging graph neural networks (GNNs). For the convenience of users interested in GNNs, we have included the simplified parse tree (SPT) representation of the code samples for each benchmark dataset. The experiment on the Masked Language Model has a companion Jupyter notebook in the notebooks directory.

Problem Descriptions

For the vast majority of problem classes, short problem descriptions are available in doc/problem_descriptions.tar.gz, one small HTML file per problem.
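A minimal sketch for reading these descriptions straight from the archive (the path is taken from above; treating every .html member as one problem description is an assumption about the archive layout):

    import tarfile

    with tarfile.open("doc/problem_descriptions.tar.gz", "r:gz") as tar:
        pages = [m for m in tar.getmembers() if m.name.endswith(".html")]
        print(len(pages), "problem descriptions")
        # Read the first description as text.
        html = tar.extractfile(pages[0]).read().decode("utf-8", "replace")
        print(html[:200])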

Relevant links

Download the dataset

Download the full dataset in our data repository.

Run

    tar -zxf Project_CodeNet_full.tar.gz

to uncompress and untar it. The directory structure and how the code samples are organized are explained below.

The 4 benchmark datasets, Project_CodeNet_C++1000, Project_CodeNet_C++1400, Project_CodeNet_Python800, and Project_CodeNet_Java250, are included in the full dataset and are also available separately in the "Archive Dataset File" column of the table in the "Get this Dataset" section of our data repository. They can be used for code classification and code similarity research as a replacement for, or in addition to, the POJ-104 dataset.

To expedite AI for code research using graph neural networks, we have included the simplified parse tree (SPT) representation of the code samples for each benchmark dataset. They are available in the "Archive SPT File" column of the table in the "Get this Dataset" section in our data repository.

Dataset overview

The Project CodeNet Dataset consists of a very large collection of source files, extensive metadata, tooling to access the dataset and make tailored selections, and documentation.

The basis of the dataset is the data available on two online judge web sites:

  1. AIZU Online Judge
  2. AtCoder

An online judge website offers programmers an opportunity to test their skills by posing programming problems in the form of courses or contests. Users may submit their solutions, which are then judged by an automatic review mechanism, and the outcome is reported back to the user. Problem descriptions, user submissions, and associated metadata are available for study via various REST APIs.

The first step in constructing Project CodeNet is downloading the problem descriptions and the source code submissions from the websites mentioned above, followed by reshaping and consolidating the metadata and cleaning up the inconsistencies, omissions, and mistakes in the source data itself.

Dataset statistics

The dataset comprises 13,916,868 submissions, divided over 4053 problems (of which 5 are empty). Of the submissions, 53.6% (7,460,588) are accepted, 29.5% are marked as wrong answer, and the remainder suffer from one of the other possible rejection causes. The data contains submissions in 55 different languages, although 95% of them are coded in the six most common languages (C++, Python, Java, C, Ruby, C#). C++ is the most common language, with 8,008,527 submissions (57% of the total), of which 4,353,049 are accepted. The two pie charts below depict the language and status distributions of the submissions.

[Pie charts: distribution of submissions by language, and distribution of submissions by status]

A detailed overview of the dataset statistics can be found in this spreadsheet.

Data

The data consist of complete programs in a particular programming language. Each program is contained in a single file. The file name has an extension that denotes the programming language used. (More details about the specific programming language and the version of the compiler/interpreter used can be found in the metadata.)

Each program attempts to solve a certain programming task or problem. There are many problems, and each problem might have many solutions in different languages. We refer to each program as a submission instead of a solution, since it might not be complete and correct. Solutions are the accepted submissions, which are compilable and executable and at least correctly produce the expected results on all provided test cases. (Of course, as Dijkstra observed, testing can show the presence of bugs but never their absence.)

Metadata

The metadata provides properties of interest about the problems and their submissions. Foremost, it formalizes the organization of the data and the relationships among problems, languages, and source code files. The metadata makes it possible to query the data and to make specific selections among the large collection of problems, languages, and source files.

Metadata is made available in comma-separated value (CSV) files, which allows for easy processing, even with simple command-line tools. Some fields in the CSV files may be empty, and for submissions that are not accepted, some fields may hold invalid entries such as negative numbers for CPU time. Extra checking is therefore needed when parsing these files, as in the sketch below.
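A minimal sketch of such defensive parsing, assuming the problem-level column names documented below:

    import csv

    def valid_rows(csv_path):
        # Yield only rows whose cpu_time parses as a non-negative integer;
        # rejected submissions may carry empty or negative values here.
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                try:
                    cpu = int(row["cpu_time"])
                except (KeyError, ValueError):
                    continue
                if cpu >= 0:
                    yield row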

The metadata is hierarchically organized on 2 levels: the first level is the dataset level that relates to all the different problems defined by the various dataset sources. The second level is the problem level that relates to all source code submissions pertaining to a single problem or task.

Metadata and data are deliberately kept fully separated within the file system.

Metadata at the dataset level

At the dataset level there is a single CSV file (problem_list.csv) listing all the different problems. Additionally, for each problem there is a more extensive description that sets the problem and any further requirements and constraints and often provides examples of data input and expected output.

The fields of this CSV file, and their formats, are captured in the following table:

name of column  data type  unit         description
id              string     none         unique anonymized id of the problem
name            string     none         short name of the problem
dataset         string     none         original dataset, AIZU or AtCoder
time_limit      int        millisecond  maximum time allowed for a submission
memory_limit    int        KB           maximum memory allowed for a submission
rating          int        none         rating, i.e., difficulty of the problem
tags            string     none         list of tags separated by "|"; not used
complexity      string     none         degree of difficulty of the problem; not used
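As a minimal sketch (assuming problem_list.csv carries a header row matching the column names above), the problem list can be indexed by problem id:

    import csv

    # Index the problems by their anonymized id.
    with open("Project_CodeNet/metadata/problem_list.csv", newline="") as f:
        problems = {row["id"]: row for row in csv.DictReader(f)}

    p = problems["p00001"]
    print(p["name"], p["dataset"], p["time_limit"], p["memory_limit"])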

Metadata at the problem level

At the problem level there is a CSV file per problem and all content of these files is of course organized under one and the same header.

The fields of this CSV file, and their formats, are captured in the following table:

name of column     data type  unit         description
submission_id      string     none         unique anonymized id of the submission
problem_id         string     none         anonymized id of the problem
user_id            string     none         anonymized user id of the submission
date               int        seconds      date and time of submission in the Unix timestamp format (seconds since the epoch)
language           string     none         mapped language of the submission (ex: C++14 -> C++)
original_language  string     none         original language specification
filename_ext       string     none         extension of the filename that indicates the programming language used
status             string     none         acceptance status, or error type
cpu_time           int        millisecond  execution time
memory             int        KB           memory used
code_size          int        bytes        size of the submission source code in bytes
accuracy           string     none         number of tests passed; only for AIZU

Here is a table of all the possible status values. The "abbreviation" and "numeric code" are sometimes seen in the original metadata on the websites; they are listed here for reference and completeness. These fields do not occur in the Project CodeNet metadata.

status                  abbreviation  numeric code
Compile Error           CE            0
Wrong Answer            WA            1
Time Limit Exceeded     TLE           2
Memory Limit Exceeded   MLE           3
Accepted                AC            4
Judge Not Available     JNA           5
Output Limit Exceeded   OLE           6
Runtime Error           RE            7
WA: Presentation Error  PE            8
Waiting for Judging     WJ
Waiting for Re-judging  WR
Internal Error          IE
Judge System Error

Directory structure and naming convention

The data and metadata are organized in a rigorous directory structure. At the top level sits the Project CodeNet directory with several sub-directories: data, derived, metadata, and problem_descriptions:

  • data is further subdivided into a directory per problem and within each problem directory, directories for each language. The language directory contains all the source files supposed to be written in that particular programming or scripting language. When there are no submissions for a particular language, there will be no directory for it, but the problem directory will always be there, even if there are no submissions at all.

    The name of the directory for a programming language is the common name for the language using proper capitalization and special characters. This name is the consolidation of the names used in the metadata. Information is available about how the original language designations are mapped into the directory names and how these more general and common names are mapped to the submission file name extensions. As an example, a source could be designated c++14, which is mapped into the directory C++ (notice the capital C) and will get the extension .cpp.

  • derived holds information about near-duplicates, identical problem clusters, sample input and output for each problem, as well as the benchmarks.

  • metadata holds all the problem CSV files and the problem_list.csv file.

  • problem_descriptions holds HTML files for most problems, giving an extensive description of the problem, often accompanied with some sample input and expected output.

For the sake of creating a uniform set of metadata across all data sources, and to hide any sensitive information, some metadata fields are anonymized by randomly (but uniquely and consistently) renumbering problem, submission, and user identifiers (ids). The identifiers we use are defined by simple regular expressions:

  • problem ids are anonymized and follow this pattern: p[0-9]{5} (a p followed by exactly 5 digits).
  • submission ids are anonymized and follow this pattern: s[0-9]{9} (an s followed by exactly 9 digits).
  • user ids are anonymized and follow this pattern: u[0-9]{9} (a u followed by exactly 9 digits).
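These patterns translate directly into compiled regular expressions; a small sketch, checked against the three ids that appear in the worked example below:

    import re

    PROBLEM_ID = re.compile(r"p[0-9]{5}")
    SUBMISSION_ID = re.compile(r"s[0-9]{9}")
    USER_ID = re.compile(r"u[0-9]{9}")

    # Ids taken from the example further down in this document.
    assert PROBLEM_ID.fullmatch("p00001")
    assert SUBMISSION_ID.fullmatch("s300682070")
    assert USER_ID.fullmatch("u558442027")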

Relationships among the metadata and data

The main relationship between problem metadata and data is the fact that each metadata record (a non-header row in a problem CSV file) describes one source file and provides all information about its location. The directory structure and naming convention as stated above are implicitly assumed.

Example of getting the source file for a particular submission

Starting at a CSV metadata entry for a particular submission, here is how to get to the corresponding source file. Say that the submission id is s300682070. Either we know this is a submission to problem p00001 upfront or we can grep through all Project_CodeNet/metadata/p?????.csv files to learn that. We get a brief description of this problem by looking at the p00001 entry in the Project_CodeNet/metadata/problem_list.csv:

p00001,List of Top 3 Hills,AIZU,1000,131072,,,

We can get a more verbose description of this problem by reading Project_CodeNet/problem_descriptions/p00001.html.

The Project_CodeNet/metadata/p00001.csv file provides the info on all submissions. For our selected submission we find:

s300682070,p00001,u558442027,1480319506,JavaScript,JavaScript,js,Accepted,60,15496,219,4/4

We see it is an Accepted submission in the language JavaScript with file extension .js.

The source file path therefore is: Project_CodeNet/data/p00001/JavaScript/s300682070.js
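Putting the convention into code, a minimal sketch that rebuilds this path from a metadata row (field names as in the problem-level table above; the dict literal just mirrors the example entry):

    import os

    def source_path(root, row):
        # data/<problem_id>/<language>/<submission_id>.<filename_ext>
        return os.path.join(root, "data", row["problem_id"], row["language"],
                            row["submission_id"] + "." + row["filename_ext"])

    row = {"problem_id": "p00001", "language": "JavaScript",
           "submission_id": "s300682070", "filename_ext": "js"}
    print(source_path("Project_CodeNet", row))
    # Project_CodeNet/data/p00001/JavaScript/s300682070.js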

Example of getting the metadata for a particular source file

Likewise, we can play the reverse game of finding the metadata entry for a given submission source file. Say the source file is Project_CodeNet/data/p00001/JavaScript/s300682070.js.

Encoded in this file name path we see the problem id p00001 and language JavaScript and of course the submission id s300682070. We find the metadata CSV file to be: Project_CodeNet/metadata/p00001.csv. Opening that file and searching for the submission id we find the entry:

s300682070,p00001,u558442027,1480319506,JavaScript,JavaScript,js,Accepted,60,15496,219,4/4
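The reverse lookup can likewise be sketched in a few lines, assuming the directory structure described above:

    import csv, pathlib

    path = pathlib.Path("Project_CodeNet/data/p00001/JavaScript/s300682070.js")
    problem_id, submission_id = path.parts[-3], path.stem
    metadata = path.parents[3] / "metadata" / (problem_id + ".csv")

    # Scan the problem's CSV for the matching submission id.
    with open(metadata, newline="") as f:
        entry = next(row for row in csv.DictReader(f)
                     if row["submission_id"] == submission_id)
    print(entry["language"], entry["status"], entry["cpu_time"])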

Tools to process source files

The source files of Project CodeNet represent examples of some 50+ different programming and scripting languages. Of course not all languages are equally represented: most submissions are written in the more popular languages C, C++, Java, and Python.

To complement our large dataset of source code, a suite of tools and utilities will be provided. These tools target several purposes:

  • derive statistics from the dataset
  • access the dataset files to make selections
  • preprocess the source files to extract certain information
  • facilitate conversions between popular formats

Statistics

Since Project CodeNet uses the file system as storage and uses a rigorous directory structure, many (Linux) command-line utilities can be directly used to extract interesting statistics about the dataset. Utilities like ls, wc and grep are very useful. The CSV metadata can best be browsed using csvkit components like csvstat.

More elaborate statistics about the dataset can easily be retrieved using SQL queries on a database representation of the metadata. HSQLDB is a database that runs off a CSV file. Our CSV problem metadata files are simply stripped of their headers and concatenated. A suite of useful SQL queries is available. A separate document explains the necessary steps.
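As an illustration of the same idea using Python's built-in sqlite3 in place of HSQLDB (the column list mirrors the problem-level metadata table; this is a sketch, not one of the provided queries):

    import csv, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE submissions (submission_id, problem_id,"
                 " user_id, date, language, original_language, filename_ext,"
                 " status, cpu_time, memory, code_size, accuracy)")
    with open("Project_CodeNet/metadata/p00001.csv", newline="") as f:
        rows = csv.reader(f)
        next(rows)  # drop the header line
        conn.executemany("INSERT INTO submissions VALUES "
                         "(?,?,?,?,?,?,?,?,?,?,?,?)", rows)

    # Status breakdown for this problem.
    for status, n in conn.execute("SELECT status, COUNT(*) AS n"
                                  " FROM submissions GROUP BY status"
                                  " ORDER BY n DESC"):
        print(status, n)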

Access and selection

As described above, it should be easy to create specific subsets of the dataset merely by copying (or symlinking) relevant files and/or directories. For more elaborate selections based on a subset or range of problems, a subset of languages, statuses, and code sizes, several Bash scripts are available to accomplish that. These scripts reside in the tools/aggregation-scripts directory and are separately documented in this README.
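A rough Python analogue of such a selection (the criteria here are purely illustrative and this is not one of the shipped scripts): symlink the accepted Python submissions of one problem into a scratch directory.

    import csv, os

    root, problem, outdir = "Project_CodeNet", "p00001", "selection"
    os.makedirs(outdir, exist_ok=True)

    with open(os.path.join(root, "metadata", problem + ".csv"), newline="") as f:
        for row in csv.DictReader(f):
            if row["language"] == "Python" and row["status"] == "Accepted":
                name = row["submission_id"] + "." + row["filename_ext"]
                src = os.path.join(root, "data", problem, "Python", name)
                dst = os.path.join(outdir, name)
                if not os.path.lexists(dst):
                    os.symlink(os.path.abspath(src), dst)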

Pre-processing

We provide tools to convert code samples into a representation that can be consumed by AI algorithms.

Whether and to what extent the above steps can successfully be applied to any given source file depends on several factors. Obviously, if the submission does not have Accepted status, it is to be expected that even simple tokenization will fail because of malformed lexical elements. But the situation for Accepted submissions is not always better: programmers might have used non-standard features of the language that happen to be accepted by a particular compiler or interpreter. A simple case is the use of a dollar sign as part of a C identifier. For languages like C and C++ that use a preprocessor, macros and conditional defines can hugely change how the code ultimately looks.
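As one concrete instance for Python submissions, here is a minimal sketch of tokenization with the standard library tokenize module, guarding against exactly these failures (a sketch, not the shipped pre-processing tool):

    import io, tokenize

    def token_strings(path):
        # Returns the token sequence, or None when the file does not even
        # tokenize (e.g. malformed lexical elements in rejected submissions).
        with open(path, "rb") as f:
            source = f.read()
        try:
            return [tok.string
                    for tok in tokenize.tokenize(io.BytesIO(source).readline)
                    if tok.type not in (tokenize.ENCODING, tokenize.ENDMARKER)]
        except (tokenize.TokenError, SyntaxError):
            return None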

Contributors

Ruchir Puri, David S. Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, Frederick Reiss.
