  • Stars: 1,113
  • Rank: 41,718 (top 0.9%)
  • Language: Jupyter Notebook
  • Created: almost 7 years ago
  • Updated: 3 months ago

Repository Details

Example deep learning projects that use wandb's features.

Weights & Biases

Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production machine learning models. Get started with W&B today: sign up for a free account!

Experiments | Reports | Artifacts | Tables | Sweeps | Model Management | Launch

🚀 Getting Started

Never lose your progress again.

Save everything you need to compare and reproduce models (architecture, hyperparameters, weights, model predictions, GPU usage, git commits, and even datasets) in 5 minutes. W&B is free for personal use and academic projects, and it's easy to get started.

Check out our libraries of example scripts and example colabs, or read on for code snippets and more!

If you have any questions, please don't hesitate to ask in our Discourse forum.

🤝 Simple integration with any framework

Install the wandb library and log in:

pip install wandb
wandb login

Flexible integration for any Python script:

import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training code here ...

# 3. Log metrics over time to visualize performance
for i in range(10):
    wandb.log({"loss": loss})

Try it in a Colab →


Explore a W&B dashboard

📈 Track model and data pipeline hyperparameters

Set wandb.config once at the beginning of your script to save your hyperparameters, input settings (like dataset name or model type), and any other independent variables for your experiments. This is useful for analyzing your experiments and reproducing your work in the future. Setting configs also allows you to visualize the relationships between features of your model architecture or data pipeline and the model performance, right in the W&B dashboard.

wandb.init()
wandb.config.epochs = 4
wandb.config.batch_size = 32
wandb.config.learning_rate = 0.001
wandb.config.architecture = "resnet"

πŸ— Use your favorite framework

🥕 Keras

In Keras, you can use our callback to automatically save all the metrics tracked in model.fit. To get you started, here's a minimal example:

# Import W&B
import wandb
from wandb.keras import WandbCallback

# 1. Initialize a W&B run
wandb.init(project='project_name')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training code here ...

# 3. Add WandbCallback to model.fit
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[WandbCallback()])

🔥 PyTorch

W&B provides first-class support for PyTorch. To automatically log gradients and store the network topology, you can call .watch and pass in your PyTorch model. Then use .log for anything else you want to track, like so:

import wandb

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.dropout = 0.01

# 3. Log gradients and model parameters
wandb.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
  ...  
  if batch_idx % args.log_interval == 0:      
    # 4. Log metrics to visualize performance
    wandb.log({"loss": loss})

⚡ PyTorch Lightning

W&B is integrated directly into PyTorch Lightning through their loggers API.

import wandb
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

# add logging into your training_step (and elsewhere!)
def training_step(self, batch, batch_idx):
    ...
    self.log('train/loss', loss)
    return loss

# add a WandbLogger to your Trainer
wandb_logger = WandbLogger()
trainer = Trainer(logger=wandb_logger)

# .fit your model
trainer.fit(model, mnist)

🌊 TensorFlow

The simplest way to log metrics in TensorFlow is by logging tf.summary with our TensorFlow logger:

import wandb
import tensorflow as tf

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
with tf.Session() as sess:
  # ...
  wandb.tensorflow.log(tf.summary.merge_all())

💨 fastai

Visualize, compare, and iterate on fastai models using Weights & Biases with the WandbCallback.

import wandb
from fastai.callback.wandb import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Automatically log model metrics
learn.fit(..., cbs=WandbCallback())

🤗 HuggingFace

Just run a script using HuggingFace's Trainer in an environment where wandb is installed, and we'll automatically log losses, evaluation metrics, model topology, and gradients:

# 1. Install the wandb library
pip install wandb

# 2. Run a script that uses the Trainer; metrics, model topology, and gradients are logged automatically
python run_glue.py \
 --model_name_or_path bert-base-uncased \
 --task_name MRPC \
 --data_dir $GLUE_DIR/$TASK_NAME \
 --do_train \
 --evaluate_during_training \
 --max_seq_length 128 \
 --per_gpu_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 3 \
 --output_dir /tmp/$TASK_NAME/ \
 --overwrite_output_dir \
 --logging_steps 50
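
You can also turn on W&B logging directly from your own training script via the Trainer's report_to option. Here's a minimal sketch; the model and dataset variables are placeholders you'd define yourself:

# A minimal sketch of enabling W&B logging from Python; `model`,
# `train_dataset`, and `eval_dataset` are placeholders, not part of
# the script above
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="/tmp/mrpc",
    report_to="wandb",     # send training logs to Weights & Biases
    run_name="bert-mrpc",  # shows up as the W&B run name
    logging_steps=50,
)

trainer = Trainer(
    model=model,                  # a pretrained transformers model
    args=training_args,
    train_dataset=train_dataset,  # your tokenized datasets
    eval_dataset=eval_dataset,
)
trainer.train()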

🧹 Optimize hyperparameters with Sweeps

Use Weights & Biases Sweeps to automate hyperparameter optimization and explore the space of possible models.

Try Sweeps in PyTorch in a Colab →

Try Sweeps in TensorFlow in a Colab →
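
To give a feel for the API, here's a minimal sketch of defining and launching a sweep from Python; the sweep configuration, project name, and placeholder training loop are illustrative assumptions:

import wandb

# Sweep configuration: random search over two hyperparameters,
# minimizing the logged "loss" metric
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    # Each trial starts a run whose config is chosen by the sweep
    with wandb.init() as run:
        for epoch in range(5):
            # Placeholder for a real training step
            loss = run.config.learning_rate * (5 - epoch)
            wandb.log({"loss": loss})

# Register the sweep and run 10 trials with a local agent
sweep_id = wandb.sweep(sweep_config, project="sweeps-demo")
wandb.agent(sweep_id, function=train, count=10)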

Benefits of using W&B Sweeps

  • Quick to set up: With just a few lines of code, you can run W&B sweeps.
  • Transparent: We cite all the algorithms we're using, and our code is open source.
  • Powerful: Our sweeps are completely customizable and configurable. You can launch a sweep across dozens of machines, and it's just as easy as starting a sweep on your laptop.

Get started in 5 mins →


Common use cases

  • Explore: Efficiently sample the space of hyperparameter combinations to discover promising regions and build an intuition about your model.
  • Optimize: Use sweeps to find a set of hyperparameters with optimal performance.
  • K-fold cross validation: Here's a brief code example of k-fold cross validation with W&B Sweeps.

Visualize Sweeps results

The hyperparameter importance plot surfaces which hyperparameters were the best predictors of, and most highly correlated with, desirable values of your metrics.


Parallel coordinates plots map hyperparameter values to model metrics. They're useful for homing in on combinations of hyperparameters that led to the best model performance.


📜 Share insights with Reports

Reports let you organize visualizations, describe your findings, and share updates with collaborators.

Common use cases

  • Notes: Add a graph with a quick note to yourself.
  • Collaboration: Share findings with your colleagues.
  • Work log: Track what you've tried and plan next steps.

Explore reports in The Gallery → | Read the Docs

Once you have experiments in W&B, you can visualize and document results in Reports with just a few clicks. Here's a quick demo video.

🏺 Version control datasets and models with Artifacts

Git and GitHub make code version control easy, but they're not optimized for tracking the other parts of the ML pipeline: datasets, models, and other large binary files.

W&B's Artifacts are. With just a few extra lines of code, you can start tracking your and your team's outputs, all directly linked to your runs.
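
As a rough sketch of what that looks like (the project name and file path here are illustrative), one run logs a dataset as a versioned artifact, and a later run declares it as an input:

import wandb

# Run 1: log a dataset file as a versioned artifact
with wandb.init(project="artifacts-demo", job_type="dataset-upload") as run:
    artifact = wandb.Artifact("my-dataset", type="dataset")
    artifact.add_file("data/train.csv")  # illustrative path
    run.log_artifact(artifact)

# Run 2: consume the latest version of that artifact
with wandb.init(project="artifacts-demo", job_type="training") as run:
    artifact = run.use_artifact("my-dataset:latest")
    data_dir = artifact.download()  # local path to the artifact contents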

Try Artifacts in a Colab with a video tutorial

Common use cases

  • Pipeline Management: Track and visualize the inputs and outputs of your runs as a graph
  • Don't Repeat Yourself™: Prevent the duplication of compute effort
  • Sharing Data in Teams: Collaborate on models and datasets without all the headaches

Learn about Artifacts here → | Read the Docs

Visualize and Query data with Tables

Group, sort, filter, generate calculated columns, and create charts from tabular data.

Spend more time deriving insights, and less time building charts manually.

# Log a pandas DataFrame as a W&B Table
wandb.log({"table": wandb.Table(dataframe=my_dataframe)})
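
You can also build a Table explicitly, row by row. A small sketch with made-up columns and values:

import wandb

wandb.init(project="tables-demo")  # illustrative project name

# Build a Table with explicit columns, then add rows of data
table = wandb.Table(columns=["id", "prediction", "label"])
table.add_data(0, "cat", "cat")
table.add_data(1, "dog", "cat")

wandb.log({"predictions": table})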

Try Tables in a Colab or these examples

Explore Tables here → | Read the Docs

More Repositories

  • openui (TypeScript, 18,465 stars): OpenUI lets you describe UI using your imagination, then see it rendered live.
  • wandb (Python, 8,850 stars): The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
  • weave (TypeScript, 683 stars): Weave is a toolkit for developing AI-powered applications, built by Weights & Biases.
  • edu (Jupyter Notebook, 492 stars): Educational materials on deep learning by Weights & Biases.
  • awesome-dl-projects (Jupyter Notebook, 322 stars): A collection of the code that accompanies the reports in The Gallery by Weights & Biases.
  • server (HCL, 243 stars): W&B Server is the self-hosted version of Weights & Biases.
  • wandbot (Jupyter Notebook, 232 stars): A technical support bot for Weights & Biases' AI developer tools that can run in Discord, Slack, ChatGPT, and Zendesk.
  • Groundbreaking-Papers (111 stars): ML research paper summaries, annotated papers, and implementation walkthroughs.
  • llm-leaderboard (Python, 66 stars): LLM evaluation project for Japanese tasks.
  • droughtwatch (Jupyter Notebook, 54 stars): Weights & Biases benchmark for drought prediction.
  • gitbook (JavaScript, 41 stars): Documentation synced with GitBook. For all issues with the wandb library, please use https://github.com/wandb/client/issues
  • programmer (Python, 36 stars)
  • sweeps (Python, 34 stars): W&B Hyperparameter Sweep Engine. File sweeps-related issues at the W&B client: https://github.com/wandb/client
  • witness (Python, 27 stars): Deep learning model for recognizing puzzle patterns in The Witness.
  • Hemm (Python, 20 stars): A holistic evaluation library for multi-modal generative models using Weave.
  • superres (Python, 19 stars): Project to make higher-resolution versions of existing images.
  • layoutlm_sroie_demo (Python, 18 stars): Fine-tune LayoutLM on the SROIE dataset using W&B tools.
  • terraform-aws-wandb (HCL, 17 stars): A Terraform module for deploying Weights & Biases on AWS.
  • helm-charts (Mustache, 17 stars): Our official Helm charts for deploying wandb into Kubernetes.
  • terraform-google-dagster (HCL, 17 stars)
  • client-ng (Python, 16 stars): Experimental wandb CLI and Python API (see the Experimental section in that repo).
  • launch-jobs (Python, 16 stars): 🚀💼
  • lit_utils (Python, 15 stars): Utilities for working with W&B and PyTorch Lightning in an educational context.
  • catz (Python, 15 stars): A machine learning contest to predict the behavior of catz.
  • llm-workshop-fc2024 (Jupyter Notebook, 15 stars): Resources for the FC 2024 LLM workshop.
  • terraform-google-wandb (HCL, 12 stars): A Terraform module for deploying Weights & Biases on GCP.
  • artifacts-examples (Python, 12 stars): W&B Artifacts examples.
  • nb_helpers (Jupyter Notebook, 9 stars): A set of tools to work with notebooks.
  • parallel (Go, 9 stars): Easy and robust parallelism in Go.
  • wandb-js (TypeScript, 8 stars): The W&B SDK for TypeScript, Node, and modern web browsers.
  • qualcomm-contest (Jupyter Notebook, 7 stars)
  • wandb-workspaces (Python, 7 stars): Programmatically edit the W&B UI.
  • SageMakerStudio (Jupyter Notebook, 6 stars): A repo showcasing SMSL and W&B.
  • assets (6 stars): Weights & Biases logos, branding, and assets to use and share.
  • react-vis (JavaScript, 5 stars): Fork of github.com/uber/react-vis with bugfixes and extensions.
  • terraform-azurerm-wandb (HCL, 5 stars)
  • server-cli (Go, 3 stars)
  • terraform-kubernetes-wandb (HCL, 3 stars)
  • weaveflow (Jupyter Notebook, 3 stars)
  • wandbmon (Python, 3 stars): wandb wrapper for production monitoring and evaluation use cases.
  • wandb-uat (Python, 3 stars): User acceptance testing for the Weights & Biases Python SDK library.
  • codesearchnet (Python, 2 stars)
  • awesome-dl-resources (2 stars)
  • docugen (Python, 2 stars): Reference documentation generator for Weights & Biases.
  • client-java (Java, 2 stars)
  • davis-contest (Jupyter Notebook, 2 stars): Materials for the DAVIS Video Segmentation Contest.
  • sampled-log-example (Python, 2 stars)
  • weave-analysis (Jupyter Notebook, 2 stars)
  • connections (Python, 2 stars): Solving the NYTimes Connections puzzle.
  • wandb-content-navigator (Python, 2 stars): LLM-powered RAG Slack bot and endpoint to suggest Weights & Biases content.
  • runchain (Python, 1 star): Example of run chaining.
  • hub (Shell, 1 star): Default files and setup scripts for the hub.
  • yea (Python, 1 star): Yea functional test harness.
  • jetson-webhook (Python, 1 star): Using WandB webhooks on edge devices.
  • dsviz-demo (Jupyter Notebook, 1 star)
  • pong (Python, 1 star): A reinforcement learning contest to master the game of Pong.
  • terraform-google-assume-aws-role (HCL, 1 star)
  • auto-release-notes (TypeScript, 1 star)
  • tiny-ml (Jupyter Notebook, 1 star): TinyML tools for and with WandB.
  • wandb-testing (Python, 1 star): Repo to store testing-related tools.
  • text-extraction (Python, 1 star)
  • mixeval-weave (Python, 1 star): Evaluating LLMs on the MixEval dataset using W&B Weave.
  • libwandb-cpp (1 star)
  • mon-sdk-dev (Python, 1 star)
  • yea-wandb (Python, 1 star)
  • nexus (Go, 1 star)
  • gpu_dashboard (Python, 1 star): Extract GPU usage across teams.