
Code for my Medium article

Pytorch: how and when to use Module, Sequential, ModuleList and ModuleDict

Effective way to share, reuse and break down the complexity of your models

Updated for PyTorch 1.5

You can find the code here

PyTorch is an open source deep learning framework that provides a smart way to create ML models. Even though the documentation is well made, I still find that most people don't write well-organized code in PyTorch.

Today, we are going to see how to use the four main building blocks of PyTorch: Module, Sequential, ModuleList and ModuleDict. We are going to start with an example and iteratively make it better.

All four of these classes are contained in torch.nn:

import torch.nn as nn

# nn.Module
# nn.Sequential
# nn.ModuleList
# nn.ModuleDict

Module: the main building block

The Module is the main building block: it defines the base class for all neural networks, and you MUST subclass it.

Let's create a classic CNN classifier as an example:

import torch
import torch.nn.functional as F

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(64)

        self.fc1 = nn.Linear(64 * 28 * 28, 1024) # assumes 28x28 inputs (e.g. MNIST)
        self.fc2 = nn.Linear(1024, n_classes)
        
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = F.relu(x)
        
        x = self.conv2(x)
        x = self.bn2(x)
        x = F.relu(x)

        x = x.view(x.size(0), -1) # flat
        
        x = self.fc1(x)
        x = torch.sigmoid(x) # F.sigmoid is deprecated
        x = self.fc2(x)
        
        return x
model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (fc1): Linear(in_features=50176, out_features=1024, bias=True)
  (fc2): Linear(in_features=1024, out_features=10, bias=True)
)

This is a very simple classifier with an encoding part that uses two layers of 3x3 convs + batchnorm + relu, and a decoding part with two linear layers. If you are not new to PyTorch you may have seen this type of coding before, but there are two problems.

If we want to add a layer, we have to write a lot of code again in both __init__ and forward. Also, if we have some common block that we want to use in another model, e.g. the 3x3 conv + batchnorm + relu, we have to write it again.

Sequential: stack and merge layers

Sequential is a container of Modules that can be stacked together and run one after the other.
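
As a minimal sketch (the layer sizes here are arbitrary), calling a Sequential runs each child module in order:

import torch
import torch.nn as nn

tiny = nn.Sequential(
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Linear(4, 2)
)

out = tiny(torch.rand(1, 8))  # Linear -> ReLU -> Linear, applied in order
print(out.shape)  # torch.Size([1, 2])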

Notice that in the previous model we had to store everything into self. We can use Sequential to improve our code.

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv_block1 = nn.Sequential(
            nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        
        self.conv_block2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

        
    def forward(self, x):
        x = self.conv_block1(x)
        x = self.conv_block2(x)

        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (conv_block1): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (conv_block2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (decoder): Sequential(
    (0): Linear(in_features=50176, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

Much better, huh?

Did you notice that conv_block1 and conv_block2 look almost the same? We could create a function that returns an nn.Sequential to simplify the code even further!

def conv_block(in_f, out_f, *args, **kwargs):
    return nn.Sequential(
        nn.Conv2d(in_f, out_f, *args, **kwargs),
        nn.BatchNorm2d(out_f),
        nn.ReLU()
    )

Then we can just call this function in our Module:

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv_block1 = conv_block(in_c, 32, kernel_size=3, padding=1)
        
        self.conv_block2 = conv_block(32, 64, kernel_size=3, padding=1)

        
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

        
    def forward(self, x):
        x = self.conv_block1(x)
        x = self.conv_block2(x)

        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (conv_block1): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (conv_block2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (decoder): Sequential(
    (0): Linear(in_features=50176, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

Even cleaner! Still, conv_block1 and conv_block2 are almost the same! We can merge them using nn.Sequential:

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_c, 32, kernel_size=3, padding=1),
            conv_block(32, 64, kernel_size=3, padding=1)
        )

        
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (encoder): Sequential(
    (0): Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (1): Sequential(
      (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
  )
  (decoder): Sequential(
    (0): Linear(in_features=50176, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

self.encoder now holds both conv_blocks. We have decoupled the logic of our model, making it easier to read and reuse. Our conv_block function can be imported and used in another model, as sketched below.
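
For instance, here is a quick sketch of a hypothetical second model reusing the same block (MySmallModel and its sizes are made up for illustration):

class MySmallModel(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.block = conv_block(in_c, 16, kernel_size=3, padding=1)  # reused as-is
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse the spatial dimensions
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.block(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)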

Dynamic Sequential: create multiple layers at once

What if we want to add new layers to self.encoder? Hard-coding them is not convenient:

self.encoder = nn.Sequential(
            conv_block(in_c, 32, kernel_size=3, padding=1),
            conv_block(32, 64, kernel_size=3, padding=1),
            conv_block(64, 128, kernel_size=3, padding=1),
            conv_block(128, 256, kernel_size=3, padding=1),

        )

Wouldn't it be nice if we could define the sizes as an array and automatically create all the layers without writing each one of them? Fortunately, we can create an array and pass it to Sequential:

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.enc_sizes = [in_c, 32, 64]
        
        conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1) 
                       for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]
        
        self.encoder = nn.Sequential(*conv_blocks)

        
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, 10)
print(model)
MyCNNClassifier(
  (encoder): Sequential(
    (0): Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (1): Sequential(
      (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
  )
  (decoder): Sequential(
    (0): Linear(in_features=50176, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

Let's break it down. We created an array self.enc_sizes that holds the sizes of our encoder. Then we create an array conv_blocks by iterating over the sizes. Since we have to give both an input and an output size for each layer, we zipped the sizes array with itself, shifted by one.

Just to be clear, take a look at the following example:

sizes = [1, 32, 64]

for in_f,out_f in zip(sizes, sizes[1:]):
    print(in_f,out_f)
1 32
32 64

Then, since Sequential does not accept a list, we unpack it using the * operator.
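
To see why the unpacking is needed, compare (a quick sketch):

blocks = [nn.Linear(2, 2), nn.ReLU()]
# nn.Sequential(blocks)    # TypeError: a list is not a Module
model = nn.Sequential(*blocks)  # ok: each element becomes a child module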

Tada! Now if we just want to add a size, we can easily append a new number to the list. It is common practice to make the sizes a parameter.

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, enc_sizes, n_classes):
        super().__init__()
        self.enc_sizes = [in_c, *enc_sizes]
        
        conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1) 
                       for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]
        
        self.encoder = nn.Sequential(*conv_blocks)

        
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, [32,64, 128], 10)
print(model)
MyCNNClassifier(
  (encoder): Sequential(
    (0): Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (1): Sequential(
      (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (2): Sequential(
      (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
  )
  (decoder): Sequential(
    (0): Linear(in_features=50176, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

We can do the same for the decoder part:

def dec_block(in_f, out_f):
    return nn.Sequential(
        nn.Linear(in_f, out_f),
        nn.Sigmoid()
    )

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, enc_sizes, dec_sizes,  n_classes):
        super().__init__()
        self.enc_sizes = [in_c, *enc_sizes]
        self.dec_sizes = [64 * 28 * 28, *dec_sizes]

        conv_blocks = [conv_block(in_f, out_f, kernel_size=3, padding=1) 
                       for in_f, out_f in zip(self.enc_sizes, self.enc_sizes[1:])]
        
        self.encoder = nn.Sequential(*conv_blocks)

        
        dec_blocks = [dec_block(in_f, out_f) 
                       for in_f, out_f in zip(self.dec_sizes, self.dec_sizes[1:])]
        
        self.decoder = nn.Sequential(*dec_blocks)
        
        self.last = nn.Linear(self.dec_sizes[-1], n_classes)

        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.view(x.size(0), -1) # flat
        
        x = self.decoder(x)
        
        x = self.last(x)
        
        return x
model = MyCNNClassifier(1, [32,64], [1024, 512], 10)
print(model)
MyCNNClassifier(
  (encoder): Sequential(
    (0): Sequential(
      (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
    (1): Sequential(
      (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU()
    )
  )
  (decoder): Sequential(
    (0): Sequential(
      (0): Linear(in_features=50176, out_features=1024, bias=True)
      (1): Sigmoid()
    )
    (1): Sequential(
      (0): Linear(in_features=1024, out_features=512, bias=True)
      (1): Sigmoid()
    )
  )
  (last): Linear(in_features=512, out_features=10, bias=True)
)

We followed the same pattern: we created a new block for the decoding part, linear + sigmoid, and we pass an array with the sizes. We had to add a self.last, applied at the end of forward, since we do not want to activate the final output.
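
Keeping the output un-activated matters at training time: nn.CrossEntropyLoss, for example, expects raw logits because it applies log-softmax internally. A minimal sketch with a made-up batch:

import torch

criterion = nn.CrossEntropyLoss()  # applies log-softmax internally
logits = model(torch.rand(4, 1, 28, 28))              # raw, un-activated outputs
loss = criterion(logits, torch.randint(0, 10, (4,)))  # hypothetical targets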

Now, we can even break our model down in two: an Encoder and a Decoder!

class MyEncoder(nn.Module):
    def __init__(self, enc_sizes):
        super().__init__()
        self.conv_blocks = nn.Sequential(*[conv_block(in_f, out_f, kernel_size=3, padding=1) 
                       for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])

    def forward(self, x):
        return self.conv_blocks(x)
        
class MyDecoder(nn.Module):
    def __init__(self, dec_sizes, n_classes):
        super().__init__()
        self.dec_blocks = nn.Sequential(*[dec_block(in_f, out_f) 
                       for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
        self.last = nn.Linear(dec_sizes[-1], n_classes)

    def forward(self, x):
        x = self.dec_blocks(x)
        return self.last(x)
    
    
class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, enc_sizes, dec_sizes,  n_classes):
        super().__init__()
        self.enc_sizes = [in_c, *enc_sizes]
        self.dec_sizes = [self.enc_sizes[-1] * 28 * 28, *dec_sizes]

        self.encoder = MyEncoder(self.enc_sizes)
        
        self.decoder = MyDecoder(self.dec_sizes, n_classes)
        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.flatten(1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, [32,64], [1024, 512], 10)
print(model)
MyCNNClassifier(
  (encoder): MyEncoder(
    (conv_blocks): Sequential(
      (0): Sequential(
        (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (1): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU()
      )
    )
  )
  (decoder): MyDecoder(
    (dec_blocks): Sequential(
      (0): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
      )
      (1): Sequential(
        (0): Linear(in_features=1024, out_features=512, bias=True)
        (1): Sigmoid()
      )
    )
    (last): Linear(in_features=512, out_features=10, bias=True)
  )
)

Be aware that MyEncoder and MyDecoder could also be functions that return an nn.Sequential, as sketched below. I prefer to use the first pattern for models and the second for building blocks.
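
For example, a sketch of the functional variant of MyEncoder:

def my_encoder(enc_sizes):
    # same layers as MyEncoder, but returned as a plain nn.Sequential
    return nn.Sequential(*[conv_block(in_f, out_f, kernel_size=3, padding=1)
                           for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])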

By dividing our module into submodules, it becomes easier to share the code, debug it and test it; for instance, the encoder can be checked in isolation, as below.
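
A quick sanity-check sketch, with a dummy batch of grayscale 28x28 images:

import torch

encoder = MyEncoder([1, 32, 64])
out = encoder(torch.rand(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 64, 28, 28])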

ModuleList: when we need to iterate

ModuleList allows you to store Modules in a list. It can be useful when you need to iterate through layers and store/use some information, as in a U-Net.

The main difference from Sequential is that ModuleList does not have a forward method, so the inner layers are not connected. Assuming we need to store the output of each layer, we can do it like this:

class MyModule(nn.Module):
    def __init__(self, sizes):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(in_f, out_f) for in_f, out_f in zip(sizes, sizes[1:])])
        self.trace = []
        
    def forward(self,x):
        for layer in self.layers:
            x = layer(x)
            self.trace.append(x)
        return x
model = MyModule([1, 16, 32])
import torch

model(torch.rand((4,1)))

[print(trace.shape) for trace in model.trace]
torch.Size([4, 16])
torch.Size([4, 32])

[None, None]
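
The [None, None] at the end is just the return value of the list comprehension used for printing. More importantly, a plain Python list would not register the layers with the parent Module; ModuleList does, so the optimizer can see their parameters. A quick sketch:

class Broken(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(1, 16), nn.Linear(16, 32)]  # plain list: NOT registered

class Ok(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(1, 16), nn.Linear(16, 32)])  # registered

print(len(list(Broken().parameters())))  # 0 -> an optimizer would see nothing
print(len(list(Ok().parameters())))      # 4 -> weight and bias of each layer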

ModuleDict: when we need to choose

What if we want to switch to LeakyReLU in our conv_block? We can use ModuleDict to create a dictionary of Modules and dynamically switch between them when we want:

def conv_block(in_f, out_f, activation='relu', *args, **kwargs):
    
    activations = nn.ModuleDict([
                ['lrelu', nn.LeakyReLU()],
                ['relu', nn.ReLU()]
    ])
    
    return nn.Sequential(
        nn.Conv2d(in_f, out_f, *args, **kwargs),
        nn.BatchNorm2d(out_f),
        activations[activation]
    )
print(conv_block(1, 32,'lrelu', kernel_size=3, padding=1))
print(conv_block(1, 32,'relu', kernel_size=3, padding=1))
Sequential(
  (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): LeakyReLU(negative_slope=0.01)
)
Sequential(
  (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
)
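
The same idea also works inside a Module, selecting a branch at runtime by its key; here is a minimal sketch with hypothetical names (MultiHead and its heads are made up for illustration):

class MultiHead(nn.Module):
    def __init__(self, in_f):
        super().__init__()
        self.heads = nn.ModuleDict({
            'classification': nn.Linear(in_f, 10),
            'regression': nn.Linear(in_f, 1),
        })

    def forward(self, x, head='classification'):
        return self.heads[head](x)  # pick the branch by its key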

Final implementation

Let's wrap everything up!

def conv_block(in_f, out_f, activation='relu', *args, **kwargs):
    activations = nn.ModuleDict([
                ['lrelu', nn.LeakyReLU()],
                ['relu', nn.ReLU()]
    ])
    
    return nn.Sequential(
        nn.Conv2d(in_f, out_f, *args, **kwargs),
        nn.BatchNorm2d(out_f),
        activations[activation]
    )

def dec_block(in_f, out_f):
    return nn.Sequential(
        nn.Linear(in_f, out_f),
        nn.Sigmoid()
    )

class MyEncoder(nn.Module):
    def __init__(self, enc_sizes, *args, **kwargs):
        super().__init__()
        self.conv_blocks = nn.Sequential(*[conv_block(in_f, out_f, kernel_size=3, padding=1, *args, **kwargs) 
                       for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])
        
    def forward(self, x):
        return self.conv_blocks(x)
        
class MyDecoder(nn.Module):
    def __init__(self, dec_sizes, n_classes):
        super().__init__()
        self.dec_blocks = nn.Sequential(*[dec_block(in_f, out_f) 
                       for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
        self.last = nn.Linear(dec_sizes[-1], n_classes)

    def forward(self, x):
        x = self.dec_blocks(x)
        return self.last(x)
    
    
class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, enc_sizes, dec_sizes,  n_classes, activation='relu'):
        super().__init__()
        self.enc_sizes = [in_c, *enc_sizes]
        self.dec_sizes = [self.enc_sizes[-1] * 28 * 28, *dec_sizes]

        self.encoder = MyEncoder(self.enc_sizes, activation=activation)
        
        self.decoder = MyDecoder(self.dec_sizes, n_classes)
        
    def forward(self, x):
        x = self.encoder(x)
        
        x = x.flatten(1) # flat
        
        x = self.decoder(x)
        
        return x
model = MyCNNClassifier(1, [32,64], [1024, 512], 10, activation='lrelu')
print(model)
MyCNNClassifier(
  (encoder): MyEncoder(
    (conv_blocks): Sequential(
      (0): Sequential(
        (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.01)
      )
      (1): Sequential(
        (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.01)
      )
    )
  )
  (decoder): MyDecoder(
    (dec_blocks): Sequential(
      (0): Sequential(
        (0): Linear(in_features=50176, out_features=1024, bias=True)
        (1): Sigmoid()
      )
      (1): Sequential(
        (0): Linear(in_features=1024, out_features=512, bias=True)
        (1): Sigmoid()
      )
    )
    (last): Linear(in_features=512, out_features=10, bias=True)
  )
)
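
As a final sanity check (a quick sketch), we can push a dummy MNIST-sized batch through the model:

import torch

x = torch.rand(4, 1, 28, 28)  # hypothetical batch: 4 grayscale 28x28 images
out = model(x)
print(out.shape)  # torch.Size([4, 10]) -> one raw logit per class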

Conclusion

So, in summary:

  • Use Module when you have a big block composed of multiple smaller blocks
  • Use Sequential when you want to create a small block from layers
  • Use ModuleList when you need to iterate through some layers or building blocks and do something with each
  • Use ModuleDict when you need to parametrize some blocks of your model, for example an activation function

That's all folks!

Thank you for reading
