
AI Curriculum

Open Deep Learning and Reinforcement Learning lectures from top universities such as Stanford, MIT, and UC Berkeley.

Contents

Machine Learning

Applied Machine Learning

Cornell CS5785: Applied Machine Learning | Fall 2020

Lecture videos and resources from the Applied Machine Learning course taught at Cornell Tech in Fall 2020. The lectures cover ML from the very basics, including the most important ML algorithms and how to apply them in practice. All materials are executable: the slides are Jupyter notebooks with programmatically generated figures, so readers can tweak parameters and regenerate the figures themselves.

Source: Cornell
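To give a flavor of the "executable slides" idea above, here is a minimal sketch (my own toy example, not course material) of fitting one of the basic algorithms the course covers, ordinary least squares, where a reader could tweak the noise level or slope and rerun:

```python
import numpy as np

# Toy data: y = 3.0 * x + 0.5 plus a little noise; tweak these and regenerate.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = 3.0 * X[:, 0] + 0.5 + 0.1 * rng.standard_normal(50)

# Ordinary least squares: add a bias column and solve the linear system.
A = np.hstack([X, np.ones((50, 1))])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
# w and b land close to the true slope 3.0 and intercept 0.5
```

Rerunning the cell with a different noise scale shows directly how the fit degrades, which is the kind of interaction the notebook-based slides enable.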

Deep Learning

Introduction to Deep Learning

UC Berkeley CS 182: Deep Learning | Spring 2021

Source: Berkeley

MIT 6.S191: Introduction to Deep Learning | 2020

Lecture Series

MIT's introductory course on deep learning methods, with applications to computer vision, natural language processing, biology, and more! Students will gain foundational knowledge of deep learning algorithms and get practical experience building neural networks in TensorFlow. The course concludes with a project proposal competition with feedback from staff and a panel of industry sponsors. Prerequisites assume calculus (i.e., taking derivatives) and linear algebra (i.e., matrix multiplication); we'll try to explain everything else along the way! Experience in Python is helpful but not necessary. Listeners are welcome!

Source: MIT
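The course itself builds networks in TensorFlow; as a rough illustration of why its two stated prerequisites suffice, here is a numpy sketch (my own, not course code) showing that one dense layer is just matrix multiplication forward and a derivative backward:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))   # batch of 4 inputs, 3 features each
W = rng.standard_normal((3, 2))   # weights: 3 inputs -> 2 outputs
t = rng.standard_normal((4, 2))   # regression targets

y = x @ W                         # forward pass: linear algebra
loss = np.mean((y - t) ** 2)      # mean squared error

grad_y = 2 * (y - t) / y.size     # calculus: d(loss)/d(y)
grad_W = x.T @ grad_y             # chain rule back to the weights

W -= 0.1 * grad_W                 # one gradient-descent step
new_loss = np.mean((x @ W - t) ** 2)
# the loss decreases after the step
```

Everything else in a deep network (more layers, nonlinearities, better optimizers) composes these same two operations.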

CNNs for Visual Recognition

CS231n: CNNs for Visual Recognition, Stanford | Spring 2019

Lecture Series

Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization, and detection. Recent developments in neural network (aka "deep learning") approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into the details of deep learning architectures, with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train, and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. The final assignment will involve training a multi-million-parameter convolutional neural network and applying it to the largest image classification dataset (ImageNet). We will focus on teaching how to set up the problem of image recognition, the learning algorithms (e.g. backpropagation), and practical engineering tricks for training and fine-tuning the networks, and will guide students through hands-on assignments and a final course project. Much of the background and materials of this course will be drawn from the ImageNet Challenge.

Source: Stanford
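The operation at the heart of the CNNs this course studies is convolution: sliding a small kernel over an image and taking a dot product at every position. A minimal sketch (a toy, not course code):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution via explicit loops, for clarity."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)            # a linear intensity ramp
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # responds to horizontal gradients
features = conv2d(image, edge_kernel)            # output shape: 3 x 3
# every entry is -6.0, since the ramp has a constant horizontal gradient
```

In a real CNN the kernels are learned by backpropagation rather than hand-designed, and libraries implement this far more efficiently, but the sliding dot product is the same.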

NLP with Deep Learning

CS224n: NLP with Deep Learning, Stanford | Winter 2019

Lecture Series

Natural language processing (NLP) is a crucial part of artificial intelligence (AI), modeling how people share information. In recent years, deep learning approaches have obtained very high performance on many NLP tasks. In this course, students gain a thorough introduction to cutting-edge neural networks for NLP.

Source: Stanford
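CS224n opens with word vectors: representing words as dense vectors so that geometric similarity tracks semantic relatedness. A toy sketch of that idea (the vectors below are hand-made for illustration, not trained embeddings):

```python
import numpy as np

# Hypothetical 3-D "embeddings"; real models learn hundreds of dimensions.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1 for parallel vectors, 0 for orthogonal ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_royal = cosine(vectors["king"], vectors["queen"])
sim_fruit = cosine(vectors["king"], vectors["apple"])
# related words score higher than unrelated ones
```

The neural architectures in the course (RNNs, attention, Transformers) all operate on such vector representations rather than on raw strings.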

Unsupervised Learning

CS294-158-SP20: Deep Unsupervised Learning, UC Berkeley | Spring 2020

Lecture Series, YouTube

This course covers two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data such as natural images, audio waveforms and text corpora. Strides in self-supervised learning have started to close the gap between supervised representation learning and unsupervised representation learning in terms of fine-tuning to unseen tasks. This course will cover the theoretical foundations of these topics as well as their newly enabled applications.
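As a minimal flavor of learning representations without labels (my own toy, not course material): PCA is the optimal *linear* autoencoder, compressing unlabeled data to a low-dimensional code and reconstructing it. Deep generative and self-supervised models extend this idea far beyond the linear case.

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.standard_normal((200, 2))   # true 2-D structure
mix = rng.standard_normal((2, 5))
data = latent @ mix                      # observed 5-D points, no labels

centered = data - data.mean(0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
code = centered @ vt[:2].T               # encode: 5-D -> 2-D
recon = code @ vt[:2] + data.mean(0)     # decode: 2-D -> 5-D
err = np.mean((data - recon) ** 2)
# the data truly lies in a 2-D subspace, so reconstruction error is ~0
```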

Multi-Task and Meta Learning

Stanford CS330: Multi-Task and Meta Learning | 2019

Lecture Series, YouTube

While deep learning has achieved remarkable success in supervised and reinforcement learning problems, such as image classification, speech recognition, and game playing, these models are, to a large degree, specialized for the single task they are trained for. This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:

  • goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
  • meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
  • curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer

Source: Stanford University CS330
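To make the meta-learning setting concrete, here is a toy sketch (my own assumption of a representative method, not course code) of a Reptile-style algorithm: meta-train an initialization across many related tasks so a few gradient steps adapt it to a new one.

```python
import numpy as np

rng = np.random.default_rng(3)

def adapt(w, a, steps=5, lr=0.1):
    """Inner loop: gradient descent on one task y = a * x, model y_hat = w * x."""
    for _ in range(steps):
        x = rng.standard_normal(10)
        grad = np.mean(2 * (w * x - a * x) * x)
        w -= lr * grad
    return w

w_meta = 0.0
for _ in range(200):                   # outer loop over sampled tasks
    a = rng.uniform(1.5, 2.5)          # the task family clusters around slope 2
    w_task = adapt(w_meta, a)
    w_meta += 0.1 * (w_task - w_meta)  # Reptile update: move toward adapted weights
# w_meta lands near the center of the task family (~2.0),
# so adapting to any new task in the family takes only a few steps
```

The same structure-sharing idea appears in the goal-conditioned and lifelong-learning settings the course covers, with richer models in place of the single weight.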

DS-GA 1008: Deep Learning, NYU | Spring 2020

Lecture Series

This course concerns the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. The lectures are taught by Yann LeCun, and the entire series is officially available as a YouTube playlist: https://www.youtube.com/playlist?list=PLLHTzKZzVU9eaEyErdV26ikyolxOsz6mq

Source: NYU Center for Data Science

Deep Reinforcement Learning

CS285: Deep Reinforcement Learning, UC Berkeley | Fall 2020

Lecture Series, YouTube
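Deep RL courses like CS285 build on tabular Q-learning, replacing the table with a neural network. A minimal sketch of the tabular version (my own toy, not course code) on a 5-state chain where moving right reaches a rewarding goal:

```python
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # the Q-table: expected return per (s, a)
rng = np.random.default_rng(4)

for _ in range(300):                 # episodes
    s = 0
    while s != 4:                    # state 4 is the terminal goal
        a = int(rng.integers(2))     # random behavior policy (Q-learning is off-policy)
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
# the greedy policy now chooses "right" in every non-terminal state
```

Deep Q-learning keeps this exact update but estimates Q(s, a) with a network, which is what makes large state spaces tractable.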
