Basic-Mathematics-for-Machine-Learning
The motive behind creating this repo is to remove the fear of mathematics so that you can do whatever you want to do in Machine Learning, Deep Learning, and other fields of AI.
In this repo I demonstrate the basics of Algebra, Calculus, Statistics, and Probability. Try this code in your Python notebook, which is provided in the edX course.
In this repo you will also learn the essential libraries such as NumPy, pandas, and Matplotlib.
I will upload new material whenever I find something useful; you can also help keep this repo fresh.
Source of this topic
Why Worry About The Maths?
There are many reasons why the mathematics of Machine Learning is important, and I will highlight some of them below:
- Selecting the right algorithm, which includes giving consideration to accuracy, training time, model complexity, number of parameters and number of features.
- Choosing parameter settings and validation strategies.
- Identifying underfitting and overfitting by understanding the Bias-Variance tradeoff.
- Estimating the right confidence interval and uncertainty.
Source of this topic
What Level of Maths Do You Need?
Linear Algebra:
A scientist, Skyler Speakman, recently said that “Linear Algebra is the mathematics of the 21st century” and I totally agree with the statement. In ML, Linear Algebra comes up everywhere. Topics such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD), Eigendecomposition of a matrix, LU Decomposition, QR Decomposition/Factorization, Symmetric Matrices, Orthogonalization & Orthonormalization, Matrix Operations, Projections, Eigenvalues & Eigenvectors, Vector Spaces and Norms are needed for understanding the optimization methods used for machine learning. The amazing thing about Linear Algebra is that there are so many online resources. I have always said that the traditional classroom is dying because of the vast amount of resources available on the internet. My favorite Linear Algebra course is the one offered by MIT OpenCourseWare (Prof. Gilbert Strang).
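As a quick taste of how a couple of these topics look in code (this is only a minimal NumPy sketch with made-up data, not part of the course material), here is eigendecomposition, SVD, and a two-line PCA:

```python
import numpy as np

# A small symmetric matrix, chosen arbitrarily for illustration
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Eigendecomposition of a symmetric matrix: A = V diag(w) V^T
w, V = np.linalg.eigh(A)
print("eigenvalues:", w)
print("eigenvectors (columns):\n", V)

# Singular Value Decomposition: A = U diag(s) Vt
U, s, Vt = np.linalg.svd(A)
print("singular values:", s)

# PCA via SVD: centre the data, then project onto the top right-singular vectors
X = np.random.default_rng(0).normal(size=(100, 3))   # toy data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
top2_components = Vt[:2]              # directions of maximum variance
X_reduced = Xc @ top2_components.T    # project onto the first two PCs
print("reduced shape:", X_reduced.shape)
```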
Probability Theory and Statistics:
Machine Learning and Statistics aren’t very different fields. Actually, someone recently defined Machine Learning as ‘doing statistics on a Mac’. Some of the fundamental Statistical and Probability Theory needed for ML are Combinatorics, Probability Rules & Axioms, Bayes’ Theorem, Random Variables, Variance and Expectation, Conditional and Joint Distributions, Standard Distributions (Bernoulli, Binomial, Multinomial, Uniform and Gaussian), Moment Generating Functions, Maximum Likelihood Estimation (MLE), Prior and Posterior, Maximum a Posteriori Estimation (MAP) and Sampling Methods.
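To make two of these concrete (a small illustrative sketch with invented numbers, not a prescribed method), here is Bayes' Theorem applied to a toy diagnostic test and the Maximum Likelihood Estimate of a Gaussian's parameters:

```python
import numpy as np

# Bayes' theorem with made-up numbers: P(disease | positive test)
p_disease = 0.01              # prior
p_pos_given_disease = 0.95    # likelihood (sensitivity)
p_pos_given_healthy = 0.05    # false positive rate
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"posterior P(disease | +) = {p_disease_given_pos:.3f}")

# Maximum Likelihood Estimation for a Gaussian: the MLE of the mean is the
# sample mean, and the MLE of the variance is the (biased) sample variance.
rng = np.random.default_rng(42)
samples = rng.normal(loc=2.0, scale=1.5, size=10_000)
mu_hat = samples.mean()
sigma2_hat = samples.var()    # ddof=0 divides by n, which is the MLE
print(f"MLE mean = {mu_hat:.3f}  (true 2.0)")
print(f"MLE variance = {sigma2_hat:.3f}  (true {1.5**2})")
```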
Multivariate Calculus:
Some of the necessary topics include Differential and Integral Calculus, Partial Derivatives, Vector-Valued Functions, Directional Derivatives and Gradients, the Hessian, the Jacobian, the Laplacian, and the Lagrangian.
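As a small numerical illustration (a sketch only, using a toy function I made up for this example), partial derivatives, the gradient, and the Hessian can be approximated with finite differences:

```python
import numpy as np

def f(x):
    """A toy scalar function of two variables: f(x, y) = x^2 + 3xy + y^2."""
    return x[0] ** 2 + 3 * x[0] * x[1] + x[1] ** 2

def numerical_gradient(f, x, h=1e-5):
    """Central-difference approximation of the gradient (vector of partial derivatives)."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

def numerical_hessian(f, x, h=1e-5):
    """Finite-difference approximation of the Hessian (matrix of second partial derivatives)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

x0 = np.array([1.0, 2.0])
print("gradient at (1, 2):", numerical_gradient(f, x0))   # analytic answer: [8, 7]
print("Hessian at (1, 2):\n", numerical_hessian(f, x0))   # analytic answer: [[2, 3], [3, 2]]
```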
Algorithms and Complex Optimizations:
This is important for understanding the computational efficiency and scalability of our Machine Learning algorithms and for exploiting sparsity in our datasets. Knowledge of data structures (Binary Trees, Hashing, Heaps, Stacks, etc.), Dynamic Programming, Randomized & Sublinear Algorithms, Graphs, Gradient Descent/Stochastic Gradient Descent, and Primal-Dual methods is needed; a small gradient-descent sketch follows below.
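The sketch below is a minimal example of batch gradient descent fitting a one-variable linear model on synthetic data I generated for illustration (swap the full-batch gradients for minibatch estimates and you get Stochastic Gradient Descent):

```python
import numpy as np

# Fit y = w*x + b by plain (batch) gradient descent on the mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # true w = 3.0, b = 0.5

w, b = 0.0, 0.0
lr = 0.1                       # learning rate
for step in range(500):
    y_pred = w * x + b
    error = y_pred - y
    # Partial derivatives of the MSE loss with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.3f}, b = {b:.3f}")   # should be close to 3.0 and 0.5
```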
Others:
This comprises other math topics not covered in the four major areas described above. They include Real and Complex Analysis (Sets and Sequences, Topology, Metric Spaces, Single-Valued and Continuous Functions, Limits, Cauchy Kernel, Fourier Transforms), Information Theory (Entropy, Information Gain), Function Spaces, and Manifolds.
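For the Information Theory items, here is a minimal sketch (with a toy label array and a hypothetical feature split invented for this example) of entropy and information gain, the quantities decision trees use to pick splits:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, split_mask):
    """Entropy reduction from splitting the labels by a boolean mask."""
    n = len(labels)
    left, right = labels[split_mask], labels[~split_mask]
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# Toy example: a feature that separates the two classes cleanly
labels = np.array([0, 0, 0, 1, 1, 1, 1, 0])
feature = np.array([1, 2, 1, 7, 8, 9, 6, 3])
gain = information_gain(labels, feature > 5)
print(f"entropy of labels: {entropy(labels):.3f} bits")
print(f"information gain of split 'feature > 5': {gain:.3f} bits")
```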