# AI for Trading

This repo contains my work for the Udacity AI for Trading nanodegree.
## Table of Contents
## Project 1. Trading with Momentum
- Learn the basics of stock markets. Learn how to calculate stock returns and design a momentum trading strategy.

Quiz: Stock Prices, Resample Data, Calculate Raw Returns, dtype and astype, top and bottom performers
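The return calculation above can be sketched with pandas; the tickers and prices below are made up for illustration:

```python
import pandas as pd

# Hypothetical daily closing prices for two made-up tickers
prices = pd.DataFrame(
    {"ABC": [100.0, 102.0, 101.0, 104.0],
     "XYZ": [50.0, 49.0, 50.5, 51.0]},
    index=pd.date_range("2020-01-06", periods=4, freq="B"),
)

# Raw (simple) return: r_t = p_t / p_(t-1) - 1
raw_returns = prices / prices.shift(1) - 1

# Resample to weekly frequency, keeping the last price of each week
weekly_close = prices.resample("W").last()

print(raw_returns.round(4))
```

A momentum signal would then rank assets by their trailing returns; `shift(1)` keeps each return aligned with the date it was realized.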
## Project 2. Breakout Strategy
- Learn about the overall quant workflow, including alpha signal generation, alpha combination, portfolio optimization, and trading.
- Learn the importance of outliers and how to detect them. Learn about methods designed to handle outliers.
- Learn about regression and related statistical tools that pre-process data before regression analysis. Learn commonly used time series models.
- Learn about stock volatility and how the GARCH model analyzes volatility. See how volatility is used in equity trading.
- Learn about pairs trading, and study the tools used to identify stock pairs and make trading decisions.
Quiz: advanced quant: test normality, rolling windows, pairs candidates
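One of the ideas above, detecting outliers with rolling windows, can be sketched roughly as follows; the returns are simulated and the 3-sigma threshold is an arbitrary choice:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated daily returns with one injected outlier
returns = pd.Series(rng.normal(0.0, 0.01, 120))
returns.iloc[60] = 0.15  # artificial spike

# Rolling z-score: distance of each return from its trailing 20-day mean
window = 20
rolling_mean = returns.rolling(window).mean()
rolling_std = returns.rolling(window).std()
zscore = (returns - rolling_mean) / rolling_std

# Flag returns more than 3 standard deviations from the rolling mean
outliers = zscore.abs() > 3
print(int(outliers.sum()))
```

In practice the flagged points would be winsorized or investigated rather than silently dropped.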
## Project 3. Smart Beta and Portfolio Optimization
- Overview of stocks, indices, and funds. Learn about ETFs.
- Learn fundamentals of portfolio risk and return.
- Learn how to optimize portfolios to meet certain criteria and constraints.
Quiz: funds_etfs_portfolio_optimization: cumsum_and_cumprod, cov, cvxpy_basis, cvxpy_adv
## Project 4. Alpha Research and Factor Modeling
- Learn about factors and how to convert factor values into portfolio weights for a dollar-neutral portfolio with a leverage ratio equal to 1 (i.e., standardize factor values).
- Learn fundamentals of factor models and types of factors. Learn how to compute portfolio variance using risk factor models. Learn time series and cross-sectional risk models.
- Learn how to use PCA to build risk factor models.
Quiz: zipline pipeline, zipline exercise, historical_variance, factor_model_asset_return, factor_model_portfolio_return, covariance_matrix_assets, portfolio_variance, pca_factor_model
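The PCA risk factor idea can be sketched with sklearn on simulated returns driven by two latent factors; all numbers here are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Simulated daily returns: 250 days x 10 assets driven by 2 latent factors
factors = rng.normal(0.0, 0.01, size=(250, 2))
loadings = rng.normal(0.0, 1.0, size=(2, 10))
returns = factors @ loadings + rng.normal(0.0, 0.002, size=(250, 10))

# Fit a 2-factor PCA risk model on de-meaned returns
pca = PCA(n_components=2)
pca.fit(returns - returns.mean(axis=0))

# Factor exposures (betas) and variance explained by each factor
exposures = pca.components_.T          # shape (10 assets, 2 factors)
explained = pca.explained_variance_ratio_
print(explained.round(3))
```

The exposures and factor variances would then feed the portfolio-variance computation `w' B F B' w + w' S w` from the risk-model bullet above.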
## Project 5. Intro to NLP
An NLP pipeline consists of text processing, feature extraction, and modeling.
- Text Processing: Learn text acquisition (plain text, tabular data, and online resources), simple data cleaning with Python regex and BeautifulSoup, and using nltk (the Natural Language Toolkit) for tokenization, stemming, and lemmatization.
- Financial Statements: Learn how to apply regexes to 10-Ks, and how BeautifulSoup can ease the parsing of (perfectly formatted) HTML and XML downloaded using the requests library.
- Basic NLP Analysis: Learn to quantitatively measure the readability of documents using readability indices, how to convert documents into vectors using bag-of-words and TF-IDF weighting, and metrics to compare similarities between documents.
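The bag-of-words/TF-IDF step can be sketched with sklearn; the three "documents" below are invented stand-ins for filing excerpts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Three toy documents standing in for filing excerpts
docs = [
    "revenue increased due to strong product sales",
    "revenue decreased due to weak product sales",
    "the board approved a new dividend policy",
]

# Convert documents to TF-IDF weighted bag-of-words vectors
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Cosine similarity between the document vectors
sim = cosine_similarity(tfidf)
print(sim.round(2))
```

The first two documents share most of their vocabulary, so their cosine similarity is higher than that of either with the third.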
## Project 6. Sentiment Analysis with Neural Networks
- Neural Network Basics: Learn maximum likelihood, cross entropy, logistic regression, gradient descent, regularization, and practical heuristics for training neural networks.
- Recurrent Neural Networks:
  - Learn to use RNNs to predict simple time series and train a character-level LSTM to generate new text based on the text of a book.
  - Learn the Word2Vec algorithm using the skip-gram architecture with negative sampling.
- Sentiment Analysis RNN: Implement a recurrent neural network that can predict if the text of a movie review is positive or negative.
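The neural-network basics above (cross entropy, gradient descent) can be sketched with a single-neuron logistic regression in plain numpy; the data is a made-up separable toy set:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy binary classification data: 1-D feature, label = 1 when x > 0
x = rng.normal(size=200)
y = (x > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(w * x + b)        # predicted probability of class 1
    # Gradient of the mean cross-entropy loss w.r.t. w and b
    grad_w = np.mean((p - y) * x)
    grad_b = np.mean(p - y)
    w -= lr * grad_w              # gradient descent step
    b -= lr * grad_b

accuracy = np.mean((sigmoid(w * x + b) > 0.5) == (y == 1))
print(round(accuracy, 3))
```

The same loss and update rule scale up to the RNN sentiment model; only the function producing `p` changes.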
## Project 7. Combining Signals for Enhanced Alpha
- Decision Trees: Learn how to branch a decision tree using entropy and information gain. Implement a decision tree with sklearn for Titanic Survival Exploration and visualize it using graphviz.
- Model Testing and Evaluation: Learn Type 1 and Type 2 errors, precision vs. recall, cross validation for time series, and using learning curves to diagnose underfitting and overfitting.
- Random Forest: Learn the random forest ensemble method and implement it in sklearn.
- Feature Engineering: Certain alphas perform better or worse depending on market conditions. Feature engineering creates additional inputs that give models more context about the current market condition so that the model can adjust its predictions accordingly.
- Overlapping Labels: Mitigate the problems that arise when labels overlap in time and samples are therefore dependent on each other (non-IID).
- Feature Importance: Companies often prefer simple, interpretable models to complex black-box models; interpretability opens the door for complex models to be readily accepted. One way to interpret a model is to measure how much each feature contributed to its prediction, called feature importance. Learn how sklearn computes feature importance for tree-based methods. Learn how to calculate SHAP values for the feature importance of a single sample.
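A minimal sketch of sklearn's impurity-based feature importance for a tree ensemble, on a synthetic dataset where only the first two features carry signal:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic dataset: 5 features, only the first 2 are informative
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Impurity-based importances; they sum to 1 across features
importances = forest.feature_importances_
print(importances.round(3))
```

The informative features should dominate the importances; SHAP goes further by attributing a single prediction to each feature, rather than averaging over the whole model.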
## Project 8. Backtesting
- Basics: Learn best practices of backtesting and see what overfitting can "look like" in practice.
- Learn how to optimize a portfolio with transaction costs. Learn additional ways to design your optimization with efficiency in mind. This is really helpful when backtesting, because reasonably short runtimes let you test and iterate on your alphas more quickly.