Hands-On Data Analysis with Pandas – Second Edition
This is the code repository for my book Hands-On Data Analysis with Pandas, published by Packt on July 26, 2019 (1st edition) and April 29, 2021 (2nd edition).
Versions
This repository contains git tags for the materials as they were at the time of publishing. Available tags:
Book Description
Data analysis has become an essential skill in a variety of domains where knowing how to work with data and extract insights can generate significant value. Hands-On Data Analysis with Pandas will show you how to analyze your data, get started with machine learning, and work effectively with the Python libraries often used for data science, such as pandas, NumPy, matplotlib, seaborn, and scikit-learn.
Using real-world datasets, you will learn how to use the pandas library to perform data wrangling to reshape, clean, and aggregate your data. Then, you will learn how to conduct exploratory data analysis by calculating summary statistics and visualizing the data to find patterns. In the concluding chapters, you will explore some applications of anomaly detection, regression, clustering, and classification using scikit-learn to make predictions based on past data.
This updated edition will equip you with the skills you need to use pandas 1.x to efficiently perform various data manipulation tasks, reliably reproduce analyses, and visualize your data for effective decision making—valuable knowledge that can be applied across multiple domains.
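As a rough illustration of that workflow (this is not an example from the book; the `sales.csv` file and its columns are hypothetical placeholders), a typical `pandas` session might look something like this:

```python
import pandas as pd

# hypothetical dataset -- the file name and columns are placeholders,
# not ones used in the book
df = pd.read_csv('sales.csv', parse_dates=['date'])

# data wrangling: clean and reshape
df = df.dropna(subset=['amount'])           # drop rows missing the amount
df['month'] = df['date'].dt.to_period('M')  # derive a month column

# exploratory data analysis: summary statistics and a quick plot
print(df['amount'].describe())              # count, mean, std, quartiles, etc.
monthly_totals = df.groupby('month')['amount'].sum()
monthly_totals.plot(kind='bar', title='Monthly totals')
```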
What You Will Learn
Prerequisite: If you don't have basic knowledge of Python or past experience with another language (R, SAS, MATLAB, etc.), consult the `ch_01/python_101.ipynb` Jupyter notebook for a Python crash course/refresher.
- Understand how data analysts and scientists gather and analyze data
- Perform data analysis and data wrangling in Python
- Combine, group, and aggregate data from multiple sources (see the sketch after this list)
- Create data visualizations with `pandas`, `matplotlib`, and `seaborn`
- Apply machine learning algorithms with `sklearn` to identify patterns and make predictions
- Use Python data science libraries to analyze real-world datasets
- Use `pandas` to solve several common data representation and analysis problems
- Collect data from APIs
- Build Python scripts, modules, and packages for reusable analysis code
- Utilize computer science concepts and algorithms to write more efficient code for data analysis
- Write and run simulations
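For instance, combining, grouping, and aggregating data from multiple sources (the sketch referenced in the list above) might look like the following; the `DataFrames` and column names are made up purely for illustration:

```python
import pandas as pd

# two hypothetical sources of data (made-up values, for illustration only)
orders = pd.DataFrame({
    'customer_id': [1, 2, 1, 3],
    'amount': [20.0, 35.5, 10.0, 42.0],
})
customers = pd.DataFrame({
    'customer_id': [1, 2, 3],
    'region': ['East', 'West', 'East'],
})

# combine the sources, then group and aggregate
combined = orders.merge(customers, on='customer_id')
summary = combined.groupby('region')['amount'].agg(['count', 'sum', 'mean'])
print(summary)
```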
Table of Contents
- Chapter 1, Introduction to Data Analysis, will teach you the fundamentals of data analysis, give you a foundation in statistics, and get your environment set up for working with data in Python and using Jupyter Notebooks.
- Chapter 2, Working with Pandas DataFrames, introduces you to the `pandas` library and shows you the basics of working with `DataFrames`.
- Chapter 3, Data Wrangling with Pandas, discusses the process of data manipulation, shows you how to explore an API to gather data, and guides you through data cleaning and reshaping with `pandas`.
- Chapter 4, Aggregating Pandas DataFrames, teaches you how to query and merge `DataFrames`, perform complex operations on them, including rolling calculations and aggregations, and how to work effectively with time series data.
- Chapter 5, Visualizing Data with Pandas and Matplotlib, shows you how to create your own data visualizations in Python, first using the `matplotlib` library, and then directly from `pandas` objects.
- Chapter 6, Plotting with Seaborn and Customization Techniques, continues the discussion on data visualization by teaching you how to use the `seaborn` library for visualizing your long-form data and giving you the tools you need to customize your visualizations, making them presentation-ready.
- Chapter 7, Financial Analysis: Bitcoin and the Stock Market, walks you through the creation of a Python package for analyzing stocks, building upon everything learned in chapters 1-6 and applying it to a financial application.
- Chapter 8, Rule-Based Anomaly Detection, covers simulating data and applying everything learned in chapters 1-6 to catching hackers attempting to authenticate to a website, using rule-based strategies for anomaly detection (see the sketch after this list).
- Chapter 9, Getting Started with Machine Learning in Python, introduces you to machine learning and building models using the `sklearn` library.
- Chapter 10, Making Better Predictions: Optimizing Models, shows you strategies for improving the performance of your machine learning models.
- Chapter 11, Machine Learning Anomaly Detection, revisits anomaly detection on login attempt data, using machine learning techniques, all while giving you a taste of how the workflow looks in practice.
- Chapter 12, The Road Ahead, contains resources for taking your skills to the next level and further avenues for exploration.
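To give a flavor of the rule-based approach mentioned for chapter 8 (the data and the rule below are made up for illustration and are not necessarily the exact strategies developed in the book), flagging IP addresses with unusually many failed login attempts could be as simple as a Tukey-fence check:

```python
import pandas as pd

# hypothetical counts of failed login attempts per IP address (made-up numbers)
failures = pd.Series(
    [2, 1, 3, 0, 2, 1, 150, 2],
    index=[f'10.0.0.{i}' for i in range(8)],
    name='failed_attempts',
)

# one simple rule: flag anything above the upper Tukey fence (Q3 + 1.5 * IQR)
q1, q3 = failures.quantile([0.25, 0.75])
upper_fence = q3 + 1.5 * (q3 - q1)
suspicious = failures[failures > upper_fence]
print(suspicious)  # the IP address with 150 failures gets flagged
```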
What's New in This Edition?
All the code examples have been updated for newer versions of the libraries used (see the `requirements.txt` file for the full list). The second edition also features new/revised examples highlighting new features. For `pandas` in particular, the first edition uses a much older version than what is currently available (pre-1.0), and this edition brings the content up to date with the latest version (1.x). You can look through the `pandas` release notes to get an idea of all the changes that have happened since the version of `pandas` used in the first edition (0.23.4). In addition, there are significant changes to the content of some chapters, while others have new and improved examples and/or datasets.
Notes on Environment Setup
Environment setup instructions are in chapter 1 of the text. If you don't have the book, you will need to:

1. Install Python >= 3.7 and < 3.10.
2. Set up a virtual environment and activate it.
3. Fork and clone this repository to obtain a local copy of the files (note that `git` will need to be installed).
4. Change the current directory to your local copy of the files.
5. Install the required packages using the `requirements.txt` file inside the directory.

You can then launch JupyterLab and use the `ch_01/checking_your_setup.ipynb` Jupyter notebook to check your setup. Consult this resource if you have issues with using your virtual environment in Jupyter.
Alternatively, consider using this repository on Binder or Google Colab.
Windows Users
If you have Python 3.9+ installed, you should create a virtual environment with `conda` and specify Python 3.8 as discussed in this issue:
$ conda create --name book_env python=3.8
Alternatively, you can use the `environment.yml` file, which will create the environment and install all the required packages:
$ conda install mamba -n base -c conda-forge
$ cd Hands-On-Data-Analysis-with-Pandas-2nd-edition
~/Hands-On-Data-Analysis-with-Pandas-2nd-edition$ mamba env create --file environment.yml
Apple Silicon Users
Make sure to use Python 3.9 if you plan to install packages with `pip`. If you decide to use `conda`, make sure to first install `mamba` and use that to install everything using the `m1_environment.yml` file instead:
$ conda install mamba -n base -c conda-forge
$ cd Hands-On-Data-Analysis-with-Pandas-2nd-edition
~/Hands-On-Data-Analysis-with-Pandas-2nd-edition$ mamba env create --file m1_environment.yml
Solutions
Each chapter comes with exercises. The solutions for chapters 1-11 can be found here. Since the exercises in chapter 12 are open-ended, no solutions are provided.
About the Author
Stefanie Molin (@stefmolin) is a software engineer and data scientist at Bloomberg in New York City, where she tackles tough problems in information security, particularly those revolving around data wrangling/visualization, building tools for gathering data, and knowledge sharing. She holds a bachelor of science degree in operations research from Columbia University's Fu Foundation School of Engineering and Applied Science with minors in Economics and Entrepreneurship and Innovation, as well as a master's degree in computer science, with a specialization in machine learning, from Georgia Tech. In her free time, she enjoys traveling the world, inventing new recipes, and learning new languages spoken both among people and computers.
Acknowledgements
Since the book limited the acknowledgements to 450 characters, the full version is here.