• Stars: 7
• Rank: 2,294,772 (Top 46%)
• Language: Jupyter Notebook
• Created: almost 7 years ago
• Updated: over 5 years ago

Repository Details

The goal of this project was to address the problem of a telecom operator losing customers to its competitors. A predictive model was built with classification algorithms to predict the likelihood of churn (a subscriber leaving the service).
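A minimal sketch of the modelling step described above, assuming a scikit-learn workflow; the file name and column names (tenure, monthly_charges, contract, churn) are illustrative placeholders, not taken from the repository.

```python
# Hedged sketch: illustrative churn-classification workflow (not the repo's exact code).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical input file and columns; replace with the real telecom data set.
df = pd.read_csv("telecom_churn.csv")
X = pd.get_dummies(df[["tenure", "monthly_charges", "contract"]], drop_first=True)
y = df["churn"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Report precision/recall per class; churn data sets are usually imbalanced.
print(classification_report(y_test, model.predict(X_test)))
```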

More Repositories

1

Football-Data-Scrapper-ETL-to-PostgreSQL

In this project, I built a web scraper for the football-data.co.uk website and performed Extraction, Transformation & Loading (ETL) of the scraped data into a MySQL database; a brief sketch follows this entry.
Python
5
star
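For the scrape-and-load step described in the entry above, a minimal sketch assuming a pandas + SQLAlchemy route; the CSV URL, connection string, columns, and table name are placeholders rather than values taken from the repository.

```python
# Hedged sketch: download a results CSV and load it into MySQL (illustrative only).
import pandas as pd
from sqlalchemy import create_engine

# Placeholder URL: football-data.co.uk publishes per-season CSV files; adjust as needed.
CSV_URL = "https://www.football-data.co.uk/mmz4281/2324/E0.csv"

# Placeholder credentials and database; the repository may use different settings.
engine = create_engine("mysql+pymysql://user:password@localhost:3306/football")

def etl() -> None:
    # Extract: read the raw CSV straight from the site.
    raw = pd.read_csv(CSV_URL)
    # Transform: keep a few core columns and normalise their names.
    matches = raw[["Date", "HomeTeam", "AwayTeam", "FTHG", "FTAG", "FTR"]].rename(
        columns=str.lower
    )
    # Load: replace (or append to) the target table.
    matches.to_sql("matches", engine, if_exists="replace", index=False)

if __name__ == "__main__":
    etl()
```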
2

Data-Engineering-Learning-Guide

Aggregates data from multiple sources and consolidates it into an analytics data warehouse to support organization-wide analytics/reports used by data analysts, data scientists, or the BI team.
3
star
3

Server-Log-ETL-to-Database

This project leverages Python & SQL to perform ETL and analysis on application user data; a short sketch follows this entry.
Python
2
star
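A short sketch of the kind of log-to-database ETL the entry above describes, assuming a common-log-format input and SQLite as a stand-in database; the file name, regex, and table are illustrative, not taken from the repository.

```python
# Hedged sketch: parse server log lines and load them into a database (illustrative).
import re
import sqlite3

# Simplified common-log-format pattern; the real logs may differ.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3})'
)

def parse_line(line: str):
    match = LOG_PATTERN.match(line)
    if match is None:
        return None
    fields = match.groupdict()
    return fields["ip"], fields["ts"], fields["request"], int(fields["status"])

def load(log_path: str = "access.log", db_path: str = "logs.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS requests (ip TEXT, ts TEXT, request TEXT, status INTEGER)"
    )
    with open(log_path) as handle:
        rows = [row for row in map(parse_line, handle) if row is not None]
    conn.executemany("INSERT INTO requests VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    # Simple analysis step: count requests per status code.
    for status, count in conn.execute("SELECT status, COUNT(*) FROM requests GROUP BY status"):
        print(status, count)

if __name__ == "__main__":
    load()
```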
4

XE-API-And-Internal-Data-ETL-And-Analytics

This repository contains the code, data, and resources used to implement an exchange-rate data-pull pipeline (a brief sketch follows this entry). The project is part of the requirements for AutoChek's data engineer interview process.
Python
2
star
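A brief sketch of an exchange-rate pull and flattening step like the one described above; the endpoint and payload shape are hypothetical, so the demo runs on a hard-coded sample response instead of a live call.

```python
# Hedged sketch: pull exchange rates from an API and flatten them into rows (illustrative).
from datetime import date

import pandas as pd
import requests

def fetch_rates(url: str) -> dict:
    """Call the (placeholder) rates endpoint and return its JSON payload."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

def to_rows(payload: dict) -> pd.DataFrame:
    """Flatten a {base, date, rates: {currency: rate}} payload into tidy rows."""
    return pd.DataFrame(
        {"base": payload["base"], "date": payload["date"], "currency": ccy, "rate": rate}
        for ccy, rate in payload["rates"].items()
    )

if __name__ == "__main__":
    # Offline demo with a hypothetical payload; swap in fetch_rates("<real endpoint>") to go live.
    sample = {"base": "USD", "date": str(date.today()), "rates": {"NGN": 1500.0, "EUR": 0.92}}
    print(to_rows(sample))
```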
5

API-Data-Ingestion-to-Data-Lake-using-AWS-Lambda

A data pipeline that pulls data from a stable API and stores it in Amazon cloud storage; a sketch of the Lambda handler follows this entry.
Python
1
star
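A minimal sketch of what such a Lambda handler might look like, assuming boto3 and an S3 landing bucket; the bucket name, key layout, and source URL are placeholders, not details from the repository.

```python
# Hedged sketch: AWS Lambda handler that lands a raw API payload in S3 (illustrative).
import json
from datetime import datetime, timezone

import boto3
import urllib3

S3_BUCKET = "my-data-lake-raw"               # placeholder bucket name
SOURCE_URL = "https://api.example.com/data"  # placeholder endpoint

http = urllib3.PoolManager()
s3 = boto3.client("s3")

def handler(event, context):
    # Pull the payload from the upstream API.
    response = http.request("GET", SOURCE_URL)
    payload = response.data

    # Partition the landing key by ingestion date so downstream jobs can prune easily.
    now = datetime.now(timezone.utc)
    key = f"raw/source=example/dt={now:%Y-%m-%d}/payload-{now:%H%M%S}.json"

    s3.put_object(Bucket=S3_BUCKET, Key=key, Body=payload)
    return {"statusCode": 200, "body": json.dumps({"s3_key": key})}
```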
6

Udacity-Machine-Learning

Contains all resources for the Udacity Machine Learning coursework.
Jupyter Notebook
1
star
7

Historic-Events-Data

This project pulls historic event data and creates a pipeline that moves the data into a PostgreSQL database.
Python
1
star
8

FavCode54

Contains all resources for FavCode54 code practice.
Jupyter Notebook
1
star
9

Grocery-Sales-Report

This project was carried out to understand the factors influencing sales over a period of 5 years. The data used was from a grocery store and captures all the different product categories and their respective sales. A visualization dashboard was built to unearth trends in the sales pattern.
1
star
10

Access-Database-Data-ETL-To-PostgreSQL

This repository contains the code, data, and resources used to implement a data ingestion pipeline and to analyze the data to draw out insights. The project is part of the requirements for the Kippa interview process.
Python
1
star
11

Sales-Dashboard-Using-Microsoft-PowerBI

This project was carried out to understand the factors influencing sales over a period of 5 years. The data used was from a grocery store and captures all the different product categories and their respective sales. A visualization dashboard was built to unearth trends in the sales pattern; product category, customer segment, and customer region/location were used to drill down into the report.
HTML
1
star
12

Ingesting-Log-Data-Using-Dask-And-Airflow

The problem: there is a lack of visibility into resource usage by the engineering team and other team members who have access to the company's cloud resources. A sketch of the ingestion DAG follows this entry.
Python
1
star
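A minimal sketch of a Dask-plus-Airflow ingestion like the entry above, assuming Airflow 2.x; the DAG id, schedule, log path, and column names are illustrative placeholders, not taken from the repository.

```python
# Hedged sketch: Airflow DAG with one task that summarises usage logs via Dask (illustrative).
from datetime import datetime

import dask.dataframe as dd
from airflow import DAG
from airflow.operators.python import PythonOperator

def summarise_usage(log_glob: str = "/data/cloud_usage/*.csv") -> None:
    """Read usage logs lazily with Dask and aggregate spend per team."""
    df = dd.read_csv(log_glob)
    summary = df.groupby("team")["cost"].sum().compute()  # hypothetical columns
    print(summary)

with DAG(
    dag_id="cloud_usage_ingestion",       # placeholder name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="summarise_usage",
        python_callable=summarise_usage,
    )
```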
13

Student-Performance-Prediction

The goal of this project was to predict students' future scores based on their historic performance data. A predictive model was built using regression algorithms (linear and logistic regression) that achieved 85% accuracy. Much emphasis was placed on handling missing values as well as outliers in the data set; a minimal sketch of the regression step follows this entry.
Jupyter Notebook
1
star
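A minimal sketch of a regression step with explicit missing-value and outlier handling, assuming scikit-learn; the file and column names are illustrative placeholders, not taken from the repository.

```python
# Hedged sketch: impute missing values, clip outliers, fit a linear regression (illustrative).
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical file and columns; replace with the real student performance data.
df = pd.read_csv("student_scores.csv")
features = ["previous_score", "attendance_rate", "study_hours"]
X, y = df[features], df["final_score"]

# Clip extreme feature values as a simple outlier-handling step.
X = X.clip(X.quantile(0.01), X.quantile(0.99), axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Median imputation for missing values, then an ordinary least-squares fit.
model = make_pipeline(SimpleImputer(strategy="median"), LinearRegression())
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```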
14

Internet-Data-Subscribers-Sentiment-Analysis

The goal of this project is to understand subscribers' preferences as well as their positive or negative sentiments about internet service providers in Nigeria. The data for the analysis is mostly scraped from Twitter via the Twitter API and Tweepy (a Python library for accessing Twitter data). Other tools used include the Natural Language Toolkit (nltk) for tokenizing tweets and StreamListener for scheduling and downloading tweets in real time. The project is currently ongoing, to allow collection of a robust data set. A small sentiment-scoring sketch follows this entry.
1
star
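A small sketch of the sentiment-scoring step on already-collected tweets, assuming NLTK's VADER analyzer; the sample tweets are made up, and the Tweepy/StreamListener collection stage is only indicated in comments.

```python
# Hedged sketch: tokenize and score tweet sentiment with NLTK (illustrative only).
# The real pipeline collects tweets via the Twitter API / Tweepy (StreamListener);
# here a couple of made-up tweets stand in for that stage.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk.tokenize import word_tokenize

nltk.download("vader_lexicon", quiet=True)
nltk.download("punkt", quiet=True)

tweets = [
    "My ISP has been rock solid all month, great speeds!",        # made-up positive example
    "Network down again, third outage this week. Frustrating.",   # made-up negative example
]

analyzer = SentimentIntensityAnalyzer()
for tweet in tweets:
    tokens = word_tokenize(tweet.lower())    # tokenization step mentioned in the project
    score = analyzer.polarity_scores(tweet)  # compound score in [-1, 1]
    label = "positive" if score["compound"] >= 0 else "negative"
    print(label, score["compound"], tokens[:5])
```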