read-xml-file-and-convert-date-to-csv-file-using-php
Ford-GoBike-Data-Visualization
This project used the following dataset (**Datasource**: [Ford goBike](https://s3.amazonaws.com/fordgobike-data/index.html)).

Investigate_a_Dataset_titanic_data
Analyze_ab_test_results
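A decision of this kind usually rests on a two-proportion z-test comparing conversion rates between the old and new pages. The sketch below is an illustration only: the conversion counts and group sizes are made-up numbers, not the experiment's real data.

```python
import math

# Hypothetical counts (NOT from the real experiment):
# conversions and visitors for the old (control) and new (treatment) pages.
conv_old, n_old = 1200, 10000
conv_new, n_new = 1260, 10000

p_old = conv_old / n_old
p_new = conv_new / n_new
p_pool = (conv_old + conv_new) / (n_old + n_new)

# Standard error of the difference under the null hypothesis p_old == p_new.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
z = (p_new - p_old) / se

# Two-sided p-value from the normal CDF (erf-based, no SciPy needed).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.3f}, p-value = {p_value:.3f}")
```

If the p-value stays above the chosen significance level, the data does not justify switching pages, which is when running the experiment longer becomes an option.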
This is an A/B testing project completed for Udacity's Data Analyst Nanodegree program. The project consisted of interpreting the results of an A/B test run by an e-commerce website and helping the company decide, through statistical conclusions, whether to implement the new page, keep the old page, or run the experiment longer before making a decision.

Explore-Weather-Trends
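Comparing a local temperature trend against the global one typically hinges on smoothing the yearly averages with a moving average. This is a minimal pandas sketch with made-up yearly values; the real figures come from the Udacity-provided database.

```python
import pandas as pd

# Hypothetical yearly average temperatures (deg C); the real values are
# extracted from the Udacity-provided database.
df = pd.DataFrame({
    "year": range(2000, 2010),
    "city_temp":   [21.5, 21.7, 21.4, 21.9, 22.0, 21.8, 22.1, 22.3, 22.2, 22.4],
    "global_temp": [ 9.3,  9.5,  9.6,  9.5,  9.3,  9.6,  9.5,  9.7,  9.5,  9.6],
})

# 5-year moving averages smooth out year-to-year noise so the
# long-term local and global trends can be compared side by side.
df["city_ma"] = df["city_temp"].rolling(window=5).mean()
df["global_ma"] = df["global_temp"].rolling(window=5).mean()

print(df.tail())
```

Plotting `city_ma` and `global_ma` against `year` on one chart is the usual way to present the comparison.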
In this project, I analysed the local temperature of Alexandria City against global temperature data and compared the two trends. I was provided with a database on the Udacity portal, from which I had to extract, manipulate and visualize the data in line with the project goal.

Investigate_a_Dataset_tmdb-movies
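The load, wrangle, question, and plot steps of such an analysis can be sketched as below. In the real project the data would be loaded with `pd.read_csv("tmdb-movies.csv")`; here a tiny invented sample stands in so the sketch is self-contained, and the column names are assumptions based on the standard TMDB dataset.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Tiny hypothetical sample; the real project reads the TMDB CSV instead.
movies = pd.DataFrame({
    "original_title": ["A", "B", "C", "D"],
    "release_year": [2010, 2010, 2012, 2012],
    "revenue": [100, 0, 250, 150],
})

# Typical wrangling step: treat zero revenue as missing and drop those rows.
movies = movies[movies["revenue"] > 0]

# Example question: how does total revenue change by release year?
revenue_by_year = movies.groupby("release_year")["revenue"].sum()

revenue_by_year.plot(kind="bar", title="Total revenue by release year")
plt.xlabel("Release year")
plt.ylabel("Revenue")
plt.savefig("revenue_by_year.png")
```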
# Udacity--Project-Investigate-TMDB-Movies-Dataset

Hello everyone! I am Saeed Falana from Palestine, specializing in Computer Information Systems. My ultimate aim is to derive great results by combining knowledge and experience. I am passionate about data and insights, and I love data science and analytics. As one of the important steps, I have joined the Data Analyst Nanodegree.

#### Udacity-DA_Nanodegree
In these repositories, I will be showing my projects from Udacity's Data Analyst Nanodegree.

#### Project Overview
In this project, we analyze a dataset and then communicate our findings about it. We use the Python libraries NumPy, pandas, and Matplotlib to make the analysis easier.

#### What do I need to install?
You will need an installation of Python, plus the following libraries:

- pandas
- NumPy
- Matplotlib
- csv

It is recommended to install Anaconda, which comes with all of the necessary packages, as well as IPython Notebook.

#### Why this project?
In this project, we go through the data analysis process and see how everything fits together. I also used the Python libraries NumPy, pandas, and Matplotlib, which make writing data analysis code in Python a lot easier!

#### What have I learned?
After completing the project, I learned to:

- Know all the steps involved in a typical data analysis process
- Pose questions that can be answered with a given dataset, and then answer those questions
- Investigate problems in a dataset and wrangle the data into a usable format
- Communicate the results of an analysis
- Use vectorized operations in NumPy and pandas to speed up data analysis code
- Work with pandas' Series and DataFrame objects, which make data access more convenient
- Use Matplotlib to produce plots showing findings

Investigate-a-Dataset-titanic_data
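A typical question for the Titanic dataset is how survival rate varies by passenger attribute. This is a minimal sketch with an invented sample; the real analysis would read the Kaggle CSV (e.g. via `pd.read_csv`) using the Kaggle column names assumed below.

```python
import pandas as pd

# Small hypothetical sample standing in for the Kaggle Titanic data;
# column names follow the Kaggle schema (Survived, Sex, Pclass).
titanic = pd.DataFrame({
    "Survived": [1, 0, 1, 0, 1, 0],
    "Sex": ["female", "male", "female", "male", "female", "male"],
    "Pclass": [1, 3, 2, 3, 1, 2],
})

# Survival rate by sex: the mean of the 0/1 Survived flag per group.
survival_by_sex = titanic.groupby("Sex")["Survived"].mean()
print(survival_by_sex)
```

The same `groupby` pattern extends to `Pclass`, age bands, or any other demographic column.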
Purpose: to perform a data analysis on a sample Titanic dataset. This dataset contains demographics and passenger information. You can view a description of this dataset on the Kaggle website, where the data was obtained: https://www.kaggle.com/c/titanic/data

cleaning_student-Missing-Data-Tidiness-Quality-
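The Define/Code/Test sequence for the "treatments: Missing records (280 instead of 350)" header might look like the sketch below. The tables here are invented stand-ins; in the notebook the missing rows would come from a supplemental data file.

```python
import pandas as pd

# Define: the treatments table has 280 records instead of 350; append the
# missing records from a supplemental table (hypothetical data stands in).
treatments = pd.DataFrame({"patient_id": range(280), "dose": [1.0] * 280})
treatments_missing = pd.DataFrame({"patient_id": range(280, 350), "dose": [1.0] * 70})

# Code: concatenate the two tables into one cleaned copy.
treatments_clean = pd.concat([treatments, treatments_missing], ignore_index=True)

# Test: assert the expected record count so an incorrect fix fails loudly.
assert len(treatments_clean) == 350
```

Keeping the Test step as an executable assertion is what protects the later, dependent cleaning operations from silent breakage.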
Cleaning sequence headers (e.g., "### Missing Data", "#### treatments: Missing records (280 instead of 350)", "##### Define", "##### Code", and "##### Test") are set up for you for all required cleaning operations. Your task is to fill in the Define, Code, and Test sequences. Since some of the cleaning operations depend on earlier ones, if your cleaning code is wrong in this notebook, later cleaning operations shown in the solution notebooks may not work correctly. Be sure to code and test your cleaning operations thoroughly.

Wrangling-Data-WeRateDogs-master
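The gather/assess/clean flow for the WeRateDogs data can be sketched as follows. The column names mirror the public WeRateDogs archive schema, the sample rows are invented, and the per-tweet Twitter API gathering step is reduced to a comment because it needs credentials.

```python
import pandas as pd

# Gather: the archive CSV; the real project additionally pulls per-tweet
# fields (retweet/favorite counts) via the Twitter API, omitted here.
archive = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "rating_numerator": [13, 12, 1776],
    "rating_denominator": [10, 10, 10],
    "retweeted_status_id": [None, 99.0, None],  # non-null means a retweet
})

# Assess + clean: quality issue - retweets are not original ratings; drop them.
clean = archive[archive["retweeted_status_id"].isna()].copy()

# Tidiness: a single rating column is easier to analyze than two.
clean["rating"] = clean["rating_numerator"] / clean["rating_denominator"]
print(clean[["tweet_id", "rating"]])
```

Each assessed issue gets one small, testable step like this, so the final table stays trustworthy.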
Goal of the project: wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it contains only very basic tweet information; additional gathering via the Twitter API, followed by assessing and cleaning, was required for great analyses and visualizations.