read-xml-file-and-convert-date-to-csv-file-using-php
Ford-GoBike-Data-Visualization
This project used the following dataset, which is available from the linked source (**Datasource**: [Ford goBike](https://s3.amazonaws.com/fordgobike-data/index.html)).
Investigate_a_Dataset_titanic_data
Udacity
Analyze_ab_test_results
This is an A/B testing assignment completed for Udacity's Data Analyst Nanodegree program. The project consisted of analysing the results of an A/B test run by an e-commerce website and helping the company decide, through statistical conclusions, whether they should implement the new page, keep the old page, or perhaps run the experiment longer before making their decision.
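As a rough illustration of the kind of statistical conclusion involved, the sketch below runs a two-proportion z-test on hypothetical conversion counts (the counts and significance level are assumptions for illustration, not the project's actual numbers):

```python
import statsmodels.api as sm

# Hypothetical conversion counts for the old and new page (not the project's real data).
converted = [1750, 1803]   # conversions on old page, new page
visitors = [14500, 14600]  # visitors shown each page

# Two-proportion z-test: is the new page's conversion rate different from the old one's?
z_stat, p_value = sm.stats.proportions_ztest(converted, visitors, alternative='two-sided')

print(f"z = {z_stat:.3f}, p = {p_value:.3f}")
# At alpha = 0.05, a p-value above 0.05 means we fail to reject the null hypothesis,
# which would favour keeping the old page or running the experiment longer.
```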
Explore-Weather-Trends
In this project, I have analysed the local temperature of Alexandria against the global temperature data and compared the two trends. I was provided with a database on the Udacity portal, from which I had to extract, manipulate, and visualize the data with the following goal.
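A minimal sketch of the comparison, assuming the extracted data was saved to two CSVs (the file and column names here are placeholders, not the actual Udacity export):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names; the real CSVs come from the Udacity SQL workspace export.
local = pd.read_csv('alexandria_temps.csv')  # assumed columns: year, avg_temp
global_ = pd.read_csv('global_temps.csv')    # assumed columns: year, avg_temp

# Smooth year-to-year noise with a 10-year moving average before comparing trends.
local['moving_avg'] = local['avg_temp'].rolling(window=10).mean()
global_['moving_avg'] = global_['avg_temp'].rolling(window=10).mean()

plt.plot(local['year'], local['moving_avg'], label='Alexandria (10-yr MA)')
plt.plot(global_['year'], global_['moving_avg'], label='Global (10-yr MA)')
plt.xlabel('Year')
plt.ylabel('Average temperature (°C)')
plt.legend()
plt.show()
```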
Investigate-a-Dataset-titanic_data
Purpose: To perform a data analysis on a sample Titanic dataset. This dataset contains demographics and passenger information. You can view a description of this dataset on the Kaggle website, where the data was obtained: https://www.kaggle.com/c/titanic/data.
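For example, a first pass at the demographics might group survival rate by sex and passenger class (column names follow the Kaggle file; the local file name is an assumption):

```python
import pandas as pd

# 'titanic_data.csv' is an assumed local copy of the Kaggle dataset.
df = pd.read_csv('titanic_data.csv')

# Survival rate broken down by sex and passenger class.
survival = df.groupby(['Sex', 'Pclass'])['Survived'].mean().unstack()
print(survival.round(2))
```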
cleaning_student-Missing-Data-Tidiness-Quality
Cleaning sequence headers (e.g., "### Missing Data", "#### treatments: Missing records (280 instead of 350)", "##### Define", "##### Code", and "##### Test") are set up for you for all required cleaning operations. Your task is to fill in the Define, Code, and Test sequences. Since some of the cleaning operations depend on earlier ones, later cleaning operations (as shown in the solution notebooks) may not work correctly if your cleaning code in this notebook is wrong. Be sure to thoroughly code and test your cleaning operations.
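A sketch of one Define/Code/Test sequence for the missing treatments records, assuming the extra records live in a supplementary file (the file names and the cleaned-table name are assumptions):

```python
import pandas as pd

# Define: the treatments table has 280 records instead of 350; append the 70
# missing records from an assumed supplementary file, treatments_cut.csv.

# Code
treatments = pd.read_csv('treatments.csv')
treatments_cut = pd.read_csv('treatments_cut.csv')
treatments_clean = pd.concat([treatments, treatments_cut], ignore_index=True)

# Test
assert len(treatments_clean) == 350
```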
Wrangling-Data-WeRateDogs-master
Goal of the project: Wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. Additional gathering using the Twitter API, then assessing and cleaning the data, was required for great analyses and visualizations.
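A minimal sketch of that additional gathering step, assuming tweepy and placeholder credentials (the keys, the example tweet ID, and the output file are illustrative, not the project's actual code):

```python
import json
import tweepy

# Placeholder credentials; real keys come from a Twitter developer account.
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth, wait_on_rate_limit=True)

# Query each archived tweet ID for the extra fields the archive lacks
# (e.g. retweet and favorite counts), storing the raw JSON for later
# assessing and cleaning.
tweet_ids = [1234567890123456789]  # placeholder; real IDs come from the archive's tweet_id column
with open('tweet_json.txt', 'w') as f:
    for tweet_id in tweet_ids:
        status = api.get_status(tweet_id, tweet_mode='extended')
        json.dump(status._json, f)
        f.write('\n')
```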