Data sets for Data Science Projects

Here are some interesting data sources to serve as “fuel” for data science projects.

Data sets for Data Visualization Projects

1. FiveThirtyEight

FiveThirtyEight is an incredibly popular interactive news and sports site started by Nate Silver. They write interesting data-driven articles, like “Don’t blame a skills gap for lack of hiring in manufacturing” and “2016 NFL Predictions”.

FiveThirtyEight makes the data sets used in its articles available online on Github.
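
Because the data sets live in a public GitHub repo, you can load any of the CSV files straight into pandas from a raw URL. A minimal sketch; the file path below is illustrative, so browse the repo for the exact file you want:

    import pandas as pd

    # Raw CSV URL into the fivethirtyeight/data repo; this path is one example.
    url = ("https://raw.githubusercontent.com/fivethirtyeight/data/"
           "master/airline-safety/airline-safety.csv")
    df = pd.read_csv(url)
    print(df.head())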

View the FiveThirtyEight Data sets

2. BuzzFeed

BuzzFeed started as a purveyor of low-quality articles, but has since evolved and now writes some investigative pieces, like “The court that rules the world” and “The short life of Deonte Hoard”.

BuzzFeed makes the data sets used in its articles available on Github.

View the BuzzFeed Data sets

3. Socrata OpenData

Socrata OpenData is a portal that contains multiple clean data sets that can be explored in the browser or downloaded to visualize. A significant portion of the data comes from US government sources, and many of the data sets are outdated.

You can explore and download data from OpenData without registering. You can also use visualization and exploration tools to explore the data in the browser.
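
Socrata also exposes its data through the SODA API, which the sodapy package wraps for Python. A minimal sketch, assuming a placeholder portal domain and dataset ID (the ID comes from the data set's URL on the portal):

    import pandas as pd
    from sodapy import Socrata

    # Unauthenticated client; fine for small pulls, rate-limited without a token.
    client = Socrata("data.cityofchicago.org", None)

    # "xxxx-xxxx" is a placeholder dataset ID taken from the portal URL.
    records = client.get("xxxx-xxxx", limit=1000)
    df = pd.DataFrame.from_records(records)
    print(df.head())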

View Socrata OpenData

Data sets for Data Processing Projects

Sometimes you just want to work with a large data set. The end result doesn’t matter as much as the process of reading in and analyzing the data. You might use tools like Spark or Hadoop to distribute the processing across multiple nodes. Things to keep in mind when looking for a good data processing data set:

  • The cleaner the data, the better – cleaning a large data set can be very time consuming.
  • The data set should be interesting.
  • There should be an interesting question that can be answered with the data.
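
If you go the Spark route, the processing code itself tends to look like ordinary DataFrame code, with Spark handling the distribution. A minimal PySpark sketch, where the file path and column name are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("large-csv-demo").getOrCreate()

    # Hypothetical input: any large CSV, local or on HDFS/S3.
    df = spark.read.csv("events.csv", header=True, inferSchema=True)

    # A simple distributed aggregation: row counts per category.
    df.groupBy("category").agg(F.count("*").alias("rows")) \
      .orderBy(F.desc("rows")).show(10)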

Good places to find large public data sets are cloud hosting providers like Amazon and Google. They have an incentive to host the data sets, because it encourages you to analyze them using their infrastructure (and pay them for it).

4. AWS Public Data sets

Amazon makes large data sets available on its Amazon Web Services platform. You can download the data and work with it on your own computer, or analyze the data in the cloud using EC2 and Hadoop via EMR. You can read more about how the program works here.
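
Many of the AWS public data sets sit in S3 buckets that allow anonymous reads, so you can pull files with boto3 without signing requests. A sketch, where the bucket and key are placeholders to be replaced with ones from the AWS listing:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # An unsigned client works for buckets that permit anonymous access.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # Bucket and key are placeholders; take real ones from the data set's page.
    resp = s3.list_objects_v2(Bucket="example-public-bucket", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    s3.download_file("example-public-bucket", "path/to/file.csv", "local.csv")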

Amazon has a page that lists all of the data sets for you to browse. You’ll need an AWS account, although Amazon gives you a free access tier for new accounts that will enable you to explore the data without being charged.

View AWS Public Data sets

5. Google Public Data sets

Much like Amazon, Google also has a cloud hosting service, called Google Cloud Platform. With GCP, you can use a tool called BigQuery to explore large data sets.

Google lists all of the data sets on a page. You’ll need to sign up for a GCP account, but the first 1TB of queries you make is free.

View Google Public Data sets

Here are some examples:

  • USA Names – contains all Social Security name applications in the US, from 1879 to 2015.
  • Github Activity – contains all public activity on over 2.8 million public Github repositories.
  • Historical Weather – data from 9000 NOAA weather stations from 1929 to 2016.
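
For instance, you can query the USA Names data set above with the google-cloud-bigquery client. The sketch below assumes your GCP credentials are already configured and that the table is bigquery-public-data.usa_names.usa_1910_2013 (check the listing for the current table name):

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up your configured GCP credentials

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state = 'TX'
        GROUP BY name
        ORDER BY total DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.name, row.total)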

6. Wikipedia

Wikipedia is a free, online, community-edited encyclopedia. Wikipedia contains an astonishing breadth of knowledge, containing pages on everything from the Ottoman-Habsburg Wars to Leonard Nimoy. As part of Wikipedia’s commitment to advancing knowledge, they offer all of their content for free, and regularly generate dumps of all the articles on the site. Additionally, Wikipedia offers edit history and activity, so you can track how a page on a topic evolves over time, and who contributes to it.

You can find the various ways to download the data on the Wikipedia site. You’ll also find scripts to reformat the data in various ways.
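
The dumps are large bz2-compressed XML files, so you’ll want to stream them rather than load them whole. A rough sketch using only the standard library; the file name and XML namespace vary by dump, so treat both as assumptions:

    import bz2
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"  # hypothetical local file
    NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # varies by dump version

    with bz2.open(DUMP, "rb") as f:
        # iterparse streams the XML so the whole dump never sits in memory.
        for _, elem in ET.iterparse(f):
            if elem.tag == NS + "title":
                print(elem.text)
            elem.clear()  # free each element once processed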

View Wikipedia Data sets

Data sets for Machine Learning Projects

When you’re working on a machine learning project, you want to be able to predict a column from the other columns in a data set. In order to do this, you need to make sure that:

  • The data set isn’t too messy – if it is, we’ll spend all of our time cleaning the data.
  • There’s an interesting target column to make predictions for.
  • The other variables have some explanatory power for the target column.

There are a few online repositories of data sets that are specifically for machine learning. These data sets are typically cleaned up beforehand, and allow for testing of algorithms very quickly.
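
That quick turnaround usually looks something like the sketch below: load a clean CSV, split it, fit a baseline model, and score it. The file name and the "target" column are hypothetical:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical file: numeric feature columns plus a "target" label column.
    df = pd.read_csv("data.csv")
    X, y = df.drop(columns=["target"]), df["target"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))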

7. Kaggle

Kaggle is a data science community that hosts machine learning competitions. There are a variety of externally-contributed interesting data sets on the site. Kaggle has both live and historical competitions. You can download data for either, but you have to sign up for Kaggle and accept the terms of service for the competition.

You can download data from Kaggle by entering a competition. Each competition has its own associated data set. There are also user-contributed data sets found in the new Kaggle Data sets offering.
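
If you’d rather script the downloads, Kaggle also ships an official API client (pip install kaggle, with a token saved to ~/.kaggle/kaggle.json). A sketch; the competition and dataset slugs below are examples and placeholders:

    from kaggle.api.kaggle_api_extended import KaggleApi

    # Requires a Kaggle account and an API token in ~/.kaggle/kaggle.json.
    api = KaggleApi()
    api.authenticate()

    # Download a competition's files (you must accept its rules on the site first).
    api.competition_download_files("titanic", path="data/")

    # Download a user-contributed data set; this slug is a placeholder.
    api.dataset_download_files("some-owner/some-dataset", path="data/", unzip=True)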

View Kaggle Data sets
View Kaggle Competitions

Here are some examples:

  • Satellite Photograph Order – a data set of satellite photos of Earth – the goal is to predict which photos were taken earlier than others.
  • Manufacturing Process Failures – a data set of variables that were measured during the manufacturing process. The goal is to predict faults with the manufacturing.
  • Multiple Choice Questions – a data set of multiple choice questions and the corresponding correct answers. The goal is to predict the answer for any given question.

8. UCI Machine Learning Repository

The UCI Machine Learning Repository is one of the oldest sources of data sets on the web. Although the data sets are user-contributed, and thus have varying levels of documentation and cleanliness, the vast majority are clean and ready for machine learning to be applied. UCI is a great first stop when looking for interesting data sets.

You can download data directly from the UCI Machine Learning repository, without registration. These data sets tend to be fairly small, and don’t have a lot of nuance, but are good for machine learning.

View UCI Machine Learning Repository

Here are some examples:

  • Email spam – contains emails, along with a label of whether or not they’re spam.
  • Wine classification – contains various attributes of 178 different wines.
  • Solar flares – attributes of solar flares, useful for predicting characteristics of flares.
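
Because the UCI files are plain text served over HTTP, loading one is a one-liner in pandas. A sketch using the wine data set above; the URL follows UCI’s usual layout, and the file has no header row, with column 0 as the class label per the data set’s documentation:

    import pandas as pd

    # wine.data has no header row; column 0 is the class label (1-3).
    url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "wine/wine.data")
    wine = pd.read_csv(url, header=None)
    print(wine.shape)              # expect (178, 14): 1 label + 13 attributes
    print(wine[0].value_counts())  # class distribution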

9. Quandl

Quandl is a repository of economic and financial data. Some of this information is free, but many data sets require purchase. Quandl is useful for building models to predict economic indicators or stock prices. Due to the large amount of available data sets, it’s possible to build a complex model that uses many data sets to predict values in another.
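
The quandl Python package pulls any data set by its code. A sketch using the free, historical WIKI end-of-day price table; setting an API key raises the rate limits:

    import quandl

    # Optional: quandl.ApiConfig.api_key = "YOUR_KEY" raises rate limits.
    aapl = quandl.get("WIKI/AAPL",
                      start_date="2015-01-01", end_date="2015-12-31")
    print(aapl[["Open", "Close"]].head())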

View Quandl Data sets

Data sets for Data Cleaning Projects

Sometimes, it can be very satisfying to take a data set spread across multiple files, clean it up, condense it into one, and then do some analysis. In data cleaning projects, it can take hours of research to figure out what each column in the data set means. Sometimes it turns out that the data set you’re analyzing isn’t really suitable for what you’re trying to do, and you need to start over.

When looking for a good data set for a data cleaning project, you want it to:

  • Be spread over multiple files.
  • Have a lot of nuance, and many possible angles to take.
  • Require a good amount of research to understand.
  • Be as “real-world” as possible.

These types of data sets are typically found on aggregators of data sets. These aggregators tend to have data sets from multiple sources, without much curation. Too much curation gives us overly neat data sets that are hard to do extensive cleaning on.
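
The mechanics of such a project are usually a pandas read-merge-clean loop. A minimal sketch, assuming hypothetical yearly files and a hypothetical respondent_id key column:

    import glob
    import pandas as pd

    frames = []
    for path in glob.glob("survey_*.csv"):   # hypothetical yearly files
        df = pd.read_csv(path)
        df.columns = df.columns.str.strip().str.lower()  # normalize headers
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    combined = combined.drop_duplicates()
    combined = combined.dropna(subset=["respondent_id"])  # hypothetical key
    combined.to_csv("survey_combined.csv", index=False)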

10. data.world

data.world describes itself as ‘the social network for data people’, but it could more accurately be described as ‘GitHub for data’. It’s a place where you can search for, copy, analyze, and download data sets. In addition, you can upload your data to data.world and use it to collaborate with others.

In a relatively short time it has become one of the ‘go to’ places to acquire data, with lots of user-contributed data sets as well as fantastic data sets through data.world’s partnerships with various organizations, including a large amount of data from the US federal government.

One key differentiator of data.world is the tools they have built to make working with data easier: you can write SQL queries within their interface to explore data and join multiple data sets. They also have SDKs for R and Python that make it easier to acquire and work with data in your tool of choice. (You might be interested in reading our tutorial on the data.world Python SDK.)
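
With the Python SDK configured (pip install datadotworld, then an API token via dw configure), querying a hosted data set takes a few lines. The dataset slug and table name below are placeholders:

    import datadotworld as dw

    # Placeholder slug/table; replace with a real data set from data.world.
    results = dw.query("some-owner/some-dataset",
                       "SELECT * FROM some_table LIMIT 10")
    print(results.dataframe.head())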

View data.world Data sets

11. Data.gov

Data.gov is the home of the US government’s open data. You can browse the data sets on Data.gov directly, without registering. You can browse by topic area, or search for a specific data set.

View Data.gov Data sets

12. The World Bank

The World Bank is a global development organization that offers loans and advice to developing countries. The World Bank regularly funds programs in developing countries, then gathers data to monitor the success of these programs.

You can browse World Bank data sets directly, without registering. The data sets have many missing values, and sometimes take several clicks to actually get to data.

View World Bank Data sets

13. /r/datasets

Reddit, a popular community discussion site, has a section devoted to sharing interesting data sets. It’s called the datasets subreddit, or /r/datasets. The scope of these data sets varies a lot, since they’re all user-submitted, but they tend to be very interesting and nuanced.

You can browse the subreddit directly, and you can also sort it to see the most highly upvoted data sets.

View Top /r/datasets Posts

14. Academic Torrents

Academic Torrents is a newer site geared toward sharing the data sets from scientific papers. Because it’s new, it’s hard to tell what the most common types of data sets will look like; for now, it has tons of interesting data sets that lack context.

View Academic Torrents Data sets

Here are some examples:

  • Enron emails – a set of many emails from executives at Enron, a company that famously went bankrupt.
  • Student learning factors – a set of factors that measure and influence student learning.
  • News articles – contains news article attributes and a target variable.

15. Twitter

Twitter has a good streaming API that makes it relatively straightforward to filter and stream tweets. There are tons of options – you could figure out which states are the happiest, or which countries use the most complex language. We also recently wrote an article to get you started with the Twitter API.
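
With tweepy (version 3.x, which targets the v1.1 streaming API), a keyword stream takes a few lines. The credentials below are placeholders you’d generate in Twitter’s developer portal:

    import tweepy

    # Placeholder credentials from Twitter's developer portal.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

    class PrintListener(tweepy.StreamListener):
        def on_status(self, status):
            print(status.text)  # handle each incoming tweet

    stream = tweepy.Stream(auth=auth, listener=PrintListener())
    stream.filter(track=["data science"])  # stream tweets matching a keyword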

Get started with the Twitter API

16. Github

Github has an API that allows you to access repository activity and code. The options are endless – you could build a system to automatically score code quality, or figure out how code evolves over time in large projects.
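
The REST API is plain JSON over HTTPS, so the requests library is all you need for a first look (unauthenticated calls are rate-limited to 60 per hour):

    import requests

    # List recent commits on a public repo; no auth token needed for a quick look.
    resp = requests.get(
        "https://api.github.com/repos/pandas-dev/pandas/commits",
        params={"per_page": 5},
    )
    resp.raise_for_status()
    for commit in resp.json():
        print(commit["sha"][:7], commit["commit"]["message"].splitlines()[0])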

Get started with the Github API

17. Quantopian

Quantopian is a site where you can develop, test, and operationalize stock trading algorithms. To help you do that, they give you access to free minute-by-minute stock price data. You could, for example, build a stock price prediction algorithm.
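
Algorithms run inside Quantopian’s hosted environment and implement two callbacks, initialize and handle_data. A toy moving-average sketch; sid(24) refers to AAPL in Quantopian’s security database, and the 20-day window is an arbitrary choice:

    # Runs inside Quantopian's hosted IDE, not as a local script.
    def initialize(context):
        context.stock = sid(24)  # sid(24) is AAPL on Quantopian

    def handle_data(context, data):
        price = data.current(context.stock, 'price')
        avg = data.history(context.stock, 'price', 20, '1d').mean()
        if price > avg:
            order_target_percent(context.stock, 1.0)  # fully long
        else:
            order_target_percent(context.stock, 0.0)  # move to cash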

Get started with Quantopian

Source
