Data Visualization

Putting US Tax Reform on the Map

Major income tax changes proposed by Republicans in the US have been big news recently.  The impact will be significant for nearly every taxpayer, creating winners and losers in varying degrees.  The complexity of the Federal tax system, and the many variables involved in filing a return, mean there is no one-size-fits-all formula for assessing the impact on any given wallet.

We will start with the most basic taxation scenario, simplify it some more, and present some findings on a map like this one:


The savings are highest in certain Northeast and Mid-Atlantic states, hovering around $600 per year.  This correlates with higher median wages in the region.  Savings are smaller in the South and Midwest, falling under $300 in Mississippi, where the median annual pay of $24,000 is about $11,000 less than in the high-wage states.  But you should ask: who are the people on this map, anyway?

Each figure on the map is a composite person between the ages of 18 and 65, unmarried with no dependents, earning the median wage for the state they live in.  They rent their home, and they get all of their income as an employee earning a wage.  State income taxes don’t come into play at any of the median wages used here.  All players on the board take the standard deduction under both the existing and proposed rules.
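If you want to sanity-check the arithmetic, here is a minimal sketch of the single-filer calculation.  The bracket and deduction figures are assumptions on my part: the first three brackets of 2017 law, with the corresponding figures from the rules as finally enacted standing in for the proposal, which may have differed in detail.

```python
# Assumed figures: first three single-filer brackets under 2017 law and
# under the enacted 2018 rules, as (lower bound, marginal rate) pairs.
OLD_BRACKETS = [(0, 0.10), (9325, 0.15), (37950, 0.25)]
NEW_BRACKETS = [(0, 0.10), (9525, 0.12), (38700, 0.22)]
OLD_DEDUCTION = 6350 + 4050   # standard deduction + personal exemption
NEW_DEDUCTION = 12000         # larger standard deduction, no exemption

def income_tax(wage, brackets, deduction):
    """Apply marginal rates to wage income minus the deduction."""
    taxable = max(0, wage - deduction)
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float('inf')
        tax += rate * max(0, min(taxable, upper) - lower)
    return tax

wage = 24000  # roughly the Mississippi median wage used on the map
saving = (income_tax(wage, OLD_BRACKETS, OLD_DEDUCTION) -
          income_tax(wage, NEW_BRACKETS, NEW_DEDUCTION))
```

The same function works for any wage level, so plugging in each state's median wage produces the per-state savings behind the map.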

This is about as simple as taxation gets, short of not needing to file at all.  I should also note that incomes below $10,650 were excluded from the dataset before calculating the medians, since income under that threshold is not taxable.  All of this data was distilled from the 2013 American Community Survey, courtesy of the US Census Bureau: about 261,000 respondents all told.  Real wages haven’t exactly skyrocketed in recent years, so 2013-vintage income data is good enough for this purpose.
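The wage prep can be sketched with pandas.  I use the PUMS-style column names WAGP, AGEP, and ST here, but treat those, and the toy rows, as assumptions rather than the exact extract:

```python
import pandas as pd

# Toy stand-in for the Census microdata; WAGP (wage income), AGEP (age),
# and ST (state FIPS code) follow the PUMS convention but are assumptions.
acs = pd.DataFrame({
    'ST':   [28, 28, 28, 25, 25, 25],   # 28 = Mississippi, 25 = Massachusetts
    'AGEP': [30, 45, 70, 30, 45, 50],
    'WAGP': [22000, 26000, 18000, 34000, 36000, 9000],
})

# Keep working-age wage earners above the non-taxable floor,
# then take the median wage per state.
filtered = acs[acs['AGEP'].between(18, 65) & (acs['WAGP'] >= 10650)]
state_medians = filtered.groupby('ST')['WAGP'].median()
```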


Things change when homeownership enters the picture.  For this scenario we deduct 30% of the median incomes for mortgage interest and property taxes, and also factor in state income or sales taxes (which apply in every state except Alaska).  Now the tax savings are greatly reduced under the new rules, and some homeowners would in fact pay more under the new system.  Part of this impact can be attributed to the elimination of state tax deductions under the proposed system.  Also, unlike for non-homeowners, the tax changes are less favorable in the high-wage Northeast and Mid-Atlantic states than elsewhere.  The loss is as much as $226 per year in Massachusetts at the median wage level.


This analysis was done with Python code in Kaggle, where there are also interactive versions of the above maps.  The homeowner map was generated here.  Details about the source data and how it was processed can be found in these links as well.  I hope this will inspire others who enjoy analyzing data to try out other tax scenarios, such as a married couple, dependent children, or different income levels.  I believe more maps of this sort will arm US taxpayers with a better understanding of any new tax law proposals.


Data Science, Data Visualization

Big Data Or Big Hype?

We have certainly heard a lot about Big Data in recent years, especially with regard to data science and machine learning.  Just how large a data set constitutes Big Data?  How much data science and machine learning work involves truly stratospheric volumes of bits and bytes?  There’s a survey for that, courtesy of Kaggle.

Several thousand data scientists responded to a variety of questions covering many facets of their work.  Just over 7,000 respondents gave the size of the data sets used to train their models.  They were also asked about the number of employees at, and industry segment of, their organization.  Kaggle provides these results in their entirety as CSV files, along with a means to analyze and visualize them within its web-based Kernel environment.  I went with Python, as is my custom for a Kaggle Kernel run.
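The tally behind a chart like the one below is only a few lines of pandas.  The column name and answer labels here are assumptions, applied to toy rows rather than the actual survey file:

```python
import pandas as pd

# Toy rows standing in for the survey CSV; the real Kernel reads the
# Kaggle file with pd.read_csv.  Column and answer names are assumptions.
responses = pd.DataFrame({'TrainingDatasetSize':
    ['<1GB', '1-10GB', '<1GB', '10-100GB', '1-10GB', '<1GB']})

# Count respondents per answer, and their share of the total
size_counts = responses['TrainingDatasetSize'].value_counts()
size_share = size_counts / len(responses)
```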

Let’s cut to the chase and get to the main takeaway.  Behold, the chart:


It seems that the vast majority of data science and machine learning action happens below 10 GB.  Granted, the question asked about model training, so one might suppose that live production data sets are significantly larger.  Fewer than 2 percent of respondents claimed to be wrangling data sets in excess of 100 TB.


Among the top five industries that employ data science practitioners, we can compare usage of large and moderately-sized data sets.  Unsurprisingly, tech companies are the leading industry segment.



Another viewpoint is by employer size.  Very large organizations comprise the largest segment, followed closely by those with 20 to 500 employees.  One might suspect that startups are a good percentage of the latter group.
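Cross-cutting two answers, such as employer size against data set size, is a one-liner with pandas.  Toy rows and hypothetical answer labels again, not the real survey file:

```python
import pandas as pd

# Hypothetical answer labels on toy rows, standing in for the survey data
survey = pd.DataFrame({
    'EmployerSize': ['20-500', '20-500', '5000+', '5000+', '5000+'],
    'DatasetSize':  ['<10GB', '>100TB', '<10GB', '<10GB', '>100TB'],
})

# Contingency table: employer size buckets vs. data set size buckets
by_size = pd.crosstab(survey['EmployerSize'], survey['DatasetSize'])
```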


This is merely the tip of the iceberg for how we can slice, dice, and visualize with this data alone.  There are dozens more answers in the survey, covering job titles, tools and technologies used, salary, and much more collected from these data scientists.  Check out my Kaggle Kernel to explore further, or contact me to inquire about a customized analysis of the 2017 Machine Learning and Data Science Survey results tailored to your needs.

Stay tuned for more updates from the world of VenaData by following me on Twitter.

Data Science

LEGO Data Science

When not working with data I can often be found building something with LEGO.  How pleased was I to find a Kaggle dataset with a complete inventory of LEGO sets and parts going back to 1950!  The other cool thing with Kaggle is that you can analyze with Python or R, right in the browser.  So let’s take it for a spin and do some data wrangling!

Data work in Kaggle is done in Kernels, where you can run scripts interactively and see results immediately.  Upload your data or pick an existing Kaggle dataset, then choose from R, Python or Julia for coding, and get to it!  It’s all in the cloud and your browser, so no painful install process or library dependency issues to go through.  Under the hood Kernels are actually Jupyter notebooks, so it’s all standard stuff.

I am particularly fond of LEGO Technic, which is aimed at building mechanized models of things like cars or construction vehicles.  Technic parts are more specialized than your typical LEGO bricks, and more than a few are specific to a handful of sets. I wanted to write a Kernel for analyzing the distribution of Technic parts across LEGO sets.

To jump right to the Kernel, go here; otherwise, continue below, where I cover some key points about the Python code and process.

To start with we will be using the Pandas and NumPy libraries in Python to do our data wrangling.  These get imported into the Kernel as follows:

import numpy as np
import pandas as pd

In Pandas we use a data structure called a DataFrame, which comes with a long list of methods for working with data.  You can think of a DataFrame as a SQL table or a spreadsheet.  To get a DataFrame going, we can load it straight from a CSV file:

parts = pd.read_csv('../input/parts.csv')

Note that the file path is actually the default location inside the Kernel where the data files are placed.  Commands similar to the above load the other files we need:

sets = pd.read_csv('../input/sets.csv')
inventories = pd.read_csv('../input/inventories.csv')
inv_parts = pd.read_csv('../input/inventory_parts.csv')

It is often helpful to view a DataFrame to verify what’s in there.  This command displays the column names and the first five rows:

parts.head()

Next we filter the LEGO sets DataFrame to select only the ones having a Technic theme ID that were released in recent years:

technic_sets = sets.loc[sets['year'].isin([2015,2016,2017]) &
                        (sets['theme_id'] == 1)]  # theme_id 1 is the Technic theme

Python and R notebooks in Kaggle are pre-wired for visualizations.  Here we use matplotlib to do a simple bar chart:

import matplotlib.pyplot as plt
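Here is a minimal sketch of what the charting cell can look like; the columns plotted (set name against total part count) are my choice for illustration, using a toy stand-in for the real data:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; Kaggle notebooks render inline
import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for the filtered technic_sets DataFrame
technic_sets = pd.DataFrame({'name': ['Set A', 'Set B', 'Set C'],
                             'num_parts': [354, 622, 1280]})

# One bar per set, with the part count as the bar height
ax = technic_sets.plot.bar(x='name', y='num_parts', legend=False)
ax.set_ylabel('parts per set')
```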


To perform useful analysis, two Pandas DataFrames can be combined with a join, similar to SQL table joins.  Here the Technic sets and LEGO inventories DataFrames are joined, using the set number as the key column.  The resulting new DataFrame is then joined to the set inventories (which is a parts list).  The end product of these joins is essentially a flattened table of LEGO Technic sets with all of their parts and quantities.

# join Technic sets to inventories to pick up inventory IDs
technic_inv = technic_sets.set_index('set_num').join(inventories[['id','set_num']].set_index('set_num'))
# join Technic inventories to parts lists
technic_inv_parts = technic_inv.set_index('id').join(inv_parts.set_index('inventory_id'))
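The mechanics of set_index followed by join are easier to see on a toy pair of DataFrames; rows align on the shared index:

```python
import pandas as pd

# Two small frames sharing a set_num key (example rows, not the dataset)
sets_df = pd.DataFrame({'set_num': ['42056-1', '42065-1'],
                        'name': ['Porsche GT3 RS', 'RC Tracked Racer']})
inv_df = pd.DataFrame({'set_num': ['42056-1', '42065-1'],
                       'id': [5002, 5003]})

# Setting set_num as the index on both sides makes join match on it,
# like a SQL join on that column
joined = sets_df.set_index('set_num').join(inv_df.set_index('set_num'))
```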


The describe method of a DataFrame provides a convenient summary of basic statistics.  Since this method attempts to aggregate every numeric column, including IDs, not every result will make sense; those can simply be ignored.  In this case, num_parts and quantity are the columns we want, giving the total number of parts in each set and the individual part counts, respectively.  Here we get fun facts such as the mean, min, max, and percentiles.  We can readily see that for 75% of the Technic parts, we can expect to find a half dozen or fewer of each per set.
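On a toy quantity column, describe produces the same shape of summary:

```python
import pandas as pd

# Example part quantities, standing in for the real quantity column
quantities = pd.Series([1, 2, 2, 4, 6, 6, 8, 24], name='quantity')

# count, mean, std, min, quartiles, and max in one call
summary = quantities.describe()
```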



To have more control over aggregation we can use the DataFrame groupby method.  Here we total up the quantities for each part from all of the sets.

part_sums = technic_inv_parts.groupby('part_num')['quantity'].sum()

The next set of statements performs a lot of work.  We create another DataFrame using a groupby, this time to get a count of the sets each part appears in, then join this to the part count tallies from earlier.  Note that the column rename is required, or a name conflict will cause an error.  We also want the part names, and to filter out any non-Technic pieces from the list; we join to the parts DataFrame for both of those things.

# count the number of sets each part appears in
parts_sets = technic_inv_parts.groupby('part_num')['name'].count()
parts_sets = pd.DataFrame(parts_sets)
# rename to avoid a column name conflict in the joins below
parts_sets = parts_sets.rename(columns={'name': 'setsfoundin'})
parts_matrix = parts_sets.join(part_sums)
# part category IDs for Technic pieces
include_category = [12,52,40,29,26,25,22,55,51,53,30,46,54,8,45,44]
part_matrix_names = parts_matrix.join(parts.set_index('part_num'))
part_matrix_names = part_matrix_names.loc[part_matrix_names['part_cat_id'].isin(include_category)]


What we have been doing is creating, joining, and filtering DataFrames, using the result from each operation as input for the next step in the process.  This process can be referred to as a pipeline or a workflow.  By using Kaggle, or any platform that supports Jupyter notebooks (which could even be set up on your local machine), you can have this repeatable and easily editable workflow for experimental analysis on your data.

When you are done you will no doubt want to save your result.  When a command to save a DataFrame is in your notebook, you can access the file and download it from the Output tab after running your Kernel.
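The save command itself is a one-liner with to_csv; the file name here is made up:

```python
import pandas as pd

# Small example frame standing in for the final result
df = pd.DataFrame({'part_num': ['3001', '3002'], 'quantity': [4, 2]})

# Writes a CSV next to the notebook; in Kaggle it shows up on the Output tab
df.to_csv('technic_parts.csv', index=False)
```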


One final thing: we need some kind of visualization for our result!  This data would appear too cluttered in a bar chart, so we will use a scatter plot.  We also filter the DataFrame to “zoom in” on a range of part quantities to cut down on overplotting.  This clarifies the relationship between the total count of a part across all sets and the number of Technic sets the part appears in.  Note the use of the Seaborn visualization library, which is built on top of Matplotlib.

import seaborn as sea
partscatter = parts_matrix.loc[(parts_matrix['quantity'] > 15) & 
(parts_matrix['quantity'] < 160)] 
sea.regplot(x=partscatter["quantity"], y=partscatter["setsfoundin"])


View and run all the code here.


Geospatial, MongoDB

Import GeoJSON Data Into MongoDB

If you want good support for location data and maps in your database, MongoDB can be a great choice.  One of the most widely supported formats on the Web for geospatial data is GeoJSON, and MongoDB location data looks and behaves pretty much exactly like GeoJSON.  This is very cool if you want to create a map using a JavaScript library like Leaflet, or run common types of geospatial queries on your data.
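To make that concrete, here is what a document with a GeoJSON Point, and a $near query against it, look like, written as plain Python dicts; the field names and coordinates are made up for illustration:

```python
# A MongoDB document whose 'location' field is a GeoJSON Point.
# Field names here are illustrative, not from any particular dataset.
place = {
    'name': 'Ferry Building',
    'location': {'type': 'Point', 'coordinates': [-122.3933, 37.7955]},
}

# A $near query on that field; with pymongo this would run as
# db.places.find(near_query) and requires a 2dsphere index on 'location'.
near_query = {
    'location': {
        '$near': {
            '$geometry': {'type': 'Point', 'coordinates': [-122.41, 37.78]},
            '$maxDistance': 5000,  # meters
        }
    }
}
```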

This is all pretty awesome, until you find or generate some GeoJSON data in a file outside of MongoDB and decide you want to import it.

For some reason the native mongoimport tool, which handles plain JSON just fine (well, mostly, anyway), falls to pieces when dealing with GeoJSON.  There are multiple ways to hack your way through this, as any Web search will tell you, but they generally involve opening your HUGE GeoJSON file in a text editor to remove some stuff (or using a command line tool for this) and then running mongoimport on the cleaned-up file.  I went ahead and created a Python script that does the cleanup and import in one step.

The script is also fast at importing, as it leverages the bulk-write operations available in MongoDB 3.2 and later.  This is a 10x improvement over using normal inserts with the PyMongo driver, and it is very noticeable with large GeoJSON files.  The script tops things off at the end by creating a 2dsphere index on your collection; without this index, you can’t run geospatial queries.
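The core of the approach can be sketched as follows: parse the FeatureCollection with the standard json module, then hand the feature list to a bulk insert and create the index.  The pymongo calls are shown commented out so the sketch runs without a server, and the names here are illustrative rather than taken from the actual script:

```python
import json

# A tiny inline FeatureCollection standing in for a HUGE .geojson file
raw = '''{"type": "FeatureCollection", "features": [
  {"type": "Feature",
   "properties": {"name": "A"},
   "geometry": {"type": "Point", "coordinates": [-71.06, 42.36]}},
  {"type": "Feature",
   "properties": {"name": "B"},
   "geometry": {"type": "Point", "coordinates": [-71.09, 42.34]}}
]}'''

# Each feature becomes one MongoDB document
docs = json.loads(raw)['features']

# With a live server, the bulk write and index creation would be:
# from pymongo import MongoClient, GEOSPHERE
# coll = MongoClient()['geospatial']['points']
# coll.insert_many(docs)                        # one bulk write, not N inserts
# coll.create_index([('geometry', GEOSPHERE)])  # 2dsphere index for geo queries
```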

My script is freely available on GitHub, and to run it you could just do this:

python -f points.geojson -d geospatial -c points

This assumes you are running against a local instance of Mongo.  There are additional parameters for host name and user credentials, along with other things to know about the script, in the README.

So there you have it, a complete, fast, easy solution to importing your GeoJSON data into MongoDB!

In addition, if your geospatial data happens to be in shapefile format, I have a similar tool for importing that into MongoDB.