Shared Ride Efficiency: Data Wrangling

With the ever-increasing amount of big data available and the development of dynamic route-optimization algorithms, low-cost shared rides have become more and more popular. Uber and Lyft both offer shared-ride services, UberPool and Lyft Line, to capture the benefits of the shareconomy. Via, an on-demand shared-ride startup, offers flat-rate shuttle service in urban areas and has recently expanded its business to Brooklyn.

Here, I explored the 2015 NYC taxi dataset from Google BigQuery, starting with a big-picture analysis of NYC taxi services, then examining the features of NYC taxi trips, and finally focusing on local trips within Brooklyn. I built a simplified model (rather than a sophisticated dynamic TSP algorithm) to assess shared-ride efficiency in Brooklyn. I discovered that over 15% of trips within Brooklyn are shareable on late weekend nights. Shared-ride efficiency largely depends on the total number of trips, emphasizing the importance of scale in the shareconomy.

The code can be found here.

NYC taxi data from Google BigQuery

To get NYC taxi data for 2015, I use SQL to query taxi records from Google BigQuery (datasets: bigquery-public-data.new_york.tlc_green_trips_2015 and bigquery-public-data.new_york.tlc_yellow_trips_2015).

I set coordinate boundaries, as indicated in the map below, and consider only trips traveling within NYC. Without the boundaries, I observed taxi pickups and dropoffs as far away as the Midwest, which are irrelevant to this analysis. I include only trips with positive trip distance and trip time, and make sure that a trip’s pickup coordinates are not identical to its dropoff coordinates. I round coordinates to 4 decimal places, a spatial accuracy of about 11 meters [3], which allows me to aggregate by coordinates and visualize trip density on a map.
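As a concrete illustration, here is a minimal sketch of this kind of query using the BigQuery Python client and standard SQL. The bounding-box values are rough, illustrative placeholders (not the exact boundaries I used), the column names follow the public tlc_yellow_trips_2015 schema, and the green-taxi query is the same with the table name swapped.

```python
from google.cloud import bigquery

# The bounding box below is a rough, illustrative NYC box -- not the exact
# boundaries used in the analysis.
sql = """
SELECT
  pickup_datetime,
  dropoff_datetime,
  trip_distance,
  -- round to 4 decimal places (~11 m accuracy) so trips can be aggregated
  -- by coordinate for density maps
  ROUND(pickup_latitude, 4)   AS pickup_lat,
  ROUND(pickup_longitude, 4)  AS pickup_lon,
  ROUND(dropoff_latitude, 4)  AS dropoff_lat,
  ROUND(dropoff_longitude, 4) AS dropoff_lon
FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015`
WHERE
  -- keep only trips that start and end inside the NYC bounding box
  pickup_longitude      BETWEEN -74.3 AND -73.65
  AND pickup_latitude   BETWEEN 40.5  AND 41.0
  AND dropoff_longitude BETWEEN -74.3 AND -73.65
  AND dropoff_latitude  BETWEEN 40.5  AND 41.0
  -- positive trip distance and trip time, distinct pickup/dropoff points
  AND trip_distance > 0
  AND TIMESTAMP_DIFF(dropoff_datetime, pickup_datetime, SECOND) > 0
  AND NOT (pickup_longitude = dropoff_longitude
           AND pickup_latitude = dropoff_latitude)
"""

client = bigquery.Client()                        # needs Google Cloud credentials
trips = client.query(sql).result().to_dataframe()
```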

[Map: coordinate boundaries used to select pickups and dropoffs within NYC]

I use SQL to extract detailed time information from the datetime field: “hour”, “dayofmonth”, “dayofyear”, “dayofweek”, and “month”, which are used later to aggregate trip counts by time.

To extract a data sample from a single day of interest, I use “dayofyear” as the selection condition.
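Sketched below are the column expressions and the single-day filter; the day number (98, i.e. 2015-04-08, the Wednesday of week 15) is just an example.

```python
# Column expressions for the time fields (BigQuery standard SQL); in legacy SQL
# the equivalents would be HOUR(), DAYOFYEAR(), DAYOFWEEK(), and so on.
time_fields_sql = """
  EXTRACT(HOUR      FROM pickup_datetime) AS hour,
  EXTRACT(DAY       FROM pickup_datetime) AS dayofmonth,
  EXTRACT(DAYOFYEAR FROM pickup_datetime) AS dayofyear,
  EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,  -- 1 = Sunday ... 7 = Saturday
  EXTRACT(MONTH     FROM pickup_datetime) AS month
"""

# Selecting a single day of interest by its day of year,
# e.g. 2015-04-08 (the Wednesday of week 15) is day 98 of 2015.
single_day_filter = "AND EXTRACT(DAYOFYEAR FROM pickup_datetime) = 98"
```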

I then store the data in Google Cloud and download it as CSV files.
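One way to script this step (a sketch that assumes the query results were saved to a BigQuery destination table; the project, dataset, table, and bucket names are placeholders):

```python
from google.cloud import bigquery, storage

client = bigquery.Client()

# Export the saved results table to Cloud Storage as CSV, then download it.
# Project, dataset, table, and bucket names below are placeholders.
extract_job = client.extract_table(
    "my-project.nyc_taxi.trips_2015_filtered",
    "gs://my-bucket/nyc_taxi/trips_2015_filtered.csv",
)
extract_job.result()  # wait for the export job to finish

storage.Client().bucket("my-bucket").blob(
    "nyc_taxi/trips_2015_filtered.csv"
).download_to_filename("trips_2015_filtered.csv")
```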

Uber Data

Another dataset I used is the Uber data (uber-raw-data-janjune-15.csv), which is available from fivethirtyeight.

Data Cleaning

I use the Python Pandas library to perform data wrangling and to audit data quality for both the taxi data and the Uber data (the Jupyter Notebook can be found here).

A few modifications worth mentioning:

1.  I use the Python Pandas library to load and process the CSV data.

2.  There are no missing values in my data.

3.  SQL’s dayofweek by default sets Sunday as 1 and Saturday as 7, while Python’s dayofweek sets Monday as 0 and Sunday as 6. To make the format consistent, I set Monday as 1 and Sunday as 7 for all of my data (see the first sketch after this list).

4.  To get weekly data, I extract the week number “weekofyear” for 2015.

5.  I group the data by month, week, dayofweek, and hour to compute totals and averages over time for yellow and green taxis.

6. In the later analysis of shared-ride efficiency, I focus on week 15 of 2015, which runs from 2015-04-06 to 2015-04-12, and use the taxi and Uber data from this week. I chose week 15 because it appeared representative in my initial data exploration. In addition, I use Green Taxi data and Uber data for the analysis of Brooklyn trips; I will elaborate on the difference between Green and Yellow Taxi trips in the next post.

7. To understand local trip density, I decide to convert coordinates to county, neighborhood, and zip code. I use the Geocoder API to get address information and parse it into zip code, county, and neighborhood. Because each API query is quite slow (about 2 seconds), I decide to focus on single days in week 15 from the Green Taxi data: Wednesday to represent a weekday, and Saturday and Sunday to represent the weekend (see the reverse-geocoding sketch after this list). Still, it takes a few days to get all the data processed.

8. The Uber trip records do not include pickup coordinates, so I merge the trip table with a lookup table that provides county and neighborhood information (see the merge sketch after this list).
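Below is a condensed sketch of the Pandas steps in items 3–6. The file and column names are assumptions chosen to match the query sketches above, not the exact names in my notebook.

```python
import pandas as pd

# Placeholder filename; the real CSVs come from the BigQuery export above.
taxi = pd.read_csv("green_trips_2015.csv", parse_dates=["pickup_datetime"])

# Item 3: normalize day-of-week to Monday = 1 ... Sunday = 7.
# Pandas dt.dayofweek is Monday = 0 ... Sunday = 6, so adding 1 suffices;
# a SQL-derived dayofweek column (Sunday = 1 ... Saturday = 7) would need remapping.
taxi["dayofweek"] = taxi["pickup_datetime"].dt.dayofweek + 1

# Item 4: ISO week number for weekly aggregation.
taxi["weekofyear"] = taxi["pickup_datetime"].dt.isocalendar().week

# Item 5: trip counts by month / week / day-of-week / hour.
taxi["month"] = taxi["pickup_datetime"].dt.month
taxi["hour"] = taxi["pickup_datetime"].dt.hour
counts = (taxi.groupby(["month", "weekofyear", "dayofweek", "hour"])
              .size()
              .rename("trip_count")
              .reset_index())

# Item 6: restrict to week 15 of 2015 (2015-04-06 to 2015-04-12).
week15 = taxi[taxi["weekofyear"] == 15]
```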
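For item 7, here is a sketch of the reverse-geocoding step. It assumes the Python geocoder package with its Google provider; the attribute names (postal, county, neighborhood) and the tiny example DataFrame are illustrative assumptions.

```python
import pandas as pd
import geocoder  # may require a Google Geocoding API key in the environment

def reverse_geocode(lat, lon):
    """Reverse-geocode one coordinate pair into (zip code, county, neighborhood)."""
    g = geocoder.google([lat, lon], method="reverse")
    # Attribute names follow the geocoder package's Google provider (an assumption).
    return g.postal, g.county, g.neighborhood

# Tiny stand-in for the Wednesday (week 15) Green Taxi trips; in practice this is
# that day's pickup coordinates from the query above. Each call takes roughly
# 2 seconds, which is why only single days are geocoded.
wed = pd.DataFrame({"pickup_lat": [40.6782], "pickup_lon": [-73.9442]})
wed[["zipcode", "county", "neighborhood"]] = wed.apply(
    lambda row: pd.Series(reverse_geocode(row["pickup_lat"], row["pickup_lon"])),
    axis=1,
)
```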
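For item 8, a sketch of the join. The column names (locationID in the trip file; LocationID, Borough, Zone in the lookup file) and the lookup filename are my assumptions about the fivethirtyeight files, with Borough standing in for county and Zone for neighborhood.

```python
import pandas as pd

# Column names below are assumptions about the fivethirtyeight files.
uber = pd.read_csv("uber-raw-data-janjune-15.csv", parse_dates=["Pickup_date"])
zones = pd.read_csv("taxi-zone-lookup.csv")   # LocationID -> Borough / Zone lookup

# Uber records carry only a pickup location ID, so join the lookup table to
# recover borough (county) and zone (neighborhood-level) information.
uber = uber.merge(zones, left_on="locationID", right_on="LocationID", how="left")

# Brooklyn pickups only, for the shared-ride analysis.
uber_bk = uber[uber["Borough"] == "Brooklyn"]
```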

Data Exploration and Visualization

I use Tableau, Pandas, and matplotlib to perform data exploration and visualization. Check my next post.

References:

[1] Analyzing 1.1 Billion NYC Taxi and Uber Trips, with a Vengeance, http://toddwschneider.com/posts/analyzing-1-1-billion-nyc-taxi-and-uber-trips-with-a-vengeance/

[2] The Data Science of NYC Taxi Trips: An Analysis & Visualization, http://www.kdnuggets.com/2017/02/data-science-nyc-taxi-trips.html

[3] Measuring accuracy of latitude and longitude? http://gis.stackexchange.com/questions/8650/measuring-accuracy-of-latitude-and-longitude
