May 22, 2024

A Behind the Scenes Look at GDOT’s Adoption of Big Data with Matt Glasser

Introduction

Matt Glasser is the current National TSMO Account Lead at Arcadis and a 10-year veteran of the Georgia Department of Transportation (GDOT). He was instrumental in the creation of the Joint Agency Data Acquisition Program between GDOT and the Atlanta Regional Commission (ARC) and is a thought leader in the world of big data in transportation.

In this interview, Matt speaks about his experience with GDOT: its partnership with ARC, its adoption of big data, its evaluation of data quality, its ongoing lessons, and its goals moving forward in an ever-evolving data landscape.

 

Interview

1. Would you walk me through the journey to where GDOT and ARC are today?

GDOT and ARC wanted to be good partners. They wanted to share what they felt were the driving factors for Georgia: the things they were interested in, where they thought they might be going, what was critical to them now, what might be critical to them in the future, and the nice-to-haves. They wanted to provide context both to The Eastern Transportation Coalition (TETC) and the vendor community. That was where it started.

At the end of several workshops and coordination efforts, the GDOT and ARC team members realized there was a lot of overlap between them and their data needs. They thought, "We should be doing this together. Why are we doing this in silos?" That group made a request to each of their executive leaderships to partner moving forward. The executive leadership from both organizations indicated their support, with the additional request that a formal plan and funding plan be developed. The group went back and put together what they wanted to achieve, including rules of engagement and organization, an annual budget, cost sharing between the agencies, and where they wanted to see the agencies go and what that meant. This information was formalized and presented to the respective agencies' executive leaderships. After appropriate adjustments, both agencies signed on and set up a multi-year funding plan.

The formalized group, now referred to as the Joint Agency Data Acquisition Program, is made up of three subject matter experts from GDOT (Sam Harris, Eric Conklin, and Habte Kassa) and three subject matter experts from ARC (Kofi Wakhisi, Guy Rousseau, and Kyung-Hwa Kim). Those members were selected because they represented distinct areas within their respective organizations. Last year they kicked off their most recent effort, which was to standardize the datasets that were needed. This went beyond the simplistic request of "I need speed and travel time data"; it was about the elements within those datasets. How quickly does the data need to be delivered? What is the accuracy of it? What is the capture rate? How granular is the data? With the recent award of The Eastern Transportation Coalition's new Transportation Data Marketplace, we had an abundance of vendors and associated capabilities to begin working with.
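To make that element-level standardization concrete, here is a minimal sketch of how such requirements might be captured. The field names and thresholds are illustrative assumptions, not the actual GDOT/ARC specification.

```python
from dataclasses import dataclass

@dataclass
class DatasetRequirement:
    """One standardized dataset need, expressed element by element.
    All field names and values are hypothetical examples."""
    name: str                    # e.g., "speed and travel time"
    max_delivery_latency_s: int  # how quickly the data must be delivered
    max_speed_error_mph: float   # accuracy requirement
    min_capture_rate_pct: float  # share of traffic the data must represent
    spatial_granularity_m: int   # segment length the data is reported at

# Illustrative example: a real-time speed feed for operations use.
realtime_speed = DatasetRequirement(
    name="speed and travel time",
    max_delivery_latency_s=60,
    max_speed_error_mph=5.0,
    min_capture_rate_pct=10.0,
    spatial_granularity_m=500,
)
print(realtime_speed)
```

Writing requirements down this explicitly is what lets two agencies, and eventually a whole vendor community, negotiate over the same set of elements.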

GDOT and ARC, with Arcadis’s support, reviewed each type of dataset and associated capabilities, and eventually came together in a workshop setting to share thoughts and document respective team members’ needs and wants. This gave us the ability to speak in one voice and have constructive back-and-forths with the vendor community. We shared our thoughts and potential requirements and asked for honest feedback. We would have conversations and ask questions like, “Is this requirement possible to deliver?” and, “What would that do to cost?” Georgia wanted to push the envelope but not so much that the community wouldn’t be able to successfully deliver or that the group was inadvertently sole sourcing. We went through this several times and felt it was a very collaborative effort with the vendor community, ensuring that we could get the best product. 

Because of the great collaboration, GDOT and ARC wanted to strike the appropriate balance of obtaining valuable information from the vendors without requiring an overly burdensome response package. This is ultimately how we landed on a package that could allow the vendors to quickly verify and/or clarify their capabilities without providing lengthy technical responses.

Now that we’ve completed that process, we’re reviewing the accomplishments and challenges. GDOT and ARC are always looking to make their processes better. 

2. Can you tell us more about what you did with the Joint Agency Data Acquisition Program?

The realization that drove the creation of the Program was that we had all been working in silos while having an overwhelming overlap in data needs. That is a product of the way that Departments of Transportation (DOTs) are built. DOTs are built to deliver projects. We have a budget, and within that budget is money set aside for preliminary design efforts, right of way, and actual construction. So the resulting system that all of us found ourselves in was one focused on the individual rather than the organization. What made it more difficult to see was that data acquisition for individual projects was largely through direct expense rather than traditional line items (like you might see for construction bidding), making it difficult to run agency-wide queries. Also, once the data was delivered, there were often single-use requirements around it. Once you were done with the data, the raw data either had to be deleted or kept in a project folder. We had major interstate projects whose O-D data coverage all overlapped, but the projects were contractually prohibited from using any of the previously purchased datasets because of those licensing agreements.
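As a purely illustrative sketch of that problem, the snippet below flags project data purchases whose corridors and date ranges overlap, the kind of agency-wide query that direct-expense purchasing made impossible. The project names and fields are hypothetical.

```python
from datetime import date
from itertools import combinations

# Hypothetical ledger of per-project O-D data purchases.
purchases = [
    {"project": "Project A", "corridor": "I-285", "start": date(2019, 1, 1), "end": date(2019, 6, 30)},
    {"project": "Project B", "corridor": "I-285", "start": date(2019, 4, 1), "end": date(2019, 12, 31)},
    {"project": "Project C", "corridor": "SR-400", "start": date(2019, 3, 1), "end": date(2019, 9, 30)},
]

def overlaps(a, b):
    """True when two purchases cover the same corridor and overlapping dates."""
    return a["corridor"] == b["corridor"] and a["start"] <= b["end"] and b["start"] <= a["end"]

for a, b in combinations(purchases, 2):
    if overlaps(a, b):
        print(f"Potential duplicate spend: {a['project']} and {b['project']} ({a['corridor']})")
```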

It was an opportunity the agencies saw: "Oh, we could actually be saving a lot of money for the state government. We keep purchasing these things over and over, and overlapping them as a result." After the first year, the Program projected over $3M in government agency savings. They also realized a benefit they hadn't accounted for: time savings, since individual projects were no longer held up while data was being procured.

3. When and how did GDOT first integrate big data into its work?

For GDOT, it really kicked into gear with the utilization of traditional vehicle probe data. That started in the early 2000s with the Federal Highway Administration requiring states to report on the performance of interstate systems and routes of significance. While GDOT had a large deployment of detectors along the roadway, vehicle probe data was needed outside of the major population centers.

More tools came online over the following years, and with the assistance of TETC, Georgia was able to expand its use of vehicle probe data in 2018 by taking advantage of the use cases and analytics tools developed. We got to take advantage of the work and lessons learned from other states to catch up quickly. That was thanks to The Eastern Transportation Coalition's efforts to connect people and agencies.

4. Did GDOT experience any victories or failures while working with big data? What were they?

The early victories for Georgia bubbled out of the offices of Planning and Traffic Operations, each of them having its own use cases. Planning was using it for reporting purposes, whereas Operations needed it for real-time surface street operations. For example, "What was going on in advance of the Super Bowl or the College Football Championship?"

Not so much a failure, but an item that remains at the forefront for us, is the need to put safeguards in place to indicate what we can and cannot use data for. Oftentimes on low-volume roads, the data is being inferred because vendors don't have enough vehicle probes. There is ongoing education about what this all means for the average user.
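One way to operationalize that safeguard is a simple sample-size gate: segments whose probe counts fall below a threshold are flagged as inferred rather than observed. The threshold and field names below are assumptions for illustration, not GDOT's actual rules.

```python
# Minimal sketch of a data-use safeguard: flag road segments whose
# reported speeds rest on too few probe vehicles to be treated as
# observed. The threshold of 5 probes is an illustrative assumption.
MIN_PROBES_PER_INTERVAL = 5

segments = [
    {"segment_id": "A", "reported_speed_mph": 47.0, "probe_count": 22},
    {"segment_id": "B", "reported_speed_mph": 38.5, "probe_count": 2},  # low-volume road
]

for seg in segments:
    seg["quality_flag"] = (
        "observed" if seg["probe_count"] >= MIN_PROBES_PER_INTERVAL else "inferred"
    )
    print(seg["segment_id"], seg["quality_flag"])
```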

Additionally, GDOT and ARC are very interested in making sure their contractors, consultants, and partner agencies know the data exists and know how to use it. That continues to be an open action item. We continue to find stakeholders we need to bring in and make sure are educated on it. Every time we happen upon someone who has gone out and done things on their own, it's rewarding but also feels like a missed win. We could have made this easier for them. We just missed it.

5. What does GDOT look for when selecting quality data?

One of the things that TETC has done that, to my knowledge, no one else has been doing is open, transparent third-party validation studies on a quarterly basis. The states would pool their money together and test all of our vendors on speed and travel time. They would say, "Here are the metrics on what we determine to be quality data." That was put together over many years and continues to be adjusted as we find gaps. The states and the Coalition find a specific use case. They are not going to run speed and travel time on a road just because. Rather, they select routes because they introduce a new case that we haven't studied yet. In 2016-2017, agencies started pushing for quality data on arterials. That was the first time that was being done. Then GDOT wanted to know, "Does data quality drop when you're in a downtown grid? Do you have an urban canyon effect?" Another state wanted to know what happens in tunnels. Every time those evaluations are set, they are shared with all members of The Eastern Transportation Coalition, the vendors, and the public at large.
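Validation studies of this kind typically compare vendor-reported speeds against an independent ground truth and summarize the error. The sketch below computes two commonly used summary statistics, average absolute speed error and signed speed error bias; the exact metrics and thresholds in TETC's program may differ, so treat this as illustrative.

```python
# Sketch of a probe-data validation: compare vendor-reported speeds
# against ground-truth speeds measured on the same segments and
# summarize the error. Data values are invented for the example.
vendor_speeds = [52.0, 48.0, 35.0, 61.0]  # mph, vendor-reported
truth_speeds = [55.0, 47.0, 40.0, 60.0]   # mph, independently measured

errors = [v - t for v, t in zip(vendor_speeds, truth_speeds)]
aase = sum(abs(e) for e in errors) / len(errors)  # average absolute speed error
bias = sum(errors) / len(errors)                  # signed speed error bias

print(f"AASE: {aase:.2f} mph, bias: {bias:+.2f} mph")
```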

What Georgia did was review the validation studies and ask, "Who is performing the best?" We needed to instill confidence in our stakeholders that we are doing the right thing. That created the foundation from which any selection would start: vendors must pass Georgia's benchmarks before they start the next conversations. Your data must be validated by a trusted third party like TETC and the University of Maryland. Once those standards are met, the question becomes, "How do we get the best value out of this, because we know we have a quality product?"

6. What kind of big data (e.g., waypoint, O-D trips) has GDOT worked with the most, and why?

The starting point for Georgia was speed and travel time. That was because there was an immediate need across multiple agencies and multiple offices within those agencies. That was the primary initial focus. Beyond that was trip analytics. We are riding on the success of other state agencies. We continue to roll out that training for the agencies and consultants to see what additional questions can be asked and how we can get smarter about using the data. 

Beyond vehicle probe and trip data, we are looking at telematics/waypoint data and what's out there. We're asking, "Is this quality data? How much is there? How representative of the population is it?" Those are questions that many of the TETC members are asking themselves right now.

Georgia has indicated they want to be at the forefront of this and want to help answer these questions. They are working to establish the framework for evaluation, devising use cases, and looking for opportunities to bring the data into regular business processes.
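A first-pass answer to the representativeness question is a penetration-rate estimate: the share of total vehicle miles traveled (VMT) that the waypoint data actually captures. The sketch below shows that arithmetic with invented numbers; a real evaluation would also look at how the sample is biased, not just how big it is.

```python
# Illustrative penetration-rate check for a waypoint/telematics dataset:
# what share of total vehicle miles traveled (VMT) does it capture?
# Both numbers are invented for the example.
observed_vmt = 1.2e6  # VMT represented in the vendor's waypoint data
total_vmt = 48.0e6    # VMT estimate for the study area (e.g., from HPMS)

penetration_pct = 100 * observed_vmt / total_vmt
print(f"Estimated penetration: {penetration_pct:.1f}% of VMT")
```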

7. How does GDOT define trajectory data?

The intent from GDOT and ARC was more in the way of defining what they are trying to do with the data rather than defining the mechanism or source. For example, when it comes to trip and origin-destination data, there are multiple approaches to getting the path between a start and end point. Whether that answer is delivered through a specific trajectory dataset, a larger connected vehicle dataset, or something else is less important than defining your expectations of how the data will be used and the expected quality.

8. How does GDOT define waypoint data vs. trajectory data?

For Georgia, it's about defining them in terms of use cases, not getting stuck on the individual pieces that go into them.

As an example, GDOT, like most states, has specifications for traditional vehicle detection technology. The way it had originally been specified in the state started with loops. Then video technology came about. Then microwave radar. Then magnetometers. Every time something else would come in, it would cause an issue with the specifications because they had been built around the individual technologies. Eventually GDOT shifted their approach, leaning more toward performance-based specifications. The belief is that this approach allows greater flexibility over time while also allowing for near-term innovation.

9. What transportation use cases would you say are best for trajectory data?

Presently GDOT is using it for origin-destination and trip analytics for capital build programs and planning processes, like the state and regional freight plan updates.

Additionally, the operations teams have started using it to understand the most popular routes to get to and from major sporting events and around major roadway incidents. Those insights help them determine where and how signal operations need to be adjusted. There is also interest in pulling in high-fidelity, high-resolution trajectory data for supplementing Automated Traffic Signal Performance Measures (ATSPM). These use cases are in the early stages and have not been completely folded into the program.
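As one example of how trajectory data can supplement ATSPM, percent arrivals on green can be estimated by matching trajectory stop-bar crossing times against the signal controller's phase log. This is a hedged sketch with invented data, not GDOT's implementation.

```python
# Sketch: estimate percent arrivals on green (a standard ATSPM measure)
# from trajectory data. Each arrival is the time (seconds) a trajectory
# crosses the stop bar; green_windows come from the controller's phase
# log. All values below are invented for illustration.
arrivals = [12.0, 34.5, 61.0, 90.2, 118.7]
green_windows = [(10.0, 40.0), (70.0, 100.0)]  # (start, end) of green

def on_green(t, windows):
    """True if timestamp t falls inside any green window."""
    return any(start <= t <= end for start, end in windows)

pct_aog = 100 * sum(on_green(t, green_windows) for t in arrivals) / len(arrivals)
print(f"Percent arrivals on green: {pct_aog:.0f}%")
```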

10. What advice would you give a transportation planner or engineer who may want to leverage trajectory data?

Make sure that you are doing your homework on the front end. As Georgia was going through this process, they reached out to other states to ask how they selected their data. Oftentimes, it was done through word of mouth. They got recommendations from other agencies, which is a good starting point. But there is certainly more to selection than a recommendation. Organizations like TETC have transparent data validation programs for various datasets, with a trip/origin-destination program in development. Consulting those validations is critical to understanding the capabilities of the community. It's also important to understand the overall representativeness of the data. If data vendor #1 costs $1 million and captures 25% of the population in terms of vehicle miles traveled, versus data vendor #2 who captures only 1% of vehicle miles traveled at half the cost, that variation between vendors needs to be evaluated. Additionally, every vendor has their own secret sauce. If an engineer or planner were using a dataset pulled only from high-end luxury vehicles, that would misrepresent the way people travel throughout their city.
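To make that vendor comparison concrete, here is the arithmetic as a small sketch: normalize each vendor's price by the share of VMT it captures. Real selections would also weigh sample bias, data elements, and licensing terms, so this is deliberately simplified.

```python
# Simplified cost-effectiveness comparison from the example above:
# price per percentage point of VMT captured.
vendors = [
    {"name": "Vendor 1", "price_usd": 1_000_000, "vmt_capture_pct": 25.0},
    {"name": "Vendor 2", "price_usd": 500_000, "vmt_capture_pct": 1.0},
]

for v in vendors:
    cost_per_point = v["price_usd"] / v["vmt_capture_pct"]
    print(f"{v['name']}: ${cost_per_point:,.0f} per % of VMT captured")
```

On these made-up numbers, the cheaper vendor is over ten times more expensive per unit of coverage, which is exactly the kind of variation that needs to be evaluated.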

11. What else would you like to add?

This is an ever-changing market. Just because you have great answers today and understand where the vendor and technology communities are doesn't mean you get to stop learning. As we saw last year with Wejo, the market can change very quickly. The ITS community at large was already rapidly changing; this is even faster than that. It's important to stay engaged with professional organizations that track this so that you can pass that information along to your respective teams and make informed decisions. What you know now could completely change.

 

Conclusion

As Matt concluded, big data in transportation is "an ever-changing market." The Georgia Department of Transportation and the Atlanta Regional Commission have done a remarkable job of understanding the data available in the market, creating standards for quality, and incorporating the data into their work efficiently. Professional organizations like The Eastern Transportation Coalition (TETC) are an invaluable resource for GDOT and other DOTs looking to obtain trustworthy, high-quality data for their transportation projects. Education around big data – its use cases and limitations – will continue to be a priority as the market constantly evolves.

To learn more about AirSage’s data offerings within TETC’s Transportation Data Marketplace, go to: https://airsage.com/tetc/

Interested in learning more about location intelligence? Check out our other blog posts.

© 2024 AirSage Inc. All rights reserved.