Visualization: Workshop
September 16, 2025
01:30 PM – 03:00 PM at Johnson Great
I can see clearly now, the results are plotted.
This session focuses on advancing transportation modeling through innovative visualization, documentation, and reporting. It showcases open-source tools and web-based platforms that enhance the accessibility, usability, and transparency of travel demand model inputs and outputs for an audience of planners, stakeholders, and decision-makers. The presentations highlight approaches for streamlining validation, creating interactive reports, and managing large datasets efficiently.
7 Sub-sessions:
Let’s be honest: “documentation” usually makes people yawn. But what if it came with dinosaurs, interactive maps, and zero spreadsheets?
In this session, we’ll show how we brought the documentation for the Wasatch Front Travel Demand Model back to life — not with amber-preserved mosquito DNA, but with Quarto and a bunch of open-source tools. The result? A website where validation isn’t buried in Excel, it’s alive with charts you can click, maps you can zoom, and data you can actually enjoy exploring.
Think of it as Jurassic Park… but instead of velociraptors, you get validation metrics that won’t eat you.
If you’ve ever thought documentation was boring, prepare to have your assumptions go extinct.
Abstract Background
The TDM23 travel demand model simulates demand across Massachusetts, Rhode Island, and southeastern New Hampshire, encompassing over 8.4 million people and 3.4 million households. Each scenario run of this model generates over 100 GB of output data, posing a significant challenge in providing concise and actionable summaries. Each 8-hour model run produces detailed outputs for the model's various components, but distilling these into an efficient, high-level summary that addresses key questions remains complex. Additionally, presenting comparative differences across multiple scenarios in a clear, accessible manner is essential for facilitating user interpretation and learning.
Description of Abstract
This research introduces a comprehensive framework for effectively summarizing large-scale travel demand model outputs and presenting scenario comparisons in a user-friendly manner. By addressing the complexity of interpreting massive volumes of data, this work aims to significantly improve the accessibility and utility of model results for decision-makers.
The presentation is structured in three key sections: the narrative behind the interactive summary reports, the architecture that powers the data pipeline, and the principles of comparative visual design.
The first section presents a story map design that aligns the four-step travel demand model components with an intuitive sequence of interactive tabs. This allows users to easily navigate between model components and quickly access key insights. The second section delves into the architecture supporting this process, where the data synthesizer, report renderer, server, and archive integrate seamlessly into a unified report generation interface. Lastly, the presentation highlights innovative and visually compelling summary charts, such as pivot table-based trip rate comparisons using sunburst charts, zone-to-zone trip distribution comparisons visualized through chord diagrams, and polar charts showing indexed trip length distributions segmented by travel mode and trip purpose. These visualizations provide a more holistic view of model outputs, enabling users to draw meaningful comparisons across multiple scenarios with ease.
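The pivot-table step behind a trip rate comparison like those described above can be sketched in a few lines of pandas. This is a minimal illustration, not the authors' pipeline; the column names, scenario labels, and values are hypothetical.

```python
import pandas as pd

# Hypothetical trip records from two model scenarios.
# All field names and values here are illustrative only.
trips = pd.DataFrame({
    "scenario": ["base", "base", "base", "build", "build", "build"],
    "purpose":  ["work", "shop", "work", "work", "shop", "work"],
    "mode":     ["auto", "auto", "transit", "auto", "auto", "transit"],
    "trips":    [120, 80, 30, 110, 85, 45],
})

# Pivot: total trips by purpose and mode, one column per scenario.
rates = trips.pivot_table(index=["purpose", "mode"],
                          columns="scenario",
                          values="trips",
                          aggfunc="sum",
                          fill_value=0)

# Percent difference between scenarios -- the kind of quantity a
# sunburst chart could encode as slice color.
rates["pct_diff"] = (rates["build"] - rates["base"]) / rates["base"] * 100
print(rates)
```

The resulting hierarchical table (purpose, then mode) maps naturally onto the ring levels of a sunburst chart.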
Statement on Why Abstract is Noteworthy
This research is noteworthy for its ability to condense vast amounts of data into rich, yet compact summaries. Two scenario output files, together exceeding 200 GB, are compressed into a 50 MB standalone HTML file with zero software dependencies. Once generated, this file serves not only as an archive but also as a report that can be easily shared via email. Furthermore, the report generator automates scenario selection, report generation, and archiving, significantly broadening the accessibility of the summary capabilities to a wider user base. This approach enhances both the efficiency and usability of large-scale model outputs, making them more accessible and actionable for decision-makers.
Project status
The project is complete.
Using an innovative approach to reporting and data management, the data science team at the Metropolitan Council (Minneapolis-St. Paul MPO) was able to integrate project management, data processing, data analysis, and reporting. We used R, Quarto, and interactive data visualization tools to build a large document, with the help of our transportation planners. Using the same source code and documentation, we delivered both a static PDF report and an interactive online website. By organizing data, code, and documentation in one place, we created a resilient, documented workflow for future iterations and updates. We also embedded the site directly in our agency's content management system for seamless integration with our online presence.
The resulting document, the Transportation System Performance Evaluation (TSPE), is a comprehensive review of the Twin Cities transportation system. It is required by state law to be completed before every Transportation Policy Plan (TPP) update, and we were able to fulfill our statutory requirements by delivering both a PDF report and an interactive online site.
The TSPE contains data from a variety of sources, including demographic context from the US Census, comparisons to peer regions from national data sources, data from local agencies and researchers, and travel data from the Council’s Travel Behavior Inventory. Data were accessed through APIs, databases, and local data downloads.
You can view the TSPE on our website.
At Puget Sound Regional Council, we update SoundCast [1], the regional activity-based travel model, every four years to support our Regional Transportation Plans. SoundCast estimation and calibration involve creating numerous charts to validate intermediate model results against the latest household travel survey data and other datasets (e.g., the American Community Survey, traffic and transit data). As we work on each of the 20+ sub-models, we script interactive Python-based Jupyter notebooks to visualize model performance with a multitude of tables, charts, and graphs. The process can quickly become overwhelming and requires a nimble workflow for generating new charts and analyses. Quarto [2] provides a useful framework for assembling all of these Jupyter notebooks into a dashboard-like interactive document with minimal effort, and makes it easy to publish the work in a more formal, polished form.
Quarto is a free, open-source publishing tool developed by Posit that weaves together narrative text and code to produce formatted output, including books, presentations, and dashboards, in HTML, PDF, and other formats. Because Quarto supports Python and Jupyter notebooks, we can wrap the tool directly around our existing notebooks to generate an interactive HTML book, with each notebook as its own page. Quarto also provides user-friendly styling and formatting options for customizing the book. We found the table of contents and tabset options, which give a dashboard-like user experience, especially useful for navigating the various charts and tables within the document. After integrating our notebooks with Quarto, the HTML book is rendered automatically after each model run to monitor model performance and can be easily shared across our agency.
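The notebook-to-book wiring described above is driven by a small project configuration file. A minimal sketch of a Quarto `_quarto.yml` for an HTML book built from existing notebooks might look like the following; the file names and title are placeholders, not PSRC's actual setup.

```yaml
# Hypothetical _quarto.yml: an HTML "book" where each chapter is an
# existing Jupyter notebook rendered as its own page.
project:
  type: book

book:
  title: "SoundCast Validation"
  chapters:
    - index.qmd
    - notebooks/trip_generation.ipynb
    - notebooks/mode_choice.ipynb

format:
  html:
    toc: true
```

Running `quarto render` in the project directory then produces the complete HTML book, which is why the rendering step can be triggered automatically after each model run.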
The biggest advantage of Quarto is that it integrates with an existing Jupyter notebook and Python workflow to automatically create professional, ready-to-share documents with minimal staff time. Because the tool is open source and free, it is accessible without subscription commitments. PSRC has also been using it to generate documents in other settings, including household travel survey data analysis and model scenario comparisons, and it is flexible enough to serve many other roles. In the session, we would like to share how we implemented Quarto to visualize SoundCast model performance. We will present our process for setting up the framework and provide the audience with a sample through our open-source GitHub repository and documentation.
Open-source software links
[1] SoundCast: https://github.com/psrc/soundcast
[2] Quarto: https://quarto.org/
Authors: Billy Charlton, Susan Xu, Anne Kuller, Ashish Kulshrestha, Bhargava Sana
Abstract Background
The open source SimWrapper data visualization platform is well described in previous literature [1]. SimWrapper allows users of transport microsimulation software such as ActivitySim and MATSim to display maps of modeling inputs and outputs, to build standard dashboards of results combining maps and statistical charts, and to curate public websites with minimal web expertise. The San Diego Association of Governments (SANDAG) is currently implementing SimWrapper as a complement to existing reporting and visualization tools at the agency. The primary goal of this effort is to utilize SimWrapper to develop a tool for spatially visualizing matrix data.
Description of Abstract
SANDAG utilizes a cloud-based modeling infrastructure which offers seamless scalability and supports various data formats, including structured and semi-structured data, making it ideal for storing model outputs such as OMX files. The goal of this work is to leverage SimWrapper to develop a visualization tool tailored to SANDAG’s needs, enabling the spatial visualization of matrix data for improved analysis and decision-making. For this effort, SimWrapper must be modified to support the use case where files are neither local nor on a public file server, but are stored in a managed cloud data lake, which requires authentication.
In previous conferences, SimWrapper’s ability to open and view OMX-format matrices has been demonstrated either in tabular numeric form or by highlighting one row or column of matrix data on a thematic zonal map. When data files are local, loading and visualization are impressively fast, even for multi-gigabyte OMX matrix files. This is because those files do not have to be loaded all at once; instead, they are accessed on demand as various matrix cores, rows, and columns are selected for display. In a cloud computing scenario, performance expectations need to be different.
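The on-demand access pattern described above relies on the fact that OMX files are HDF5 containers, which support partial reads. As a simplified illustration of the same idea, here is a sketch using a plain numpy memory map in place of an HDF5 file; this is not SimWrapper's or OMX's actual code, and the file path, zone count, and values are hypothetical.

```python
import numpy as np
import os
import tempfile

# Illustration of on-demand matrix access: only the requested row is
# pulled from disk, not the whole matrix. A numpy memmap stands in for
# the chunked reads an HDF5/OMX reader would perform.
n_zones = 1000
path = os.path.join(tempfile.mkdtemp(), "skim.dat")

# Write a matrix to disk once (e.g., hypothetical travel-time skims).
full = np.memmap(path, dtype="float32", mode="w+", shape=(n_zones, n_zones))
full[:] = np.arange(n_zones * n_zones, dtype="float32").reshape(n_zones, n_zones)
full.flush()

# Later, open read-only and fetch a single origin row on demand.
skim = np.memmap(path, dtype="float32", mode="r", shape=(n_zones, n_zones))
row_42 = np.array(skim[42])   # copies just one 1000-cell row (~4 KB of a ~4 MB file)
print(row_42[:3])
```

In the cloud case the same selective access happens over authenticated HTTP range requests rather than local file reads, which is why latency, not throughput, becomes the dominant cost.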
Once the cloud data connections are fully implemented and optimized for performance, the research team will focus on enhancing the tool’s usability and expanding its functionality. Matrices are just one type of modeling output, and we hope to further integrate SimWrapper as a complement to existing SANDAG data analytics systems.
Statement on Why Abstract is Noteworthy
None of the existing tools at SANDAG has the functionality to understand matrix data. This work has a very practical use case: SANDAG does not have an efficient way to view, review, or compare matrix outputs. These matrices are essential outputs of the agency’s activity-based model.
Connecting SimWrapper with cloud service providers should be beneficial to many future users of the platform, as more workloads migrate to remote compute facilities. If this can succeed with SANDAG-sized datasets, it should work even better for smaller data sizes and workloads.
Project is Incomplete by Fall 2025, Expected Milestones:
· Working version of the matrix visualization tool using SimWrapper with added functionality to connect to the cloud storage provider by September 2025
This work is 100% open source:
Available on GitHub: github.com/simwrapper
WFRC built on past success with simple web-based tools to create a TDM-focused visualization app that makes complex, underused model data more accessible, revealing stories and insights that were previously difficult to analyze. The tool is modular, transferable, and browser-based, using Python converters to produce JSON/GeoJSON data, the Esri JavaScript API for maps, and Chart.js for charts. No database or cloud infrastructure is required.
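The converter step in an architecture like this can be sketched with nothing but the standard library: tabular model output becomes a GeoJSON FeatureCollection that a browser app can load directly. This is a minimal hypothetical example, not WFRC's actual converter; the field names and coordinates are illustrative.

```python
import json

# Hypothetical network links from a model run; in practice these would
# come from model output tables joined to link geometry.
links = [
    {"link_id": 1, "volume": 1200, "coords": [[-111.89, 40.76], [-111.88, 40.76]]},
    {"link_id": 2, "volume": 450,  "coords": [[-111.88, 40.76], [-111.88, 40.77]]},
]

# Build a GeoJSON FeatureCollection: one LineString feature per link,
# with model results carried as feature properties.
features = [
    {
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": row["coords"]},
        "properties": {"link_id": row["link_id"], "volume": row["volume"]},
    }
    for row in links
]
geojson = {"type": "FeatureCollection", "features": features}

with open("links.geojson", "w") as f:
    json.dump(geojson, f)
```

Because the output is a static file, the browser app can fetch it directly, which is what makes a no-database, no-cloud deployment possible.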
The tool has supported the 2027 RTP by visualizing land use and transit scenarios. From station-area intensification to region-wide center densification, it helps compare strategies, revealing trade-offs and opportunities. It continues to support scenario analysis throughout RTP development.