
Author ORCID Identifier

N/A

Access Type

Open Access Dissertation

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Degree Program

Civil Engineering

Year Degree Awarded

2019

Month Degree Awarded

February

First Advisor

Song Gao

Second Advisor

Eric Gonzales

Third Advisor

Ahmed Ghoniem

Subject Categories

Transportation Engineering

Abstract

This thesis investigates dynamic routing decisions for individual travelers and for on-demand service providers (e.g., regular taxis, Uber, and Lyft). For individual travelers, the thesis models and predicts route choice at two time scales: day-to-day and within-day. For day-to-day route choice, methodological developments and empirical evidence are presented to understand the roles of learning, inertia, and real-time travel information in route choices in a highly disrupted network, based on data from a laboratory competitive route choice game. When real-time travel information is available, the learning of routing policies rather than simple paths is modeled, where a routing policy is defined as a contingency plan that maps realized traffic conditions to path choices. Using data from the competitive laboratory experiment, prediction performance is then measured in terms of both one-step and full-trajectory predictions. For within-day route choice, a recursive logit model is formulated in a stochastic time-dependent (STD) network without sampling any choice sets. A decomposition algorithm is then proposed so that the model can be estimated in reasonable time. Estimation and prediction results of the proposed model are presented using a data set collected from a subnetwork of Stockholm, Sweden.

Taxis and ride-sourcing vehicles play an important role in providing on-demand mobility in an urban transportation system. Unlike individual travelers, they do not have a clear destination when no passenger is on board. The optimal routing of a vacant taxi is formulated as a Markov Decision Process (MDP) problem that maximizes long-term profit over the full working period. Two approaches are proposed to solve the problem. The first is a model-based approach, in which a model of the environment's state transitions is obtained from queuing-theory-based passenger arrival and competing-taxi distribution processes; an enhanced value iteration method that exploits efficient matrix operations is then proposed to solve the MDP. The second is a model-free reinforcement learning (RL) approach, which learns the best policy directly from observed trajectory data. Both approaches are implemented and tested on a megacity transportation network with reasonable running times, and a systematic comparison of the two approaches is provided.
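For readers unfamiliar with the model-based approach described above, the sketch below shows plain value iteration on a small, hypothetical vacant-taxi MDP. The zone count, action set, rewards, and transition probabilities are invented placeholders, not the dissertation's calibrated queuing-theory model or its enhanced matrix-based algorithm; the vectorized update is only meant to hint at how matrix operations enter the computation.

```python
import numpy as np

# Toy sizes for illustration only: a handful of zones and actions
# (e.g., cruise to a neighboring zone or wait in place). The real model
# derives transitions and rewards from passenger arrivals and the
# competing-taxi distribution; here they are random placeholders.
n_states, n_actions = 20, 4
rng = np.random.default_rng(0)

# P[a, s, s'] = probability of ending up in zone s' when taking action a in zone s.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

# R[a, s] = expected immediate profit (fare revenue minus cruising cost).
R = rng.normal(size=(n_actions, n_states))

gamma, tol = 0.95, 1e-8      # discount factor and convergence tolerance
V = np.zeros(n_states)       # long-term value of being vacant in each zone

while True:
    # Q[a, s]: expected long-term profit of action a in zone s,
    # computed for all state-action pairs at once via matrix products.
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < tol:
        break
    V = V_new

policy = Q.argmax(axis=0)    # recommended action for a vacant taxi in each zone
print(policy)
```

A model-free alternative in the spirit of the RL approach would drop the explicit P and R and instead update the state-action values incrementally from observed taxi trajectories.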

DOI

https://doi.org/10.7275/13484559
