Publication:
Optimization-based Approximate Dynamic Programming

dc.contributor.advisor: Shlomo Zilberstein
dc.contributor.advisor: Andrew Barto
dc.contributor.advisor: Sridhar Mahadevan
dc.contributor.author: Petrik, Marek
dc.contributor.department: University of Massachusetts Amherst
dc.date: 2023-09-22T22:20:18.000
dc.date.accessioned: 2024-04-26T19:47:58Z
dc.date.available: 2024-04-26T19:47:58Z
dc.date.issued: 2010-09-01
dc.description.abstract: Reinforcement learning algorithms hold promise in many complex domains, such as resource management and planning under uncertainty. Most reinforcement learning algorithms are iterative - they successively approximate the solution based on a set of samples and features. Although these iterative algorithms can achieve impressive results in some domains, they are not sufficiently reliable for wide applicability; they often require extensive parameter tweaking to work well and provide only weak guarantees of solution quality. Some of the most interesting reinforcement learning algorithms are based on approximate dynamic programming (ADP). ADP, also known as value function approximation, approximates the value of being in each state. This thesis presents new reliable algorithms for ADP that use optimization instead of iterative improvement. Because these optimization-based algorithms explicitly seek solutions with favorable properties, they are easy to analyze, offer much stronger guarantees than iterative algorithms, and have few or no parameters to tweak. In particular, we improve on approximate linear programming - an existing method - and derive approximate bilinear programming - a new robust approximate method. The strong guarantees of optimization-based algorithms not only increase confidence in the solution quality, but also make it easier to combine the algorithms with other ADP components. The other components of ADP are samples and features used to approximate the value function. Relying on the simplified analysis of optimization-based methods, we derive new bounds on the error due to missing samples. These bounds are simpler, tighter, and more practical than the existing bounds for iterative algorithms and can be used to evaluate solution quality in practical settings. Finally, we propose homotopy methods that use the sampling bounds to automatically select good approximation features for optimization-based algorithms. Automatic feature selection significantly increases the flexibility and applicability of the proposed ADP methods. The methods presented in this thesis can potentially be used in many practical applications in artificial intelligence, operations research, and engineering. Our experimental results show that optimization-based methods may perform well on resource-management problems and standard benchmark problems and therefore represent an attractive alternative to traditional iterative methods.
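The approximate linear programming (ALP) approach the abstract builds on can be made concrete with a small example. The sketch below is a minimal, hypothetical illustration of the generic ALP formulation - minimize a weighted sum of approximate state values subject to the value function dominating its Bellman update - on a randomly generated MDP with simple polynomial features, solved with scipy. It is not the thesis's code; the MDP, the features, and the uniform state-relevance weights are all illustrative assumptions.

```python
# A minimal sketch of approximate linear programming (ALP) for a small MDP.
# Assumptions (not from the thesis): a random tabular MDP, polynomial
# features of the state index, and uniform state-relevance weights.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA, gamma = 20, 3, 0.95

# Hypothetical MDP: random transition kernel P[s, a] and rewards R[s, a].
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((nS, nA))

# Features Phi: constant, linear, and quadratic terms of the state index.
s = np.arange(nS) / nS
Phi = np.stack([np.ones(nS), s, s**2], axis=1)   # shape (nS, k)

# ALP: minimize c^T (Phi w)  subject to  Phi w >= R[:, a] + gamma P[:, a] Phi w
# for every action a, i.e. the approximate value function must dominate its
# Bellman update. Rearranged for linprog's A_ub x <= b_ub form:
#   (gamma P[:, a] Phi - Phi) w <= -R[:, a]
c = Phi.mean(axis=0)                              # uniform state-relevance weights
A_ub = np.vstack([gamma * P[:, a] @ Phi - Phi for a in range(nA)])
b_ub = -np.concatenate([R[:, a] for a in range(nA)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Phi.shape[1])  # weights are unrestricted
w = res.x
print("approximate values of first five states:", (Phi @ w)[:5])
```

Because the Bellman operator is monotone, any feasible Phi w upper-bounds the optimal value function, so the LP is bounded; the constant feature guarantees feasibility. The thesis's approximate bilinear programming replaces this one-sided formulation with a robust one offering stronger error guarantees.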
dc.description.degree: Doctor of Philosophy (PhD)
dc.description.department: Computer Science
dc.identifier.doi: https://doi.org/10.7275/1672083
dc.identifier.uri: https://hdl.handle.net/20.500.14394/38743
dc.relation.url: https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1302&context=open_access_dissertations&unstamped=1
dc.source.status: published
dc.subject: Approximate Dynamic Programming
dc.subject: Approximate Linear Programming
dc.subject: Markov Decision Problem
dc.subject: Mathematical Optimization
dc.subject: Reinforcement Learning
dc.subject: Computer Sciences
dc.title: Optimization-based Approximate Dynamic Programming
dc.type: dissertation
digcom.contributor.author: isAuthorOfPublication|email:petrik@cs.umass.edu|institution:University of Massachusetts Amherst|Petrik, Marek
digcom.identifier: open_access_dissertations/308
digcom.identifier.contextkey: 1672083
digcom.identifier.submissionpath: open_access_dissertations/308
dspace.entity.type: Publication
Files
Original bundle
Name: Petrik_umass_0118D_10550.pdf
Size: 2.96 MB
Format: Adobe Portable Document Format