
Approximate Dynamic Programming: Solving the Curses of Dimensionality by Warren B. Powell

Description

Approximate Dynamic Programming by Warren B. Powell. Understanding approximate dynamic programming (ADP) in large industrial settings helps in developing practical, high-quality solutions to problems that involve making decisions in the presence of uncertainty.

Format: Hardcover
Language: English
Condition: Brand New

Publisher Description

Praise for the First Edition: "Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners." -- Computing Reviews

This new edition focuses on modeling and computation for complex classes of approximate dynamic programming problems. Understanding approximate dynamic programming (ADP) is vital to developing practical, high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines (Markov decision processes, mathematical programming, simulation, and statistics) to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP. The book continues to bridge the gap between computer science, simulation, and operations research, and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in the design of practical solutions for real problems. The three curses of dimensionality that impact complex problems are introduced, and detailed coverage of implementation challenges is provided.

The Second Edition also features:
- A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations
- A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies
- Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for doing active learning in the presence of a physical state, using the concept of the knowledge gradient
- A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and approximating value functions while searching for optimal policies

The coverage of ADP emphasizes models and algorithms, focusing on applications and computation, while also discussing the theoretical side of the topic, including proofs of convergence and rates of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.

Requiring only a basic understanding of statistics and probability, Approximate Dynamic Programming, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who use dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.
Author Biography

WARREN B. POWELL, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored more than 160 published articles on stochastic optimization, approximate dynamic programming, and dynamic resource management.
Table of Contents

Preface to the Second Edition, xi
Preface to the First Edition, xv
Acknowledgments, xvii

1 The Challenges of Dynamic Programming, 1
1.1 A Dynamic Programming Example: A Shortest Path Problem, 2; 1.2 The Three Curses of Dimensionality, 3; 1.3 Some Real Applications, 6; 1.4 Problem Classes, 11; 1.5 The Many Dialects of Dynamic Programming, 15; 1.6 What Is New in This Book?, 17; 1.7 Pedagogy, 19; 1.8 Bibliographic Notes, 22

2 Some Illustrative Models, 25
2.1 Deterministic Problems, 26; 2.2 Stochastic Problems, 31; 2.3 Information Acquisition Problems, 47; 2.4 A Simple Modeling Framework for Dynamic Programs, 50; 2.5 Bibliographic Notes, 54; Problems, 54

3 Introduction to Markov Decision Processes, 57
3.1 The Optimality Equations, 58; 3.2 Finite Horizon Problems, 65; 3.3 Infinite Horizon Problems, 66; 3.4 Value Iteration, 68; 3.5 Policy Iteration, 74; 3.6 Hybrid Value-Policy Iteration, 75; 3.7 Average Reward Dynamic Programming, 76; 3.8 The Linear Programming Method for Dynamic Programs, 77; 3.9 Monotone Policies*, 78; 3.10 Why Does It Work?**, 84; 3.11 Bibliographic Notes, 103; Problems, 103

4 Introduction to Approximate Dynamic Programming, 111
4.1 The Three Curses of Dimensionality (Revisited), 112; 4.2 The Basic Idea, 114; 4.3 Q-Learning and SARSA, 122; 4.4 Real-Time Dynamic Programming, 126; 4.5 Approximate Value Iteration, 127; 4.6 The Post-Decision State Variable, 129; 4.7 Low-Dimensional Representations of Value Functions, 144; 4.8 So Just What Is Approximate Dynamic Programming?, 146; 4.9 Experimental Issues, 149; 4.10 But Does It Work?, 155; 4.11 Bibliographic Notes, 156; Problems, 158

5 Modeling Dynamic Programs, 167
5.1 Notational Style, 169; 5.2 Modeling Time, 170; 5.3 Modeling Resources, 174; 5.4 The States of Our System, 178; 5.5 Modeling Decisions, 187; 5.6 The Exogenous Information Process, 189; 5.7 The Transition Function, 198; 5.8 The Objective Function, 206; 5.9 A Measure-Theoretic View of Information**, 211; 5.10 Bibliographic Notes, 213; Problems, 214

6 Policies, 221
6.1 Myopic Policies, 224; 6.2 Lookahead Policies, 224; 6.3 Policy Function Approximations, 232; 6.4 Value Function Approximations, 235; 6.5 Hybrid Strategies, 239; 6.6 Randomized Policies, 242; 6.7 How to Choose a Policy?, 244; 6.8 Bibliographic Notes, 247; Problems, 247

7 Policy Search, 249
7.1 Background, 250; 7.2 Gradient Search, 253; 7.3 Direct Policy Search for Finite Alternatives, 256; 7.4 The Knowledge Gradient Algorithm for Discrete Alternatives, 262; 7.5 Simulation Optimization, 270; 7.6 Why Does It Work?**, 274; 7.7 Bibliographic Notes, 285; Problems, 286

8 Approximating Value Functions, 289
8.1 Lookup Tables and Aggregation, 290; 8.2 Parametric Models, 304; 8.3 Regression Variations, 314; 8.4 Nonparametric Models, 316; 8.5 Approximations and the Curse of Dimensionality, 325; 8.6 Why Does It Work?**, 328; 8.7 Bibliographic Notes, 333; Problems, 334

9 Learning Value Function Approximations, 337
9.1 Sampling the Value of a Policy, 337; 9.2 Stochastic Approximation Methods, 347; 9.3 Recursive Least Squares for Linear Models, 349; 9.4 Temporal Difference Learning with a Linear Model, 356; 9.5 Bellman's Equation Using a Linear Model, 358; 9.6 Analysis of TD(0), LSTD, and LSPE Using a Single State, 364; 9.7 Gradient-Based Methods for Approximate Value Iteration*, 366; 9.8 Least Squares Temporal Differencing with Kernel Regression*, 371; 9.9 Value Function Approximations Based on Bayesian Learning*, 373; 9.10 Why Does It Work?**, 376; 9.11 Bibliographic Notes, 379; Problems, 381

10 Optimizing While Learning, 383
10.1 Overview of Algorithmic Strategies, 385; 10.2 Approximate Value Iteration and Q-Learning Using Lookup Tables, 386; 10.3 Statistical Bias in the Max Operator, 397; 10.4 Approximate Value Iteration and Q-Learning Using Linear Models, 400; 10.5 Approximate Policy Iteration, 402; 10.6 The Actor–Critic Paradigm, 408; 10.7 Policy Gradient Methods, 410; 10.8 The Linear Programming Method Using Basis Functions, 411; 10.9 Approximate Policy Iteration Using Kernel Regression*, 413; 10.10 Finite Horizon Approximations for Steady-State Applications, 415; 10.11 Bibliographic Notes, 416; Problems, 418

11 Adaptive Estimation and Stepsizes, 419
11.1 Learning Algorithms and Stepsizes, 420; 11.2 Deterministic Stepsize Recipes, 425; 11.3 Stochastic Stepsizes, 433; 11.4 Optimal Stepsizes for Nonstationary Time Series, 437; 11.5 Optimal Stepsizes for Approximate Value Iteration, 447; 11.6 Convergence, 449; 11.7 Guidelines for Choosing Stepsize Formulas, 451; 11.8 Bibliographic Notes, 452; Problems, 453

12 Exploration Versus Exploitation, 457
12.1 A Learning Exercise: The Nomadic Trucker, 457; 12.2 An Introduction to Learning, 460; 12.3 Heuristic Learning Policies, 464; 12.4 Gittins Indexes for Online Learning, 470; 12.5 The Knowledge Gradient Policy, 477; 12.6 Learning with a Physical State, 482; 12.7 Bibliographic Notes, 492; Problems, 493

13 Value Function Approximations for Resource Allocation Problems, 497
13.1 Value Functions versus Gradients, 498; 13.2 Linear Approximations, 499; 13.3 Piecewise-Linear Approximations, 501; 13.4 Solving a Resource Allocation Problem Using Piecewise-Linear Functions, 505; 13.5 The SHAPE Algorithm, 509; 13.6 Regression Methods, 513; 13.7 Cutting Planes*, 516; 13.8 Why Does It Work?**, 528; 13.9 Bibliographic Notes, 535; Problems, 536

14 Dynamic Resource Allocation Problems, 541
14.1 An Asset Acquisition Problem, 541; 14.2 The Blood Management Problem, 547; 14.3 A Portfolio Optimization Problem, 557; 14.4 A General Resource Allocation Problem, 560; 14.5 A Fleet Management Problem, 573; 14.6 A Driver Management Problem, 580; 14.7 Bibliographic Notes, 585; Problems, 586

15 Implementation Challenges, 593
15.1 Will ADP Work for Your Problem?, 593; 15.2 Designing an ADP Algorithm for Complex Problems, 594; 15.3 Debugging an ADP Algorithm, 596; 15.4 Practical Issues, 597; 15.5 Modeling Your Problem, 602; 15.6 Online versus Offline Models, 604; 15.7 If It Works, Patent It!, 606

Bibliography, 607
Index, 623
Details

ISBN: 047060445X
ISBN-10: 047060445X
ISBN-13: 9780470604458
Year: 2011
Format: Hardcover
DEWEY: 519.703
Series: Wiley Series in Probability and Statistics
Series Number: 842
Edition: 2nd
Edition Description: 2nd edition
Subtitle: Solving the Curses of Dimensionality
Language: English
Media: Book
Pages: 656
Short Title: APPROXIMATE DYNAMIC PROGRAM-2E
Country of Publication: United States
Author: Warren B. Powell
Publisher: John Wiley & Sons Inc
Imprint: John Wiley & Sons Inc
Place of Publication: New York
Publication Date: 2011-11-18
US Release Date: 2011-11-18
UK Release Date: 2011-11-18
AU Release Date: 2011-09-22
NZ Release Date: 2011-09-22
Replaces: 9780470171554
Illustrations: Charts: 5 B&W, 0 Color; Photos: 5 B&W, 0 Color; Drawings: 63 B&W, 0 Color; Maps: 2 B&W, 0 Color; Tables: 0 B&W, 0 Color; Graphs: 39 B&W, 0 Color
Audience: Postgraduate, Research & Scholarly

We've got this. At The Nile, if you're looking for it, we've got it. With fast shipping, low prices, friendly service, and well over a million items, you're bound to find what you want at a price you'll love!

Price: 291.96 AUD

Location: Melbourne

End Time: 2024-11-09T03:10:48.000Z

Shipping Cost: 0 AUD

Product Images

Approximate Dynamic Programming: Solving the Curses of Dimensionality by Warren B. Powell

Item Specifics

Restocking fee: No

Return shipping will be paid by: Buyer

Returns Accepted: Returns Accepted

Item must be returned within: 30 Days

ISBN-13: 9780470604458

Book Title: Approximate Dynamic Programming

Number of Pages: 656

Language: English

Publication Name: Approximate Dynamic Programming: Solving the Curses of Dimensionality

Publisher: John Wiley & Sons Inc

Publication Year: 2011

Subject: Mathematics

Item Height: 248 mm

Item Weight: 1050 g

Type: Textbook

Author: Warren B. Powell

Item Width: 166 mm

Format: Hardcover

Recommended

Approximate Dynamic Programming for Dynamic Vehicle Routing (Operations

$233.87

View Details
Disney 100 Years Dynamic Duos Collector Set Limited Edition 8pc Mint New

$49.99

View Details
Disney 100 Years Dynamic Duos Collector Character Figure Set Limited Edition 8pc

$24.00

View Details
Mini Lavalier Wireless Microphone Audio Video Recording 3.5mm for Android/iphone

$16.99

View Details
"DYNAMIC TRIO" Rebecca Latham and Cynthie Fisher Luxury Plush Blanket Queen
"DYNAMIC TRIO" Rebecca Latham and Cynthie Fisher Luxury Plush Blanket Queen

$68.95

View Details
Ulmer - Approximate Dynamic Programming for Dynamic Vehicle Routing - - S9000z

$188.73

View Details
Approximate Dynamic Programming for Dynamic Vehicle Routing by Marlin Wolf Ulmer

$177.33

View Details
Microphone System with Handheld Mic Professional Dynamic P6C7

$70.14

View Details
Microphone System with Handheld Mic Professional Dynamic L0I7

$67.37

View Details
NEXEN, 927246 CALIPER TENSION BRAKE, 1.938 BORE, AIR ENGAGED, SHAFT MOUNT, NEW

$2500.00

View Details