
Handbook of Learning and Approximate Dynamic Programming

Jennie Si (Editor), Andrew G. Barto (Editor), Warren B. Powell (Editor), Don Wunsch (Editor)

ISBN: 978-0-471-66054-5

Aug 2004, Wiley-IEEE Press

672 pages

Hardcover: $182.00 (out of stock)

Description

  • A complete resource on Approximate Dynamic Programming (ADP), including on-line simulation code
  • Provides a tutorial that readers can use to start implementing the learning algorithms provided in the book (a minimal illustrative sketch follows this list)
  • Includes ideas, directions, and recent results on current research issues and addresses applications where ADP has been successfully implemented
  • The contributors are leading researchers in the field
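
To give a flavor of the kind of learning algorithm the handbook covers, here is a minimal sketch of TD(0) with a linear (one-hot) value approximation on a five-state random walk, the family of methods treated in Chapter 9. This is an illustrative sketch only, not code from the book or its companion material; all names and parameter values are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch only (not from the handbook): TD(0) with linear
# function approximation on a 5-state random walk.

N_STATES = 5                                   # non-terminal states 0..4
TRUE_VALUES = np.arange(1, N_STATES + 1) / (N_STATES + 1)

def features(state):
    """One-hot (tabular) features; any linear feature map could be used."""
    phi = np.zeros(N_STATES)
    phi[state] = 1.0
    return phi

def td0_random_walk(episodes=2000, alpha=0.05, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(N_STATES)                     # weights of the linear value estimate
    for _ in range(episodes):
        s = N_STATES // 2                      # start in the middle state
        while True:
            s_next = s + rng.choice([-1, 1])   # step left or right
            if s_next == N_STATES:             # right terminal: reward 1
                r, v_next = 1.0, 0.0
            elif s_next == -1:                 # left terminal: reward 0
                r, v_next = 0.0, 0.0
            else:                              # non-terminal: reward 0
                r, v_next = 0.0, w @ features(s_next)
            # TD(0) update: w <- w + alpha * delta * grad_w v(s)
            delta = r + gamma * v_next - w @ features(s)
            w += alpha * delta * features(s)
            if s_next in (-1, N_STATES):
                break
            s = s_next
    return w

if __name__ == "__main__":
    w = td0_random_walk()
    print("estimated values:", np.round(w, 3))
    print("true values:     ", np.round(TRUE_VALUES, 3))
```

Running the sketch should produce value estimates close to the analytic values (i+1)/6 for state i, illustrating how a linear TD method converges on this simple problem.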


Table of Contents

Foreword.

1. ADP: goals, opportunities and principles.

Part I: Overview.

2. Reinforcement learning and its relationship to supervised learning.

3. Model-based adaptive critic designs.

4. Guidance in the use of adaptive critics for control.

5. Direct neural dynamic programming.

6. The linear programming approach to approximate dynamic programming.

7. Reinforcement learning in large, high-dimensional state spaces.

8. Hierarchical decision making.

Part II: Technical advances.

9. Improved temporal difference methods with linear function approximation.

10. Approximate dynamic programming for high-dimensional resource allocation problems.

11. Hierarchical approaches to concurrency, multiagency, and partial observability.

12. Learning and optimization - from a system theoretic perspective.

13. Robust reinforcement learning using integral-quadratic constraints.

14. Supervised actor-critic reinforcement learning.

15. BPTT and DAC - a common framework for comparison.

Part III: Applications.

16. Near-optimal control via reinforcement learning.

17. Multiobjective control problems by reinforcement learning.

18. Adaptive critic based neural network for control-constrained agile missile.

19. Applications of approximate dynamic programming in power systems control.

20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings.

21. Helicopter flight control using direct neural dynamic programming.

22. Toward dynamic stochastic optimal power flow.

23. Control, optimization, security, and self-healing of benchmark power systems.

"…highly recommended to researchers, graduate students, engineers, and scientists…" (E-STREAMS, February 2006)

"Clearly, this book is useful for researchers who do or want to do research on ADP." (IIE Transactions-Quality & Reliability Engineering, February 2006)

"…I would like to congratulate the editors, for putting together this wonderful collection of research contributions." (Computing Reviews.com, March 18, 2005)