- Chinese Library Classification (CLC): TN
- Language: ENG
- Publication: The Institution of Engineering and Technology, 2012, 306 pages
- EISBN: 9781849194907
- PISBN-P: 9781849194891
- DOI: https://dx.doi.org/10.1049/PBCE081E
- Full-text access:
This book gives an exposition of recently developed approximate dynamic programming (ADP) techniques for decision and control in human-engineered systems. ADP is a reinforcement learning technique motivated by learning mechanisms in biological and animal systems, and it is connected from a theoretical point of view with both adaptive control and optimal control methods. The book shows how ADP can be used to design a family of adaptive optimal control algorithms that converge in real time to optimal control solutions by measuring data along the system trajectories.

In the current literature, adaptive controllers and optimal controllers are generally two distinct methods for the design of automatic control systems. Traditional adaptive controllers learn online in real time how to control systems, but do not yield optimal performance. Traditional optimal controllers, on the other hand, must be designed offline using full knowledge of the system dynamics.

The book also shows how to use ADP methods to solve multi-player differential games online. Differential games have been shown to be important in H-infinity robust control for disturbance rejection, and in coordinating activities among multiple agents in networked teams. The focus of this book is on continuous-time systems, whose dynamical models can be derived directly from physical principles based on Hamiltonian or Lagrangian dynamics.
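For context, the adaptive optimal control algorithms described above build on a policy-iteration structure that is easiest to see in the linear-quadratic regulator (LQR) case. The sketch below is a minimal, model-based policy iteration (Kleinman's algorithm) for a continuous-time LQR problem; it is not the book's online ADP method, which instead carries out the policy-evaluation step from cost data measured along the system trajectories without requiring full knowledge of the dynamics. The matrices A, B, Q, R here are arbitrary illustrative values, not taken from the book.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative continuous-time plant (stable, so the zero gain K = 0
# is already a stabilizing initial policy).
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

K = np.zeros((1, 2))   # initial stabilizing policy u = -K x
for i in range(20):
    Ac = A - B @ K     # closed-loop dynamics under the current policy
    # Policy evaluation: solve the Lyapunov equation
    #   Ac' P + P Ac + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# The iteration converges to the solution of the continuous-time
# algebraic Riccati equation, i.e. the optimal LQR controller.
P_opt = solve_continuous_are(A, B, Q, R)
print("Policy-iteration P:\n", P)
print("CARE solution P* :\n", P_opt)
print("Optimal gain K*  :", np.linalg.solve(R, B.T @ P_opt))
```

Each pass evaluates the current feedback policy by solving a Lyapunov equation and then improves the gain; the continuous-time ADP algorithms treated in the book replace that model-based evaluation step with an online least-squares fit to measured cost data, converging to the same Riccati solution in real time.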