Title: Optimal Control, 2nd Edition | Author: Leonid T. Ashchepkov, Dmitriy V. Dolgy | Publisher: Springer | Year: 2022 | Pages: 252 | Language: English | Format: PDF (true), EPUB | Size: 15.9 MB
In this work, the authors cover the theory of linear and nonlinear systems, addressing the basic problem of establishing necessary and sufficient conditions for optimal processes. Readers will find two new chapters, with results of potential interest to researchers focused on the theory of optimal control, as well as to those interested in applications in engineering and related sciences. In addition, several improvements have been made throughout the text.
This book is structured in three parts. Part I gives a gentle introduction to the basic concepts of optimal control. In Part II, the theory of linear control systems is built on the separation theorem and the concept of a reachability set. The authors prove the closedness of the reachability set in the class of piecewise continuous controls and address the problems of controllability, observability, identification, performance, and terminal control. Part III is devoted to nonlinear control systems. Using the method of variations and the Lagrange multiplier rule for nonlinear problems, the authors prove the Pontryagin maximum principle for problems with moving trajectory endpoints.
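For orientation, the Pontryagin maximum principle mentioned above can be stated in its standard textbook form (the notation here is the conventional one and may differ from the book's):

```latex
Consider the control system
\[
\dot{x} = f(x,u), \qquad x(0) = x_0, \qquad u(t) \in U,
\]
with the cost $J = \varphi(x(T))$ to be minimized. Define the Hamiltonian
\[
H(\psi, x, u) = \psi^{\top} f(x, u).
\]
If $(x^*, u^*)$ is optimal, there exists an adjoint function $\psi$ satisfying
\[
\dot{\psi} = -H_x(\psi, x^*, u^*), \qquad \psi(T) = -\varphi_x(x^*(T)),
\]
together with the maximum condition
\[
H(\psi(t), x^*(t), u^*(t)) = \max_{v \in U} H(\psi(t), x^*(t), v)
\quad \text{for a.e. } t \in [0,T].
\]
```

Problems with moving endpoints add transversality conditions on $\psi$ at the boundary; the book derives these via the Lagrange multiplier rule.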
Among the many known proofs of the maximum principle, the authors choose the simplest one, based on a formula for small increments of the trajectory. From there it is only one step to the formula for small increments of a functional, as proposed by Rozonoer, and to the maximum principle for the simplest optimal control problem. Problems of optimal control with constraints on the endpoints of a trajectory are then studied on the basis of the well-known increment formula for the functional and the Lagrange multiplier rule for nonlinear problems. Applying nonlinear optimization methods to optimal control is attractive not only because it simplifies the proof of the maximum principle but also because it improves the methodology: the continuity between finite-dimensional and infinite-dimensional optimization is made explicit, and it becomes clear that simple ideas underpin the complex constructions. This methodology has also been applied successfully to control systems governed by differential equations with discontinuous right-hand sides.
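The increment formula referred to above is commonly written as follows (again in conventional notation, which may differ from the book's): for an admissible control $u$ with trajectory $x$ and a comparison control $\bar{u}$,

```latex
\[
\Delta J = J(\bar{u}) - J(u)
  = -\int_{0}^{T} \bigl[\, H(\psi(t), x(t), \bar{u}(t))
      - H(\psi(t), x(t), u(t)) \,\bigr]\, dt + \eta,
\]
where $\psi$ solves the adjoint equation along $(x,u)$ and the remainder
$\eta$ is of higher order in the trajectory increment $\Delta x = \bar{x} - x$.
```

Choosing $\bar{u}$ as a needle (spike) variation on a short interval makes $\eta$ negligible, and optimality ($\Delta J \ge 0$) then forces $H(\psi(t), x(t), u(t)) \ge H(\psi(t), x(t), v)$ for every $v \in U$, which is exactly the maximum condition.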
Problem sets at the end of each chapter, together with a list of additional tasks provided in the appendix, are offered for students seeking to master the subject. The exercises have been chosen not only to help assimilate the theory but also to encourage applying this knowledge to more advanced problems.
Contents:
Part I. Introduction
1. The Subject of Optimal Control
2. Mathematical Model for Controlled Object
Part II. Control of Linear Systems
3. Reachability Set
4. Controllability of Linear Systems
5. Minimum Time Problem
6. Synthesis of the Optimal System Performance
7. The Observability Problem
8. Identification Problem
Part III. Control of Nonlinear Systems
9. Types of Optimal Control Problems
10. Small Increments of a Trajectory
11. The Simplest Problem of Optimal Control
12. General Optimal Control Problem
13. Problem with Intermediate States
14. Extremals Field Theory
15. Sufficient Optimality Conditions