Title: Evolutionary Multi-Task Optimization: Foundations and Methodologies
Authors: Liang Feng, Abhishek Gupta, Kay Chen Tan
Publisher: Springer
Series: Machine Learning: Foundations, Methodologies, and Applications
Year: 2023
Pages: 220
Language: English
Format: PDF (true), EPUB
Size: 38.5 MB
A remarkable facet of the human brain is its ability to manage multiple tasks with apparent simultaneity. Knowledge learned from one task can then be used to enhance problem-solving in other related tasks. In Machine Learning, the idea of leveraging relevant information across related tasks as inductive biases to enhance learning performance has attracted significant interest. In contrast, attempts to emulate the human brain’s ability to generalize in optimization – particularly in population-based evolutionary algorithms – have received little attention to date.
Recently, a novel evolutionary search paradigm, Evolutionary Multi-Task (EMT) optimization, has been proposed in the realm of evolutionary computation. In contrast to traditional evolutionary searches, which solve a single task in a single run, an evolutionary multitasking algorithm conducts searches concurrently over multiple search spaces corresponding to different tasks or optimization problems, each possessing a unique function landscape. By exploiting the latent synergies among distinct problems, EMT optimization has demonstrated superior search performance, in terms of both solution quality and convergence speed, on a variety of continuous, discrete, and hybrid (mixed continuous and discrete) tasks.
Optimization is an essential ingredient in many real-world problem-solving systems and Artificial Intelligence (AI) algorithms. For instance, optimization minimizes the loss/cost function while training machines to learn from data, yields cost-efficient routing solutions for green city logistics, discovers out-of-the-box engineering design solutions that may be difficult for humans to imagine, and even provides the means to democratize AI itself by automating the configuration of Machine Learning model architectures. Generally, optimization is the process of finding sets of inputs to a target objective function that result in the minimum or maximum of that function.
Multi-task optimization (MTO) with evolutionary algorithms (EAs), alternatively labelled evolutionary multitasking (EMT) or multi-factorial optimization, puts forth the novel concept of simultaneously solving multiple self-contained optimization problems/tasks with the added scope of computationally encoded knowledge transfer between them. If the problems happen to bear some commonality and/or complementarity in terms of their optimal solution(s) and/or function landscapes, then this knowledge transfer often leads to significant performance improvements relative to solving each problem in isolation. A key motivation behind the idea is the observation that real-world problems seldom exist in isolation. Humans possess the innate cognitive ability to recognize and reuse recurring patterns from their problem-solving experiences when tackling related new tasks. Along similar lines, AI systems of practical relevance (including those in industrial settings) are also expected to face multiple related problems over their lifetime, with multitasking offering the algorithmic platform for knowledge reuse to take place autonomously, without the constant need for a human in the loop.
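To make the multi-factorial idea concrete, the following is a minimal, illustrative sketch (not the book's exact formulation) of an EMT-style algorithm in Python: each individual lives in a shared unified space, carries a "skill factor" marking the task it is evaluated on, and knowledge transfer happens implicitly through occasional cross-task crossover governed by a random mating probability. The task functions, parameter values, and helper names here are assumptions chosen for demonstration.

```python
import random

# Two related toy minimization tasks embedded in a shared unified space
# [0, 1]^2. Their optima are deliberately close, so cross-task transfer helps.
def task1(x):  # sphere centered at (0.5, 0.5)
    return sum((xi - 0.5) ** 2 for xi in x)

def task2(x):  # sphere centered at (0.6, 0.6), a similar landscape
    return sum((xi - 0.6) ** 2 for xi in x)

TASKS = [task1, task2]
RMP = 0.3  # random mating probability: chance of cross-task crossover

def crossover(p, q):
    # arithmetic crossover in the unified space
    a = random.random()
    return [a * pi + (1 - a) * qi for pi, qi in zip(p, q)]

def mutate(x, sigma=0.05):
    # Gaussian mutation, clipped to the unified space [0, 1]
    return [min(1.0, max(0.0, xi + random.gauss(0, sigma))) for xi in x]

def mfea(pop_size=40, gens=100, dim=2, seed=0):
    random.seed(seed)
    # each individual: (genes in unified space, skill factor = assigned task)
    pop = [([random.random() for _ in range(dim)], i % len(TASKS))
           for i in range(pop_size)]
    best = [float("inf")] * len(TASKS)
    for _ in range(gens):
        offspring = []
        while len(offspring) < pop_size:
            (p, sp), (q, sq) = random.sample(pop, 2)
            if sp == sq or random.random() < RMP:
                # same-task mating, or cross-task knowledge transfer with prob RMP
                child = mutate(crossover(p, q))
                skill = random.choice([sp, sq])  # inherit a parent's skill factor
            else:
                child = mutate(p)
                skill = sp
            offspring.append((child, skill))
        # elitist selection per task: each individual is evaluated only
        # on the task named by its skill factor
        combined = pop + offspring
        pop = []
        for t in range(len(TASKS)):
            scored = sorted((TASKS[t](g), g) for g, s in combined if s == t)
            if scored:
                best[t] = min(best[t], scored[0][0])
            pop.extend((g, t) for _, g in scored[:pop_size // len(TASKS)])
    return best  # best objective value found per task
```

Running `mfea()` drives both tasks toward their respective optima in a single population-based run; because the two landscapes overlap, genetic material refined for one task frequently produces good offspring for the other, which is the essence of the implicit transfer EMT exploits.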
Download Evolutionary Multi-Task Optimization: Foundations and Methodologies