
Distributed Multi-Agent Coordination and Control

Description

This cluster of papers focuses on the study of distributed multi-agent coordination, consensus, and control, covering topics such as cooperative control, formation control, swarm robotics, leader-follower strategies, event-triggered control, sensor networks, and collective behavior in animal groups.

Keywords

Consensus; Multi-Agent Systems; Cooperative Control; Formation Control; Distributed Optimization; Swarm Robotics; Leader-Follower; Event-Triggered Control; Sensor Networks; Collective Behavior

 Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. In this paper, we develop a decentralized algorithm for the consensus optimization problem $\mathrm{minimize}_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum_{i=1}^n f_i(x),$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every agent $i$ converges uniformly and consensually to an exact minimizer of $\bar{f}$. In contrast, the well-known decentralized gradient descent (DGD) method must use diminishing step sizes in order to converge to an exact minimizer. EXTRA and DGD have the same choice of mixing matrices and similar per-iteration complexity. EXTRA, however, uses the gradients of the last two iterates, unlike DGD which uses just that of the last iterate. EXTRA has the best known convergence rates among the existing synchronized first-order decentralized algorithms for minimizing convex Lipschitz--differentiable functions. Specifically, if the $f_i$'s are convex and have Lipschitz continuous gradients, EXTRA has an ergodic convergence rate $O(\frac{1}{k})$ in terms of the first-order optimality residual. In addition, as long as $\bar{f}$ is (restricted) strongly convex (not all individual $f_i$'s need to be so), EXTRA converges to an optimal solution at a linear rate $O(C^{-k})$ for some constant $C>1$.
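To make the two-gradient structure above concrete, here is a minimal numerical sketch of the EXTRA recursion (not the authors' reference implementation) for a synthetic decentralized least-squares problem; the ring graph, the Metropolis-style mixing matrix W, the step size, and the local data (A_i, b_i) are all illustrative assumptions, with W_tilde = (I + W)/2 as in the paper.

import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3                          # number of agents, variable dimension
A = rng.normal(size=(n, 4, p))       # agent i privately holds f_i(x) = 0.5*||A_i x - b_i||^2
b = rng.normal(size=(n, 4))

# Doubly stochastic mixing matrix for a ring graph (each agent talks to two neighbors)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0
W_tilde = 0.5 * (np.eye(n) + W)

def grad(X):
    # one local gradient per row, each evaluated at that agent's own iterate
    return np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])

alpha = 0.02                         # fixed step size, assumed small enough for this data
X_prev = np.zeros((n, p))
X_curr = W @ X_prev - alpha * grad(X_prev)          # EXTRA initialization step
for _ in range(3000):
    # EXTRA: x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha (grad f(x^{k+1}) - grad f(x^k))
    X_next = (np.eye(n) + W) @ X_curr - W_tilde @ X_prev - alpha * (grad(X_curr) - grad(X_prev))
    X_prev, X_curr = X_curr, X_next

print("max disagreement across agents:", np.abs(X_curr - X_curr.mean(axis=0)).max())

For comparison, dropping the correction terms and iterating x^{k+1} = W x^k - alpha grad f(x^k) recovers the DGD update the abstract contrasts with, which needs a diminishing step size to reach the exact minimizer.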
 This paper introduces AntNet, a novel approach to the adaptive learning of routing tables in communications networks. AntNet is a distributed, mobile agents based Monte Carlo system that was inspired by recent work on the ant colony metaphor for solving optimization problems. AntNet's agents concurrently explore the network and exchange collected information. The communication among the agents is indirect and asynchronous, mediated by the network itself. This form of communication is typical of social insects and is called stigmergy. We compare our algorithm with six state-of-the-art routing algorithms coming from the telecommunications and machine learning fields. The algorithms' performance is evaluated over a set of realistic testbeds. We run many experiments over real and artificial IP datagram networks with increasing number of nodes and under several paradigmatic spatial and temporal traffic distributions. Results are very encouraging. AntNet showed superior performance under all the experimental conditions with respect to its competitors. We analyze the main characteristics of the algorithm and try to explain the reasons for its superiority.
This self-contained introduction to the distributed control of robotic networks offers a distinctive blend of computer science and control theory. The book presents a broad set of tools for understanding coordination algorithms, determining their correctness, and assessing their complexity; and it analyzes various cooperative strategies for tasks such as consensus, rendezvous, connectivity maintenance, deployment, and boundary estimation. The unifying theme is a formal model for robotic networks that explicitly incorporates their communication, sensing, control, and processing capabilities--a model that in turn leads to a common formal language to describe and analyze coordination algorithms. Written for first- and second-year graduate students in control and robotics, the book will also be useful to researchers in control theory, robotics, distributed algorithms, and automata theory. The book provides explanations of the basic concepts and main results, as well as numerous examples and exercises. Highlights include: a self-contained exposition of graph-theoretic concepts, distributed algorithms, and complexity measures for processor networks with fixed interconnection topology and for robotic networks with position-dependent interconnection topology; a detailed treatment of averaging and consensus algorithms interpreted as linear iterations on synchronous networks; an introduction of geometric notions such as partitions, proximity graphs, and multicenter functions; and a detailed treatment of motion coordination algorithms for deployment, rendezvous, connectivity maintenance, and boundary estimation.
In a consensus protocol an agreement among agents is achieved thanks to the collaborative efforts of all agents, expressed by a communication graph with nonnegative weights. The question we ask in this paper is the following: is it possible to achieve a form of agreement also in the presence of antagonistic interactions, modeled as negative weights on the communication graph? The answer to this question is affirmative: on signed networks all agents can converge to a consensus value which is the same for all agents except for the sign. Necessary and sufficient conditions are obtained to describe cases in which this is possible. These conditions have strong analogies with the theory of monotone systems. Linear and nonlinear Laplacian feedback designs are proposed.
The purpose of this article is to provide a tutorial overview of information consensus in multivehicle cooperative control. Theoretical results regarding consensus-seeking under both time-invariant and dynamically changing communication topologies are summarized. Several specific applications of consensus algorithms to multivehicle coordination are described.
 This paper addresses the distributed consensus protocol design problem for multi-agent systems with general linear dynamics and directed communication graphs. Existing works usually design consensus protocols using the smallest real part of the nonzero eigenvalues of the Laplacian matrix associated with the communication graph, which however is global information. In this paper, based on only the agent dynamics and the relative states of neighboring agents, a distributed adaptive consensus protocol is designed to achieve leader-follower consensus for any communication graph containing a directed spanning tree with the leader as the root node. The proposed adaptive protocol is independent of any global information of the communication graph and thereby is fully distributed. Extensions to the case with multiple leaders are further studied.
 This paper reviews some main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, and estimation. After the review, a short discussion section is included to summarize the existing research and to propose several promising research directions along with some open problems that are deemed important for further investigations.
 This paper presents a survey of recent research in cooperative control of multivehicle systems, using a common mathematical framework to allow different methods to be described in a unified way. The survey has three primary parts: an overview of current applications of cooperative control, a summary of some of the key technical approaches that have been explored, and a description of some possible future directions for research. Specific technical areas that are discussed include formation control, cooperative tasking, spatiotemporal planning, and consensus.
We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
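A minimal sketch of the distributed subgradient iteration described above, x_i(k+1) = sum_j a_ij x_j(k) - alpha_k g_i(k), is given below; the ring topology, the scalar objectives f_i(x) = |x - c_i|, and the diminishing step size are illustrative assumptions rather than the paper's setting.

import numpy as np

n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # optimum of sum_i |x - c_i| is the median, x* = 3
A = np.zeros((n, n))
for i in range(n):                         # doubly stochastic weights on a ring
    A[i, i] = 1/3
    A[i, (i - 1) % n] = 1/3
    A[i, (i + 1) % n] = 1/3

x = np.zeros(n)
for k in range(1, 20001):
    g = np.sign(x - c)                     # subgradient of |x_i - c_i| at the local iterate
    x = A @ x - (1.0 / np.sqrt(k)) * g     # consensus step followed by a diminishing subgradient step

print(x)                                   # all entries approach the median of c, i.e. 3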
Abstract This paper describes a distributed coordination scheme with local information exchange for multiple vehicle systems. We introduce second‐order consensus protocols that take into account motions of the information states and their derivatives, extending first‐order protocols from the literature. We also derive necessary and sufficient conditions under which consensus can be reached in the context of unidirectional information exchange topologies. This work takes into account the general case where information flow may be unidirectional due to sensors with limited fields of view or vehicles with directed, power‐constrained communication links. Unlike the first‐order case, we show that having a (directed) spanning tree is a necessary rather than a sufficient condition for consensus seeking with second‐order dynamics. This work focuses on a formal analysis of information exchange topologies that permit second‐order consensus. Given its importance to the stability of the coordinated system, an analysis of the consensus term control gains is also presented, specifically the strength of the information states relative to their derivatives. As an illustrative example, consensus protocols are applied to coordinate the movements of multiple mobile robots. Copyright © 2006 John Wiley & Sons, Ltd.
We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
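The following is a small illustrative sketch of the idea (synthetic data, identity noise covariance assumed, fixed ring topology): each sensor repeatedly averages its local information matrix and information vector with its neighbors, and its local weighted least-squares estimate approaches the centralized maximum-likelihood solution.

import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
theta_true = np.array([1.0, -2.0])
A = rng.normal(size=(n, 3, p))                         # local measurement matrices
y = np.stack([A[i] @ theta_true + 0.1 * rng.normal(size=3) for i in range(n)])

P = np.stack([A[i].T @ A[i] for i in range(n)])        # local information matrices
q = np.stack([A[i].T @ y[i] for i in range(n)])        # local information vectors

W = np.zeros((n, n))
for i in range(n):                                     # ring graph, doubly stochastic weights
    W[i, i] = 1/3
    W[i, (i - 1) % n] = 1/3
    W[i, (i + 1) % n] = 1/3

for _ in range(200):                                   # diffuse information across the network
    P = np.einsum('ij,jkl->ikl', W, P)
    q = W @ q

theta_local = np.linalg.solve(P[0], q[0])              # any node's local weighted LS estimate
print(theta_local)                                     # close to the centralized ML estimate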
This note considers the problem of information consensus among multiple agents in the presence of limited and unreliable information exchange with dynamically changing interaction topologies. Both discrete and continuous update schemes are proposed for information consensus. This note shows that information consensus under dynamically changing interaction topologies can be achieved asymptotically if the union of the directed interaction graphs has a spanning tree frequently enough as the system evolves.
We provide a model (for both continuous and discrete time) describing the evolution of a flock. Our model is parameterized by a constant $\beta$ capturing the rate of decay, which in our model is polynomial, of the influence between birds in the flock as they separate in space. Our main result shows that when $\beta<1/2$ convergence of the flock to a common velocity is guaranteed, while for $\beta\ge 1/2$ convergence is guaranteed under some condition on the initial positions and velocities of the birds only.
In this paper, we present a theoretical framework for design and analysis of distributed flocking algorithms. Two cases of flocking, in free space and in the presence of multiple obstacles, are considered. We present three flocking algorithms: two for free flocking and one for constrained flocking. A comprehensive analysis of the first two algorithms is provided. We demonstrate that the first algorithm embodies all three rules of Reynolds. This is a formal approach to extraction of interaction rules that lead to the emergence of collective behavior. We show that the first algorithm generically leads to regular fragmentation, whereas the second and third algorithms both lead to flocking. A systematic method is provided for construction of cost functions (or collective potentials) for flocking. These collective potentials penalize deviation from a class of lattice-shape objects called $\alpha$-lattices. We use a multi-species framework for construction of collective potentials that consist of flock members, or $\alpha$-agents, and virtual agents associated with $\alpha$-agents, called $\beta$-agents and $\gamma$-agents. We show that migration of flocks can be performed using a peer-to-peer network of agents, i.e., "flocks need no leaders." A "universal" definition of flocking for particle systems with similarities to Lyapunov stability is given. Several simulation results are provided that demonstrate performing 2-D and 3-D flocking, split/rejoin maneuver, and squeezing maneuver for hundreds of agents using the proposed algorithms.
In this paper, we discuss consensus problems for networks of dynamic agents with fixed and switching topologies. We analyze three cases: 1) directed networks with fixed topology; 2) directed networks with switching topology; and 3) undirected networks with communication time-delays and fixed topology. We introduce two consensus protocols for networks with and without time-delays and provide a convergence analysis in all three cases. We establish a direct connection between the algebraic connectivity (or Fiedler eigenvalue) of the network and the performance (or negotiation speed) of a linear consensus protocol. This required the generalization of the notion of algebraic connectivity of undirected graphs to digraphs. It turns out that balanced digraphs play a key role in addressing average-consensus problems. We introduce disagreement functions for convergence analysis of consensus protocols. A disagreement function is a Lyapunov function for the disagreement network dynamics. We propose a simple disagreement function that is a common Lyapunov function for the disagreement dynamics of a directed network with switching topology. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
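As a worked example of the linear consensus protocol analyzed above, the sketch below simulates the fixed-topology case, the dynamics x' = -Lx, with forward Euler on a small balanced and strongly connected digraph (a directed cycle); the graph, weights, and integration step are illustrative assumptions.

import numpy as np

n = 4
Adj = np.zeros((n, n))
for i in range(n):
    Adj[(i + 1) % n, i] = 1.0            # directed cycle: agent i+1 receives from agent i
L = np.diag(Adj.sum(axis=1)) - Adj       # graph Laplacian (in-degree convention)

x0 = np.array([3.0, -1.0, 5.0, 1.0])
x = x0.copy()
dt = 0.05
for _ in range(400):
    x = x + dt * (-L @ x)                # x' = -L x; each agent uses only its in-neighbors

# the cycle is balanced, so the protocol solves the average-consensus problem
print(x, "initial average:", x0.mean())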
 This note considers consensus algorithms for double-integrator dynamics. We propose and analyze consensus algorithms for double-integrator dynamics in four cases: 1) with a bounded control input, 2) without relative velocity measurements, 3) with a group reference velocity available to each team member, and 4) with a bounded control input when a group reference state is available to only a subset of the team. We show that consensus is reached asymptotically for the first two cases if the undirected interaction graph is connected. We further show that consensus is reached asymptotically for the third case if the directed interaction graph has a directed spanning tree and the gain for velocity matching with the group reference velocity is above a certain bound. We also show that consensus is reached asymptotically for the fourth case if and only if the group reference state flows directly or indirectly to all of the vehicles in the team.
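Below is a minimal simulation sketch of a double-integrator consensus law of the form u_i = sum_j a_ij[(x_j - x_i) + gamma (v_j - v_i)], in the spirit of the first case above (relative positions and velocities over a connected undirected graph); the path graph, the gain gamma, and the initial conditions are illustrative assumptions.

import numpy as np

n, gamma, dt = 4, 1.5, 0.01
Adj = np.zeros((n, n))
for i in range(n - 1):                 # connected path graph 0-1-2-3
    Adj[i, i + 1] = Adj[i + 1, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj

x = np.array([0.0, 2.0, -1.0, 4.0])    # positions
v = np.array([1.0, 0.0, -1.0, 0.5])    # velocities
for _ in range(20000):
    u = -L @ x - gamma * (L @ v)       # relative-state feedback only
    x, v = x + dt * v, v + dt * u      # double-integrator dynamics, forward Euler

print(np.ptp(x), np.ptp(v))            # position and velocity spreads shrink toward zero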
 This paper considers a second-order consensus problem for multiagent systems with nonlinear dynamics and directed topologies where each agent is governed by both position and velocity consensus terms with a time-varying asymptotic velocity. To describe the system's ability for reaching consensus, a new concept about the generalized algebraic connectivity is defined for strongly connected networks and then extended to the strongly connected components of the directed network containing a spanning tree. Some sufficient conditions are derived for reaching second-order consensus in multiagent systems with nonlinear dynamics based on algebraic graph theory, matrix theory, and Lyapunov control approach. Finally, simulation examples are given to verify the theoretical analysis.
We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets.
 Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
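A minimal sketch of the randomized pairwise gossip primitive analyzed above: an awake node averages with one randomly chosen neighbor, the network average is preserved at every exchange, and all values contract toward it. The ring topology and the uniform neighbor choice are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 10
x = rng.normal(size=n)
target = x.mean()                         # the average is preserved by every gossip exchange

for _ in range(5000):
    i = rng.integers(n)                   # a random node wakes up
    j = (i + rng.choice([-1, 1])) % n     # and picks a random ring neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])     # the pair replaces its values with their average

print(np.max(np.abs(x - target)))         # close to zero: all nodes hold (nearly) the average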
We study a simple but compelling model of a network of agents interacting via time-dependent communication links. The model finds application in a variety of fields including synchronization, swarming and distributed decision making. In the model, each agent updates his current state based upon the current information received from neighboring agents. Necessary and/or sufficient conditions for the convergence of the individual agents' states to a common value are presented, thereby extending recent results reported in the literature. The stability analysis is based upon a blend of graph-theoretic and system-theoretic tools with the notion of convexity playing a central role. The analysis is integrated within a formal framework of set-valued Lyapunov theory, which may be of independent interest. Among others, it is observed that more communication does not necessarily lead to faster convergence and may eventually even lead to a loss of convergence, even for the simple models discussed in the present paper.
This note analyzes the stability properties of a group of mobile agents that align their velocity vectors, and stabilize their inter-agent distances, using decentralized, nearest-neighbor interaction rules, exchanging information over networks that change arbitrarily (no dwell time between consecutive switches). These changes introduce discontinuities in the agent control laws. To accommodate for arbitrary switching in the topology of the network of agent interactions we employ nonsmooth analysis. The main result is that regardless of switching, convergence to a common velocity vector and stabilization of inter-agent distances is still guaranteed as long as the network remains connected at all times.
 We present a framework for coordinated and distributed control of multiple autonomous vehicles using artificial potentials and virtual leaders. Artificial potentials define interaction control forces between neighboring vehicles and are designed to enforce a desired inter-vehicle spacing. A virtual leader is a moving reference point that influences vehicles in its neighborhood by means of additional artificial potentials. Virtual leaders can be used to manipulate group geometry and direct the motion of the group. The approach provides a construction for a Lyapunov function to prove closed-loop stability using the system kinetic energy and the artificial potential energy. Dissipative control terms are included to achieve asymptotic stability. The framework allows for a homogeneous group with no ordering of vehicles; this adds robustness of the group to a single vehicle failure.
 This paper presents a behavior-based approach to formation maneuvers for groups of mobile robots. Complex formation maneuvers are decomposed into a sequence of maneuvers between formation patterns. The paper presents three formation control strategies. The first strategy uses relative position information configured in a bidirectional ring topology to maintain the formation. The second strategy injects interrobot damping via passivity techniques. The third strategy accounts for actuator saturation. Hardware results demonstrate the effectiveness of the proposed control strategies.
We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs, which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires no knowledge of either the number of agents or the graph sequence to implement. Our analysis shows that the subgradient-push algorithm converges at a rate of $O(\ln t/\sqrt{t})$. The proportionality constant in the convergence rate depends on the initial values at the nodes, the subgradient norms and, more interestingly, on both the speed of the network information diffusion and the imbalances of influence among the nodes.
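A minimal sketch of the push-sum mechanics behind subgradient-push (not the authors' code): each node pushes equal shares of a value x_i and a weight y_i to its out-neighbors, de-biases via z_i = x_i / y_i, and takes a diminishing-step gradient step; the four-node directed graph and the quadratic local objectives are illustrative assumptions.

import numpy as np

n = 4
c = np.array([1.0, 3.0, 5.0, 7.0])                 # f_i(z) = 0.5*(z - c_i)^2, optimum is mean(c) = 4
out = [[0, 1], [1, 2], [2, 3, 0], [3, 0]]          # out-neighbor lists incl. self (strongly connected)

x = np.zeros(n)
y = np.ones(n)
for t in range(1, 3001):
    x_new, y_new = np.zeros(n), np.zeros(n)
    for i in range(n):
        for j in out[i]:                           # node i pushes equal shares to its out-neighbors
            x_new[j] += x[i] / len(out[i])
            y_new[j] += y[i] / len(out[i])
    z = x_new / y_new                              # de-biased local estimates
    g = z - c                                      # gradients of the local quadratics at z
    x, y = x_new - (1.0 / np.sqrt(t)) * g, y_new

print(z)                                           # each entry approaches mean(c) = 4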
We consider the problem of cooperation among a collection of vehicles performing a shared task using intervehicle communication to coordinate their actions. Tools from algebraic graph theory prove useful in modeling the communication network and relating its topology to formation stability. We prove a Nyquist criterion that uses the eigenvalues of the graph Laplacian matrix to determine the effect of the communication topology on formation stability. We also propose a method for decentralized information exchange between vehicles. This approach realizes a dynamical system that supplies each vehicle with a common reference to be used for cooperative motion. We prove a separation principle that decomposes formation stability into two components: stability of the achieved information flow for the given graph and stability of an individual vehicle for the given controller. The information flow can thus be rendered highly robust to changes in the graph, enabling tight formation control despite limitations in intervehicle communication capability.
The feasibility problem of achieving a specified formation among a group of autonomous unicycles by local distributed control is studied. The directed graph defined by the information flow plays a key role. It is proved that formation stabilization to a point is feasible if and only if the sensor digraph has a globally reachable node. A similar result is given for formation stabilization to a line and to more general geometric arrangements.
 We present a stable control strategy for groups of vehicles to move and reconfigure cooperatively in response to a sensed, distributed environment. Each vehicle in the group serves as a mobile sensor and the vehicle network as a mobile and reconfigurable sensor array. Our control strategy decouples, in part, the cooperative management of the network formation from the network maneuvers. The underlying coordination framework uses virtual bodies and artificial potentials. We focus on gradient climbing missions in which the mobile sensor network seeks out local maxima or minima in the environmental field. The network can adapt its configuration in response to the sensed environment in order to optimize its gradient climb.
This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis for the algorithms are provided. Our analysis framework is based on tools from matrix theory, algebraic graph theory, and control theory. We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms. A brief introduction is provided on networked systems with nonlocal information flow that are considerably faster than distributed systems with lattice-type nearest neighbor interactions. Simulation results are presented that demonstrate the role of small-world effects on the speed of consensus algorithms and cooperative control of multivehicle formations.
 We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.
In a recent Physical Review Letters article, Vicsek et al. propose a simple but compelling discrete-time model of n autonomous agents (i.e., points or particles) all moving in the plane with the same speed but with different headings. Each agent's heading is updated using a local rule based on the average of its own heading plus the headings of its "neighbors." In their paper, Vicsek et al. provide simulation results which demonstrate that the nearest neighbor rule they are studying can cause all agents to eventually move in the same direction despite the absence of centralized coordination and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves. This paper provides a theoretical explanation for this observed behavior. In addition, convergence results are derived for several other similarly inspired models. The Vicsek model proves to be a graphic example of a switched linear system which is stable, but for which there does not exist a common quadratic Lyapunov function.
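For reference, here is a minimal noise-free sketch of the nearest-neighbor heading update studied above; the parameter values, the interaction radius, and the unit-vector averaging of angles are illustrative assumptions, and alignment is only expected when the interaction graph stays sufficiently connected, in line with the paper's conditions.

import numpy as np

rng = np.random.default_rng(3)
n, r, dt = 30, 1.0, 0.1
pos = rng.uniform(0, 5, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)

for _ in range(300):
    pos += dt * np.column_stack((np.cos(theta), np.sin(theta)))   # unit-speed motion
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    nbr = (dist < r).astype(float)                                # neighbors within radius r (incl. self)
    # new heading = average of own and neighbors' headings (angles averaged via unit vectors)
    theta = np.arctan2(nbr @ np.sin(theta), nbr @ np.cos(theta))

print(np.ptp(theta))   # heading spread; small when the flock has aligned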
 This paper addresses the consensus problem of multiagent systems with a time-invariant communication topology consisting of general linear node dynamics. A distributed observer-type consensus protocol based on relative output measurements is proposed. A new framework is introduced to address in a unified way the consensus of multiagent systems and the synchronization of complex networks. Under this framework, the consensus of multiagent systems with a communication topology having a spanning tree can be cast into the stability of a set of matrices of the same low dimension. The notion of consensus region is then introduced and analyzed. It is shown that there exists an observer-type protocol solving the consensus problem and meanwhile yielding an unbounded consensus region if and only if each agent is both stabilizable and detectable. A multistep consensus protocol design procedure is further presented. The consensus with respect to a time-varying state and the robustness of the consensus protocol to external disturbances are finally discussed. The effectiveness of the theoretical results is demonstrated through numerical simulations, with an application to low-Earth-orbit satellite formation flying.
 Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples.
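The sketch below illustrates the flavor of the distributed event-driven rule described above, using a simplified relative threshold rather than the paper's exact condition: the control u_i = -sum_j a_ij (xhat_i - xhat_j) is computed from the last broadcast states, and agent i rebroadcasts only when its measurement error exceeds a fraction sigma of its local disagreement. The graph, gains, and threshold are illustrative assumptions.

import numpy as np

n, sigma, dt = 4, 0.3, 0.01
Adj = np.zeros((n, n))
for i in range(n - 1):                       # connected path graph
    Adj[i, i + 1] = Adj[i + 1, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj

x = np.array([2.0, -1.0, 0.5, 3.0])          # true states
xhat = x.copy()                              # last broadcast states
events = 0
for _ in range(4000):
    u = -L @ xhat                            # control uses broadcast values only
    x = x + dt * u
    disagreement = np.abs(L @ xhat)          # each agent's locally measurable disagreement
    trigger = np.abs(x - xhat) > sigma * disagreement
    xhat[trigger] = x[trigger]               # event: broadcast the current state
    events += trigger.sum()

print(np.ptp(x), "consensus spread;", events, "broadcasts instead of", 4000 * n)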
 This paper presents control and coordination algorithms for groups of vehicles. The focus is on autonomous vehicle networks performing distributed sensing tasks, where each vehicle plays the role of a mobile tunable sensor. The paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies. The resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.
 In this note, we discuss finite-time state consensus problems for multi-agent systems and present one framework for constructing effective distributed protocols, which are continuous state feedbacks. By employing the theory of finite-time stability, we investigate both the bidirectional interaction case and the unidirectional interaction case, and prove that if the sum of time intervals, in which the interaction topology is connected, is sufficiently large, the proposed protocols will solve the finite-time consensus problems.
 Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives due to the fact that it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly alleviating utilization of communication and computation resources. This paper aims to provide an overview of recent advances in event-triggered consensus of MASs. First, a basic framework of multiagent event-triggered operational mechanisms is established. Second, representative results and methodologies reported in the literature are reviewed and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes. Third, two examples are outlined to show applicability of event-triggered consensus in power sharing of microgrids and formation control of multirobot systems, respectively. Finally, some challenging issues on event-triggered consensus are proposed for future research.
 The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent co-ordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks. Our approach includes both the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
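A minimal sketch of the distributed dual averaging update analyzed above: dual variables are mixed with a doubly stochastic matrix P and accumulate local subgradients, and the primal iterate is recovered through the proximal projection, here the Euclidean case x_i = Proj_X(-alpha(t) z_i). The ring network, the absolute-value objectives, and the step-size schedule are illustrative assumptions.

import numpy as np

n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 10.0])     # f_i(x) = |x - c_i|; the optimum is the median, 3
P = np.zeros((n, n))
for i in range(n):                            # doubly stochastic weights on a ring
    P[i, i] = 1/3
    P[i, (i - 1) % n] = 1/3
    P[i, (i + 1) % n] = 1/3

z = np.zeros(n)                               # dual (averaged subgradient) variables
x = np.zeros(n)                               # primal iterates
xbar = np.zeros(n)                            # running averages (the quantity with guarantees)
for t in range(1, 20001):
    g = np.sign(x - c)                        # local subgradients at the current primal points
    z = P @ z + g                             # mix dual variables, then accumulate subgradients
    x = np.clip(-(1.0 / np.sqrt(t)) * z, -10, 10)   # Euclidean prox onto the box X = [-10, 10]
    xbar += (x - xbar) / t

print(xbar)                                   # entries approach the median of c (= 3)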
Abstract Mirroring the complex structures and diverse functions of natural organisms is a long-standing challenge in robotics1–4. Modern fabrication techniques have greatly expanded the feasible hardware5–8, but using these systems requires control software to translate the desired motions into actuator commands. Conventional robots can easily be modelled as rigid links connected by joints, but it remains an open challenge to model and control biologically inspired robots that are often soft or made of several materials, lack sensing capabilities and may change their material properties with use9–12. Here, we introduce a method that uses deep neural networks to map a video stream of a robot to its visuomotor Jacobian field (the sensitivity of all 3D points to the robot’s actuators). Our method enables the control of robots from only a single camera, makes no assumptions about the robots’ materials, actuation or sensing, and is trained without expert intervention by observing the execution of random commands. We demonstrate our method on a diverse set of robot manipulators that vary in actuation, materials, fabrication and cost. Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot. Because it enables robot control using a generic camera as the only sensor, we anticipate that our work will broaden the design space of robotic systems and serve as a starting point for lowering the barrier to robotic automation.
 This study addresses the heterogeneous formation control problem for cooperative unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) operating under input quantization constraints. A unified mathematical framework is developed to harmonize the distinct dynamic models of UAVs and USVs in the horizontal plane. The proposed control architecture adopts a hierarchical design, decomposing the system into kinematic and dynamic subsystems. At the kinematic level, an artificial potential field method is implemented to ensure collision avoidance between vehicles and obstacles. The dynamic subsystem incorporates neural network-based estimation to compensate for system uncertainties and unknown parameters. To address communication constraints, a linear quantization model is introduced for control input processing. Additionally, adaptive control laws are formulated in the vertical plane to achieve precise altitude tracking. The overall system stability is rigorously analyzed using input-to-state stability theory. Finally, numerical simulations demonstrate the effectiveness of the proposed control strategy in achieving coordinated formation control.
Abstract This article investigates observer‐based prescribed‐time control for leader‐follower consensus issues in linear multiagent systems. Observers for each agent are designed based on an augmented system that enables simultaneous observation of both the system's state and disturbances. The prescribed‐time stabilization problem is addressed using observers that employ periodic delay feedback. Furthermore, a prescribed‐time control protocol based on observers is introduced within a predetermined time frame, independent of system parameters and initial conditions. By employing concepts from algebraic graph theory, linear systems theory, Lyapunov stability, and matrix theory, we establish sufficient conditions for the selected parameters to achieve prescribed‐time consensus. Simulation experiments confirm the validity of the proposed methodologies for a second‐order unmanned aerial vehicle (UAV) formation consensus.
 In time-sensitive aerial missions such as urban surveillance, emergency response, and adversarial airspace operations, achieving rapid and reliable formation control of multi-UAV systems is crucial. This paper addresses the challenge of ensuring robust and efficient formation control under stringent time constraints. The proposed singularity-free prescribed-time formation (PTF) control scheme guarantees task completion within a user-defined time, independent of initial conditions and control parameters. Unlike existing scaling-based prescribed-time methods plagued by unbounded gains and fixed-time strategies with non-tunable convergence bounds, the proposed scheme uses fixed-time stability theory and systematic parameter tuning to avoid singularity issues while ensuring robustness and predictable convergence. The method also accommodates directed communication topologies and unknown external disturbances, allowing follower UAVs to track a dynamic leader and maintain the desired geometric formation. Finally, some simulation results demonstrate the effectiveness of the proposed control strategy, showcasing its superiority over existing methods and validating its potential for practical applications.
This paper studies the robust H∞ time-varying formation tracking (TVFT) problem for heterogeneous nonlinear multi-agent systems (MASs) with parameter uncertainties, external disturbances, and unknown leader inputs. The objective is to ensure that follower agents track the leader’s trajectory while achieving a desired time-varying formation, even under unmodeled dynamics and disturbances. Unlike existing methods that rely on global topology information or homogeneous system assumptions, a fully distributed adaptive control protocol is proposed that requires no global topology information and integrates nonlinear compensation terms to handle unknown leader inputs and parameter uncertainties. Based on Lyapunov theory and the Laplacian matrix, a robust H∞ TVFT criterion is developed. Finally, a numerical example is given to verify the theory.
Abstract Investigation is conducted for the mean square scaled consensus issue for high‐order stochastic multi‐agent systems (SMASs) in the presence of noise. Scaled consensus indicates that each agent asymptotically converges to a specified constant in a certain proportion rather than converging to a common value. To cope with the Laplacian matrix asymmetry problem caused by directed graphs, the concept of generalized algebraic connectivity is introduced to characterize the capability of directed networks for the consensus issue of SMASs. A distributed stochastic approximation dynamic output feedback controller is proposed to obtain the convergence for high‐order SMASs by the stochastic Lyapunov method and Itô's formula. In addition, the convergence rate for high‐order SMASs with communication noise is addressed, in which the convergence rate for our underlying system has a close relationship with the stochastic approximation time‐varying gain. Lastly, a simulation example is applied to demonstrate the validity of the results.
 ABSTRACT This paper tackles the consensus tracking issue for nonlinear strict‐feedback multi‐agent systems with unmeasurable states under constrained communication ranges. The primary challenge is maintaining connectivity between agents under limited and time‐varying communication conditions. To tackle this, a general time‐varying communication constraint model is explicitly formulated using continuous functions, and a comprehensive design framework is proposed. Building on this, a distributed dynamic connectivity‐preserving controller is designed to ensure that agents maintain connectivity throughout the time domain under time‐varying communication constraints. Furthermore, a fuzzy logic system is used to approximate the unknown internal dynamics, and an adaptive fuzzy observer is introduced to estimate the unmeasurable states. The uncertainty in system parameters induced by input hysteresis is tackled by designing a convergent adaptive law to estimate and compensate for the uncertain parameters. Stability analysis confirms that the proposed observer and controller can effectively ensure that all signals within the closed‐loop system are bounded. Finally, a simulation case is provided to validate the feasibility of the proposed control approach.
 Abstract Human-swarm interaction (HSI) explores how humans engage with distributed collective systems, aiming to incorporate human cognition into scalable and robust robotic swarms. While most HSI research focuses on remote teleoperation via engineered interfaces, real-world integration of swarms into everyday tasks requires natural, embodied interactions in shared physical spaces. To address the limitations of traditional teleoperation studies, and the high resource demands of using physical robot swarms for HSI research, we introduce CoBe XR, a spatial augmented reality system that projects virtual swarms into the physical environment of the human operator. CoBe XR enables real-time, fine-grained, natural interaction between humans and swarm-like agents through full-body movement without dedicated control interfaces or prior training. As a proof-of-concept, we present a behavioral study involving 40 participants who influenced swarm behavior solely through walking. Our results show that human participants were able to adapt to the collective dynamics of the swarm and control it through natural perception-motion control in a shared physical space. We argue that similar extended reality systems can not only reveal how humans perceive and adapt to collective dynamics, but they offer a general platform to understand human behavior or an intermediate solution to design embodied robot swarms.
 ABSTRACT This article explores a novel design of a resilient interaction algorithm for multiagent systems (MAS) based on an event‐triggered mechanism, focusing on distributed optimization in the context of False Data Injection Attack (FDIA). A network‐level defense strategy is used based on a virtual system framework, where virtual state variables are introduced to ensure that the local estimate of each agent converges to the optimal solution of the distributed optimization problem, even under unknown FDIA. The article further introduces an event‐triggered strategy that significantly reduces communication overhead, and proper selection criteria are given for picking suitable event‐triggered parameters therein. It is proved that the proposed algorithm also avoids the Zeno behavior. Additionally, a distributed detection method is designed to accurately identify and isolate compromised links, thereby further enhancing the system's resilience. Two numerical simulations are conducted to illustrate the performance of the proposed algorithm, and it is demonstrated that the algorithm can also maintain effectiveness for networks with relatively large‐scale sizes.
Abstract This paper mainly studies the distributed online optimization problem with aggregation variables and time‐varying convex constraints. All agents collaborate to minimize the sum of local convex functions, where each local cost function is accessed by only one agent and has two variables: the decision of the agent and the aggregation variable of all agents' decisions. Different from most existing works, where the constraint sets are assumed to be time-invariant, this paper considers the situation of time‐varying constraints by introducing the one‐way Hausdorff distance. First, a new projected aggregation tracking algorithm is proposed, which can guarantee that the decisions of all agents stay in the time‐varying constraint sets at all times. Second, it is proved that the dynamic regret has a sublinear upper bound. Finally, numerical experiments are conducted to validate the effectiveness of the proposed algorithm.
ABSTRACT In this paper, event-triggered affine formation control for linear multi-agent systems over a directed communication topology is addressed. The dynamics of each agent are described by complex-valued differential equations. A distributed event-triggered strategy is designed to achieve global convergence to the target affine formation while effectively avoiding continuous controller updates. Moreover, the control parameters of the strategy depend on information from the complex-valued Laplacian, which avoids the use of global information. The results show that any desired affine formation can be achieved. Furthermore, the event-triggering time sequence of every agent is free of Zeno behavior. Finally, a numerical simulation example illustrates the effectiveness of the obtained theoretical results.
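As a minimal illustration of formation control with complex-valued agent states, the toy below runs displacement-based consensus on formation errors in the complex plane; it uses plain real weights and periodic updates, so it only conveys the complex-state setting, not the paper's complex-Laplacian, event-triggered affine design. The target shape, graph, and step size are assumptions.

```python
# Displacement-based formation of four agents whose positions are complex numbers.
import numpy as np

target = np.array([0, 1, 1 + 1j, 1j])        # desired unit square in the complex plane
A = np.array([[0, 1, 0, 1],                  # undirected ring adjacency (assumed)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)

rng = np.random.default_rng(2)
z = rng.normal(size=4) + 1j * rng.normal(size=4)   # random complex initial positions
dt = 0.05
for _ in range(500):
    err = z - target                               # formation reached when err is common to all
    dz = np.array([(A[i] * (err - err[i])).sum() for i in range(4)])
    z = z + dt * dz
# z now reproduces the square up to a common translation (the agents agree on err).
```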
This paper studies the problem of time-varying formation-tracking control for a class of nonlinear multi-agent systems. A distributed adaptive controller that avoids dependence on the global non-zero minimum eigenvalue is designed for heterogeneous systems in which leaders and followers contain different nonlinear terms, and which relies only on the relative errors between adjacent agents. By adopting the Riccati inequality method, an adaptive adjustment factor in the controller is designed so that the relative errors are adjusted automatically using only local information. Unlike existing research on time-varying formations with fixed or switching topologies, jointly connected topological graphs are adopted to enable nonlinear followers to track the trajectories of leaders with different nonlinear terms while achieving the desired time-varying formation. The stability of the system under the jointly connected graph is proved via a Lyapunov argument. Finally, numerical simulation experiments confirm the effectiveness of the proposed control method.
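A minimal single-integrator sketch of time-varying formation tracking is given below: followers hold rotating offsets around a slowly moving leader using a fixed graph and a constant gain, unlike the paper's adaptive controller over jointly connected topologies. The graph, gains, and trajectories are assumptions for illustration.

```python
# Toy time-varying formation tracking: followers keep offsets h_i(t) around a leader.
import numpy as np

n, dt, K = 4, 0.01, 5.0
A = np.array([[0, 1, 0, 1],       # follower adjacency (assumed undirected ring)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
b = np.array([1.0, 0.0, 1.0, 0.0])    # followers 0 and 2 hear the leader directly

def leader(t):                        # slowly moving leader trajectory
    return np.array([0.2 * t, np.sin(0.2 * t)])

def offsets(t):                       # rotating formation offsets h_i(t)
    ang = 0.1 * t + np.arange(n) * np.pi / 2
    return np.stack([np.cos(ang), np.sin(ang)], axis=1)

x = np.random.default_rng(1).normal(size=(n, 2))
for k in range(4000):
    t = k * dt
    xi = x - offsets(t)               # formation-centered coordinates
    u = np.zeros_like(x)
    for i in range(n):
        u[i] = K * (A[i] @ (xi - xi[i]) + b[i] * (leader(t) - xi[i]))
    x = x + dt * u
# After the run, x[i] stays close to leader(t) + h_i(t), up to a small tracking lag.
```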
For the formation reconfiguration of fixed-wing unmanned aerial vehicles (UAVs), a hierarchical control decision-making method that accounts for both convergence and optimality is studied. To begin with, the dynamic model of the fixed-wing UAVs is established, and the formation reconfiguration control problem is formally formulated. Subsequently, based on information such as the initial positions of the UAVs and the expected geometric configuration, an integer programming problem is formulated to determine the destinations of the UAVs. With these preparations in place, and by incorporating the concept of hierarchical games, the formation guidance and control problem is reformulated as a multiplayer Stackelberg–Nash game (SNG). Through rigorous analysis, the optimality of using the Stackelberg–Nash equilibrium solution as the UAV control commands is demonstrated. Furthermore, a novel policy iteration (PI) algorithm, based on fixed-point iteration, is proposed for solving this equilibrium. To guarantee accurate execution of the control commands, an auxiliary control system is designed, forming a closed-loop real-time control decision-making mechanism. Numerical simulation results illustrate that the UAVs can rapidly switch to the desired formation configuration, validating the effectiveness of the proposed method.
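The destination-assignment step can be pictured as follows: given current UAV positions and the slots of the desired configuration, choose the pairing that minimizes total travel distance. The sketch uses a linear (Hungarian) assignment on pairwise distances as a stand-in for the paper's integer program; positions and slot coordinates are made up.

```python
# Assign UAVs to formation slots by minimizing total travel distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

uav_pos = np.array([[0.0, 0.0], [10.0, 2.0], [3.0, 8.0], [7.0, 7.0]])
slots = np.array([[5.0, 5.0], [5.0, 10.0], [10.0, 5.0], [10.0, 10.0]])  # target formation

cost = np.linalg.norm(uav_pos[:, None, :] - slots[None, :, :], axis=2)  # distance matrix
rows, cols = linear_sum_assignment(cost)     # minimum-total-distance assignment
for u, s in zip(rows, cols):
    print(f"UAV {u} -> slot {s}, distance {cost[u, s]:.2f}")
```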
ABSTRACT In this paper, we address the problem of adaptive consensus tracking control for a class of incommensurate fractional-order switched nonlinear multi-agent systems with external disturbance. All the followers are described as arbitrarily switched heterogeneous fractional-order systems (FOSs); the difficulty posed by incommensurate fractional orders is overcome by exploiting the continuity of fractional differentiation, and the derivative order of the adaptation laws is not determined by the order of the systems. A distributed adaptive control scheme is proposed within the framework of backstepping control, a common Lyapunov function, and radial basis function neural networks (RBFNNs). With appropriately chosen parameters, all the signals of the multi-agent systems (MASs) are bounded, and the consensus tracking error converges to a small neighborhood of the origin. Finally, the effectiveness of the proposed control strategy is verified through a numerical simulation example.
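As a small illustration of the RBFNN ingredient named in the abstract, the sketch below fits a radial-basis-function approximator to an unknown nonlinearity by least squares; the centers, width, and target function are assumptions, and this is not the paper's adaptive law.

```python
# Radial-basis-function approximation of an unknown scalar nonlinearity.
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF feature matrix for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

centers = np.linspace(-3, 3, 15)          # assumed RBF centers
xs = np.linspace(-3, 3, 200)
Phi = rbf_features(xs, centers)
w, *_ = np.linalg.lstsq(Phi, xs * np.sin(xs), rcond=None)   # fit to f(x) = x*sin(x)
approx = Phi @ w                           # RBFNN approximation of the unknown term
```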
ABSTRACT The stochastic approximation-based control law has proved to be a powerful tool for robust distributed coordinated control of multi-agent systems (MASs) with uncertain disturbances over fixed or balanced time-varying networks. However, proving its effectiveness over unbalanced time-varying networks has remained unresolved. The main contribution of this paper is to solve this problem for two typical coordinated control problems using a time-varying quadratic Lyapunov function. First, stochastic approximation for the consensus problem of discrete-time single-integrator MASs with additive noises is studied. We establish weak consensus as well as mean-square and almost-sure consensus under the assumption that the time-varying network is uniformly strongly connected (USC), by adopting the stochastic approximation-based consensus protocol, and we quantify the convergence rate of the weak consensus. Second, as an application of the consensus results, stochastic approximation for the formation control problem of MASs with relative-position information in the plane is studied. We show that the stochastic approximation-based formation control law achieves the desired formation for MASs whenever the network is USC. Finally, numerical simulations verify the correctness of the conclusions.
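A minimal sketch of a stochastic-approximation consensus step with additive noise is shown below: each agent averages noisy neighbor measurements with a decreasing gain a(k) = 1/(k+1), so the accumulated noise is square-summable. The fixed complete graph used here is a simplification of the paper's unbalanced, uniformly strongly connected time-varying setting.

```python
# Stochastic-approximation consensus with additive measurement noise.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # complete graph, row-stochastic weights
x = rng.normal(size=n) * 5.0                  # initial states

for k in range(5000):
    a_k = 1.0 / (k + 1)                       # gains: sum a_k diverges, sum a_k^2 converges
    noisy = x[None, :] + rng.normal(scale=0.5, size=(n, n))   # agent i hears x_j + noise
    x = x + a_k * ((A * noisy).sum(axis=1) - x)               # stochastic approximation update

# The states converge to a common random value near the initial average,
# since the decreasing gain averages the additive noise out.
```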
 This paper introduces a comprehensive software platform designed to coordinate self-organizing UAV swarms through a secure and modular client-server system. Developed with multi-user collaboration in mind, the platform features an intuitive, cross-platform interface that allows users to define mission tasks, construct navigation graphs, and monitor swarm activity in real time. At the heart of the system is a robust path-planning algorithm based on the rotor-router model with loop reversibility, which enables reliable and evenly distributed task coverage without relying on randomness. To enhance fault tolerance and ensure resilience in communication-limited environments, the platform employs a gossip-based broadcast algorithm. This allows swarm members to share information efficiently and maintain coordinated behaviour, even when some nodes experience failures or connectivity issues. A built-in simulation module enables users to test and refine swarm coordination strategies before deployment, reducing operational risk and improving mission reliability. By simulating various environmental conditions and mission scenarios, users can evaluate system behaviour and optimize task execution. In parallel, the platform supports real-time 3D panorama generation from UAV-captured images, providing rich visual context and enabling more effective post-mission analysis. Taken together, these features form a scalable, secure, and highly flexible system for managing decentralized drone swarms. The platform is well-suited for applications that demand coordination across multiple agents, including environmental monitoring, search and rescue, infrastructure inspection, and autonomous exploration. It bridges theoretical rigor with practical usability, offering a reliable toolset for both researchers and mission operators. Our work builds on earlier systems, introducing hybrid rotor-router initialization, algorithmic no-fly zone enforcement, and dual-toolchain image stitching.
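The rotor-router idea at the heart of the path planner can be sketched in a few lines: each node deterministically cycles through its outgoing edges, so repeated walks cover the graph evenly without randomness. The tiny navigation graph below is an assumption, and the platform's loop-reversibility and no-fly-zone logic are not reproduced.

```python
# Minimal rotor-router (Propp machine) walk on a small navigation graph.
from collections import defaultdict

graph = {                     # assumed navigation graph (adjacency lists)
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
rotor = {v: 0 for v in graph}            # next-edge pointer at each node
visits = defaultdict(int)

node = "A"
for _ in range(40):                      # one agent walking the rotor-router
    visits[node] += 1
    nxt = graph[node][rotor[node]]       # follow the edge the rotor points to
    rotor[node] = (rotor[node] + 1) % len(graph[node])   # advance the rotor
    node = nxt

print(dict(visits))   # visit counts equalize across nodes as the walk continues
```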
 The rapid advancement of unmanned aerial vehicle (UAV) technology has enabled the coordinated operation of multi-UAV systems, offering significant applications in agriculture, logistics, environmental monitoring, and disaster relief. In agriculture, UAVs are widely utilized for tasks such as ecological restoration, crop monitoring, and fertilization, providing efficient and cost-effective solutions for improved productivity and sustainability. This study addresses the collaborative task allocation problem for multi-UAV systems, using ecological grassland restoration as a case study. A multi-objective, multi-constraint collaborative task allocation problem (MOMCCTAP) model was developed, incorporating constraints such as UAV collaboration, task completion priorities, and maximum range restrictions. The optimization objectives include minimizing the maximum task completion time for any UAV and minimizing the total time for all UAVs. To solve this model, a deep reinforcement learning-based seagull optimization algorithm (DRL-SOA) is proposed, which integrates deep reinforcement learning with the seagull optimization algorithm (SOA) for adaptive optimization. The algorithm improves both global and local search capabilities by optimizing key phases of seagull migration, attack, and post-attack refinement. Evaluation against five advanced swarm intelligence algorithms demonstrates that the DRL-SOA outperforms the alternatives in convergence speed and solution diversity, validating its efficacy for solving the MOMCCTAP.
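To make the two optimization objectives concrete, the sketch below evaluates the makespan (longest single-UAV completion time) and the total time of a task-to-UAV assignment, with a greedy least-loaded baseline standing in for the DRL-SOA solver; task durations and fleet size are made-up inputs.

```python
# Evaluate the two MOMCCTAP-style objectives for a task-to-UAV assignment.
import numpy as np

rng = np.random.default_rng(3)
task_time = rng.uniform(1.0, 5.0, size=20)     # per-task service times (assumed)
n_uav = 4

def objectives(assignment):
    loads = np.array([task_time[assignment == u].sum() for u in range(n_uav)])
    return loads.max(), loads.sum()             # (makespan, total time)

# Greedy longest-processing-time baseline: give each task to the least-loaded UAV.
order = np.argsort(-task_time)
assignment = np.empty(len(task_time), dtype=int)
loads = np.zeros(n_uav)
for t in order:
    u = int(np.argmin(loads))
    assignment[t] = u
    loads[u] += task_time[t]

print(objectives(assignment))
```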
Practical large-scale multiple unmanned aerial vehicle (multi-UAV) networks are susceptible to multiple potential points of vulnerability, such as hardware failures or adversarial attacks. Existing resilient multi-dimensional coordination control algorithms for multi-UAV networks are computationally costly in computing a safe point and rely on an assumption about the maximum number of adversarial nodes in the network or neighborhood. In this paper, a dynamic trusted convex hull method is proposed to filter received states in multi-dimensional space without requiring assumptions about the maximum number of adversaries. Based on the proposed method, a distributed local control protocol is designed with lower computational complexity and higher tolerance of adversarial nodes. Necessary and sufficient graph-theoretic conditions are obtained to achieve resilient multi-dimensional consensus and containment control despite the adversarial nodes' behaviors. The theoretical results are validated through simulations.
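In the same spirit as filtering received states before averaging, the sketch below applies a coordinate-wise trimmed-mean (W-MSR-style) update that discards the most extreme neighbor values. Note that this stand-in still needs a bound F on the number of adversaries, which is exactly the assumption the paper's dynamic trusted convex hull method avoids; all values are illustrative.

```python
# Coordinate-wise trimmed-mean resilient update (a generic stand-in).
import numpy as np

def resilient_update(own, neighbor_states, F=1, step=0.5):
    """Move toward a trimmed mean of neighbor states, coordinate-wise."""
    target = np.empty_like(own)
    for d in range(own.shape[0]):
        vals = np.sort(neighbor_states[:, d])
        kept = vals[F:len(vals) - F]              # drop the F lowest and F highest values
        target[d] = kept.mean() if len(kept) else own[d]
    return own + step * (target - own)

# Example: one adversarial neighbor broadcasts an outlier; it gets trimmed away.
own = np.array([0.0, 0.0])
neighbors = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [50.0, -50.0]])
print(resilient_update(own, neighbors, F=1))
```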