All published works (9)

Validating and controlling safety-critical systems in uncertain environments necessitates probabilistic reachable sets of future state evolutions. Existing methods for computing probabilistic reachable sets normally assume that the uncertainties are independent of the state. However, this assumption falls short in many real-world applications, where uncertainties are state-dependent, referred to as contextual uncertainties. This paper formulates the problem of computing probabilistic reachable sets of stochastic nonlinear states with contextual uncertainties by seeking minimum-volume polynomial sublevel sets with contextual chance constraints. The formulated problem cannot be solved by existing sample-based approximation methods, since those methods do not consider the conditional probability densities. To address this, we propose a consistent sample approximation of the original problem by leveraging conditional density estimation and resampling. The obtained approximate problem is a tractable optimization problem. Additionally, we prove the almost uniform convergence of the proposed sample-based approximation, showing that it gives optimal solutions almost consistent with those of the original problem. Through a numerical example, we evaluate the effectiveness of the proposed method against existing approaches, highlighting its capability to significantly reduce the bias inherent in sample-based approximations that ignore the conditional probability density.
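
The contrast drawn in this abstract — sample-based bounds that ignore versus exploit the conditional density of the uncertainty — can be illustrated with a small sketch. The toy one-step system, the Gaussian kernel weighting used as a stand-in for conditional density estimation and resampling, and the quantile-interval bound are illustrative assumptions, not the paper's minimum-volume polynomial sublevel-set formulation.

```python
# Hedged sketch (not the paper's algorithm): contextual vs. non-contextual
# sample-based reachable bounds for one step of a toy system
#   x_next = sin(x) + w,  with state-dependent noise w | x ~ N(0.5*x, (0.1+0.1|x|)^2).
# Kernel-weighted resampling stands in for conditional density estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1.0, 1.0, n)
w = 0.5 * x + (0.1 + 0.1 * np.abs(x)) * rng.standard_normal(n)

x_query, alpha, h = 0.8, 0.95, 0.05

# (a) Non-contextual: pool all disturbance samples, ignoring the current state.
x_next_pooled = np.sin(x_query) + w
lo_a, hi_a = np.quantile(x_next_pooled, [(1 - alpha) / 2, (1 + alpha) / 2])

# (b) Contextual: resample w with kernel weights concentrated near x_query,
#     approximating draws from the conditional density p(w | x = x_query).
weights = np.exp(-0.5 * ((x - x_query) / h) ** 2)
weights /= weights.sum()
w_cond = rng.choice(w, size=n, p=weights)
x_next_cond = np.sin(x_query) + w_cond
lo_b, hi_b = np.quantile(x_next_cond, [(1 - alpha) / 2, (1 + alpha) / 2])

print(f"pooled 95% interval:      [{lo_a:.3f}, {hi_a:.3f}]")
print(f"conditional 95% interval: [{lo_b:.3f}, {hi_b:.3f}]")
```

With these toy numbers the pooled interval comes out wider and mis-centred at the queried state, which is the bias the contextual formulation targets.
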
Electric-powered wheelchairs play a vital role in ensuring accessibility for individuals with mobility impairments. The design of controllers for tracking tasks must prioritize the safety of wheelchair operation across various scenarios and for a diverse range of users. In this study, we propose a safety-oriented speed tracking control algorithm for wheelchair systems that accounts for external disturbances and uncertain parameters at the dynamic level. We employ a set-membership approach to estimate uncertain parameters online in deterministic sets. Additionally, we present a model predictive control scheme with real-time adaptation of the system model and controller parameters to ensure safety-related constraint satisfaction during the tracking process. This proposed controller effectively guides the wheelchair speed toward the desired reference while maintaining safety constraints. In cases where the reference is inadmissible and violates constraints, the controller can navigate the system to the vicinity of the nearest admissible reference. The efficiency of the proposed control scheme is demonstrated through high-fidelity speed tracking results from two tasks involving both admissible and inadmissible references.
Water distribution systems (WDSs) are typically designed with a conservative estimate of the ability of a control system to utilize the available infrastructure. The controller is designed and tuned after a WDS has been laid out, a methodology that may introduce unnecessary conservativeness in both system design and control, adversely impacting operational efficiency and increasing economic costs. To address these limitations, we introduce a method to simultaneously design infrastructure and develop control parameters, the co-design problem, with the aim of improving the overall efficiency of the system. Nevertheless, the co-design of a WDS is a challenging task given the presence of stochastic variables (e.g., water demands and electricity prices). In this paper, we propose a tractable stochastic co-design method to determine the best tank size and optimal control parameters for a WDS, where the expected operating costs are established based on Markov chain theory. We also give a theoretical result showing that the average long-run operating cost converges to the expected operating cost with probability 1. Furthermore, this method is not only applicable to greenfield projects for the co-design of WDSs but can also be utilized to improve the operations of existing WDSs in brownfield projects. The effectiveness and applicability of the co-design method are validated through three illustrative examples and a real-world case study in South Australia.
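
A minimal sketch of the Markov-chain cost computation mentioned above: the expected operating cost under a stationary distribution, checked against a long simulated time average (the "with probability 1" convergence referred to in the abstract). The three-mode chain, its transition matrix, and the per-step costs are invented toy data, not a water-network model.

```python
# Hedged sketch: expected long-run operating cost of a finite Markov chain,
# computed from its stationary distribution, with an empirical check that the
# time-average cost converges to it. All numbers are arbitrary toy data.
import numpy as np

P = np.array([[0.90, 0.10, 0.00],    # transition matrix over 3 operating modes
              [0.05, 0.85, 0.10],
              [0.10, 0.20, 0.70]])
c = np.array([1.0, 2.5, 4.0])        # per-step operating cost in each mode

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
expected_cost = pi @ c

# Simulate the chain and compare the running average cost with the expectation.
rng = np.random.default_rng(1)
state, total, T = 0, 0.0, 200_000
for t in range(T):
    total += c[state]
    state = rng.choice(3, p=P[state])

print(f"expected cost (pi @ c):   {expected_cost:.4f}")
print(f"long-run average (T={T}): {total / T:.4f}")
```
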
Constraint handling during tracking operations is at the core of many real-world control implementations and is well understood when dynamic models of the underlying system exist, yet becomes more challenging when data-driven models are used to describe the nonlinear system at hand. We seek to combine the nonlinear modeling capabilities of a wide class of neural networks with the constraint-handling guarantees of model predictive control (MPC) in a rigorous and online computationally tractable framework. The class of networks considered can be captured using Koopman operators and is integrated into a Koopman-based tracking MPC (KTMPC) for nonlinear systems to track piecewise constant references. The effect of model mismatch between the original nonlinear dynamics and the trained Koopman linear model is handled by a constraint tightening approach in the proposed tracking MPC strategy. By choosing two Lyapunov functions, we prove that the solution is recursively feasible and input-to-state stable with respect to a neighborhood of both the online and offline optimal reachable steady outputs in the presence of bounded modeling errors, under mild assumptions. Finally, we demonstrate the results on a numerical example before applying the proposed approach to the problem of reference tracking by an autonomous ground vehicle.
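
As a rough illustration of the modelling layer this abstract builds on, the sketch below fits a linear lifted ("Koopman-style") predictor by least squares over a hand-picked dictionary of observables (EDMD with inputs). The toy dynamics, the dictionary, and the plain pseudoinverse regression are illustrative assumptions; the paper works with neural-network liftings inside a constraint-tightened tracking MPC.

```python
# Hedged sketch: learning a Koopman-style linear predictor z+ ≈ A z + B u by
# lifting states through a fixed dictionary and solving a least-squares problem.
# Dictionary, toy dynamics, and regression are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):                                  # toy nonlinear dynamics
    return np.array([x[1], -0.5 * np.sin(x[0]) + u])

def lift(x):                                  # dictionary of observables
    return np.array([x[0], x[1], np.sin(x[0]), x[0] * x[1]])

# Collect one-step transition data (explicit Euler discretization).
X, U, Xp = [], [], []
for _ in range(2000):
    x = rng.uniform(-2, 2, 2)
    u = rng.uniform(-1, 1)
    X.append(x); U.append(u); Xp.append(x + 0.1 * f(x, u))

Z  = np.array([lift(x) for x in X]).T         # lifted states, shape (nz, N)
Zp = np.array([lift(x) for x in Xp]).T
U  = np.array(U).reshape(1, -1)

# Solve [A B] = Zp [Z; U]^+ in the least-squares sense.
ZU = np.vstack([Z, U])
AB = Zp @ np.linalg.pinv(ZU)
A, B = AB[:, :Z.shape[0]], AB[:, Z.shape[0]:]

# One-step prediction of the lifted model on a fresh point.
x, u = rng.uniform(-2, 2, 2), 0.3
z_pred = A @ lift(x) + (B @ [u]).ravel()
print("true next lifted state:", lift(x + 0.1 * f(x, u)))
print("Koopman prediction:    ", z_pred)
```
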
The estimation of the reachable set for singular systems with distributed delay and a nonlinear term under a zero initial condition is studied in this article. Based on a Lyapunov functional with triple integrals, a sufficient condition for bounding the reachable set is established by combining the free-weighting matrix method, the integral mean value theorem, and a new inequality scaling method. The reachable set of the nonlinear singular system is then bounded by the prescribed ellipsoids. Finally, two numerical examples and a practical example verify the validity of the theoretical analysis.
Optimizing pump operations is a challenging task for the real-time management of water distribution systems (WDS). With suitable pump scheduling, pumping costs can be significantly reduced. In this research, a novel economic model predictive control (EMPC) framework for real-time management of WDS is proposed. Optimal pump operations are selected based on the predicted system behavior over a receding time horizon with the aim of minimizing the total pumping energy cost. Time-varying electricity tariffs are considered while all the required water demands are satisfied. The novelty of this framework is to choose the number of pumps to operate in each pump station as the decision variables in order to optimize the total pumping energy costs. Using integer programming, the proposed EMPC is applied to a benchmark case study, the Richmond Pruned network. The simulation is implemented with the EPANET hydraulic simulator. Moreover, a comparison of the results obtained using the proposed EMPC with those obtained using trigger-level control demonstrates the significant economic benefits of the proposed EMPC.
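
To make the "number of pumps as integer decision variable" idea concrete, here is a deliberately tiny open-loop version: a one-tank mass balance with a time-varying tariff, solved by brute-force enumeration over pump counts. The tank model, tariffs, demands, and pump characteristics are toy assumptions standing in for the EPANET/Richmond setup and the receding-horizon integer program used in the paper.

```python
# Hedged sketch: toy open-loop pump scheduling with the integer "number of pumps
# on" per step as the decision variable. A single-tank mass balance replaces the
# hydraulic simulation; brute-force enumeration replaces integer programming.
import itertools
import numpy as np

horizon   = 6
tariff    = np.array([0.05, 0.05, 0.20, 0.20, 0.10, 0.05])  # cost per kWh per step
demand    = np.array([4.0, 5.0, 7.0, 8.0, 6.0, 4.0])        # m^3 withdrawn per step
flow_per_pump, energy_per_pump = 3.0, 10.0                   # m^3 and kWh per step
level0, level_min, level_max   = 20.0, 10.0, 40.0            # tank volume bounds (m^3)
max_pumps = 3

best_cost, best_plan = np.inf, None
for plan in itertools.product(range(max_pumps + 1), repeat=horizon):
    level, cost, feasible = level0, 0.0, True
    for k in range(horizon):
        level += flow_per_pump * plan[k] - demand[k]          # tank mass balance
        cost  += tariff[k] * energy_per_pump * plan[k]
        if not (level_min <= level <= level_max):
            feasible = False
            break
    if feasible and cost < best_cost:
        best_cost, best_plan = cost, plan

print("cheapest feasible pump schedule:", best_plan)
print(f"total pumping cost:             {best_cost:.2f}")
```
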
To provide robustness of distributed model predictive control (DMPC), this work proposes a robust DMPC formulation for discrete-time linear systems subject to unknown-but-bounded disturbances. Taking advantage of the structure of certain classes of distributed systems seen in applications with interagent coupling like vehicle platooning, a novel robust DMPC is formulated. The proposed approach is characterised by separable terminal costs and locally robust terminal sets, with the latter sets adaptively estimated in the online optimisation problem. A constraint tightening approach based on a set-membership approach is used to guarantee constraint satisfaction for coupled subsystems in the presence of disturbances. Under this formulation, the closed-loop system is shown to be recursively feasible and input-to-state stable. To aid in the deployment of the proposed robust DMPC, a possible synthesis method and design conditions for practical implementation are presented. Finally, simulation results with a mass-spring-damper system are provided to demonstrate the proposed robust DMPC.

Commonly Cited References

In distributed model predictive control (DMPC), where a centralized optimization problem is solved in distributed fashion using dual decomposition, it is important to keep the number of iterations in the solution algorithm small. In this technical note, we present a stopping condition for such distributed solution algorithms that is based on a novel adaptive constraint tightening approach. The stopping condition guarantees feasibility of the optimization problem, as well as stability and a prespecified performance of the closed-loop system.
We propose a distributed model predictive control scheme for linear time-invariant constrained systems that admit a separable structure. To exploit the merits of distributed computation algorithms, the terminal cost and invariant terminal set of the optimal control problem need to respect the coupling structure of the system. Existing methods to address this issue typically separate the synthesis of terminal controllers and costs from that of terminal sets, and do not explicitly consider the effect of the current and predicted system states on this synthesis process. These limitations can adversely affect performance due to small or even empty terminal sets. Here, we present a unified framework to encapsulate the synthesis of both the stabilizing terminal controller and invariant terminal set into the same optimization problem. Conditions for Lyapunov stability and invariance are imposed in the synthesis problem in a way that allows the terminal cost and invariant terminal set to admit the desired distributed structure. We illustrate the effectiveness of the proposed method on several numerical examples.
A solution to multivariate state-space modeling, forecasting, and smoothing is discussed. We allow for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The methodology is a general strategy for obtaining marginal posterior densities of coefficients in the model or of any of the unknown elements of the state space. Missing data problems (including the k-step ahead prediction problem) also are easily incorporated into this framework. We illustrate the broad applicability of our approach with two examples: a problem involving nonnormal error distributions in a linear model setting and a one-step ahead prediction problem in a situation where both the state and observational equations are nonlinear and involve unknown parameters.
We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem there is a private cost function and a private constraint set at each node. The goal is to minimize the sum of all the cost functions, constraining the solution to be in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice, convergence is observed even when these conditions are not met. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires fewer communications than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communicating is the most energy-consuming operation.
We investigate various properties of the sublevel set G = {x : g(x) ≤ 1} and the integration of h on this sublevel set when g and h are positively homogeneous functions (and in particular homogeneous polynomials). For instance, the latter integral reduces to integrating h exp(−g) on the whole space ℝⁿ (a non-Gaussian integral), and when g is a polynomial, the volume of G is a convex function of the coefficients of g. We also provide a numerical approximation scheme to compute the volume of G or integrate h on G (or, equivalently, to approximate the associated non-Gaussian integral). We also show that finding the sublevel set {x : g(x) ≤ 1} of minimum volume that contains some given subset K is a (hard) convex optimization problem, for which we also propose two convergent numerical schemes. Finally, we provide a Gaussian-like property of non-Gaussian integrals for homogeneous polynomials that are sums of squares and critical points of a specific function.
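
In the notation of this abstract, the reduction of the sublevel-set volume to a non-Gaussian integral can be written as the following identity, stated here for a positively homogeneous g of degree d (a hedged restatement; the precise statement and its consequences are in the cited work):

```latex
% Identity behind the "non-Gaussian integral" reduction described above,
% for g positively homogeneous of degree d on R^n.
\[
  \int_{\mathbb{R}^{n}} e^{-g(x)}\,\mathrm{d}x
  \;=\; \Gamma\!\left(1+\tfrac{n}{d}\right)
        \operatorname{vol}\!\left(\{\,x \in \mathbb{R}^{n} : g(x) \le 1\,\}\right).
\]
% Sanity check: for g(x) = ||x||^2 (d = 2) the left-hand side is pi^{n/2} and the
% right-hand side is Gamma(1 + n/2) times the volume of the unit Euclidean ball.
```
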
Measure, Integral and Probability is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace.
We propose a new method for solving chance constrained optimization problems that lies between robust optimization and scenario-based methods. Our method does not require prior knowledge of the underlying probability distribution as in robust optimization methods, nor is it based entirely on randomization as in the scenario approach. It instead involves solving a robust optimization problem with bounded uncertainty, where the uncertainty bounds are randomized and are computed using the scenario approach. To guarantee that the resulting robust problem is solvable we impose certain assumptions on the dependency of the constraint functions with respect to the uncertainty and show that tractability is ensured for a wide class of systems. Our results lead immediately to guidelines under which the proposed methodology or the scenario approach is preferable in terms of providing less conservative guarantees or reducing the computational cost.
We consider the problem of fitting given data (u₁, y₁), …, (uₘ, yₘ), where uᵢ ∈ ℝⁿ and yᵢ ∈ ℝ, with a convex polynomial f. A technique to solve this problem using sum of squares polynomials is presented. This technique is extended to enforce convexity of f only on a specified region. Also, an algorithm to fit the convex hull of a set of points with a convex sub-level set of a polynomial is presented. This problem is a natural extension of the problem of finding the minimum volume ellipsoid covering a set. The algorithm, like that for the minimum volume ellipsoid problem, has the property of being invariant to affine coordinate transformations. We generalize this technique to fit arbitrary unions and intersections of polynomial sub-level sets.
Motivated by the problem of setting prediction intervals in time series analysis, we suggest two new methods for conditional distribution estimation. The first method is based on locally fitting a logistic model and is in the spirit of recent work on locally parametric techniques in density estimation. It produces distribution estimators that may be of arbitrarily high order but nevertheless always lie between 0 and 1. The second method involves an adjusted form of the Nadaraya–Watson estimator. It preserves the bias and variance properties of a class of second-order estimators introduced by Yu and Jones but has the added advantage of always being a distribution itself. Our methods also have application outside the time series setting; for example, to quantile estimation for independent data. This problem motivated the work of Yu and Jones.
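
For reference, the unadjusted Nadaraya–Watson conditional distribution estimator that the second method modifies has a very compact sample form. The sketch below implements only this baseline, with a Gaussian kernel and made-up data; it is not the adjusted estimator or the local-logistic method proposed in the paper.

```python
# Hedged sketch: the basic Nadaraya–Watson estimator of a conditional
# distribution function F(y | x); bandwidth, kernel, and data are illustrative.
import numpy as np

def nw_conditional_cdf(x_query, y_grid, x_data, y_data, bandwidth):
    """F_hat(y | x) = sum_i K((x - x_i)/h) 1{y_i <= y} / sum_i K((x - x_i)/h)."""
    w = np.exp(-0.5 * ((x_data - x_query) / bandwidth) ** 2)   # Gaussian kernel weights
    w = w / w.sum()
    indicator = (y_data[None, :] <= y_grid[:, None]).astype(float)
    return indicator @ w                                        # one value per y in y_grid

# Toy data with state-dependent spread.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + 0.1 * (1 + x) * rng.standard_normal(2000)

y_grid = np.linspace(-2, 2, 5)
print(nw_conditional_cdf(0.25, y_grid, x, y, bandwidth=0.05))
```
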
Nonlinear chance constrained optimization (CCOPT) problems are known to be difficult to solve. This work proposes a smooth approximation approach consisting of an inner and an outer analytic approximation of chance constraints. In this way, CCOPT is approximated by two parametric nonlinear programming (NLP) problems which can be readily solved by an NLP solver. Any optimal solution of the inner approximation problem is a priori feasible to the CCOPT. The solutions of the inner and outer problems, respectively, converge asymptotically to the optimal solution of the CCOPT.
The scenario approach is a general methodology for data-driven optimization that has attracted a great deal of attention in the past few years. It prescribes that one collects a record of previous cases (scenarios) from the same setup in which optimization is being conducted and makes a decision that attains optimality for the seen cases. Scenario optimization is by now very well understood for convex problems, where a theory exists that rigorously certifies the generalization properties of the solution, that is, the ability of the solution to perform well in connection to new situations. This theory supports the scenario methodology and justifies its use. This paper considers nonconvex problems. While other contributions in the nonconvex setup already exist, we here take a major departure from previous approaches. We suggest that the generalization level is evaluated only after the solution is found and its complexity in terms of the length of a support subsample (a notion precisely introduced in this paper) is assessed. As a consequence, the generalization level is stochastic and adjusted case by case to the available scenarios. This fact is key to obtain tight results. The approach adopted in this paper applies not only to optimization, but also to generic decision problems where the solution is obtained according to a rule that is not necessarily the optimization of a cost function. Accordingly, in our presentation we adopt a general stance of which optimization is just seen as a particular case.
Weak Convergence in Metric Spaces. The Space C. The Space D. Dependent Variables. Other Modes of Convergence. Appendix. Some Notes on the Problems. Bibliographical Notes. Bibliography. Index.
The aim of this paper is twofold: In the first part, we leverage recent results on scenario design to develop randomized algorithms for approximating the image set of a nonlinear mapping, that is, a (possibly noisy) mapping of a set via a nonlinear function. We introduce minimum-volume approximations which have the characteristic of guaranteeing a low probability of violation, i.e., we admit for a probability that some points in the image set are not contained in the approximating set, but this probability is kept below a pre-specified threshold ε. In the second part of the paper, this idea is then exploited to develop a new family of randomized prediction-corrector filters. These filters represent a natural extension and rapprochement of Gaussian and set-valued filters, and bear similarities with modern tools such as particle filters.
In this article, we present a nonlinear robust model predictive control (MPC) framework for general (state and input dependent) disturbances. This approach uses an online constructed tube in order to tighten the nominal (state and input) constraints. To facilitate an efficient online implementation, the shape of the tube is based on an offline computed incremental Lyapunov function with a corresponding (nonlinear) incrementally stabilizing feedback. Crucially, the online optimization only implicitly includes these nonlinear functions in terms of scalar bounds, which enables an efficient implementation. Furthermore, to account for an efficient evaluation of the worst case disturbance, a simple function is constructed offline that upper bounds the possible disturbance realizations in a neighborhood of a given point of the open-loop trajectory. The resulting MPC scheme ensures robust constraint satisfaction and practical asymptotic stability with a moderate increase in the online computational demand compared to a nominal MPC. We demonstrate the applicability of the proposed framework in comparison to state-of-the-art robust MPC approaches with a nonlinear benchmark example.
We revisit the so-called sampling and discarding approach used to quantify the probability of constraint violation of a solution to convex scenario programs when some of the original samples are allowed to be discarded. Motivated by two scenario programs that possess analytic solutions and the fact that the existing bound for scenario programs with discarded constraints is not tight, we analyze a removal scheme that consists of a cascade of optimization problems, where, at each step, we remove a superset of the active constraints. By relying on results from compression learning theory, we show that such a removal scheme leads to less conservative bounds for the probability of constraint violation than the existing ones. We also show that the proposed bound is tight by characterizing a class of optimization problems which achieves the given upper bound. The performance improvement of the proposed methodology is illustrated by an example that involves a resource sharing linear program.
We address the problem of estimating the ratio of two probability density functions, which is often referred to as the importance. The importance values can be used for various succeeding tasks …
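
A naive alternative to direct importance estimation is to estimate the two densities separately and divide — exactly the plug-in approach that direct ratio estimators are designed to improve on. The sketch below shows only that baseline on made-up one-dimensional Gaussian data; the KDE bandwidths and the comparison against the closed-form ratio are illustrative.

```python
# Hedged sketch: naive plug-in estimate of the density ratio ("importance")
# r(x) = p_test(x) / p_train(x) via two kernel density estimates. Direct
# estimation methods avoid this two-step division, which degrades in higher
# dimensions; the toy data below are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x_train = rng.normal(0.0, 1.0, 2000)    # samples from the denominator density
x_test  = rng.normal(0.5, 0.8, 2000)    # samples from the numerator density

p_train = gaussian_kde(x_train)
p_test  = gaussian_kde(x_test)

grid = np.linspace(-2, 2, 5)
ratio_kde  = p_test(grid) / p_train(grid)
# Closed-form ratio of the two true Gaussians, for comparison.
ratio_true = (np.exp(-0.5 * ((grid - 0.5) / 0.8) ** 2) / 0.8) / np.exp(-0.5 * grid ** 2)
print("KDE ratio: ", np.round(ratio_kde, 3))
print("true ratio:", np.round(ratio_true, 3))
```
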
We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation and illustrate empirically that the reformulation can be directly used by standard nonlinear optimization solvers in the case of single chance constraints. Furthermore, we propose an Sℓ₁QP-type trust-region method to solve instances with joint chance constraints. We demonstrate the performance of the method on several problems and show that it scales well with the sample size and that the smoothing can be used to counteract the bias in the chance constraint approximation induced by the sample approximation.
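
The quantile reformulation at the heart of this approach is easy to state and to check on samples: requiring the (1 − α)-quantile of the constraint value to be nonpositive is equivalent to requiring the violation probability to stay below α. The sketch below verifies this on a toy scalar constraint; the smoothing that makes the quantile solver-friendly is not reproduced here.

```python
# Hedged sketch of the quantile reformulation:
#   P(g(x, xi) <= 0) >= 1 - alpha   <=>   Q_{1-alpha}[ g(x, xi) ] <= 0,
# with the quantile replaced by its sample estimate. Toy constraint only.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.10
xi = rng.standard_normal(5000)

def g(x):
    return x + 0.5 * xi - 1.0                    # sampled uncertain constraint values

for x in (0.0, 0.3, 0.4):
    q = np.quantile(g(x), 1 - alpha)             # empirical (1 - alpha)-quantile
    viol = np.mean(g(x) > 0)                     # empirical violation probability
    print(f"x={x:.1f}: 90%-quantile = {q:+.3f} "
          f"({'feasible' if q <= 0 else 'infeasible'}), "
          f"violation prob ≈ {viol:.3f}")
```
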
For systems with uncertain linear models, bounded additive disturbances and state and control constraints, a robust model predictive control (MPC) algorithm incorporating online model adaptation is proposed. Sets of model parameters are identified online and employed in a robust tube MPC strategy with a nominal cost. The algorithm is shown to be recursively feasible and input-to-state stable. Computational tractability is ensured by using polytopic sets of fixed complexity to bound parameter sets and predicted states. Convex conditions for persistence of excitation are derived and are related to probabilistic rates of convergence and asymptotic bounds on parameter set estimates. We discuss how to balance conflicting requirements on control signals for achieving good tracking performance and parameter set estimate accuracy. Conditions for convergence of the estimated parameter set are discussed for the case of fixed complexity parameter set estimates, inexact disturbance bounds, and noisy measurements.
We show that the Euclidean ball has the smallest volume among sublevel sets of nonnegative forms of bounded Bombieri norm as well as among sublevel sets of sum of squares forms whose Gram matrix has bounded Frobenius or nuclear (or, more generally, p-Schatten) norm. These volume-minimizing properties of the Euclidean ball with respect to its representation (as a sublevel set of a form of fixed even degree) complement its numerous intrinsic geometric properties. We also provide a probabilistic interpretation of the results.
The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l2-regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.
Probability theory is nowadays applied in a huge variety of fields including physics, engineering, biology, economics and the social sciences. This book is a modern, lively and rigorous account which has Doob's theory of martingales in discrete time as its main theme. It proves important results such as Kolmogorov's Strong Law of Large Numbers and the Three-Series Theorem by martingale techniques, and the Central Limit Theorem via the use of characteristic functions. A distinguishing feature is its determination to keep the probability flowing at a nice tempo. It achieves this by being selective rather than encyclopaedic, presenting only what is essential to understand the fundamentals; and it assumes certain key results from measure theory in the main text. These measure-theoretic results are proved in full in appendices, so that the book is completely self-contained. The book is written for students, not for researchers, and has evolved through several years of class testing. Exercises play a vital rôle. Interesting and challenging problems, some with hints, consolidate what has already been learnt, and provide motivation to discover more of the subject than can be covered in a single introduction.
This paper considers a stochastic control framework, in which the residual model uncertainty of a dynamical system is learned using a Gaussian Process (GP). In the proposed formulation, the residual model uncertainty consists of a nonlinear function and state-dependent noise. The proposed formulation uses a posterior-GP to approximate the residual model uncertainty and a prior-GP to account for state-dependent noise. The two GPs are interdependent and are thus learned jointly using an iterative algorithm. Theoretical properties of the iterative algorithm are established. Advantages of the state-dependent formulation include (i) faster convergence of the GP estimate to the unknown function as the GP learns which data samples are more trustworthy and (ii) an accurate estimate of state-dependent noise, which can, e.g., be useful for a controller or decision-maker to determine the uncertainty of an action. Simulation studies highlight these two advantages.
The probabilistic boundary is necessary for robust control design in uncertain dynamical systems. The problem of computing the tightest ellipsoidal boundary of the future trajectory with a given probability is intractable. This paper proposes a sample-based continuous approximation of the original problem. The approximate problem is solvable by a general nonlinear programming algorithm. We prove that the approximate problem's optimal solution and objective value converge to those of the original problem. The feasibility of the approximate solution obtained from finite samples is also investigated. A numerical example has been implemented to compare the proposed and existing methods. The results show that the proposed method increases the approximate solution's robustness and reduces the computational complexity of obtaining the approximate solution.
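
A crude sample-based ellipsoidal bound — empirical mean, empirical covariance, and a radius set from the empirical quantile of Mahalanobis distances — conveys the flavour of the problem, although it is not the tightest ellipsoid the paper optimizes for. Everything in the sketch (the toy dynamics, the 95% level, the sample size) is an illustrative assumption.

```python
# Hedged sketch (not the paper's optimization): a simple sample-based ellipsoidal
# bound for propagated states,
#   E = { x : (x - mu)^T S^{-1} (x - mu) <= r^2 },
# with r^2 chosen so that ~95% of the samples fall inside.
import numpy as np

rng = np.random.default_rng(0)

# Propagate a toy uncertain system one step from x0.
x0 = np.array([1.0, 0.0])
w = 0.05 * rng.standard_normal((5000, 2))
x_nom = np.array([x0[0] + 0.1 * x0[1], x0[1] - 0.1 * np.sin(x0[0])])
samples = x_nom + w

mu = samples.mean(axis=0)
S = np.cov(samples.T)
d2 = np.einsum("ij,jk,ik->i", samples - mu, np.linalg.inv(S), samples - mu)
r2 = np.quantile(d2, 0.95)              # radius so that 95% of samples are inside

print("center:", mu)
print("shape matrix S:\n", S)
print("squared radius r^2:", r2, " empirical coverage:", np.mean(d2 <= r2))
```
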
This paper presents an adaptive reference governor (RG) framework for a linear system with matched nonlinear uncertainties that can depend on both time and states, subject to both state and input constraints. The proposed framework leverages an ℒ₁ adaptive controller (ℒ₁ AC) that compensates for the uncertainties and provides guaranteed transient performance, in terms of uniform bounds on the error between actual states and inputs and those of a nominal (i.e., uncertainty-free) system. The uniform performance bounds provided by the ℒ₁ AC are used to tighten the pre-specified state and control constraints. A reference governor is then designed for the nominal system using the tightened constraints, and guarantees robust constraint satisfaction. Moreover, the conservatism introduced by the constraint tightening can be systematically reduced by tuning some parameters within the ℒ₁ AC. Compared with existing solutions, the proposed adaptive RG framework can potentially yield reduced conservativeness for constraint enforcement and improved tracking performance due to the inherent uncertainty compensation mechanism. Simulation results for a flight control example illustrate the efficacy of the proposed framework.
A risk-aware decision-making problem can be formulated as a chance-constrained linear program in probability measure space. A chance-constrained linear program in probability measure space is intractable, and no numerical method exists to solve this problem. This paper presents numerical methods to solve chance-constrained linear programs in probability measure space for the first time. We propose two solvable optimization problems as approximate problems of the original problem. We prove the uniform convergence of each approximate problem. Moreover, numerical experiments have been implemented to validate the proposed methods.
Many optimization problems are naturally delivered in an uncertain framework, and one would like to exercise prudence against the uncertainty elements present in the problem. In previous contributions, it has been shown that solutions to uncertain convex programs that bear a high probability to satisfy uncertain constraints can be obtained at low computational cost through constraint randomization. In this paper, we establish new feasibility results for randomized algorithms. Specifically, the exact feasibility for the class of the so-called fully-supported problems is obtained. It turns out that all fully-supported problems share the same feasibility properties, revealing a deep kinship among problems of this class. It is further proven that the feasibility of the randomized solutions for all other convex programs can be bounded based on the feasibility for the prototype class of fully-supported problems. The feasibility result of this paper outperforms previous bounds and is not improvable because it is exact for fully-supported problems.
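
The exact feasibility result described here is usually quoted in the following form; this is a hedged restatement in the standard notation of the scenario-optimization literature, not a quotation of the paper:

```latex
% For a convex scenario program with d decision variables and N i.i.d. scenarios,
% the violation probability V(x*_N) of the scenario solution x*_N satisfies
\[
  \mathbb{P}^{N}\!\left\{ V(x^{*}_{N}) > \varepsilon \right\}
  \;\le\; \sum_{i=0}^{d-1} \binom{N}{i}\, \varepsilon^{i} (1-\varepsilon)^{N-i},
\]
% with equality for the class of fully-supported problems discussed in the abstract.
```
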