Publications

Journal Papers:

[7] Charalambos D. Charalambous, Christos Kourtellaris and Ioannis Tzortzis. “Ergodic Control-Coding Capacity of Stochastic Control Systems: Information Signalling and Hierarchical Optimality of Gaussian Systems”, SIAM Journal on Control and Optimization (accepted).

[6] Ioannis Tzortzis, Charalambos D. Charalambous and Themistoklis Charalambous. “Infinite horizon average cost dynamic programming subject to total variation distance ambiguity”, SIAM Journal on Control and Optimization, Volume 57 (4), pp.2843-2872, August 2019.

We analyze the infinite horizon minimax average cost Markov Control Model (MCM), for a class of controlled process conditional distributions, which belong to a ball, with respect to total variation distance metric, centered at a known nominal controlled conditional distribution with radius R∈[0,2], in which the minimization is over the control strategies and the maximization is over conditional distributions. Upon performing the maximization, a dynamic programming equation is obtained which includes, in addition to the standard terms, the oscillator semi-norm of the cost-to-go. First, the dynamic programming equation is analyzed for finite state and control spaces. We show that if the nominal controlled process distribution is irreducible, then for every stationary Markov control policy the maximizing conditional distribution of the controlled process is also irreducible for R∈[0,Rmax]. Second, the generalized dynamic programming equation is analyzed for Borel spaces. We derive necessary and sufficient conditions for any control strategy to be optimal. Through our analysis, new dynamic programming equations and new policy iteration algorithms are derived. The main feature of the new policy iteration algorithms (which are applied for finite alphabet spaces) is that the policy evaluation and policy improvement steps are performed by using the maximizing conditional distribution, which is obtained via a water-filling solution. Finally, the application of the new dynamic programming equations and the corresponding policy iteration algorithms is shown via illustrative examples.

[5] Andreas A. Malikopoulos, Charalambos D. Charalambous and Ioannis Tzortzis. “The average cost of Markov chains subject to total variation distance uncertainty”, Systems and Control Letters, Volume 120, pp.29-35, October 2018.

This paper addresses the problem of controlling a Markov chain so as to minimize the long-run expected average cost per unit time when the invariant distribution is unknown but we know it belongs to a given uncertain set. The mathematical model used to describe this set is the total variation distance uncertainty. We show that the equilibrium control policy, which yields higher probability to the states with low cost and lower probability to the states with high cost, is an optimal control policy that minimizes the average cost. Recognition of such a policy may be of value in practical situations with constraints consistent with those studied here, when the invariant distribution is uncertain and deriving an optimal control policy online is required.

[4] Charalambos D. Charalambous, Christos K. Kourtellaris and Ioannis Tzortzis. “Information transfer of control strategies: Dualities of stochastic optimal control theory and feedback capacity of information theory”, IEEE Transactions on Automatic Control, Volume 62 (10), pp.5010-5025, October 2017.

The control-coding capacity of stochastic control systems is introduced, and its operational meaning is established using randomized control strategies, which simultaneously control output processes, encode information, and communicate information from control processes to output processes. The control-coding capacity is the analog of Shannon’s coding capacity of noisy channels. Furthermore, duality relations to stochastic optimal control problems with deterministic and randomized control strategies are identified, including the following. First, extremum problems of stochastic optimal control with directed information payoff are equivalent to feedback capacity problems of information theory, in which the control system acts as a communication channel. Second, for Gaussian linear decision models with average quadratic constraints, it is shown that optimal randomized strategies are Gaussian, and decompose into a deterministic part and a random part. The deterministic part is precisely the optimal strategy of the linear quadratic Gaussian stochastic optimal control problem, whereas the random part is the solution of a water-filling information transmission problem that encodes information, which is estimated by a decoder.

[3] Ioannis Tzortzis, Charalambos D. Charalambous, Themistoklis Charalambous, Christoforos N. Hadjicostis and Mikael Johansson. “Approximation of Markov processes by lower dimensional processes via total variation metrics”, IEEE Transactions on Automatic Control, Volume 62 (3), pp.1030-1045, March 2017.

The aim of this paper is to approximate a Finite-State Markov (FSM) process by another process defined on a lower dimensional state space, called the approximating process, with respect to a total variation distance fidelity criterion. The approximation problem is formulated as an optimization problem using two different approaches. The first approach is based on approximating the transition probability matrix of the FSM process by a lower-dimensional transition probability matrix, resulting in an approximating process which is a Finite-State Hidden Markov (FSHM) process. The second approach is based on approximating the invariant probability vector of the original FSM process by another invariant probability vector defined on a lower-dimensional state space. Going a step further, a method is proposed based on optimizing a Kullback-Leibler divergence to approximate the FSHM processes by FSM processes. The solutions of these optimization problems are described by optimal partition functions which aggregate the states of the FSM process via a corresponding water-filling solution, resulting in lower-dimensional approximating processes which are FSHM or FSM processes. Throughout the paper, the theoretical results are justified by illustrative examples that demonstrate our proposed methodology.

[2] Ioannis Tzortzis, Charalambos D. Charalambous and Themistoklis Charalambous. “Dynamic programming subject to total variation distance ambiguity”, SIAM Journal on Control and Optimization, Volume 53 (4), pp.2040-2075, July 2015.

The aim of this paper is to address optimality of stochastic control strategies via dynamic programming subject to total variation distance ambiguity on the conditional distribution of the controlled process. We formulate the stochastic control problem using minimax theory, in which the control minimizes the payoff while the conditional distribution, from the total variation distance set, maximizes it. First, we investigate the maximization of a linear functional on the space of probability measures on abstract spaces, among those probability measures which are within a total variation distance from a nominal probability measure, and then we give the maximizing probability measure in closed form. Second, we utilize the solution of the maximization to solve minimax stochastic control with deterministic control strategies, under a Markovian and a non-Markovian assumption, on the conditional distributions of the controlled process. The results of this part include (1) minimax optimization subject to total variation distance ambiguity constraint; (2) new dynamic programming recursions, which involve the oscillator seminorm of the value function, in addition to the standard terms; and (3) a new infinite horizon discounted dynamic programming equation, the associated contractive property, and a new policy iteration algorithm. Finally, we provide illustrative examples for both the finite and infinite horizon cases. For the infinite horizon case, we invoke the new policy iteration algorithm to compute the optimal strategies.
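
For finite state and control spaces, the minimax recursion described in the abstract can be sketched in a few lines. The sketch below is illustrative, not the paper's general construction: the function name is made up, and it assumes the small-radius regime in which the worst-case expectation over the total variation ball equals the nominal expectation plus (R/2) times the span (oscillator seminorm) of the value function.

```python
def robust_backup(c, P, V, R, beta=0.9):
    """One minimax Bellman backup under total variation ambiguity of radius R.

    c[x][u]: stage cost; P[x][u][y]: nominal transition kernel; V: value vector;
    beta: discount factor. Small-radius assumption: the supremum over the TV
    ball adds (R/2)*(max V - min V), the span term, to the nominal expectation.
    """
    span = max(V) - min(V)  # oscillator (span) seminorm of the value function
    V_new = []
    for x in range(len(c)):
        # minimize over controls the robust (worst-case) cost-to-go
        q = [c[x][u]
             + beta * (sum(P[x][u][y] * V[y] for y in range(len(V)))
                       + (R / 2.0) * span)
             for u in range(len(c[x]))]
        V_new.append(min(q))
    return V_new
```

Iterating this backup to a fixed point would correspond to the contractive discounted equation mentioned in the abstract; with R = 0 it reduces to the standard Bellman backup.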

[1] Charalambos D. Charalambous, Ioannis Tzortzis, Sergey Loyka and Themistoklis Charalambous. “Extremum problems with total variation distance and their applications”, IEEE Transactions on Automatic Control, Volume 59 (9), pp.2353-2368, September 2014.

The aim of this paper is to investigate extremum problems with pay-off being the total variation distance metric defined on the space of probability measures, subject to linear functional constraints on the space of probability measures, and vice versa; that is, with the roles of total variation metric and linear functional interchanged. Utilizing concepts from signed measures, the extremum probability measures of such problems are obtained in closed form, by identifying the partition of the support set and the mass of these extremum measures on the partition. The results are derived for abstract spaces; specifically, complete separable metric spaces known as Polish spaces, while the high level ideas are also discussed for denumerable spaces endowed with the discrete topology. These extremum problems often arise in many areas, such as approximating a family of probability distributions by a given probability distribution, maximizing or minimizing entropy subject to total variation distance metric constraints, quantifying uncertainty of probability distributions by total variation distance metric, stochastic minimax control, and in many problems of information, decision theory, and minimax theory.
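
For a finite probability space, the closed-form extremum measure described above can be illustrated as follows. This sketch covers only the finite-alphabet case; the function name is illustrative, and for simplicity all added mass is placed on a single maximizing point rather than distributed over the maximizing set.

```python
def tv_extremal_distribution(mu, ell, R):
    """Maximizing distribution for the payoff sum(ell[x] * nu[x]) over the
    total variation ball {nu : sum|nu - mu| <= R}, on a finite alphabet.

    Mass alpha = R/2 is removed from the lowest-payoff states (water-filling,
    respecting nu >= 0) and deposited on a maximizing state.
    """
    n = len(mu)
    hi = max(ell)
    # mass available for removal from the non-maximal states
    movable = sum(mu[i] for i in range(n) if ell[i] < hi)
    alpha = min(R / 2.0, movable)
    nu = list(mu)
    remaining = alpha
    for i in sorted(range(n), key=lambda j: ell[j]):  # cheapest states first
        if ell[i] == hi or remaining <= 0:
            break
        take = min(nu[i], remaining)
        nu[i] -= take
        remaining -= take
    nu[ell.index(hi)] += alpha  # deposit all added mass on a maximizing state
    return nu
```

By construction the result is a probability vector whose total variation distance from the nominal `mu` is at most R, and the payoff can only increase relative to the nominal expectation.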


Conference Papers:

[21] Ioannis Tzortzis, Charalambos D. Charalambous and Christoforos N. Hadjicostis. “Robust linear quadratic regulator for Markov Jump Systems”, In The 58th IEEE Conference on Decision and Control (CDC 2019), Nice, France, 11-13 December 2019 (to appear).

This paper develops a robust linear quadratic regulator applicable to Markov jump linear systems with uncertain transition probability distributions. The stochastic control problem is formulated using (i) minimax optimization theory, and (ii) the total variation distance metric as a tool for codifying the level of uncertainty of the jumping process. By following a dynamic programming approach, a robust optimal controller is derived, which, in addition to minimizing the quadratic cost, also restricts the influence of uncertainty. A procedure for solving the linear quadratic regulator problem is also proposed, and an illustrative example is presented. Numerical results indicate the applicability and effectiveness of the proposed approach.

[20] Ioannis Tzortzis and Marios M. Polycarpou. “Distributionally robust active fault diagnosis”, In The 2019 European Control Conference (ECC), Naples, Italy, 25-28 June 2019, pp.3886-3891.

The main objective of active fault diagnosis is the design of auxiliary input signals that enhance the detection and isolation of faults in modern technological systems. A major consideration when evaluating active fault diagnosis methods is robustness in the presence of modeling uncertainties. The presence of modeling inaccuracies will typically compromise the performance of the auxiliary input signals designed for effective fault diagnosability. This work investigates a distributionally robust active fault diagnosis approach for nonlinear systems, which takes into consideration variation or ambiguity in the uncertain parameters of the models. A model-based approach is presented using total variation distance as an information constraint, and as a measure for the separation of multiple models based on the similarity of their output probability density functions. The effectiveness of the proposed approach is demonstrated through an application to a three-tank system.

[19] Charalambos D. Charalambous, Photios A. Stavrou, Christos K. Kourtellaris and Ioannis Tzortzis. “Directed information subject to a fidelity: applications to conditionally Gaussian processes”, In The 2018 European Control Conference (ECC), Limassol, Cyprus, 12-15 June 2018, pp.3071-3076.

This paper is concerned with the minimization of directed information over conditional distributions that satisfy a fidelity of reconstructing a conditionally Gaussian random process by another process, causally. This information theoretic extremum problem is directly linked, via bounds, to the optimal performance theoretically attainable by non-causal, causal and zero-delay codes of data compression. The application example includes the characterization of the causal rate distortion function for conditionally Gaussian random processes subject to a mean-square error fidelity.

[18] Charalambos D. Charalambous, Christos K. Kourtellaris and Ioannis Tzortzis. “Signalling of information in networked stochastic dynamical control systems”, In The 56th IEEE Conference on Decision and Control (CDC), Melbourne, Australia, 12-15 December 2017, pp.3138-3143.

Signalling of information is made feasible from one controller to another, in a networked control system consisting of two interconnected control systems. In this application, controller 2 of control system 2 has access to feedback information from its output, while controller 1 of control system 1 does not. The methodology is based on computing the Control-Coding (CC) Capacity of control system 2, and then transforming the randomized control strategy which achieves this capacity into a controller-encoder, which simultaneously controls the output of control system 2 and encodes the output of control system 1; the encoded signal is then decoded and applied to the controller of control system 1, to achieve the control objectives of system 1. The paper demonstrates that control systems are capable of information transfer from one processor to another, provided the rate at which information is transmitted through the control system is below the CC Capacity of the control system.

[17] Charalambos D. Charalambous, Christos K. Kourtellaris and Ioannis Tzortzis. “Hierarchical optimality of linear controllers-encoders-decoders operating at control-coding Capacity of LQG control systems”, In The 56th IEEE Conference on Decision and Control (CDC), Melbourne, Australia, 12-15 December 2017, pp.3682-3687.

Randomized control strategies which achieve the Control-Coding Capacity of Linear Quadratic Gaussian control systems are transformed hierarchically into controller-encoder strategies, which (a) stabilize the control system, (b) encode information or command signals, and (c) operate at the control-coding capacity. The predictable part of such controllers-encoders stabilizes the control system, while their non-predictable part encodes information signals. It is shown that, among all controllers, encoders and decoders which minimize the Mean Square Error (MSE), the conditional mean decoder is optimal and linear controller-encoder-decoders are optimal.

[16] Charalambos D. Charalambous, Christos K. Kourtellaris and Ioannis Tzortzis. “Hierarchical decomposition of optimal control and information strategies in control-coding capacity of stochastic systems”, In The 56th IEEE Conference on Decision and Control (CDC), Melbourne, Australia, 12-15 December 2017, pp.21-26.

The Control-Coding Capacity (CC Capacity) is defined as the maximum amount of information in bits/second, which can be encoded into randomized control strategies, transmitted over the control system, and decoded at its outputs, with arbitrarily small probability of error. This paper shows that optimal randomized control strategies which achieve the CC Capacity impose a natural hierarchical decomposition into two simpler sub-optimization problems, one related to the control objectives and one related to the information transmission objectives. The hierarchical decomposition states that control signals and communication signals interact in a specific order, optimal strategies are decentralized, and the information transmission rate is zero, unless the power allocated to the overall system is above a certain threshold, which is the minimum cost to achieve the control objectives.

[15] Christos K. Kourtellaris, Ioannis Tzortzis and Charalambos D. Charalambous. “Two-letter capacity formula for channels with memory and feedback”, In The 2017 IEEE Information Theory Workshop (ITW), Kaohsiung, Taiwan, 6-10 November 2017, pp.96-100.

For a class of channels with unit memory on previous channel outputs, we identify necessary and sufficient conditions to test whether the capacity achieving channel input distributions with feedback are time-invariant, and whether feedback capacity is characterized by a two-letter expression, similar to that of memoryless channels. The method is based on showing that a certain dynamic programming equation, which in general is a nested optimization problem over the sequence of channel input distributions, reduces to a non-nested optimization problem. We then apply these conditions to derive a two-letter expression for the feedback capacity of the Binary State Symmetric Channel and to evaluate explicitly the capacity and the capacity achieving input distribution. Further, we derive computationally efficient upper bounds on the probability of error and evaluate the performance in the finite-blocklength regime.

[14] Ioannis Tzortzis, Christoforos N. Hadjicostis and Laurent Mombaerts. “Reconstruction of gene regulatory networks using an error filtering learning scheme”, In The 55th Annual Allerton Conference on Communication, Control, and Computing, Allerton Park and Retreat Center, Monticello, IL, USA, 3-6 October 2017, pp.244-251.

One of the fundamental and most challenging problems in systems biology is the reconstruction of gene regulatory networks from input-output data based on nonlinear differential equations. This paper presents an approach to estimate the unknown nonlinearities and to identify the true network that generated the data, based on an error filtering learning scheme and a Lyapunov synthesis method. The unknown nonlinearities are modelled by networks using radial basis functions, and the model validation is performed by taking advantage of the so-called persistency of excitation of input signals, a condition that turns out to play a significant role in the problem of uncovering the true network structure. The proposed methodology and the theoretical results are validated through an illustrative example.

[13] Charalambos D. Charalambous, Christos K. Kourtellaris, Sergey Loyka and Ioannis Tzortzis. “The capacity of unstable dynamical systems-interaction of control and information transmission”, In The 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25-30 June 2017, pp.2663-2667.

Feedback capacity is extended beyond classical communication channels, to stochastic dynamical systems, which may correspond to unstable control systems or unstable communication channels, subject to an average cost constraint of total power $k\in[0,\infty)$. It is shown that optimal conditional distributions or randomized strategies have a dual role: to simultaneously control the output process and to encode information. The dual role is due to the interaction of control and information transmission; it states that encoders in communication channels operate as encoders-controllers, while controllers in control systems operate as controllers-encoders. Through the analysis of Gaussian linear control systems with randomized strategies, which are equivalent to Additive Gaussian Noise channels, stable or unstable, with arbitrary memory on past outputs and an average constraint of quadratic form, it is shown that unstable dynamical systems have a Control-Coding Capacity which is operational, precisely as in Shannon’s operational definition. However, the control-coding capacity is zero unless the power $k$ allocated to the system exceeds a threshold $k_{min}$, where $k_{min}$ is the minimum cost of ensuring asymptotic stability and ergodicity. The excess power $k-k_{min}$ is turned into an achievable rate of information transmission over the dynamical system.

[12] Ioannis Tzortzis, Charalambos D. Charalambous, Themistoklis Charalambous, Christos K. Kourtellaris and Christoforos N. Hadjicostis. “Robust linear quadratic regulator for uncertain systems”, In The 55th IEEE Conference on Decision and Control (CDC), Las Vegas, USA, 12-14 December 2016, pp. 1515-1520.

This paper develops a Linear Quadratic Regulator (LQR), which is robust to disturbance variability, by using the total variation distance as a metric. The robust LQR problem is formulated as a minimax optimization problem, resulting in a robust optimal controller which, in addition to minimizing the quadratic cost, also minimizes the level of disturbance variability. A procedure for solving the LQR problem is also proposed, and an example is presented which clearly illustrates the effectiveness of our developed methodology.
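
As background to the robust formulation above, the nominal (non-robust) finite-horizon LQR backbone can be sketched as follows. The scalar-case code below is the standard textbook Riccati recursion, not the paper's minimax solution, and all names are illustrative.

```python
def lqr_gains(A, B, Q, R_u, N):
    """Finite-horizon discrete-time LQR gains via the backward Riccati
    recursion (scalar case for brevity).

    Dynamics: x_{k+1} = A x_k + B u_k; cost: sum of Q x_k^2 + R_u u_k^2.
    Returns gains in time order, with the control law u_k = -K_k x_k.
    """
    P = Q  # terminal cost weight
    gains = []
    for _ in range(N):
        K = (B * P * A) / (R_u + B * P * B)  # optimal feedback gain
        P = Q + A * P * A - A * P * B * K    # Riccati update
        gains.append(K)
    return list(reversed(gains))
```

For a long horizon the early gains approach the steady-state (infinite-horizon) gain; e.g. for A = B = Q = R_u = 1 the gain converges to the golden-ratio value (√5 - 1)/2.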

[11] Charalambos D. Charalambous, Christos K. Kourtellaris and Ioannis Tzortzis. “Information transfer in stochastic optimal control with randomized strategies and directed information criterion”, In The 55th IEEE Conference on Decision and Control (CDC), Las Vegas, USA, 12-14 December 2016, pp. 2004-2009.

We show that stochastic dynamical control systems are capable of information transfer from control processes to output processes, with operational meaning as defined by Shannon. Moreover, we show that optimal control strategies have a dual role, specifically, i) to transfer information from the control process to the output process, and ii) to stabilize the output process. We illustrate that information transfer is feasible by considering general Gaussian Linear Decision Models, and relate it to the well-known Linear-Quadratic-Gaussian (LQG) control theory.

[10] Ioannis Tzortzis, Charalambos D. Charalambous and Themistoklis Charalambous. “Infinite horizon discounted dynamic programming subject to total variation ambiguity on conditional distribution”, In The 55th IEEE Conference on Decision and Control (CDC), Las Vegas, USA, 12-14 December 2016, pp. 2010-2015.

We analyze the infinite horizon minimax discounted cost Markov Control Model (MCM), for a class of controlled process conditional distributions, which belong to a ball, with respect to total variation distance metric, centered at a known nominal controlled conditional distribution with radius $R\in[0,2]$, in which the minimization is over the control strategies and the maximization is over conditional distributions. Through our analysis (i) we derive a new discounted dynamic programming equation, (ii) we show the associated contraction property, and (iii) we develop a new policy iteration algorithm. Finally, the application of the new dynamic programming equation and the corresponding policy iteration algorithm is shown via an illustrative example.

[9] Ioannis Tzortzis, Charalambos D. Charalambous and Themistoklis Charalambous. “Infinite horizon average cost dynamic programming subject to ambiguity on conditional distribution”, In The 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15-18 December 2015, pp. 7171-7176.

This paper addresses the optimality of stochastic control strategies based on the infinite horizon average cost criterion, subject to total variation distance ambiguity on the conditional distribution of the controlled process. This stochastic optimal control problem is formulated using minimax theory, in which the minimization is over the control strategies and the maximization is over the conditional distributions. Under the assumption that, for every stationary Markov control law the maximizing conditional distribution of the controlled process is irreducible, we derive a new dynamic programming recursion which minimizes the future ambiguity, and we propose a new policy iteration algorithm. The new dynamic programming recursion includes, in addition to the standard terms, the oscillator semi-norm of the cost-to-go. The maximizing conditional distribution is found via a water-filling algorithm. The implications of our results are demonstrated through an example.

[8] Ioannis Tzortzis, Charalambos D. Charalambous, Themistoklis Charalambous, Christoforos N. Hadjicostis and Mikael Johansson. “Approximation of Markov processes by lower dimensional processes”, In The 53rd IEEE Conference on Decision and Control (CDC), Los Angeles, California, 15-17 December 2014, pp. 4441-4446.

In this paper, we investigate the problem of aggregating a given finite-state Markov process by another process with fewer states. The aggregation utilizes total variation distance as a measure of discriminating the Markov process by the aggregate process, and aims to maximize the entropy of the aggregate process invariant probability, subject to a fidelity described by the total variation distance ball. An iterative algorithm is presented to compute the invariant distribution of the aggregate process, as a function of the invariant distribution of the Markov process. It turns out that the approximation method via aggregation leads to an optimal aggregate process which is a hidden Markov process, and the optimal solution exhibits a water-filling behavior. Finally, the algorithm is applied to specific examples to illustrate the methodology and properties of the approximations.

[7] Charalambos D. Charalambous, Ioannis Tzortzis, Sergey Loyka and Themistoklis Charalambous. “Extremum problems with total variation distance”, In The 52nd IEEE Conference on Decision and Control (CDC), Florence, Italy, 10-13 December 2013, pp. 1204-1209.

The aim of this paper is to investigate extremum problems with pay-off being the total variation distance metric subject to linear functional constraints, both defined on the space of probability measures, as well as related problems. Utilizing concepts from signed measures, the extremum probability measures of such problems are obtained in closed form, by identifying the partition of the support set and the mass of these extremum measures on the partition. The results are derived for abstract spaces, specifically, complete separable metric spaces, while the high level ideas are also discussed for denumerable spaces endowed with the discrete topology.

[6] Charalambos D. Charalambous, Ioannis Tzortzis and Themistoklis Charalambous. “Dynamic programming with total variational distance uncertainty”, In The 51st IEEE Conference on Decision and Control (CDC), Maui, Hawaii, 10-13 December 2012, pp. 1909-1914.

The aim of this paper is to address optimality of stochastic control strategies via dynamic programming subject to total variational distance uncertainty on the conditional distribution of the controlled process. Utilizing concepts from signed measures, the maximization of a linear functional on the space of probability measures on abstract spaces is investigated, among those probability measures which are within a total variational distance from a nominal probability measure. The maximizing probability measure is found in closed form. These results are then applied to solve minimax stochastic control with deterministic control strategies, under a Markovian assumption on the conditional distributions of the controlled process. The results include: 1) Optimization subject to total variational distance constraints, 2) new dynamic programming recursions, which involve the oscillator seminorm of the value function.

[5] Charalambos D. Charalambous, Ioannis Tzortzis and Farzad Rezaei. “Stochastic optimal control of discrete-time systems subject to conditional distribution uncertainty”, In The 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), Orlando, Florida, 12-15 December 2011, pp. 6407-6412.

The aim of this paper is to address optimality of control strategies for stochastic discrete time control systems subject to conditional distribution uncertainty. This type of uncertainty is motivated by the fact that the value function involves expectation with respect to the conditional distribution. The issues which will be discussed are the following. 1) Optimal stochastic control systems subject to conditional distribution uncertainty, 2) optimality criteria for stochastic control systems with conditional distribution uncertainty, including the principle of optimality and dynamic programming.

[4] Charalambos D. Charalambous, Ioannis Tzortzis and Farzad Rezaei. “Stochastic optimal control subject to ambiguity”, In The 18th International Federation of Automatic Control World Congress (IFAC), Milano, Italy, August 28-September 2, 2011, pp. 4459-4464.

The aim of this paper is to address optimality of control strategies for stochastic control systems subject to uncertainty and ambiguity. Uncertainty corresponds to the case when the true dynamics and the nominal dynamics are different but they are defined on the same state space. Ambiguity corresponds to the case when the true dynamics are defined on a higher dimensional state space than the nominal dynamics. The paper is motivated by a brief summary of existing methods dealing with optimality of stochastic systems subject to uncertainty, and a discussion on its shortcoming when stochastic systems are ambiguous. The issues which will be discussed are the following. 1) Modeling methods for ambiguous stochastic systems, 2) formulation of optimal stochastic control systems subject to ambiguity, 3) optimality criteria for ambiguous stochastic control systems.

[3] Charalambos D. Charalambous, Farzad Rezaei and Ioannis Tzortzis. “Optimal stochastic control of discrete-time systems subject to total variation distance uncertainty”, In The 49th IEEE Conference on Decision and Control (CDC), Atlanta, Georgia, USA, 15-17 December 2010, pp. 1442-1447.

This paper presents another application of the results in [1], [2], where existence of the maximizing measure over the total variation distance constraint is established, while the maximizing pay-off is shown to be equivalent to an optimization of a pay-off which is a linear combination of L1 and L∞ norms. Here, emphasis is geared towards an uncertain discrete-time controlled stochastic dynamical system, in which the control seeks to minimize the pay-off while the measure seeks to maximize it over a class of measures described by a ball with respect to the total variation distance centered at a nominal measure. Two types of uncertainty classes are considered: uncertainty on the joint distribution, and uncertainty on the conditional distribution. The solution of the minimax problem is investigated via dynamic programming.

[2] Ioannis Tzortzis and Charalambos D. Charalambous. “Optimum immigration policies based on linear quadratic theory”, In The 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), Limassol, Cyprus, 3-5 March 2010, pp. 1-5.

In this paper, it is demonstrated that linear quadratic theory is applicable to deriving optimum immigration policies, while maintaining population and immigration levels close to certain pre-specified reference trajectories. An existing dynamic population model from the literature and statistical data obtained from Cyprus Statistics are used for our simulation purposes. The numerical results presented illustrate that the applied technique yields optimum immigration policies that can be well formulated for fixed as well as for variable target sets.

[1] Ioannis Tzortzis and Charalambos D. Charalambous. “Dynamic demographic models and parameter identification: simulations based on statistical data”, In The 10th European Control Conference (ECC), Budapest, Hungary, 23-26 August 2009.

This paper is concerned with dynamical population models obtained from short and long-term changes in size and age composition due to demographic processes such as births, deaths, migration, etc. Both deterministic and stochastic models are presented. The parameters which are embedded in the models may be either unavailable or noisy, therefore system identification methods are invoked to estimate these parameters. The numerical results presented illustrate that the mathematical models can reproduce the data provided by the Statistics Department of the Republic of Cyprus, and that the unknown and noisy parameters postulated in the models are determined with high accuracy.


Theses:

[4] Ioannis Tzortzis. “Extremum problems with total variation distance metric on the space of probability measures and applications”, Ph.D. thesis, Department of Electrical and Computer Engineering, University of Cyprus, January 2015.


[3] Ioannis Tzortzis. “Mathematical models for demography and its applications to Cyprus population”, Master’s thesis, Department of Electrical and Computer Engineering, University of Cyprus, May 2009.


[2] Ioannis Tzortzis. “Real time control of the ball and beam system”, Bachelor thesis, Department of Electrical Engineering and Informatics, Budapest University of Technology and Economics, June 2007.


[1] Ioannis Tzortzis. “Analysis of switching converters”, Diploma thesis, Department of Electrical Engineering, Higher Technical Institute, June 2003.