Research

This page gives an overview of my research work. Please see the publications page for a more comprehensive picture.

Extremum Problems with Total Variation Distance:

Water-filling like behaviour of extremum measures

The total variation distance metric on the space of probability measures is a fundamental quantity in statistics and probability, which over the years has appeared in many diverse applications. In information theory it is used to define strong typicality and asymptotic equipartition of sequences generated by sampling from a given distribution. In decision problems, it arises naturally when discriminating the results of observations of two statistical hypotheses. In studying the ergodicity of Markov chains, it is used to define the Dobrushin coefficient and to establish the contraction property of transition probability distributions. Moreover, the total variation distance between probability measures is related, via upper and lower bounds, to an anthology of other distances and distance metrics. The total variation distance is a strong form of closeness of probability measures: convergence with respect to total variation implies convergence with respect to many other distances and distance metrics.
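For finite alphabets the distance is straightforward to compute. The following minimal Python sketch (NumPy assumed) uses the un-halved convention \sum_x |p(x)-q(x)|, which takes values in [0,2] and matches the radii R appearing throughout this page.

    import numpy as np

    # Total variation distance between two probability vectors on a common
    # finite alphabet, using the un-halved convention sum_x |p(x) - q(x)|,
    # which takes values in [0, 2] and matches the radii R on this page.
    def tv_distance(p, q):
        return float(np.abs(np.asarray(p) - np.asarray(q)).sum())

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])
    print(tv_distance(p, q))   # 0.2 (up to rounding)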

In this project we investigate extremum problems whose pay-off is the total variation distance metric defined on the space of probability measures, subject to linear functional constraints on the space of probability measures, and vice-versa; that is, with the roles of the total variation metric and the linear functional interchanged. Utilizing concepts from signed measures, the extremum probability measures of such problems are obtained in closed form, by identifying the partition of the support set and the mass of these extremum measures on the partition. The results are derived for abstract spaces, specifically complete separable metric spaces known as Polish spaces, while the high-level ideas are also discussed for denumerable spaces endowed with the discrete topology. Such extremum problems arise in many areas: approximating a family of probability distributions by a given probability distribution; maximizing or minimizing entropy subject to total variation distance metric constraints; quantifying uncertainty of probability distributions by the total variation distance metric; stochastic minimax control; and many problems of information theory, decision theory, and minimax theory. The main results of this project include: (a) characterisation of the properties of the extremum problems under investigation; (b) characterisation of extremum measures on abstract spaces and closed-form solutions of the extremum measures for finite alphabet spaces; (c) convexity and concavity properties of extremum solutions.
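As an illustration of the finite-alphabet closed form, consider the companion problem of maximizing a linear functional ⟨ℓ, ν⟩ over the total variation ball of radius R centred at a nominal distribution μ. A hedged Python sketch follows: the maximizer piles R/2 extra mass on the states where ℓ is largest and drains R/2 from the states where ℓ is smallest, which is the water-filling behaviour pictured above. The function name and greedy loop are my illustrative rendering, not code from the papers.

    import numpy as np

    # Hedged sketch: maximize E_nu[ell] over distributions nu satisfying
    # sum_x |nu(x) - mu(x)| <= R, with R in [0, 2], on a finite alphabet.
    def max_measure_tv_ball(mu, ell, R):
        nu = np.asarray(mu, dtype=float).copy()
        nu[np.argmax(ell)] += R / 2.0        # pile R/2 mass where ell is largest
        drain = R / 2.0
        for x in np.argsort(ell):            # drain mass, smallest ell first
            take = min(nu[x], drain)
            nu[x] -= take
            drain -= take
            if drain == 0.0:
                break
        return nu                            # still a valid probability vector

    mu  = np.array([0.4, 0.3, 0.2, 0.1])
    ell = np.array([1.0, 2.0, 3.0, 4.0])
    print(max_measure_tv_ball(mu, ell, R=0.4))   # [0.2, 0.3, 0.2, 0.3]

For R small enough that the drained mass comes entirely from the state of smallest ℓ, the optimal value increases by exactly (R/2)(max_x ℓ(x) − min_x ℓ(x)), the term that reappears in the dynamic programming equations below.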

Dynamic Programming:

Realization of the inventory control example under the resulting optimal policy

Dynamic programming recursions are often employed in optimal control and decision theory to establish the existence of optimal strategies, to derive necessary and sufficient optimality conditions, and to compute the optimal strategies either in closed form or via algorithms. The cost-to-go and the corresponding dynamic programming recursion, in their general form, are functionals of the conditional distribution of the underlying state process (controlled process) given the past and present state and control processes. Thus, any ambiguity in the conditional distribution of the controlled process will affect the optimality of the strategies. The term “ambiguity” is used to differentiate from the term “uncertainty”, often used in control nomenclature to account for situations in which the true and nominal distributions (induced by models) are absolutely continuous, and hence defined on the same state space. This distinction is often omitted from various robust deterministic and stochastic control approaches, including minimax and risk-sensitive formulations.

In this project we investigate the effect of ambiguity in the conditional distribution of the controlled process on the cost-to-go and dynamic programming, and hence on the optimal decision strategies. Specifically, we quantify the conditional distribution ambiguity of the controlled process by a ball with respect to the total variation distance metric, centered at a nominal conditional distribution, and then derive a new dynamic programming recursion using minimax theory, with two players: player I, the control process, and player II, the conditional distribution (controlled process), opposing each other’s actions. In this minimax game formulation, player I’s objective is to minimize the cost-to-go, while player II’s objective is to maximize it. The maximization over the total variation distance ball of player II is addressed by first deriving results related to the maximization of linear functionals on a subset of the space of signed measures. Utilizing these results, a new dynamic programming recursion is presented which, in addition to the standard terms, includes additional terms that codify the level of ambiguity allowed to player II with respect to the total variation distance ball of radius R_i\in[0,2], i.e.,

    \begin{align*} V_n(x)&=\alpha^n h_n(x),\qquad x\in {\cal X}_n,\\ V_i(x)&=\inf_{u\in {\cal U}_i(x)}\Big\{\alpha^if_i(x,u)+\int_{{\cal X}_{i+1}}V_{i+1}(z)Q^o_{i+1}(dz|x,u)+\frac{R_i}{2}\Big(\sup_{z\in {\cal X}_{i+1}}V_{i+1}(z)-\inf_{z\in {\cal X}_{i+1}}V_{i+1}(z)\Big)\Big\},\quad x\in {\cal X}_i, \ \alpha\in (0,1). \end{align*}

Thus, the effect of player I, the control process, is to minimize, in addition to the classical terms, the difference between the maximum and minimum values of the cost-to-go, scaled by the radius of the total variation distance ambiguity set. We treat in a unified way the finite horizon case, under both Markovian and non-Markovian nominal controlled processes, and the infinite horizon case. For the infinite horizon case, we consider a discounted pay-off and show that the operator associated with the resulting dynamic programming equation under total variation distance ambiguity is contractive. Consequently, we derive a new policy iteration algorithm to compute the optimal strategies. In summary, the main results of this project include: (a) formulation of finite horizon discounted stochastic optimal control subject to conditional distribution ambiguity described by total variation distance, via minimax theory; (b) dynamic programming recursions for i) the nominal Discounted-Markov Control Model (D-MCM), and ii) the Discounted-Feedback Control Model (D-FCM), under total variation distance ambiguity on the conditional distribution of the controlled process; (c) formulation of the infinite horizon D-MCM and its dynamic programming equation under conditional distribution ambiguity described by total variation distance, via minimax theory; (d) characterisation of the maximizing conditional distribution belonging to the total variation distance set, and the corresponding new dynamic programming recursions; (e) contraction property of the infinite horizon D-MCM dynamic programming operator and a new policy iteration algorithm.
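To make the recursion concrete, here is a minimal Python sketch for a finite state and action model. The array shapes and names are my illustrative assumptions, not the papers’ notation; note that the only departure from classical backward induction is the span term (R_i/2)(max V_{i+1} − min V_{i+1}).

    import numpy as np

    # Hedged sketch of the recursion above for a finite state/action model.
    # Illustrative assumptions (mine, not the papers'): f[i] is the (S, A)
    # stage cost at time i, Qo[i] is the (S, A, S) nominal kernel with
    # Qo[i][x, u] a transition row, h is the terminal cost vector (NumPy
    # array), alpha the discount factor, R[i] the total variation radius.
    def robust_dp(f, Qo, h, alpha, R):
        n = len(f)
        V = (alpha ** n) * h                     # V_n(x) = alpha^n h_n(x)
        policies = []
        for i in reversed(range(n)):
            span = V.max() - V.min()             # sup V_{i+1} - inf V_{i+1}
            Q = (alpha ** i) * f[i] + Qo[i] @ V + (R[i] / 2.0) * span
            policies.append(Q.argmin(axis=1))    # minimizing control per state
            V = Q.min(axis=1)                    # V_i(x)
        return V, policies[::-1]                 # V_0 and the optimal policies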

Average Cost Dynamic Programming:

In this project we analyse the infinite horizon minimax average-cost Markov Control Model (MCM) for a class of controlled process conditional distributions belonging to a ball, with respect to the total variation distance metric, centered at a known nominal controlled conditional distribution with radius R\in[0,2]; the minimization is over the control strategies and the maximization is over the conditional distributions.

For values of R\in[0,R_{\max}]\subset[0,2], we show that if the nominal controlled process distribution is irreducible, then for every stationary Markov control policy the maximizing conditional distribution of the controlled process is irreducible, and we derive a new dynamic programming equation and a new policy iteration algorithm. The new dynamic programming equation includes, in addition to the standard terms, the oscillator seminorm of the cost-to-go, and the maximizing conditional distribution is found via water-filling, i.e.,

    \begin{align*} J^*+V(x)=\min_{u\in {\cal U}}\Big\{f(x,u)+\sum_{z\in {\cal X}}Q^o(z|x,u)V(z)+\frac{R}{2}\big(\sup_{z\in {\cal X}}V(z)-\inf_{z\in {\cal X}}V(z)\big)\Big\}. \end{align*}
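As a hedged illustration of this equation, and not the policy iteration algorithm derived in the project, a relative value iteration with the extra span term can be sketched as follows for a finite model, assuming the iteration converges:

    import numpy as np

    # Hedged sketch: relative value iteration for the average-cost equation
    # above.  Illustrative assumptions: f has shape (S, A); Qo has shape
    # (S, A, S) with Qo[x, u] the nominal transition row; R in [0, R_max];
    # convergence of this iteration is assumed here, whereas the project
    # derives a policy iteration algorithm instead.
    def relative_value_iteration(f, Qo, R, iters=2000):
        V = np.zeros(f.shape[0])
        for _ in range(iters):
            span = V.max() - V.min()              # sup_z V(z) - inf_z V(z)
            Q = f + Qo @ V + (R / 2.0) * span     # bracketed term, shape (S, A)
            T = Q.min(axis=1)
            V = T - T[0]                          # normalise at a reference state
        return T[0], V, Q.argmin(axis=1)          # gain J*, relative value, policy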

For values of R\in[R_{\max},2], for which irreducibility is violated, we derive a more general dynamic programming equation and a new policy iteration algorithm, i.e.,

    \begin{align*} J^*(x)&=\inf_{u\in {\cal U}(x)}\Big\{\int_{{\cal X}}Q^o(dz|x,u)J^*(z) +\frac{R}{2}\Big(\sup_{z\in{\cal X}}J^*(z)-\inf_{z\in{\cal X}}J^*(z)\Big)\Big\}\\ J^*(x)+V(x)&=\inf_{u\in {\cal U}(x)}\Big\{f(x,u)+\int_{{\cal X}}Q^o(dz|x,u)V(z) +\frac{R}{2}\Big(\sup_{z\in{\cal X}}V(z)-\inf_{z\in{\cal X}}V(z)\Big)\Big\}. \end{align*}

In summary, the main results of this project include: (a) new dynamic programming recursions; (b) characterisation of the maximizing conditional distribution; (c) derivation of new policy iteration algorithms, in which the policy evaluation and policy improvement steps are performed using the maximizing conditional distribution obtained under the total variation distance ambiguity constraint.

Approximation of Markov Processes by Lower Dimensional Processes:

Kullback-Leibler Divergence Rate Vs. Total Variation Distance Vs. Number of States

Finite-State Markov (FSM) processes are often employed to model physical phenomena in diverse areas, such as machine learning, information theory (lossless and lossy compression), control theory, networked control and telecommunication systems, speech processing, systems biology, and economics. In many of these applications, the state-space of the Markov process quickly becomes too large to be used in analysis and simulations. One approach often proposed in the literature to overcome the large number of states is to approximate an FSM process by another FSM process with fewer states, by optimizing certain pay-offs. Examples of such pay-offs include the relative entropy or Kullback-Leibler (KL) divergence, and entropy.

In this project, we propose new approaches for approximating an FSM process by another process, using the total variation distance metric. The practical importance of the total variation distance is twofold: (i) unlike the KL divergence, it is a true distance metric between two distributions defined on different alphabet spaces, which are not necessarily absolutely continuous (i.e., their support sets can differ); (ii) the solution of the optimization problems based on the total variation distance involves the identification of the optimal partition on which the probability distribution of the approximating process is defined, and a corresponding water-filling that determines how states of the initial FSM process are aggregated to form the state-space of the approximating process. As the total variation distance between the distributions of the original FSM process and the approximating process is allowed to increase, the cardinality of the state-space of the approximating process decreases, i.e., more states are aggregated together to form a reduced state space. In fact, the distribution of the approximating process is defined via water-filling with respect to the distribution of the original FSM process. These features make the total variation distance a natural choice of pay-off or fidelity criterion for approximating one distribution by another.

In particular, we formulate the approximation problem as an optimization problem using two different approaches. The first approach is based on approximating the transition probability matrix of the FSM process by a lower-dimensional transition probability matrix, resulting in an approximating process which is a Finite-State Hidden Markov (FSHM) process. The second approach is based on approximating the invariant probability vector of the original FSM process by another invariant probability vector defined on a lower-dimensional state space. Going a step further, a method is proposed, based on optimizing a Kullback-Leibler divergence, to approximate FSHM processes by FSM processes. The solutions of these optimization problems are described by optimal partition functions which aggregate the states of the FSM process via a corresponding water-filling solution, resulting in lower-dimensional approximating processes which are FSHM or FSM processes. In summary, the main results based on the first approach include: (a) an approximation method based on the transition probabilities of the original FSM process, whose optimal solution is shown to exhibit water-filling behaviour; (b) derivation of the corresponding FSHM/FSM approximating processes defined on lower-dimensional state spaces. Based on the second approach, the main results include: (a) novel iterative algorithms to compute the invariant distribution of the approximating process; (b) complete solutions of the approximation problems; (c) characterisation of the optimal partition functions which aggregate the states of the original FSM process to form the invariant distribution of the approximating process.
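As a toy illustration of the second approach (my simplified rendering, not the papers’ algorithm): given the invariant distribution of the original FSM process and a total variation budget R, remove states of smallest probability as long as the deleted mass stays within R/2, then reassign that mass to the largest surviving state, mimicking the water-filling aggregation described above.

    import numpy as np

    # Hedged toy sketch of the support-reduction idea described in the text,
    # not the papers' exact algorithm: zero out states in increasing order of
    # probability while the removed mass stays within R/2, then reassign that
    # mass to the surviving state of largest probability ("water-filling").
    def reduce_support(p, R):
        p = np.asarray(p, dtype=float)
        nu = p.copy()
        budget = R / 2.0
        for x in np.argsort(p):                  # smallest-probability states first
            if nu[x] <= budget and nu[x] < nu.max():
                budget -= nu[x]
                nu[x] = 0.0                      # state x is aggregated away
        nu[np.argmax(nu)] += (R / 2.0 - budget)  # reassign the removed mass
        return nu                                # sum(|nu - p|) <= R, fewer states

    p = np.array([0.35, 0.25, 0.2, 0.1, 0.06, 0.04])
    print(reduce_support(p, 0.3))   # the two smallest states merge into the largest

Increasing R allows more states to be aggregated away, reproducing the trade-off between total variation distance and the number of states plotted above.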

Mathematical Models for Demography and Optimum Immigration Policies:

The human population of any country grows and shrinks over time as a consequence of variability in birth, death, immigration, and emigration rates. By integrating these demographic processes and studying short-term and long-term changes in population size and age composition, mathematical models can be derived which demographers and other scientists can use to better understand both the present state of a population and its future trends. By partitioning the population into different groups (e.g., according to age), and taking into account the interaction of these groups, dynamic population models can be derived which provide information about the dynamic transitions from one age group of the population to another. These models can be used to obtain predicted estimates of population growth or decline.

Population of age group G2

Optimum number of immigrants

In this project we develop and test dynamic population models based on short- and long-term changes in population size and age composition due to demographic processes such as births, deaths, and migration. We identify and estimate the parameters embedded in these models, since actual data may be unavailable or noisy. In addition, the dynamic models and the identified parameters are used to derive optimum immigration policies. The Log-Normal models developed in this project are new and, to the best of our knowledge, have not appeared elsewhere. Following basic principles of modern systems and optimal control theory, deterministic and stochastic models are developed in both discrete and continuous time. The identification and estimation of missing parameters using the deterministic model is carried out via the calculus of variations and the steepest descent algorithm, while for the stochastic models an identification method based on the Expectation-Maximization (EM) algorithm together with Kalman filtering is used. Finally, optimal immigration policies are derived to reach pre-specified population targets using optimal control theory. The application of the EM algorithm to demography problems is new, as is the application of LQ tracking.
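To give a flavour of the LQ tracking step, here is a hedged Python sketch for a linear age-structured model x_{k+1} = A x_k + B u_k, with u_k the immigration input per age group. The matrices, weights, and target below are illustrative assumptions, not the identified parameters of this project.

    import numpy as np

    # Hedged sketch: finite-horizon LQ tracking of a population target for a
    # linear age-structured model x_{k+1} = A x_k + B u_k, where u_k is the
    # immigration input.  All numbers below are illustrative assumptions.
    def lq_tracking(A, B, Q, Rw, r, x0):
        N = len(r) - 1
        S, v = Q, Q @ r[N]                    # terminal conditions of the recursion
        K, kv = [None] * N, [None] * N
        for k in reversed(range(N)):          # backward Riccati-type recursion
            G = Rw + B.T @ S @ B
            K[k] = np.linalg.solve(G, B.T @ S @ A)
            kv[k] = np.linalg.solve(G, B.T @ v)
            S = Q + A.T @ S @ (A - B @ K[k])
            v = (A - B @ K[k]).T @ v + Q @ r[k]
        xs = [x0]
        for k in range(N):                    # forward simulation of the policy
            u = -K[k] @ xs[-1] + kv[k]        # optimal immigration at time k
            xs.append(A @ xs[-1] + B @ u)
        return np.array(xs)

    # Two age groups: survival on the diagonal, ageing from group 1 into
    # group 2, and direct immigration into each group (B = I).
    A = np.array([[0.98, 0.00], [0.02, 0.97]])
    B = np.eye(2)
    Q, Rw = np.eye(2), 0.1 * np.eye(2)
    r = [np.array([5.0, 4.0])] * 21           # constant population target
    print(lq_tracking(A, B, Q, Rw, r, x0=np.array([4.0, 3.0]))[-1])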

In summary, the main results of this project revealed that the mathematical models can reproduce the provided data and that the unknown parameters postulated in the models are determined with high accuracy. The results also indicate that control theory provides a promising tool for formulating optimum immigration policies. The principal conclusion is that the implementation of such computer-based models, which capture the dynamic transitions from one age group of the population to another and thereby extract information and predict estimates of population growth and decline, is essential for policy formulation, decision making, and resource allocation by government agencies and demographers.