Machine learning (ML) is about computational methods that enable machines to learn concepts from experience. In handling a wide variety of experience ranging from data instances, knowledge, and constraints to rewards, adversaries, and lifelong interaction in an ever-growing spectrum of tasks, contemporary ML/AI (artificial intelligence) research has resulted in a multitude of learning paradigms and methodologies. Despite the continual progress on all the different fronts, the disparate, narrowly focused methods also make standardized, composable, and reusable development of ML approaches difficult, and preclude the opportunity to build AI agents that panoramically learn from all types of experience. This article presents a standardized ML formalism, in particular a 'standard equation' of the learning objective, that offers a unifying understanding of many important ML algorithms in the supervised, unsupervised, knowledge-constrained, reinforcement, adversarial, and online learning paradigms, respectively—those diverse algorithms are encompassed as special cases due to different choices of modeling components. The framework also provides guidance for mechanical design of new ML approaches and serves as a promising vehicle toward panoramic machine learning with all experience.
Keywords: standard model, panoramic learning, experience, composable machine learning, unification
Media Summary
Humans learn from a range of experience; what about a computer? The past decades of AI and Machine Learning (ML) research have resulted in a multitude of paradigms and algorithms, each specialized to train ML models with a certain type of information and experience in a certain type of problem. While pushing the field forward rapidly, the bewildering and ever-growing variety of paradigms and algorithms also makes it extremely difficult to master existing ML techniques and to develop universal, repeatable, and reusable computer programs that can simultaneously learn from diverse experience in the real world. It is a constant desire and aspiration to search for a standardized ML formalism that unifies the distinct learning principles, much like the Standard Model in physics, to gain a more holistic understanding of the diverse paradigms and algorithms, lay out a blueprint permitting fuller and more systematic exploration in the design and analysis of new algorithms, and eventually serve as a vehicle toward panoramic machine learning capable of integrating all available information (data, knowledge, constraints, reward, adversary, etc.) in learning and thus applicable to all problems. This work presents an attempt toward this end. In particular, we establish a standard equation of the learning objective, which subsumes many of the known algorithms as special cases and offers guiding principles for designing new, more powerful learning algorithms in a mechanical and composable way.
1. Introduction
Human learning has the hallmark of learning concepts from diverse sources of information. Take the example of learning a language. Humans can benefit from various experience—by observing examples through reading and hearing, studying abstract definitions and grammar, making mistakes and getting correction from teachers, interacting with others and observing implicit feedback, and so on. Knowledge of a prior language can also accelerate the acquisition of a new one. How can we build artificial intelligence (AI) agents that are similarly capable of learning from all types of experience? We refer to the capability of flexibly integrating all available experience in learning as panoramic learning.
In handling different experience ranging from data instances, knowledge, constraints, to rewards, adversaries, and lifelong interplay in an ever-growing spectrum of tasks, contemporary ML and AI research has resulted in a large multitude of learning paradigms (e.g., supervised, unsupervised, active, reinforcement, adversarial learning), models, and optimization techniques, not to mention countless approximation heuristics and tuning tricks, plus combinations of all the above. While pushing the field forward rapidly, these results also make mastering existing ML techniques very difficult, and fall short of reusable, repeatable, and composable development of ML approaches to diverse problems with distinct available experience.
Those fundamental challenges call for a standardized ML formalism that offers a principled framework for understanding, unifying, and generalizing current major paradigms of learning algorithms, and for mechanical design of new approaches for integrating any useful experience in learning. The power of standardized theory is perhaps best demonstrated in physics, which has a long history of pursuing symmetry and simplicity of its principles: exemplified by the famed Maxwell’s equations in the 1800s that reduced various principles of electricity and magnetism into a single electromagnetic theory, followed by General Relativity in the 1910s and the Standard Model in the 1970s, physicists describe the world best by unifying and reducing different theories to a standardized one. Likewise, it is a constant quest in the field of ML to establish a ‘Standard Model’ (Domingos, 2015; Langley, 1989), that gives a holistic view of the broad learning principles, lays out a blueprint permitting fuller and more systematic exploration in the design and analysis of new algorithms, and eventually serves as a vehicle toward panoramic learning that integrates all available sources of experience.
This paper presents an attempt toward this end. In particular, our principal focus is on the learning objective that drives the model training given the experience and thus often lies at the core of designing new algorithms, understanding learning properties, and validating outcomes. We investigate the underlying connections between a range of seemingly distinct ML paradigms. Each of these paradigms has made particular assumptions on the form of experience available. For example, the presently most popular supervised learning relies on collections of data instances, often applying a maximum likelihood objective solved with simple gradient descent. Maximum likelihood based unsupervised learning instead can invoke different solvers, such as expectation-maximization (EM), variational inference, and wake-sleep, in training for varied degrees of approximation to the problem. Active learning (Settles, 2012) manages data instances which, instead of being given all at once, are adaptively selected. Reinforcement learning (Sutton & Barto, 2018) makes use of feedback obtained via interaction with the environment. Knowledge-constrained learning like posterior regularization (Ganchev et al., 2010; Zhu et al., 2014) incorporates structures, knowledge, and rules expressed as constraints. Generative adversarial learning (Goodfellow et al., 2014) leverages a companion model called discriminator to guide training of the model of interest.
In light of these results, we present a standard equation (SE) of the objective function. The SE formulates a rather broad design space of learning algorithms. We show that many of the well-known algorithms of the above paradigms are all instantiations of the general formulation. More concretely, the SE, based on the maximum entropy and variational principles, consists of three principled terms, including the experience term that offers a unified language to express arbitrary relevant information to supervise the learning, the divergence term that measures the fitness of the target model to be learned, and the uncertainty term that regularizes the complexity of the system. The single succinct formula re-derives the objective functions of a large diversity of learning algorithms, reducing them to different choices of the components. The formulation thus sheds new light on the fundamental relationships between the diverse algorithms that were each originally designed to deal with a specific type of experience.
The modularity and generality of the framework is particularly appealing not only from the theoretical point of view, but also because it offers guiding principles for designing algorithmic approaches to new problems in a mechanical way. Specifically, the SE by its nature allows combining together all different experience to learn a model of interest. Designing a problem solution boils down to choosing what experience to use depending on the problem structure and available resources, without worrying too much about how to use the experience in the training. Besides, the standardized ML perspective also highlights that many learning problems in different research areas are essentially the same and just correspond to different specifications of the SE components. This enables us to systematically repurpose successful techniques in one area to solve problems in another.
The remainder of the article is organized as follows. Section 2 gives an overview of relevant learning and inference techniques as a prelude of the standardized framework. Section 3 presents the standard equation as a general formulation of the objective function in learning algorithms. The subsequent two sections discuss different choices of two of the key components in the standard equation, respectively, illustrating that many existing methods are special cases of the formulation: Section 4 is devoted to discussion of the experience function and Section 5 focuses on the divergence function. Section 6 discusses an extended view of the standard equation in dynamic environments. Section 7 focuses on the optimization algorithms for solving the standard equation objective. Section 8 discusses the diverse types of target models. Section 9 discusses the utility of the standardized formalism for mechanical design of panoramic learning approaches. Section 10 reviews related work. Section 11 concludes the article with discussion of future directions—in particular, we discuss the broader aspects of ML not covered in the present work (e.g., more advanced learning such as continual learning in complex evolving environments, theoretical analysis of learnability, generalization and complexity, and automated algorithm composition) and how their unified characterization based on or inspired by the current framework could potentially lead toward a full ‘Standard Model’ of ML and a turnkey approach to panoramic learning with all types of experience.
2. Preliminaries: The Maximum Entropy View of Learning and Inference
Depending on the nature of the task (e.g., classification or regression), the data (e.g., labeled or unlabeled), the information scope (e.g., with or without latent variables), the form of domain knowledge (e.g., prior distributions or parameter constraints), and so on, different learning paradigms with often complementary (but not necessarily easy to combine) advantages have been developed for different needs. For example, the paradigms built on the maximum likelihood principles, Bayesian theories, variational calculus, and Monte Carlo simulation have led to much of the foundation underlying a wide spectrum of probabilistic graphical models, exact/approximate inference algorithms, and even probabilistic logic programs suitable for probabilistic inference and parameter estimation in multivariate, structured, and fully or partially observed domains, while the paradigms built on convex optimization, duality theory, regularization, and risk minimization have led to much of the foundation underlying algorithms such as support vector machine (SVM), boosting, sparse learning, structure learning, and so on. Historically, there have been numerous efforts in establishing a unified machine learning framework that can bridge these complementary paradigms so that advantages in model design, solver efficiency, side-information incorporation, and theoretical guarantees can be translated across paradigms. As a prelude to our presentation of the ‘standard equation’ framework toward this goal, here we begin with a recapitulation of the maximum entropy view of statistical learning. By naturally marrying the probabilistic frameworks with the optimization-theoretic frameworks, the maximum entropy viewpoint has played an important historical role in offering the same lens for understanding several popular methodologies such as maximum likelihood learning, Bayesian inference, and large margin learning.
2.1. Maximum Likelihood Estimation (MLE)
We start with the maximum entropy perspective of the maximum likelihood learning.
2.1.1. Supervised MLE
We consider an arbitrary probabilistic model (e.g., a neural network or probabilistic graphical model for, say, language generation) with parameters θ∈Θ to be learned. Let pθ(x)∈P(X) denote the distribution defined by the model, where X is the data space (e.g., all language text) and P(X) denotes the set of all probability distributions on X. Given a set of independent and identically distributed (i.i.d.) data examples D={x∗∈X}, the most common method for estimating the parameters θ is perhaps maximum likelihood estimation (MLE). MLE learns the model by minimizing the negative log-likelihood:
\[
\min_{\theta}\; -\mathbb{E}_{x^* \sim \mathcal{D}}\left[\log p_\theta(x^*)\right].
\tag{2.1}
\]
MLE is known to be intimately related to the maximum entropy principle (Jaynes, 1957). In particular, when the model pθ(x) is in the exponential family of the form:
\[
p_\theta(x) = \exp\{\theta \cdot T(x)\} / Z(\theta),
\tag{2.2}
\]
where $T(x)$ is the sufficient statistics of data $x$ and $Z(\theta) = \sum_{x \in \mathcal{X}} \exp\{\theta \cdot T(x)\}$ is the normalization factor, it is shown that MLE is the convex dual of maximum entropy estimation.
In a maximum entropy formulation, rather than assuming a specific parametric form of the target model distribution, denoted as p(x), we instead impose constraints on the model distribution. Specifically, in the supervised setting, the constraints require the expectation of the features T(x) to be equal to the empirical expectation:
\[
\mathbb{E}_{p}\left[T(x)\right] = \mathbb{E}_{x^* \sim \mathcal{D}}\left[T(x^*)\right].
\tag{2.3}
\]
In general, there exist many distributions p∈P(X) that satisfy the constraint. The principle of maximum entropy resolves the ambiguity by choosing the distribution such that its Shannon entropy, $H(p) := -\mathbb{E}_p[\log p(x)]$, is maximized. Following this principle, in the supervised setting, we thus have the specific constrained optimization problem:
\[
\min_{p \in \mathcal{P}(\mathcal{X})}\; -H(p) \quad \text{s.t.} \quad \mathbb{E}_{p}\left[T(x)\right] = \mathbb{E}_{x^* \sim \mathcal{D}}\left[T(x^*)\right], \quad \sum_{x \in \mathcal{X}} p(x) = 1.
\tag{2.4}
\]
The problem can be solved with the Lagrangian method, yielding the Lagrangian:
\[
L(p, \theta, \mu) = -H(p) + \theta \cdot \left( \mathbb{E}_{x^* \sim \mathcal{D}}\left[T(x^*)\right] - \mathbb{E}_{p}\left[T(x)\right] \right) + \mu \left( \sum_{x \in \mathcal{X}} p(x) - 1 \right),
\tag{2.5}
\]
where θ and μ are Lagrangian multipliers. Setting the derivative w.r.t. p and μ to equal zero implies that p must have the same form as in Equation 2.2:
\[
p(x) = \exp\{\theta \cdot T(x)\} / Z(\theta),
\tag{2.6}
\]
where we see the parameters θ in the exponential family parameterization are the Lagrangian multipliers that enforce the constraints. Plugging the solution back into the Lagrangian, we obtain:
\[
L(\theta) = \mathbb{E}_{x^* \sim \mathcal{D}}\left[\theta \cdot T(x^*)\right] - \log Z(\theta),
\tag{2.7}
\]
which is simply the negative of the MLE objective in Equation 2.1.
Thus maximum entropy is dual to maximum likelihood. It provides an alternative view of the problem of fitting a model to data, where the data instances in the training set are treated as constraints, and the learning problem is treated as a constrained optimization problem. This optimization-theoretic view of learning will be revisited repeatedly in the sequel to allow extending machine learning to all experience, of which data instances are just a special case.
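To make the duality concrete, the following minimal sketch (ours, not from the article) fits a small exponential family model by gradient ascent on the log-likelihood of Equation 2.1; for this family the gradient is exactly the gap between the empirical and model feature expectations, i.e., the moment-matching constraint of Equation 2.3. The discrete space, features, and toy data are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch (ours): MLE for a discrete exponential family
# p_theta(x) ∝ exp{theta · T(x)} over the assumed space X = {0, 1, 2, 3}.
X = np.arange(4)
T = np.stack([X, X ** 2], axis=1).astype(float)  # assumed features T(x) = [x, x^2]
data = np.array([0, 1, 1, 2, 3, 1, 2])           # toy observed instances

emp_mean = T[data].mean(axis=0)                  # empirical expectation E_D[T(x*)]
theta = np.zeros(2)
for _ in range(2000):
    logits = T @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()                                 # model distribution p_theta(x)
    grad = emp_mean - p @ T                      # gradient of the log-likelihood
    theta += 0.1 * grad

print(theta)
print(p @ T, emp_mean)                           # model moments ≈ empirical moments
```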
2.1.2. Unsupervised MLE
Similar to the MLE framework for supervised learning, unsupervised learning via MLE can also be reformulated as a constrained optimization problem with entropy maximization. Consider learning a multivariate model with latent variables, where each data instance is partitioned into observed variables x∈X and latent variables y∈Y. For example, in the problem of image clustering, $x \in \mathbb{R}^d$ is the observed image of d pixels and y∈{1,…,K} is the unobserved cluster indicator (where K is the number of clusters). The goal is to learn a model pθ(x,y) that captures the joint distribution of x and y. Since y is unobserved, we minimize the negative log-likelihood with y marginalized out:
\[
\min_{\theta}\; -\mathbb{E}_{x^* \sim \mathcal{D}}\left[\log \sum_{y \in \mathcal{Y}} p_\theta(x^*, y)\right].
\tag{2.8}
\]
Direct optimization of the marginal log-likelihood is typically intractable due to the summation over y. Earlier work thus developed different solvers with varying levels of approximations.
It can be shown that the intractable negative log-likelihood above can be upper bounded by a more tractable term known as the variational free energy (Neal & Hinton, 1998). Let q(y∣x) represent an arbitrary auxiliary distribution acting as a surrogate of the true posterior p(y∣x), which is known as a variational distribution. Then, for each instance x∗∈D, we have:
\[
-\log \sum_{y \in \mathcal{Y}} p_\theta(x^*, y) \;\le\; -\log \sum_{y \in \mathcal{Y}} p_\theta(x^*, y) + \mathrm{KL}\big(q(y|x^*)\,\|\,p_\theta(y|x^*)\big) \;=\; -H\big(q(y|x^*)\big) - \mathbb{E}_{q(y|x^*)}\left[\log p_\theta(x^*, y)\right] \;=:\; \mathcal{L}(q, \theta),
\tag{2.9}
\]
where the inequality holds because KL divergence is always nonnegative. The free energy upper bound contains two terms: the first one is the entropy of the variational distribution, which captures the intrinsic randomness (i.e., amount of information carried by an auxiliary distribution); the second term, now written as −Eq(y∣x∗)p~d(x∗)[logpθ(x∗,y)], by taking into account the empirical distribution p~d from which the instance x∗ is drawn, is the cross entropy between the distributions q(y∣x∗)p~d(x∗) and pθ(x∗,y), driving the two to be close and thereby allowing q to approximate p.
The popular expectation maximization (EM) algorithm for unsupervised learning via MLE can be interpreted as minimizing the variational free energy (Neal & Hinton, 1998). In fact, as we discuss subsequently, popular heuristics such as the variational EM and the wake-sleep algorithms, are approximations to the EM algorithm by introducing approximating realizations to either the free energy objective function L or to the solution space of the variational distribution q.
Expectation Maximization (EM). The most common approach to learning with unlabeled data or partially observed multivariate models is perhaps the EM algorithm (Dempster et al., 1977). With the use of the variational free energy as a surrogate objective to the original marginal likelihood as in Equation 2.9, EM can also be understood as an alternating minimization algorithm, where $\mathcal{L}(q, \theta)$ is minimized with regard to q and θ in two stages, respectively. At each iteration n, the expectation (E) step minimizes $\mathcal{L}(q, \theta^{(n)})$ w.r.t. q. From Equation 2.9, this is achieved by setting q to the current true posterior:
\[
\text{E-step:}\quad q^{(n+1)}(y|x^*) = p_{\theta^{(n)}}(y|x^*),
\tag{2.10}
\]
so that the KL divergence vanishes and the upper bound is tight. In the subsequent maximization (M) step, L(q(n+1),θ) is minimized w.r.t. θ:
\[
\text{M-step:}\quad \max_{\theta}\; \mathbb{E}_{q^{(n+1)}(y|x^*)}\left[\log p_\theta(x^*, y)\right],
\tag{2.11}
\]
which is to maximize the expected complete data log-likelihood. The EM algorithm has an appealing property that it monotonically decreases the negative marginal log-likelihood over iterations. To see this, notice that after the E-step the upper bound L(q(n+1),θ(n)) is equal to the negative marginal log-likelihood, and the M-step further decreases the upper bound (and thus the negative marginal log-likelihood).
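For concreteness, here is a minimal sketch (ours, not from the article) of the E- and M-steps of Equations 2.10 and 2.11 for a two-component 1-D Gaussian mixture, where the E-step computes the exact posterior responsibilities and the M-step has a closed form; the toy data and initialization are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])  # toy data

# Parameters theta = (mixing weights, means, variances) of a 2-component mixture.
pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: q(y|x*) = p_theta(y|x*), the exact posterior responsibilities.
    log_lik = (-0.5 * (x[:, None] - mu) ** 2 / var
               - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
    q = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

    # M-step: maximize E_q[log p_theta(x*, y)] in closed form.
    nk = q.sum(axis=0)
    pi = nk / len(x)
    mu = (q * x[:, None]).sum(axis=0) / nk
    var = (q * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(pi, mu, var)   # should roughly recover (0.4, 0.6), (-2, 3), (1, 1)
```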
Variational EM. When the model pθ(x,y) is complex (e.g., a neural network or a multilayer graphical model), directly working with the true posterior in the E-step becomes intractable. Variational EM overcomes the difficulty with approximations. It considers a restricted family Q′ of the variational distribution q(y) such that optimization w.r.t. q within the family is tractable:
\[
\text{Variational E-step:}\quad \min_{q \in \mathcal{Q}'} \mathcal{L}\big(q, \theta^{(n)}\big).
\tag{2.12}
\]
A common way to restrict the q family is the mean-field methods, which partition the components of y into sub-groups y=(y1,…,yM) and assume that q factorizes w.r.t. the groups: q(y)=∏i=1Mqi(yi). The variational principle summarized in (Wainwright & Jordan, 2008) gives a more principled interpretation of the mean-field and other approximation methods. In particular, in the case where pθ(x,y) is an exponential family distribution with sufficient statistics T(x,y), the exact E-step (Equation 2.10) can be interpreted as seeking the optimal valid mean parameters (i.e., expected sufficient statistics) for which the free energy is minimized. For discrete latent variables y, the set of all valid mean parameters constitutes a marginal polytope M. In this perspective, the mean-field methods (Equation 2.12) correspond to replacing M with an inner approximation M′⊆M. With the restricted set M′ of mean parameters, the E-step generally no longer tightens the bound of the negative marginal log-likelihood, and the algorithm does not necessarily decrease the negative marginal log-likelihood monotonically. However, the algorithm preserves the property that it minimizes the upper bound of the negative marginal log-likelihood. Besides the mean-field methods, there are other approaches for approximation such as belief propagation. These methods correspond to using an outer approximation M′′⊇M of the marginal polytope, and do not guarantee upper bounds on the negative marginal log-likelihood.
Another approach to restrict the family of q is to assume a parametric distribution qω(y∣x) and optimize the parameters ω in the E-step. The approach has been used in black-box variational inference (Ranganath et al., 2014), and variational auto-encoders (VAEs) (Kingma & Welling, 2014) where q is parameterized as a neural network (a.k.a ‘inference network,’ or ‘encoder’).
It is worth mentioning that the variational approach has also been used for approximate Gaussian processes (GPs, as a nonparametric method; Titsias, 2009; Wilson et al., 2016b), where y denotes the inducing points and the variational distribution q(y) is parameterized as a Gaussian distribution with a nondiagonal covariance matrix that preserves the structures within the true covariance (and hence is different from the above mean-field approximation, which assumes a diagonal variational covariance matrix). We refer interested readers to Wilson et al. (2016b) for more details.
Wake-Sleep. In some cases when the auxiliary q is assumed to have a certain form (e.g., a deep network), the approximate E-step in Equation 2.12 may still be too complex to be tractable, or the gradient estimator (w.r.t. the parameters of q) can suffer from high variance (Mnih & Gregor, 2014; Paisley et al., 2012). To tackle the challenge, more approximations are introduced. The wake-sleep algorithm (Hinton et al., 1995) is one such method. In the E-step w.r.t. q, rather than minimizing $\mathrm{KL}\big(q(y)\,\|\,p_\theta(y|x^*)\big)$ (Equation 2.9) as in EM and variational EM, the wake-sleep algorithm makes an approximation by minimizing the Kullback–Leibler (KL) divergence in the opposite direction:
\[
\min_{q}\; \mathrm{KL}\big(p_\theta(y|x^*)\,\|\,q(y|x^*)\big),
\tag{2.13}
\]
which can be optimized efficiently with gradient descent when q is parameterized. Besides wake-sleep, one can also use other methods for low-variance gradient estimation in Equation 2.12, such as reparameterization gradient (Kingma & Welling, 2014) and score gradient (Glynn, 1990; Mnih & Gregor, 2014; Ranganath et al., 2014).
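The variance issue motivating wake-sleep and the estimators cited above can be seen numerically. The sketch below (ours) compares the score-function (REINFORCE) gradient and the reparameterization gradient of a Gaussian expectation whose true derivative is known in closed form; the integrand and parameter values are assumptions chosen so the answer can be checked by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, n = 1.5, 100000
eps = rng.normal(size=n)
y = mu + eps                          # samples from q_mu = N(mu, 1)

# Goal: estimate d/dmu E_{q_mu}[y^2], whose true value is 2 * mu.
# Score-function estimator: E[f(y) * d/dmu log q_mu(y)], with d/dmu log q_mu(y) = (y - mu).
score_grad = (y ** 2) * (y - mu)
# Reparameterization estimator: y = mu + eps, so d/dmu f(y) = 2 * y.
reparam_grad = 2 * y

print("true    :", 2 * mu)
print("score   :", score_grad.mean(), "std", score_grad.std())
print("reparam :", reparam_grad.mean(), "std", reparam_grad.std())  # far lower variance
```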
In sum, the entropy maximization perspective has formulated unsupervised MLE as an optimization-theoretic framework that permits simple alternating minimization solvers. Starting from the upper bound of the negative marginal log-likelihood (Equation 2.9) with maximum entropy and minimum cross entropy, the originally intractable MLE problem gets simplified, and a series of optimization algorithms, ranging from (variational) EM to wake-sleep, arise naturally as approximations to the original solution.
2.2. Bayesian Inference
Now we revisit another classical learning framework, Bayesian inference, and examine its intriguing connections with the maximum entropy principle. Interestingly, the maximum entropy principle can also help to reformulate Bayesian inference as a constrained optimization problem, as for MLE.
Different from MLE, the Bayesian approach to statistical inference treats the hypotheses (parameters θ) to be inferred as random variables. Assuming a prior distribution π(θ) over the parameters, and considering a probabilistic model that defines a conditional distribution p(x∣θ), the inference is based on Bayes’ theorem:
\[
p(\theta|\mathcal{D}) = \frac{\pi(\theta) \prod_{x^* \in \mathcal{D}} p(x^*|\theta)}{p(\mathcal{D})},
\tag{2.14}
\]
where p(θ∣D) is the posterior distribution after observing the data D (which we assume are i.i.d.); and $p(\mathcal{D}) = \int_{\theta} \pi(\theta) \prod_{x^*} p(x^*|\theta)\, d\theta$ is the marginal likelihood.
Interestingly, the early work by Zellner (1988) showed the relations between Bayesian inference and maximum entropy, by reformulating the statistical inference problem from the perspective of information processing, and rediscovering Bayes’ theorem as the optimal information processing rule. More specifically, statistical inference can be seen as a procedure of information processing, where the system receives input information in the form of prior knowledge and data, and emits output information in the form of parameter estimates and others. An efficient inference procedure should generate an output distribution such that the system retains all input information and does not inject any extraneous information. The learning objective is thus to minimize the difference between the input and output information w.r.t. the output distribution:
\[
\min_{q(\theta) \in \mathcal{P}(\Theta)}\; \mathbb{E}_{q(\theta)}\left[\log q(\theta)\right] + \log p(\mathcal{D}) - \mathbb{E}_{q(\theta)}\Big[\log \pi(\theta) + \sum_{x^* \in \mathcal{D}} \log p(x^*|\theta)\Big],
\tag{2.15}
\]
where the first two terms measure the output information in the output distribution q(θ) and marginal p(D), and the third term measures the input information in the prior π(θ) and data likelihood p(x∗∣θ). Here P(Θ) is the space of all probability distributions over θ.
The optimal solution of q(θ) is precisely the posterior distribution p(θ∣D) due to Bayes’ theorem (Equation 2.14). The proof is straightforward by noticing that the objective can be rewritten as $\min_{q} \mathrm{KL}\big(q(\theta)\,\|\,p(\theta|\mathcal{D})\big)$.
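A quick numerical check of this optimization view (ours, on a discretized Bernoulli-parameter example with an assumed uniform prior) verifies that the exact Bayes posterior attains a lower value of the information-processing objective than an alternative distribution:

```python
import numpy as np

# Numerical check (ours): on a discretized parameter grid, the q(theta) minimizing the
# information-processing objective (equivalently KL(q || p(theta|D))) is the Bayes posterior.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=20)        # toy coin flips x* in {0, 1}
theta = np.linspace(0.001, 0.999, 999)      # grid over the Bernoulli parameter

log_prior = np.zeros_like(theta)            # assumed uniform prior pi(theta) on the grid
log_lik = data.sum() * np.log(theta) + (len(data) - data.sum()) * np.log(1 - theta)

# Exact posterior by Bayes' theorem, normalized on the grid.
post = np.exp(log_prior + log_lik - (log_prior + log_lik).max())
post /= post.sum()

def objective(q):
    # E_q[log q] - E_q[log pi(theta) + sum_x log p(x*|theta)]; log p(D) dropped (constant in q).
    return np.sum(q * (np.log(q + 1e-300) - log_prior - log_lik))

uniform = np.full_like(theta, 1.0 / len(theta))
print(objective(post) < objective(uniform))  # True: the posterior is the minimizer
```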
Similar to the case of duality between MLE and maximum entropy (Equation 2.4), the same entropy maximization principle can cast Bayesian inference as a constrained optimization problem. As Jaynes (1988) commented, this fresh interpretation of Bayes’ theorem “could make the use of Bayesian methods more attractive and widespread, and stimulate new developments in the general theory of inference” (Jaynes, 1988, p. 280). The next subsection reviews how entropy maximization as a “useful tool in generating probability distributions” (Jaynes, 1988, p.280) has related to and resulted in more general learning and inference frameworks, such as posterior regularization.
2.3. Posterior Regularization
The optimization-based formulation of Bayesian inference in Equation 2.15 offers important additional flexibility in learning by allowing rich constraints on machine learning models to be imposed to regularize the outcome. For example, in Equation 2.15 we have seen the standard normality constraint of a probability distribution being imposed on the posterior q. It is natural to consider other types of constraints that encode richer problem structures and domain knowledge, which can regularize the model to learn desired behaviors.
The idea has led to posterior regularization (Ganchev et al., 2010) or regularized Bayes (Reg-Bayes; Zhu et al., 2014), which augments the Bayesian inference objective with additional constraints:
\[
\min_{q,\, \xi}\; \mathbb{E}_{q(\theta)}\left[\log q(\theta)\right] - \mathbb{E}_{q(\theta)}\Big[\log \pi(\theta) + \sum_{x^* \in \mathcal{D}} \log p(x^*|\theta)\Big] + U(\xi) \quad \text{s.t.} \quad q(\theta) \in \mathcal{Q}(\xi),
\tag{2.16}
\]
where we have rearranged the terms and dropped any constant factors in Equation 2.15, and added constraints with ξ being a vector of slack variables, U(ξ) a penalty function (e.g., ℓ1 norm of ξ), and Q(ξ) a subset of valid distributions over θ that satisfy the constraints determined by ξ. The optimization problem is generally easy to solve when the penalty/constraints are convex and defined w.r.t. a linear operator (e.g., expectation) of the posterior q. For example, let T(x∗;θ) be a feature vector of data instance x∗∈D, the constraint posterior set Q can be defined as:
\[
\mathcal{Q}(\xi) := \left\{ q(\theta) : \mathbb{E}_{q}\left[T(x^*; \theta)\right] \le \xi, \;\; \forall x^* \in \mathcal{D} \right\},
\tag{2.17}
\]
which bounds the feature expectations with ξ.
Max-margin constraints are another type of expectation constraint that has been shown to be widely effective in classification and regression (Vapnik, 1998). The maximum entropy discrimination (MED) of Jaakkola et al. (2000) regularizes linear regression models with max-margin constraints, which was later generalized to more complex models p(x∣θ), such as Markov networks (Taskar et al., 2004) and latent variable models (Zhu et al., 2014). Formally, let $y^* \in \mathbb{R}$ be the observed label associated with x∗. The margin-based constraint says that a classification/regression function h(x;θ) should make at most ϵ deviation from the true label y∗. Specifically, consider the common choice of the function h as a linear function: $h(x; \theta) = \theta^\top T(x)$, where T(x) is, with a slight abuse of notation, the feature of instance x. The constraint is written as:
\[
\mathcal{Q}(\xi) := \left\{ q(\theta) : \left| \, y^* - \mathbb{E}_{q}\left[\theta^\top T(x^*)\right] \right| \le \epsilon + \xi_{x^*}, \;\; \forall x^* \in \mathcal{D} \right\}.
\tag{2.18}
\]
Alternating optimization for posterior regularization. Having seen EM-style alternating minimization algorithms being applied as a general solver for a number of optimization-theoretic frameworks described above, it is not surprising that the posterior regularization framework can also be solved with an alternating minimization procedure. For example, consider the simple case of linear constraint in Equation 2.17, penalty function U(ξ)=∥ξ∥1, and q factorizing across θ={θc}. At each iteration n, the solution of q(θc) is given as (Ganchev et al., 2010):
where $\theta_{\setminus c}$ denotes all components of θ except $\theta_c$, and Z is the normalization factor. Intuitively, a configuration of $\theta_c$ with a higher expected constraint value $\mathbb{E}_{q(\theta_{\setminus c})}\left[T(x^*; \theta)\right]$ will receive a higher probability under $q^{(n+1)}(\theta_c)$. The optimization procedure iterates over all components c of θ.
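In practice, the constrained problem is often handled through a penalty (dual) form in which a fixed multiplier λ on the expected constraint feature yields an exponentiated-penalty posterior q(θ) ∝ p(θ|D) exp{−λ φ(θ)}. The grid-based sketch below (ours, not the article's derivation) illustrates the effect with an assumed scalar parameter, Gaussian likelihood, one-sided constraint feature, and a hand-picked λ:

```python
import numpy as np

# Illustrative sketch (ours): posterior regularization on a 1-D grid. A penalty lam on the
# expected constraint feature E_q[phi(theta)] yields the exponentiated-penalty posterior
# q(theta) ∝ p(theta|D) exp{-lam * phi(theta)}; all specifics below are assumptions.
rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=30)              # toy observations
theta = np.linspace(-3.0, 5.0, 1601)              # grid over a scalar parameter

log_prior = -0.5 * theta ** 2                     # N(0, 1) prior
log_lik = -0.5 * ((data[:, None] - theta) ** 2).sum(axis=0)

def normalize(logp):
    p = np.exp(logp - logp.max())
    return p / p.sum()

bayes_post = normalize(log_prior + log_lik)

# Assumed constraint: encourage E_q[theta] <= 0.5 via phi(theta) = max(theta - 0.5, 0).
lam = 20.0
reg_post = normalize(log_prior + log_lik - lam * np.maximum(theta - 0.5, 0.0))

print("E_bayes[theta] =", float((bayes_post * theta).sum()))
print("E_reg  [theta] =", float((reg_post * theta).sum()))   # pulled toward the constraint
```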
2.4. Summary
In this section, we have seen that the maximum entropy formalism provides an alternative insight into the classical learning frameworks of MLE, Bayesian inference, and posterior regularization. It provides a general expression of these three paradigms as a constrained optimization problem, with a paradigm-specific loss on the model parameters θ and an auxiliary distribution q, over a properly designed constraint space Q where q must reside:
\[
\min_{q,\, \theta}\; \mathcal{L}(q, \theta) \quad \text{s.t.} \quad q \in \mathcal{Q}.
\tag{2.20}
\]
In particular, the use of the auxiliary distribution q converts the originally highly complex problem of directly optimizing θ against data, to an alternating optimization problem over q and θ, which is algorithmically easier to solve since q often acts as an easy-to-optimize proxy to the target model. The auxiliary q can also be more flexibly updated to absorb influence from data or constraints, offering a teacher-student–style iterative mechanism to incrementally update θ as we will see in the sequel.
By reformulating learning as a constrained optimization problem, the maximum entropy point of view also offers a great source of flexibility for applying many powerful tools for efficient approximation and enhanced learning, such as variational approximation (e.g., by relaxing Q to an easy-to-infer family of q such as the mean-field family; Jordan et al., 1999; Xing et al., 2002), convex duality (e.g., facilitating dual sparsity of support vectors via the complementary slackness in the KKT conditions), and kernel methods as used in (Taskar et al., 2004; Zhu & Xing, 2009).
It is intriguing that, in the dual point of view on the problem of (supervised) MLE, data instances are encoded as constraints (Equation 2.4), much like the structured constraints in posterior regularization. In the following sections, we present the standardized formalism of machine learning algorithms and show that indeed myriad types of experience besides data instances and constraints can all be encoded in the same generic form and be used in learning.
3. A Standard Model for Objective Function
Generalizing from Equation 2.16, we present the following general formulation for learning a target model via a constrained loss minimization program. We refer to the formulation as the ‘Standard Equation’ (SE) because it presents a general space of learning objectives that encompasses many specific formalisms used in different machine learning paradigms.
Without loss of generality, let t∈T be the variable of interest, for example, the input-output pair t=(x,y) in a prediction task, or the target variable t=x in generative modeling. Let pθ(t) be the target model with parameters θ to be learned. Generally, the SE is agnostic to the specific forms of the target model, meaning that the target model can take an arbitrary form as desired by the problem at hand (e.g., classification, regression, generation, control) and can be of arbitrary types ranging from deep neural networks of arbitrary architectures, prompts for pretrained models, symbolic systems (e.g., knowledge graph), probabilistic graphical models of arbitrary dependence structures, and so on. We discuss more details of the different choices of the target model in Section 8.
Let q(t) be an auxiliary distribution. The SE is written as:
\[
\begin{aligned}
\min_{q,\, \theta,\, \xi}\;\; & -\alpha H(q) + \beta D(q, p_\theta) + U(\xi) \\
\text{s.t.}\;\; & -\mathbb{E}_{q}\big[f_k^{(\theta)}(t)\big] \le \xi_k, \quad k = 1, \dots, K.
\end{aligned}
\tag{3.1}
\]
The SE contains three major terms that constitute a learning formalism: the uncertainty function H(⋅) that controls the compactness of the output model (e.g., as measured by the amount of allowed randomness while trying to fit experience); the divergence function D(⋅,⋅) that measures the distance between the target model to be trained and the auxiliary model that facilitates a teacher–student mechanism as shown below; and the experience function, which is introduced by a penalty term U(ξ) that draws in the set of ‘experience functions’ $f_k^{(\theta)}$ that represent external experience of various kinds for training the target model. The hyperparameters α,β≥0 enable trade-offs between these components.
Experience function. Perhaps the most powerful component in terms of impacting the learning outcome and utility is the set of experience functions $f_k^{(\theta)}$. An experience function $f^{(\theta)}(t) \in \mathbb{R}$ measures the goodness of a configuration t in light of any given experience. The superscript (θ) highlights that the experience in some settings (e.g., reward experience as in Section 4.3) could depend on or be coupled with the target model parameters θ. In the following, we omit the superscript when there is no ambiguity. As discussed in Section 4, all diverse forms of experience that can be utilized for model training, such as data examples, constraints, logical rules, rewards, and adversarial discriminators, can be encoded as an experience function. The experience function hence provides a unified language to express all exogenous information about the target model, which we consider an essential ingredient for panoramic learning to flexibly incorporate diverse experience in learning. Based on this uniform treatment of experience, a standardized optimization program as above can be formulated to identify the desired model. Specifically, the experience functions contribute to the optimization objective via the penalty term U(ξ) over slack variables $\xi \in \mathbb{R}^K$ applied to the expectations $\mathbb{E}_q[f_k]$. The effect of maximizing the expectation is such that the auxiliary model q is encouraged to produce samples of high quality in light of the experience (i.e., samples receiving high scores as evaluated by the experience function).
Divergence function. The divergence function D(q, pθ) measures the ‘quality’ of the target model pθ in terms of its distance (divergence) from the auxiliary model q. Intuitively, we want to minimize the distance from pθ to q, which is optimized to fit the experience as above. Section 5 gives a concrete example of how the divergence term would directly impact the model training: with a certain specification of the different components (e.g., the experience function, α/β), the SE in Equation 3.1 would reduce to $\min_\theta D(p_d, p_\theta)$. That is, the learning objective is to minimize the divergence between the target model distribution pθ and the data distribution pd, and the divergence function D(⋅,⋅) determines the specific optimization problem. The divergence function can have a variety of choices, ranging from the family of f-divergences (e.g., KL divergence) and Bregman divergences to optimal transport distances (e.g., Wasserstein distance), and so on. We discuss the divergence term in Section 5 in more detail.
Uncertainty function. The uncertainty function H(q) describes the uncertainty of the auxiliary distribution q and thus controls the complexity of the learning system. It conforms with the maximum entropy principle discussed in Section 2 that one should pick the most uncertain solution among those that fit all experience. Like other components in SE, the uncertainty measure H(⋅) can take different forms, such as the popular Shannon entropy, as well as other generalized ones such as Tsallis entropy. In this article, we assume Shannon entropy by default.
For the discussion in the following sections, it is often convenient to consider a special case of the SE in Equation 3.1. Specifically, we assume a common choice of the penalty U(ξ)=∑kξk, and, with a slight abuse of notations, f=∑kfk. In this case, the SE in Equation 3.1 can equivalently be written in an unconstrained form:
\[
\min_{q,\, \theta}\; -\alpha H(q) + \beta D(q, p_\theta) - \mathbb{E}_{q}\left[f\right],
\tag{3.2}
\]
which can be easily seen by optimizing Equation 3.1 over ξ. In the special unconstrained form, the interplay between the exogenous experience, the divergence, and the endogenous uncertainty becomes more explicit.
Optimization: Teacher-student mechanism. The introduction of the auxiliary distribution q relaxes the learning problem of pθ, originally only over θ, to be now alternating between q and θ. Here q acts as a conduit between the exogenous experience and the target model: it on the one hand subsumes the experience (by maximizing the expected f value), and on the other hand passes it incrementally to the target model (by minimizing the divergence D). The following fixed point iteration between q and θ illustrates this optimization strategy under the SE. Let us plug into Equation 3.2 the popular cross entropy (CE) as the divergence function, that is, $D(q, p_\theta) = -\mathbb{E}_q[\log p_\theta]$, and Shannon entropy as the uncertainty measure, that is, $H(q) = -\mathbb{E}_q[\log q]$. We further assume the experience f is independent of the model parameters θ (the assumption is indeed not necessary for the teacher step). We have, at iteration n:
\[
\begin{aligned}
\text{Teacher:}\quad & q^{(n+1)}(t) = \exp\left\{ \frac{\beta \log p_{\theta^{(n)}}(t) + f(t)}{\alpha} \right\} \Big/ Z, \\
\text{Student:}\quad & \theta^{(n+1)} = \operatorname*{arg\,max}_{\theta}\; \mathbb{E}_{q^{(n+1)}(t)}\left[\log p_\theta(t)\right],
\end{aligned}
\tag{3.3}
\]
where Z is the normalization factor. The first step embodies a ‘teacher’s update’ where the teacher q ingests experience f and builds on current states of the student pθ(n); the second step is reminiscent of a ‘student’s update’ where the student pθ updates its states by maximizing its alignment (here measured by CE) with the teacher.
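The following sketch (ours, not from the article) runs these teacher-student iterations on a small discrete space with α=β=1, cross entropy as D, Shannon entropy as H, and an assumed experience function f; the target model is a simple softmax over configurations.

```python
import numpy as np

# Minimal sketch (ours): the teacher-student iterations of Equation 3.3 on a small discrete
# space, with alpha = beta = 1 and an assumed experience function f scoring five configurations.
f = np.array([0.0, 1.0, 3.0, 1.0, 0.0])        # assumed experience scores f(t)
logits = np.zeros(5)                           # target model p_theta(t) = softmax(logits)

for _ in range(200):
    p_theta = np.exp(logits - logits.max())
    p_theta /= p_theta.sum()

    # Teacher step: q(t) ∝ p_theta(t) * exp{f(t)}   (alpha = beta = 1)
    q = p_theta * np.exp(f)
    q /= q.sum()

    # Student step: one gradient step on max_theta E_q[log p_theta(t)];
    # for a softmax model the gradient w.r.t. the logits is q - p_theta.
    logits += 0.5 * (q - p_theta)

print(np.round(p_theta, 3))                    # mass concentrates on high-f configurations
```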
Besides, the auxiliary q is an easy-to-manipulate intermediate form in the training that permits rich approximate inference tools for tractable optimization. We have the flexibility of choosing its surrogate functions, ranging from the principled variational approximations for the target distribution in a properly relaxed space (e.g., mean fields) where gaps and bounds can be characterized, to the arbitrary neural network-based ‘inference networks’ that are highly expressive and easy to compute. As can be easily shown (e.g., see Section 4.1.3), popular training heuristics, such as EM, variational EM, wake-sleep, forward and backward propagation, and so on, are all direct instantiations or variants of the above teacher-student mechanism with different choices of the form of q.
More generally, a broad set of sophisticated algorithms, such as the policy gradient for reinforcement learning and the generative adversarial learning, can also be easily derived by plugging in specific designs of the experience function f and divergence D. Table 1 summarizes various specifications of the SE components that recover a range of existing well-known algorithms from different paradigms. As shown in more detail in the subsequent sections, the standard equation (Equations 3.1 and 3.2) offers a unified and universal paradigm for model training under many scenarios based on many types of experience, potentiating a turnkey implementation and a more generalizable theoretical characterization.
Table 1. Example configurations of the components in the standard equation (Equations 3.1 and 3.2), which recover different existing algorithms. Here, ‘CE’ means Cross Entropy; ‘JSD’ is the Jensen-Shannon divergence; ‘W1 dist.’ is the first-order Wasserstein distance; and ‘KL’ is the KL divergence. Refer to Sections 4, 5, and 6 for more details.

| Experience type | Experience function f | Divergence D | α | β | Algorithm |
|---|---|---|---|---|---|
| Data instances | fdata(x; D) | CE | 1 | 1 | Unsupervised MLE |
| | fdata(x, y; D) | CE | 1 | ϵ | Supervised MLE |
| | fdata-self(x, y; D) | CE | 1 | ϵ | Self-supervised MLE |
| | fdata-w(t; D) | CE | 1 | ϵ | Data Re-weighting |
| | fdata-aug(t; D) | CE | 1 | ϵ | Data Augmentation |
| | factive(x, y; D) | CE | 1 | ϵ | Active Learning (Ertekin et al., 2007) |
| Knowledge | frule(x, y) | CE | 1 | 1 | Posterior Regularization (Ganchev et al., 2010) |
| | frule(x, y) | CE | ℝ | 1 | Unified EM (Samdani et al., 2012) |
| Reward | log Qθ(x, y) | CE | 1 | 1 | Policy Gradient |
| | log (Qθ(x, y) + Qin,θ(x, y)) | CE | 1 | 1 | + Intrinsic Reward |
| | Qθ(x, y) | CE | ρ > 0 | ρ > 0 | RL as Inference |
| Model | fmodel-mimicking(x, y; D) | CE | 1 | ϵ | Knowledge Distillation (Hinton et al., 2015) |
| Variational | binary classifier | JSD | 0 | 1 | Vanilla GAN (Goodfellow et al., 2014) |
| | discriminator | f-divergence | 0 | 1 | f-GAN (Nowozin et al., 2016) |
| | 1-Lipschitz discriminator | W1 distance | 0 | 1 | WGAN (Arjovsky et al., 2017) |
| | 1-Lipschitz discriminator | KL | 0 | 1 | PPO-GAN (Wu et al., 2020) |
| Online | fτ(t) | CE | ρ > 0 | ρ > 0 | Multiplicative Weights (Freund & Schapire, 1997) |
4. Experience Function
The experience function f(t) in the standard equation can be instantiated to encode vastly distinct types of experience. Different choices of f(t) result in learning algorithms applied to different problems. With particular choices, the standard equation rediscovers a wide array of well-known algorithms. The resulting common treatment of the previously disparate algorithms is appealing as it offers new holistic insights into the commonalities and differences of those algorithms. Table 1 shows examples of extant algorithms that are recovered by the standard equation.
4.1. Data Instance Experience
We first consider the most common type of experience, namely, data instances, which are assumed to be independent and identically distributed (i.i.d.). Such data instance experience can appear in a wide range of contexts, including supervised, self-supervised, unsupervised, actively supervised, and other scenarios with data augmentation and manipulation. Figure 2 illustrates the experience functions based on the data instances.
4.1.1. Supervised Data Instances
Without loss of generality and for consistency of notations with the rest of the section, we consider data instances to consist of a pair of input-output variables, namely t=(x,y). For example, in image classification, x represents the input image and y is the object label. In the supervised setting, we observe the full data drawn i.i.d. from the data distribution, $(x^*, y^*) \sim p_d(x, y)$. For an arbitrary configuration $(x_0, y_0)$, its probability $p_d(x_0, y_0)$ under the data distribution can be seen as measuring the expected similarity between $(x_0, y_0)$ and true data samples $(x^*, y^*)$, and be written as $p_d(x_0, y_0) = \mathbb{E}_{p_d(x^*, y^*)}\left[\mathbb{I}_{(x^*, y^*)}(x_0, y_0)\right]$. Here the similarity measure is $\mathbb{I}_{(x^*, y^*)}(x, y)$, an indicator function that takes the value 1 if (x,y) equals (x∗,y∗) and 0 otherwise (we will see other similarity measures shortly). In practice, we are given an empirical distribution $\tilde{p}_d(x, y)$ by observing a collection of instances D on which the expected similarity is evaluated:
\[
\mathbb{E}_{(x^*, y^*) \sim \mathcal{D}}\left[\mathbb{I}_{(x^*, y^*)}(x, y)\right] = \frac{m(x, y)}{N},
\tag{4.1}
\]
where N is the size of the data set D, and m(x,y) is the number of occurrences of the configuration (x,y) in D.
The experience function f accommodates the data instance experience straightforwardly as below:
\[
f := f_{\text{data}}(x, y; \mathcal{D}) = \log \mathbb{E}_{(x^*, y^*) \sim \mathcal{D}}\left[\mathbb{I}_{(x^*, y^*)}(x, y)\right].
\tag{4.2}
\]
Figure 2 (a)–(b) shows an illustration. In particular, the logarithm of the expected similarity is used as the experience function score, that is, the more ‘similar’ a configuration (x,y) is to the observed data instances, the higher its quality. The logarithm serves to make the subsequent derivations more convenient as can be seen below.
With this form of f, we show that the SE derives the conventional supervised MLE algorithm.
Supervised MLE. In the SE Equation 3.2 (with cross entropy and Shannon entropy), we set α=1, and β to a very small positive value ϵ. As a result, the auxiliary distribution q(x,y) is determined directly by the full data instances (not the model pθ). That is, the solution of q in the teacher-step (Equation 3.3) is:
\[
q(x, y) = \exp\left\{ f_{\text{data}}(x, y; \mathcal{D}) \right\} / Z,
\tag{4.3}
\]
which reduces to the empirical distribution. The subsequent student-step that maximizes the log-likelihood of samples from q then leads to the supervised MLE updates w.r.t. θ.
4.1.2. Self-Supervised Data Instances
Given an observed data instance t∗∈D in general, one could potentially derive various supervision signals based on the structures of the data and the target model. In particular, one could apply a “split” function that artificially partitions t∗ into two parts (x∗,y∗)=split(t∗) in different, sometimes stochastic ways. Then the two parts are treated as the input and output for the properly designed target model pθ(x,y) for supervised MLE as above, by plugging in the slightly altered experience function:
\[
f := f_{\text{data-self}}(x, y; \mathcal{D}) = \log \mathbb{E}_{t^* \sim \mathcal{D}}\left[\mathbb{I}_{\mathrm{split}(t^*)}(x, y)\right].
\tag{4.4}
\]
A key difference from the above standard supervised learning setting is that now the target variable y is not a costly obtained label or annotation, but rather part of the massively available data instances. The paradigm of treating part of an observed instance as the prediction target is called ‘self-supervised’ learning (LeCun & Misra, 2021) and has achieved great success in language and vision modeling. For example, in language modeling (Brown et al., 2020; Devlin et al., 2019), the instance t is a piece of text, and the ‘split’ function usually selects from t one or a few words to be the target y and the remaining words to be x.
4.1.3. Unsupervised Data Instances
In the unsupervised setting, for each instance t=(x,y), such as (image, cluster index), we only observe the x part. That is, we are given a data set D={x∗} without the associated y∗. The data set defines the empirical distribution p~d(x). The experience can be encoded in the same form as the supervised data (Equation 4.2) but now with only the information of x∗:
\[
f := f_{\text{data}}(x; \mathcal{D}) = \log \mathbb{E}_{x^* \sim \mathcal{D}}\left[\mathbb{I}_{x^*}(x)\right].
\tag{4.5}
\]
Applying the SE to this setting with proper specifications derives the unsupervised MLE algorithm.
Unsupervised MLE. The form of Equation 3.2 is reminiscent of the variational free energy objective in the standard EM for unsupervised MLE (Equation 2.9). We can indeed get an exact correspondence by setting α=β=1, and setting the auxiliary distribution $q(x, y) = \tilde{p}_d(x)\, q(y|x)$. The reason for β=1, which differs from the specification β=ϵ in the supervised setting, is that the auxiliary distribution q cannot be determined fully by the unsupervised ‘incomplete’ data experience alone. Instead, it additionally relies on pθ through the divergence term. Here q is assumed to have the specialized decomposition $q(x, y) = \tilde{p}_d(x)\, q(y|x)$, where $\tilde{p}_d(x)$ is fixed and thus not influenced by pθ. In contrast, if no structure of q is assumed, we could potentially obtain an extended, instance-weighted version of EM where each instance x∗ is weighted by the marginal likelihood pθ(x∗), in line with the previous weighted EM methods for robust clustering (Gebru et al., 2016; Yu et al., 2011).
4.1.4. Manipulated Data Instances
Data manipulation, such as reweighting data instances or augmenting an existing data set with new instances, is often a crucial step for efficient learning, such as in a low data regime or in presence of low-quality data sets (e.g., imbalanced labels). We show that the rich data manipulation schemes can be treated as experience and be naturally encoded in the experience function (Hu, Tan et al., 2019). This is done by extending the data-instance experience function (Equation 4.2), in particular by enriching the similarity metric in different ways. The discussion here generally applies to data instance t of any structures, for example, t=(x,y) or t=x.
Data reweighting. Rather than assuming the same importance of all data instances, we can associate each instance t∗ with an importance weight w(t∗)∈R, so that the learning pays more attention to those high-quality instances, while low-quality ones (e.g., with noisy labels) are downplayed. This can be done by scaling the above 0/1 indicator function (e.g., Equation 4.2) with the weight (Figure 2[c]):
\[
f := f_{\text{data-w}}(t; \mathcal{D}) = \log \mathbb{E}_{t^* \sim \mathcal{D}}\left[w(t^*) \cdot \mathbb{I}_{t^*}(t)\right].
\tag{4.6}
\]
Plugging fdata-w into the SE (Equation 3.2) with the same other specification of supervised MLE (α=1,β=ϵ), we get the update rule of model parameters θ in the student-step (Equation 3.3):
\[
\max_{\theta}\; \mathbb{E}_{t^* \sim \mathcal{D}}\left[w(t^*) \cdot \log p_\theta(t^*)\right],
\tag{4.7}
\]
which is the familiar weighted supervised MLE. The weights w can be specified a priori based on heuristics, for example, using inverse class frequency. In many cases it is desirable to automatically induce and adapt the weights during the course of model training. In Section 9.2, we discuss how the SE framework can easily enable automated data reweighting by reusing existing algorithms that were designed to solve other seemingly unrelated problems.
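As a concrete illustration (ours), the weighted student-step of Equation 4.7 for a categorical target model reduces to gradient ascent on a weighted log-likelihood; the toy instances and weights below are assumptions.

```python
import numpy as np

# Minimal sketch (ours): the weighted MLE update of Equation 4.7 for a categorical model
# p_theta(t) = softmax(logits) over three values, with assumed per-instance weights w(t*).
data = np.array([0, 0, 1, 2, 2, 2])                    # toy instances t*
weights = np.array([1.0, 1.0, 0.2, 1.0, 1.0, 1.0])     # e.g., downweight a noisy instance

logits = np.zeros(3)
for _ in range(1000):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    counts = np.bincount(data, weights=weights, minlength=3)
    # Gradient direction of E_{t*~D}[w(t*) log p_theta(t*)] w.r.t. the logits.
    logits += 0.1 * (counts / weights.sum() - p)

print(np.round(p, 3))   # approaches the weight-normalized empirical frequencies
```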
Data augmentation. Data augmentation expands existing data by adding synthetically modified copies of existing data instances (e.g., by rotating an existing image at random angles), and is widely used for increasing data size or encouraging invariance in learned representations (e.g., object label is invariant to image rotation). The indicator function I as the similarity metric in Equation 4.2 restrictively requires exact match between the true t∗ and the configuration t. Data augmentation arises as a ‘relaxation’ to the similarity metric. Let at∗(t)≥0 be a distribution that assigns non-zero probability to not only the exact t∗ but also other configurations t related to t∗ in certain ways (e.g., all rotated images t of the observed image t∗). Replacing the indicator function metric in Equation 4.2 with the new at∗(t)≥0 yields the experience function for data augmentation (Figure 2[d]):
\[
f := f_{\text{data-aug}}(t; \mathcal{D}) = \log \mathbb{E}_{t^* \sim \mathcal{D}}\left[a_{t^*}(t)\right].
\tag{4.8}
\]
The resulting student-step updates of θ, keeping (α=1,β=ϵ) of supervised MLE, is thus:
\[
\max_{\theta}\; \mathbb{E}_{t^* \sim \mathcal{D},\, t \sim a_{t^*}(t)}\left[\log p_\theta(t)\right].
\tag{4.9}
\]
The metric at∗(t) can be defined in various ways, leading to different augmentation strategies. For example, setting at∗(t)∝exp{R(t,t∗)}, where R(t,t∗) is a task-specific evaluation metric such as BLEU for machine translation, results in the reward-augmented maximum likelihood (RAML) algorithm (Norouzi et al., 2016). Besides the manually designed strategies, we can also specify at∗(t) as a parameterized transformation process and learn any free parameters thereof automatically (Section 6). Notice the same form of the augmentation experience fdata-aug and the reweighting experience fdata-w, where the similarity metrics both include learnable components (i.e., at∗(t) and w(t∗), respectively). Thus the same approach to automated data reweighting can also be applied for automated data augmentation, as discussed more in Section 9.2.
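The sketch below (ours) illustrates the RAML-style instantiation of Equations 4.8 and 4.9: augmented targets are sampled with probability proportional to exp{R(t, t*)/τ}, where the metric R (negative Hamming distance), the candidate pool, and the temperature τ are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch (ours): RAML-style augmentation for Equations 4.8 and 4.9. For an observed
# target t*, augmented targets t are sampled with a_{t*}(t) ∝ exp{R(t, t*)/tau}, where R is
# an assumed task metric (negative Hamming distance) and tau an assumed temperature.
rng = np.random.default_rng(0)
vocab, length, tau = 5, 4, 0.5
t_star = np.array([1, 2, 3, 4])                       # an observed target sequence

# Candidate pool: t* itself plus all single-token edits of t*.
candidates = [t_star.copy()]
for pos in range(length):
    for tok in range(vocab):
        if tok != t_star[pos]:
            c = t_star.copy()
            c[pos] = tok
            candidates.append(c)
candidates = np.stack(candidates)

R = -(candidates != t_star).sum(axis=1)               # R(t, t*): negative Hamming distance
a = np.exp(R / tau)
a /= a.sum()                                          # a_{t*}(t) ∝ exp{R(t, t*)/tau}

augmented = candidates[rng.choice(len(candidates), size=5, p=a)]
print(augmented)   # these samples would feed the MLE update of Equation 4.9
```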
4.1.5. Actively Supervised Data Instances
Instead of access to data instances x∗ with readily available labels y∗, in the active supervision setting, we are presented with a large pool of unlabeled instances D={x∗} as well as a certain budget for querying an oracle (e.g., human annotators) for labeling a limited set of instances. To minimize the need for labeled instances, we need to strategically select queries from the pool according to an informativeness measure u(x)∈R. For example, u(x) can be the predictive uncertainty on the instance x, quantified by the Shannon entropy of the predictive distribution or the vote entropy based on a committee of predictors (Dagan & Engelson, 1995).
Mapping the standard equation to this setting, we show the informativeness measure u(x) is subsumed as part of the experience. Intuitively, u(x) encodes our heuristic belief about sample ‘informativeness’. This heuristic is a form of information we inject into the learning system. Denote the oracle as o, from which we can draw a label y∗∼o(x∗). The active supervision experience function is then defined as:
\[
f := f_{\text{active}}(x, y; \mathcal{D}) = \log \mathbb{E}_{x^* \sim \mathcal{D},\, y^* \sim o(x^*)}\left[\mathbb{I}_{(x^*, y^*)}(x, y)\right] + \lambda\, u(x),
\tag{4.10}
\]
where the first term is essentially the same as the supervised data experience function (Equation 4.2) with the only difference that now the label y∗ is from the oracle rather than pre-given in D; λ>0 is a trade-off parameter. The formulation of the active supervision is interesting as it is simply a combination of the common supervision experience and the informativeness measure in an additive manner.
We plug factive into the SE and obtain the algorithm to carry out learning. The result turns out to recover classical active learning algorithms.
Active learning. Specifically, in Equation 3.2, setting f=factive, and (α=1,β=ϵ) as in supervised MLE, the resulting student-step in Equation 3.3 for updating θ is written as:
\[
\max_{\theta}\; \mathbb{E}_{x^* \sim \mathcal{D},\, y^* \sim o(x^*)}\left[\exp\{\lambda\, u(x^*)\} \cdot \log p_\theta(x^*, y^*)\right].
\tag{4.11}
\]
If the pool D is large, the update can be carried out by the following procedure: we first pick a random subset Dsub from D, and select a sample from Dsub according to the informativeness distribution proportional to exp{λu(x)} over Dsub. The sample is then labeled by the oracle, which is finally used to update the target model. By setting λ to a very large value (i.e., a near-zero ‘temperature’ 1/λ), we tend to select the most informative sample from Dsub. The procedure rediscovers the algorithm proposed in (Ertekin et al., 2007) and more generally the pooling-based active learning algorithms (Settles, 2012).
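A minimal sketch (ours) of this pool-based selection step, with predictive entropy as the assumed informativeness measure u(x) and a stand-in for the current model:

```python
import numpy as np

# Minimal sketch (ours) of the pool-based selection step: draw a random subset of the
# unlabeled pool, score candidates with an informativeness measure u(x) (here the predictive
# entropy of a stand-in model), and sample a query with probability ∝ exp{lam * u(x)}.
rng = np.random.default_rng(0)

def predictive_probs(x_batch):
    # Stand-in for the current model p_theta(y|x) over 3 classes (assumption for illustration).
    scores = np.abs(np.sin(x_batch @ rng.normal(size=(x_batch.shape[1], 3)))) + 1e-3
    return scores / scores.sum(axis=1, keepdims=True)

pool = rng.normal(size=(1000, 8))                            # unlabeled pool D = {x*}
sub = pool[rng.choice(len(pool), size=64, replace=False)]    # random subset D_sub

p = predictive_probs(sub)
u = -(p * np.log(p)).sum(axis=1)                             # u(x): predictive entropy

lam = 10.0
sel = np.exp(lam * (u - u.max()))
sel /= sel.sum()
query = rng.choice(len(sub), p=sel)                          # query ∝ exp{lam * u(x)}
print("queried index in D_sub:", query, "entropy:", round(float(u[query]), 3))
# The oracle's label y* ~ o(x*) for this instance would then drive a supervised update.
```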
4.2. Knowledge-Based Experience
Many aspects of problem structures and human knowledge are difficult if not impossible to be expressed through individual data instances. Examples include the knowledge of expected feature values, maximum margin structures (Section 2.3), logical rules, and so on. The knowledge generally imposes constraints that we want the target model to satisfy. The experience function in the standard equation is a natural vehicle for incorporating such knowledge constraints in learning. Given a configuration t, the experience function f(t) measures the degree to which the configuration satisfies the constraints.
As an example, we consider first-order logic (FOL) rules, which provide an expressive declarative language to encode complex symbolic knowledge (Hu et al., 2016). More concretely, let frule(t) be an FOL rule w.r.t. the variables t. For flexibility, we use soft logic (Bach et al., 2017) to formulate the rule. Soft logic allows continuous truth values from the interval [0,1] instead of {0,1}, and the Boolean logical operators are redefined as:
\[
A \,\&\, B = \max(A + B - 1,\, 0), \quad A \vee B = \min(A + B,\, 1), \quad A_1 \wedge \cdots \wedge A_N = \frac{1}{N}\sum_{i=1}^{N} A_i, \quad \neg A = 1 - A.
\tag{4.12}
\]
Here & and ∧ are two different approximations to logical conjunction: & is useful as a selection operator (e.g., A&B=B when A=1, and A&B=0 when A=0), while ∧ is an averaging operator. To give a concrete example, consider the problem of sentiment classification, where, given a sentence x, we want to predict its sentiment y ∈ {0 (negative), 1 (positive)}. A challenge for a sentiment classifier is to understand the contrastive sense within a sentence and capture the dominant sentiment precisely. For example, if a sentence is of the structure ‘A-but-B’ with the connective ‘but’, the sentiment of the half sentence after ‘but’ dominates. Let $x_B$ be the half sentence after ‘but’ and $\tilde{y}_B \in [0,1]$ the (soft) sentiment prediction over $x_B$ by the current model; a possible way to express the knowledge as a logical rule frule(x,y) is:
where I(⋅) is an indicator function that takes 1 when its argument is true, and 0 otherwise. Given an instantiation (a.k.a. grounding) of (x,y,y~B), the truth value of frule(x,y) can be evaluated by definitions in Equation 4.12. Intuitively, the frule(x,y) truth value gets closer to 1 when y and y~B are more consistent.
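To illustrate how such a rule is evaluated with the soft-logic operators of Equation 4.12, the sketch below (ours) uses an illustrative consistency rule, not the exact form of Equation 4.13, whose truth value is high when the predicted label y agrees with the model's soft prediction ỹ_B on the clause after 'but':

```python
# Minimal sketch (ours): evaluating an 'A-but-B' consistency rule with the soft-logic
# operators of Eq. 4.12. This is an illustrative construction, not the article's Eq. 4.13.
def soft_and(a, b):          # the '&' selection operator: max(A + B - 1, 0)
    return max(a + b - 1.0, 0.0)

def soft_or(a, b):           # min(A + B, 1)
    return min(a + b, 1.0)

def soft_not(a):             # 1 - A
    return 1.0 - a

def f_rule(has_but_structure, y, y_b):
    """has_but_structure: 0/1; y: predicted label in {0,1}; y_b: soft prediction on clause B."""
    consistency = soft_or(soft_and(y, y_b), soft_and(soft_not(y), soft_not(y_b)))
    # If the sentence lacks the 'A-but-B' structure, treat the rule as vacuously satisfied.
    return consistency if has_but_structure else 1.0

print(f_rule(1, y=1, y_b=0.9))   # 0.9: prediction agrees with the clause after 'but'
print(f_rule(1, y=1, y_b=0.1))   # 0.1: inconsistent, low truth value
print(f_rule(0, y=0, y_b=0.9))   # 1.0: rule does not apply
```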
We then make use of the knowledge-based experience such as frule(t) to drive learning. The standard equation rediscovers classical algorithms for learning with symbolic knowledge.
Posterior regularization and extensions. By setting α=β=1 and f to a constraint function such as frule, the SE with cross entropy naturally leads to a generalized posterior regularization framework (Hu et al., 2016):
\[
\min_{q,\, \theta}\; \mathrm{KL}\big(q(t)\,\|\,p_\theta(t)\big) - \mathbb{E}_{q}\left[f_{\text{rule}}(t)\right],
\tag{4.14}
\]
which extends the conventional Bayesian inference formulation (Section 2.3) by permitting regularization on arbitrary random variables of arbitrary models (e.g., deep neural networks) with complex rule constraints.
The trade-off hyperparameters can also take other values. For example, by allowing arbitrary α∈R, the objective corresponds to the unified expectation maximization (UEM) algorithm (Samdani et al., 2012) that extends the posterior regularization for added flexibility.
4.3. Reward Experience
We now consider a very different learning setting commonly seen in robotic control and other sequential decision-making problems. In this setting, experience is gained by the agent interacting with the external environment and collecting feedback in the form of rewards. Formally, we consider a Markov decision process (MDP) as illustrated in Figure 3, where t=(x,y) is the state-action pair. For example, in playing a video game, the state x is the game screen given by the environment (the game engine) and y can be any game action.
At time t, the environment is in state $x_t$. The agent draws an action $y_t$ according to the policy pθ(y∣x). The state subsequently transitions to $x_{t+1}$ following certain transition dynamics of the environment, and yields a reward $r_t = r(x_t, y_t) \in \mathbb{R}$. The general goal of the agent is to learn the policy pθ(y∣x) to maximize the reward in the long run. There could be different specifications of the goal. In this section we focus on the one where we want to maximize the expected discounted reward starting from a state drawn from an arbitrary state distribution $p_0(x)$, with a discount factor γ∈[0,1] applied to future rewards.
A base concept that plays a central role in characterizing the learning in this setting is the action value function, also known as the Q function, which is the expected discounted future reward of taking action y in state x and continuing with the policy pθ:
\[
Q_\theta(x, y) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, x_0 = x,\, y_0 = y\right],
\tag{4.15}
\]
where the expectation is taken by following the state dynamics induced by the policy (thus the dependence of Qθ on policy parameters θ). We next discuss how Qθ(x,y) can be used to specify the experience function in different ways, which in turn derives various known algorithms in reinforcement learning (RL) (Sutton & Barto, 2018). Note that here we are primarily interested in learning the conditional model (policy) pθ(y∣x). Yet we can still define the joint distribution as pθ(x,y)=pθ(y∣x)p0(x).
4.3.1. Expected Future Reward
The first simple way to use the reward signals as the experience is by defining the experience function as the logarithm of the expected future reward:
\[
f := f^{(\theta)}_{\text{reward},1}(x, y) = \log Q_\theta(x, y),
\tag{4.16}
\]
which leads to the classical policy gradient algorithm (Sutton et al., 2000).
Policy gradient. With α=β=1, we arrive at policy gradient. To see this, consider the teacher-student optimization procedure in Equation 3.3, where the teacher-step yields the q solution:
\[
q^{(n)}(x, y) = p_{\theta^{(n)}}(x, y)\, Q_{\theta^{(n)}}(x, y) \,/\, Z,
\tag{4.17}
\]
and the student-step updates θ with the gradient at θ=θ(n):
Here the first equation is due to the log-derivative trick $g \nabla \log g = \nabla g$; and the second equation is due to the policy gradient theorem (Sutton et al., 2000), where $\mu_\theta(x) = \sum_{t=0}^{\infty} \gamma^t p(x_t = x)$ is the unnormalized discounted state visitation measure. The final form is exactly the policy gradient up to a multiplication factor 1/Z.
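The resulting update can be implemented as a REINFORCE-style step that ascends E[Q ∇θ log pθ(y|x)]. The sketch below (ours) does so on an assumed one-step problem where Monte Carlo returns are unbiased estimates of Qθ:

```python
import numpy as np

# Minimal sketch (ours): a REINFORCE-style policy gradient update, ascending
# E[Q_theta(x, y) * grad log p_theta(y|x)], on an assumed one-step problem where the
# return equals the immediate reward (so Monte Carlo returns are unbiased Q estimates).
rng = np.random.default_rng(0)
n_states, n_actions = 3, 4
true_reward = rng.uniform(0, 1, size=(n_states, n_actions))   # assumed environment

logits = np.zeros((n_states, n_actions))                      # policy p_theta(y|x) = softmax
for _ in range(3000):
    x = rng.integers(n_states)                                # x ~ p_0(x)
    p = np.exp(logits[x] - logits[x].max())
    p /= p.sum()
    y = rng.choice(n_actions, p=p)
    q_est = true_reward[x, y] + rng.normal(0, 0.1)            # noisy return as Q estimate

    grad_logp = -p
    grad_logp[y] += 1.0                                       # grad of log softmax at action y
    logits[x] += 0.1 * q_est * grad_logp                      # policy gradient ascent step

print(logits.argmax(axis=1), true_reward.argmax(axis=1))      # learned vs. optimal actions
```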
We can also consider a slightly different use of the reward, by directly setting the experience function to the Q function:
\[
f := f^{(\theta)}_{\text{reward},2}(x, y) = Q_\theta(x, y).
\tag{4.19}
\]
This turns out to connect to the known RL-as-inference approach that has a long history of research (Abdolmaleki et al., 2018; Dayan & Hinton, 1997; Deisenroth et al., 2013; Levine, 2018; Rawlik et al., 2012).
RL as inference. We set α=β:=ρ>0. The configuration corresponds to the approach that casts RL as a probabilistic inference problem. To see this, we introduce an additional binary random variable o, with $p(o=1|x, y) \propto \exp\{Q_\theta(x, y)/\rho\}$. Here o=1 is interpreted as the event that maximum reward is obtained, p(o=1∣x,y) is seen as the ‘conditional likelihood’, and ρ is the temperature. The goal of learning is to maximize the marginal likelihood of optimality: log p(o=1), which, however, is intractable to solve. Much like how the standard equation applied to unsupervised MLE provides a surrogate variational objective for the marginal data likelihood (Section 4.1.3), here the standard equation also derives a variational bound for log p(o=1) (up to a constant factor) with the above specification of (f,α,β):
\[
\log p(o=1) \;\ge\; \mathbb{E}_{q(x,y)}\left[\log p(o=1|x, y)\right] - \mathrm{KL}\big(q(x, y)\,\|\,p_\theta(x, y)\big) = -\frac{1}{\rho}\Big( \rho\, \mathrm{KL}\big(q\,\|\,p_\theta\big) - \mathbb{E}_{q}\left[Q_\theta(x, y)\right] \Big) + \text{const}.
\tag{4.20}
\]
Following the teacher-student procedure in Equation 3.3, the teacher-step produces the q solution:
\[
q^{(n)}(x, y) = p_{\theta^{(n)}}(x, y)\, \exp\left\{Q_{\theta^{(n)}}(x, y)/\rho\right\} / Z.
\tag{4.21}
\]
The subsequent student-step involves an approximation by fixing θ=θ(n) in Qθ(x,y) in the above variational objective, and minimizes only the term $-\mathbb{E}_{q^{(n)}(x, y)}\left[\log p_\theta(x, y)\right]$ w.r.t. θ.
4.3.2. Intrinsic Reward
Rewards provided by the extrinsic environment can be sparse in many real-world sequential decision problems. Learning in such problems is thus difficult due to the lack of supervision signals. A method to alleviate the difficulty is to supplement the extrinsic reward with dense intrinsic reward that is generated by the agent itself (i.e., the agent is intrinsically motivated). The intrinsic reward can be induced in various ways, such as the ‘curiosity’-based reward that encourages the agent to explore novel or ‘surprising’ states (Houthooft et al., 2016; Pathak et al., 2017; Schmidhuber, 2010), or the ‘optimal reward’, which is designed with the goal of encouraging maximum extrinsic reward at the end (Singh et al., 2010; Zheng et al., 2018). Formally, let rtin=rin(xt,yt)∈R be the intrinsic reward at time t with state xt and action yt. For example, in (Pathak et al., 2017), rtin is the prediction error (i.e., the ‘surprise’) of the next state xt+1. Let Qin,θ(x,y) denote the action-value function for the intrinsic reward, defined in a similar way as the extrinsic Qθ(x,y):
\[
Q_{\text{in},\theta}(x, y) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r^{\text{in}}_t \,\Big|\, x_0 = x,\, y_0 = y\right].
\tag{4.22}
\]
It is straightforward to derive the intrinsically motivated variant of the policy gradient algorithm (and other RL algorithms discussed below), by replacing the standard extrinsic-only Qθ(x,y) in the experience function Equation 4.16 with the combined Qθ(x,y)+Qin,θ(x,y). Let freward,ex+inθ(x,y) denote the resulting experience function that incorporates both the extrinsic and the additive intrinsic rewards.
We can notice some sort of symmetry between freward,ex+in<