This thesis proposes a general model for optimal control subject to information constraints, motivated in part by recent work on information-constrained decision-making by economic agents.

In the average-cost optimal control framework, the general model introduced in this thesis reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual information constraint on the randomized stationary policy. The resulting infinite-dimensional convex program admits a decomposition based on the Bellman error, which is the subject of study in approximate dynamic programming.

Later, we apply the general theory to an information-constrained variant of the scalar Linear-Quadratic-Gaussian (LQG) control problem. We give an upper bound on the optimal steady-state value of the quadratic performance objective and present explicit constructions of controllers that achieve this bound. We show that the obvious certainty-equivalent control policy is suboptimal when the information constraints are very severe, and propose another policy that performs better in this low-information regime. In the two extreme cases of no information (open-loop control) and perfect information, these two policies coincide with the optimum.
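
As a rough illustration only (the abstract fixes no notation, so the state $X_t$, control $U_t$, one-stage cost $c$, information rate $R$, and policy $\Phi$ below are assumed symbols), the information-constrained average-cost problem described above can be sketched as
\[
  \inf_{\Phi}\;\limsup_{T\to\infty}\,\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\!\left[c(X_t,U_t)\right]
  \qquad\text{subject to}\qquad I(X_t;U_t)\le R \quad\text{for all } t,
\]
where the infimum is over randomized stationary policies $\Phi(\mathrm{d}u \mid x)$, $I(\,\cdot\,;\,\cdot\,)$ denotes mutual information, and $R \ge 0$ is the information rate allowed to the controller; the linear-programming form mentioned above is obtained by optimizing over steady-state joint distributions of $(X_t, U_t)$ rather than over policies directly.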