We consider the robust stability of a rational expectations equilibrium, which we define as stability under discounted (constant-gain) least-squares learning for a range of gain parameters. We find that for operational forms of policy rules, i.e. rules that do not depend on contemporaneous values of endogenous aggregate variables, many interest-rate rules do not exhibit robust stability. We consider a variety of interest-rate rules, including instrument rules, optimal reaction functions under discretion or commitment, and rules that approximate optimal policy under commitment. For some reaction functions we allow for an interest-rate stabilization motive in the policy objective. The expectations-based rules proposed in Evans and Honkapohja (2003, 2006) deliver robust learning stability. In contrast, many proposed alternatives become unstable under learning even at small values of the gain parameter.
Introduction: Recently, the conduct of monetary policy in terms of interest-rate rules has been examined from the viewpoint of imperfect knowledge and learning by economic agents. In this literature the stability of the rational expectations equilibrium (REE) is taken as a key desideratum for good monetary policy design. Most of this literature postulates that agents use least-squares or related learning algorithms to carry out real-time estimation of the parameters of their forecast functions as new data become available. Moreover, it is usually assumed that the learning algorithms have a decreasing gain; in the most common case the gain is the inverse of the sample size, so that all data points receive equal weight. Use of such a decreasing-gain algorithm makes it possible for learning to converge exactly to the REE in environments without structural change. Convergence requires that the REE satisfy a stability condition known as E-stability.
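To make the mechanics concrete, below is a minimal sketch, in Python, of recursive least-squares learning in a toy self-referential model in which the outcome depends on the agents' own forecast. The structural values (intercept 1.0, expectations feedback 0.5), the noise scale, and the constant gain of 0.05 are illustrative assumptions, not values from the paper. In this toy model the T-map from perceived to actual law of motion is T(phi) = alpha + beta*phi; since its slope beta is below one, the REE is E-stable, so the decreasing-gain (1/t) recursion converges to it, while the constant-gain recursion keeps fluctuating around it.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, sigma = 1.0, 0.5, 0.1   # assumed structural parameters (illustrative only)
    ree = alpha / (1.0 - beta)           # REE: fixed point of the T-map T(phi) = alpha + beta*phi

    def simulate(gain, T=5000):
        """Learn the REE value; gain(t) is the weight placed on the newest observation."""
        phi = 0.0                        # initial belief (perceived law of motion)
        for t in range(1, T + 1):
            # actual outcome depends on the agents' current forecast phi
            y = alpha + beta * phi + sigma * rng.standard_normal()
            # recursive least-squares update: move the estimate toward the latest data point
            phi += gain(t) * (y - phi)
        return phi

    print("REE value:           %.3f" % ree)
    print("decreasing gain 1/t: %.3f" % simulate(lambda t: 1.0 / t))  # converges to the REE
    print("constant gain 0.05:  %.3f" % simulate(lambda t: 0.05))     # hovers near the REE

Replacing the 1/t gain with a constant corresponds to the discounted least squares of the abstract: older observations are geometrically down-weighted, so the estimate can track structural change but never settles exactly on the REE.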
Authors: George W. Evans, Seppo Honkapohja
Source: Research Discussion Papers, Bank of Finland