# 70. Rational Expectations Equilibrium


“If you’re so smart, why aren’t you rich?”

In addition to what’s in Anaconda, this lecture will need the following libraries:

```
!pip install quantecon
```


## 70.1. Overview

This lecture introduces the concept of a *rational expectations equilibrium*.

To illustrate it, we describe a linear quadratic version of a model due to Lucas and Prescott [LP71].

That 1971 paper is one of a small number of research articles that ignited a *rational expectations revolution*.

We follow Lucas and Prescott by employing a setting that is readily “Bellmanized” (i.e., susceptible to being formulated as a dynamic programming problem).

Because we use linear quadratic setups for demand and costs, we can deploy the LQ dynamic programming techniques described in the lecture on LQ control.

We will learn about how a representative agent’s problem differs from a planner’s, and how a planning problem can be used to compute quantities and prices in a rational expectations equilibrium.

We will also learn about how a rational expectations equilibrium can be characterized as a fixed point of a mapping from a *perceived law of motion* to an *actual law of motion*.

Equality between a perceived and an actual law of motion for endogenous market-wide objects captures in a nutshell what the rational expectations equilibrium concept is all about.

Finally, we will learn about the important “Big \(K\), little \(k\)” trick, a modeling device widely used in macroeconomics.

Except that for us:

- Instead of “Big \(K\)” it will be “Big \(Y\)”.
- Instead of “little \(k\)” it will be “little \(y\)”.

Let’s start with some standard imports:

```
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
import numpy as np
```

We’ll also use the LQ class from `QuantEcon.py`.

```
from quantecon import LQ
```

### 70.1.1. The Big Y, little y Trick

This widely used method applies in contexts in which a **representative firm** or agent is a “price taker” operating within a competitive equilibrium.

The following setting justifies the concept of a representative firm that stands in for a large number of other firms too.

There is a uniform unit measure of identical firms named \(\omega \in \Omega = [0,1]\).

The output of firm \(\omega\) is \(y(\omega)\).

The output of all firms is \(Y = \int_0^1 y(\omega) \, d\omega\).

All firms end up choosing to produce the same output, so that at the end of the day \(y(\omega) = y\) and \(Y = y = \int_0^1 y(\omega) \, d\omega\).

This setting allows us to speak of a representative firm that chooses to produce \(y\).

We want to impose that

- The representative firm, or any individual firm, takes *aggregate* \(Y\) as given when it chooses individual \(y(\omega)\), but \(\ldots\)
- At the end of the day, \(Y = y(\omega) = y\), so that the representative firm is indeed representative.

The Big \(Y\), little \(y\) trick accomplishes these two goals by

- Taking \(Y\) as beyond control when posing the choice problem of the firm that chooses \(y\); but \(\ldots\)
- Imposing \(Y = y\) *after* having solved the individual firm’s optimization problem.

Please watch for how this strategy is applied as the lecture unfolds.

We begin by applying the Big \(Y\), little \(y\) trick in a very simple static context.

#### 70.1.1.1. A Simple Static Example of the Big Y, little y Trick

Consider a static model in which a unit measure of firms produce a homogeneous good that is sold in a competitive market.

Each of these firms ends up producing and selling output \(y (\omega) = y\).

The price \(p\) of the good lies on an inverse demand curve

\[
p = a_0 - a_1 Y \tag{70.1}
\]

where

- \(a_i > 0\) for \(i = 0, 1\)
- \(Y = \int_0^1 y(\omega) \, d\omega\) is the market-wide level of output

For convenience, we’ll often just write \(y\) instead of \(y(\omega)\) when we are describing the choice problem of an individual firm \(\omega \in \Omega\).

Each firm has a total cost function

\[
c(y) = c_1 y + \frac{c_2}{2} y^2,
\qquad c_i > 0 \text{ for } i = 1, 2
\]
The profits of a representative firm are \(p y - c(y)\).

Using (70.1), we can express the problem of the representative firm as

\[
\max_{y} \; \Bigl[ (a_0 - a_1 Y) y - c_1 y - \frac{c_2}{2} y^2 \Bigr] \tag{70.2}
\]
In posing problem (70.2), we want the firm to be a *price taker*.

We do that by regarding \(p\) and therefore \(Y\) as exogenous to the firm.

The essence of the Big \(Y\), little \(y\) trick is *not* to set \(Y = y\) *before* taking the first-order condition with respect to \(y\) in problem (70.2).

This assures that the firm is a price taker.

The first-order condition for problem (70.2) is

\[
a_0 - a_1 Y - c_1 - c_2 y = 0 \tag{70.3}
\]

At this point, *but not before*, we substitute \(Y = y\) into (70.3) to obtain the following linear equation

\[
a_0 - c_1 - (a_1 + c_2) Y = 0 \tag{70.4}
\]

to be solved for the competitive equilibrium market-wide output \(Y\).

After solving for \(Y\), we can compute the competitive equilibrium price \(p\) from the inverse demand curve (70.1).
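As a quick numerical sketch of these two steps, we can solve (70.4) for \(Y\) and then read \(p\) off the inverse demand curve. The parameter values below, and the quadratic cost \(c(y) = c_1 y + \tfrac{c_2}{2} y^2\) behind the first-order condition, are illustrative assumptions:

```python
# Illustrative parameter values (assumed for this sketch)
a0, a1 = 100, 0.05    # inverse demand: p = a0 - a1 * Y
c1, c2 = 50, 1.0      # assumed quadratic cost c(y) = c1*y + (c2/2)*y**2

# First-order condition a0 - a1*Y - c1 - c2*y = 0 with Y = y imposed
Y = (a0 - c1) / (a1 + c2)   # competitive equilibrium market-wide output
p = a0 - a1 * Y             # equilibrium price from inverse demand
print(Y, p)
```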

## 70.2. Rational Expectations Equilibrium

Our first illustration of a rational expectations equilibrium involves a market with a unit measure of identical firms, each of which seeks to maximize the discounted present value of profits in the face of adjustment costs.

The adjustment costs induce the firms to make gradual adjustments, which in turn requires consideration of future prices.

Individual firms understand that, via the inverse demand curve, the price is determined by the amounts supplied by other firms.

Hence each firm wants to forecast future total industry output.

In our context, a forecast is generated by a belief about the law of motion for the aggregate state.

Rational expectations equilibrium prevails when this belief coincides with the actual law of motion generated by production choices induced by this belief.

We formulate a rational expectations equilibrium in terms of a fixed point of an operator that maps beliefs into optimal beliefs.

### 70.2.1. Competitive Equilibrium with Adjustment Costs

To illustrate, consider a unit measure of identical firms producing a homogeneous good that is sold in a competitive market.

Each firm sells output \(y_t(\omega) = y_t\).

The price \(p_t\) of the good lies on the inverse demand curve

\[
p_t = a_0 - a_1 Y_t \tag{70.5}
\]

where

- \(a_i > 0\) for \(i = 0, 1\)
- \(Y_t = \int_0^1 y_t(\omega) \, d\omega = y_t\) is the market-wide level of output

#### 70.2.1.1. The Firm’s Problem

Each firm is a price taker.

While it faces no uncertainty, it does face adjustment costs.

In particular, it chooses a production plan to maximize

\[
\sum_{t=0}^\infty \beta^t r_t \tag{70.6}
\]

where

\[
r_t := p_t y_t - \frac{\gamma (y_{t+1} - y_t)^2}{2},
\qquad y_0 \text{ given} \tag{70.7}
\]

Regarding the parameters,

- \(\beta \in (0,1)\) is a discount factor
- \(\gamma > 0\) measures the cost of adjusting the rate of output

Regarding timing, the firm observes \(p_t\) and \(y_t\) when it chooses \(y_{t+1}\) at time \(t\).

To state the firm’s optimization problem completely requires that we specify dynamics for all state variables.

This includes ones that the firm cares about but does not control like \(p_t\).

We turn to this problem now.

#### 70.2.1.2. Prices and Aggregate Output

In view of (70.5), the firm’s incentive to forecast the market price translates into an incentive to forecast aggregate output \(Y_t\).

Aggregate output depends on the choices of other firms.

The output \(y_t(\omega)\) of a single firm \(\omega\) has a negligible effect on aggregate output \(\int_0^1 y_t(\omega) d \omega\).

That justifies firms in regarding their forecasts of aggregate output as being unaffected by their own output decisions.

#### 70.2.1.3. Representative Firm’s Beliefs

We suppose the firm believes that market-wide output \(Y_t\) follows the law of motion

\[
Y_{t+1} = H(Y_t) \tag{70.8}
\]

where \(Y_0\) is a known initial condition.

The *belief function* \(H\) is an equilibrium object, and hence remains to be determined.

#### 70.2.1.4. Optimal Behavior Given Beliefs

For now, let’s fix a particular belief \(H\) in (70.8) and investigate the firm’s response to it.

Let \(v\) be the optimal value function for the firm’s problem given \(H\).

The value function satisfies the Bellman equation

\[
v(y, Y) = \max_{y'} \left\{ a_0 y - a_1 y Y - \frac{\gamma (y' - y)^2}{2} + \beta v(y', H(Y)) \right\} \tag{70.9}
\]

Let’s denote the firm’s optimal policy function by \(h\), so that

\[
y_{t+1} = h(y_t, Y_t) \tag{70.10}
\]

where

\[
h(y, Y) := \operatorname{argmax}_{y'} \left\{ a_0 y - a_1 y Y - \frac{\gamma (y' - y)^2}{2} + \beta v(y', H(Y)) \right\} \tag{70.11}
\]

Evidently \(v\) and \(h\) both depend on \(H\).

#### 70.2.1.5. Characterization with First-Order Necessary Conditions

In what follows it will be helpful to have a second characterization of \(h\), based on first-order conditions.

The first-order necessary condition for choosing \(y'\) is

\[
- \gamma (y' - y) + \beta v_y(y', H(Y)) = 0 \tag{70.12}
\]

An important useful envelope result of Benveniste-Scheinkman [BS79] implies that to differentiate \(v\) with respect to \(y\) we can naively differentiate the right side of (70.9), giving

\[
v_y(y, Y) = a_0 - a_1 Y + \gamma (y' - y)
\]

Substituting this equation into (70.12) gives the *Euler equation*

\[
-\gamma (y_{t+1} - y_t) + \beta \left[ a_0 - a_1 Y_{t+1} + \gamma (y_{t+2} - y_{t+1}) \right] = 0 \tag{70.13}
\]

The firm optimally sets an output path that satisfies (70.13), taking (70.8) as given, and subject to

- the initial conditions for \((y_0, Y_0)\)
- the terminal condition \(\lim_{t \rightarrow \infty} \beta^t y_t v_y(y_{t}, Y_t) = 0\)

This last condition is called the *transversality condition*, and acts as a first-order necessary condition “at infinity”.

A representative firm’s decision rule solves the difference equation (70.13) subject to the given initial condition \(y_0\) and the transversality condition.

Note that solving the Bellman equation (70.9) for \(v\) and then \(h\) in (70.11) yields a decision rule that automatically imposes both the Euler equation (70.13) and the transversality condition.

#### 70.2.1.6. The Actual Law of Motion for Output

As we’ve seen, a given belief translates into a particular decision rule \(h\).

Recalling that in equilibrium \(Y_t = y_t\), the *actual law of motion* for market-wide output is then

\[
Y_{t+1} = h(Y_t, Y_t) \tag{70.14}
\]
Thus, when firms believe that the law of motion for market-wide output is (70.8), their optimizing behavior makes the actual law of motion be (70.14).

### 70.2.2. Definition of Rational Expectations Equilibrium

A *rational expectations equilibrium* or *recursive competitive equilibrium* of the model with adjustment costs is a decision rule \(h\) and an aggregate law of motion \(H\) such that

1. Given belief \(H\), the map \(h\) is the firm’s optimal policy function.
2. The law of motion \(H\) satisfies \(H(Y) = h(Y, Y)\) for all \(Y\).

Thus, a rational expectations equilibrium equates the perceived and actual laws of motion (70.8) and (70.14).

#### 70.2.2.1. Fixed Point Characterization

As we’ve seen, the firm’s optimum problem induces a mapping \(\Phi\) from a perceived law of motion \(H\) for market-wide output to an actual law of motion \(\Phi(H)\).

The mapping \(\Phi\) is the composition of two mappings, the first of which maps a perceived law of motion into a decision rule via (70.9)–(70.11), the second of which maps a decision rule into an actual law via (70.14).

The \(H\) component of a rational expectations equilibrium is a fixed point of \(\Phi\).

## 70.3. Computing an Equilibrium

Now let’s compute a rational expectations equilibrium.

### 70.3.1. Failure of Contractivity

Readers accustomed to dynamic programming arguments might try to address this problem by choosing some guess \(H_0\) for the aggregate law of motion and then iterating with \(\Phi\).

Unfortunately, the mapping \(\Phi\) is not a contraction.

Indeed, there is no guarantee that direct iterations on \(\Phi\) converge.¹

There are examples in which these iterations diverge.

Fortunately, another method works here.

The method exploits a connection between equilibrium and Pareto optimality expressed in the fundamental theorems of welfare economics (see, e.g., [MCWG95]).

Lucas and Prescott [LP71] used this method to construct a rational expectations equilibrium.

Some details follow.

### 70.3.2. A Planning Problem Approach

Our plan of attack is to match the Euler equations of the market problem with those for a single-agent choice problem.

As we’ll see, this planning problem can be solved by LQ control (linear regulator).

Optimal quantities from the planning problem are rational expectations equilibrium quantities.

The rational expectations equilibrium price can be obtained as a shadow price in the planning problem.

We first compute a sum of consumer and producer surplus at time \(t\)

\[
s(Y_t, Y_{t+1}) := \int_0^{Y_t} (a_0 - a_1 x) \, dx - \frac{\gamma (Y_{t+1} - Y_t)^2}{2} \tag{70.15}
\]

The first term is the area under the demand curve, while the second measures the social costs of changing output.

The *planning problem* is to choose a production plan \(\{Y_t\}\) to maximize

\[
\sum_{t=0}^{\infty} \beta^t s(Y_t, Y_{t+1})
\]

subject to an initial condition for \(Y_0\).

### 70.3.3. Solution of Planning Problem

Evaluating the integral in (70.15) yields the quadratic form \(a_0 Y_t - a_1 Y_t^2 / 2\).
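As a sanity check on that evaluation, a simple trapezoidal approximation of the integral matches the closed form (the values of \(a_0\), \(a_1\) and \(Y_t\) below are arbitrary illustrative choices):

```python
import numpy as np

a0, a1, Y = 100, 0.05, 20.0    # arbitrary illustrative values
x = np.linspace(0, Y, 10_001)
f = a0 - a1 * x                # integrand of the demand-area term in (70.15)
numeric = np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2   # trapezoidal rule
closed_form = a0 * Y - a1 * Y**2 / 2
print(numeric, closed_form)    # the two agree up to floating-point error
```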

As a result, the Bellman equation for the planning problem is

\[
V(Y) = \max_{Y'} \left\{ a_0 Y - \frac{a_1}{2} Y^2 - \frac{\gamma (Y' - Y)^2}{2} + \beta V(Y') \right\} \tag{70.16}
\]

The associated first-order condition is

\[
- \gamma (Y' - Y) + \beta V'(Y') = 0 \tag{70.17}
\]

Applying the same Benveniste-Scheinkman formula gives

\[
V'(Y) = a_0 - a_1 Y + \gamma (Y' - Y)
\]

Substituting this into equation (70.17) and rearranging leads to the Euler equation

\[
\beta a_0 + \gamma Y_t - \left[ \beta a_1 + \gamma (1 + \beta) \right] Y_{t+1} + \gamma \beta Y_{t+2} = 0 \tag{70.18}
\]

### 70.3.4. Key Insight

Return to equation (70.13) and set \(y_t = Y_t\) for all \(t\).

A small amount of algebra will convince you that when \(y_t=Y_t\), equations (70.18) and (70.13) are identical.
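For readers who prefer to let the computer do the algebra, here is a quick numerical confirmation: the residuals of the two Euler equations coincide exactly once \(y_t = Y_t\) is imposed (the parameter values and sampled points are arbitrary):

```python
import random

a0, a1, β, γ = 100, 0.05, 0.95, 10.0   # any values work here

def firm_euler(y, y1, y2, Y1):
    """Residual of the firm's Euler equation (70.13)."""
    return -γ * (y1 - y) + β * (a0 - a1 * Y1 + γ * (y2 - y1))

def planner_euler(Y, Y1, Y2):
    """Residual of the planner's Euler equation (70.18), unrearranged."""
    return -γ * (Y1 - Y) + β * (a0 - a1 * Y1 + γ * (Y2 - Y1))

random.seed(0)
for _ in range(5):
    y, y1, y2 = (random.uniform(0, 100) for _ in range(3))
    # with Y_t = y_t for all t, the two residuals are identical
    assert firm_euler(y, y1, y2, Y1=y1) == planner_euler(y, y1, y2)
```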

Thus, the Euler equation for the planning problem matches the second-order difference equation that we derived by

1. finding the Euler equation of the representative firm and
2. substituting into it the expression \(Y_t = y_t\) that “makes the representative firm be representative”.

If it is appropriate to apply the same terminal conditions for these two difference equations, which it is, then we have verified that a solution of the planning problem is also a rational expectations equilibrium quantity sequence.

It follows that for this example we can compute equilibrium quantities by forming the optimal linear regulator problem corresponding to the Bellman equation (70.16).

The optimal policy function for the planning problem is the aggregate law of motion \(H\) that the representative firm faces within a rational expectations equilibrium.

#### 70.3.4.1. Structure of the Law of Motion

As you are asked to show in the exercises, the fact that the planner’s problem is an LQ control problem implies an optimal policy (and hence aggregate law of motion) taking the form

\[
Y_{t+1} = \kappa_0 + \kappa_1 Y_t \tag{70.19}
\]

for some parameter pair \(\kappa_0, \kappa_1\).

Now that we know the aggregate law of motion is linear, we can see from the firm’s Bellman equation (70.9) that the firm’s problem can also be framed as an LQ problem.

As you’re asked to show in the exercises, the LQ formulation of the firm’s problem implies a law of motion that looks as follows

\[
y_{t+1} = h_0 + h_1 y_t + h_2 Y_t \tag{70.20}
\]
Hence a rational expectations equilibrium will be defined by the parameters \((\kappa_0, \kappa_1, h_0, h_1, h_2)\) in (70.19)–(70.20).

## 70.4. Exercises

Consider the firm problem described above.

Let the firm’s belief function \(H\) be as given in (70.19).

Formulate the firm’s problem as a discounted optimal linear regulator problem, being careful to describe all of the objects needed.

Use the class `LQ` from the `QuantEcon.py` package to solve the firm’s problem for the parameter values

\[
a_0 = 100, \quad a_1 = 0.05, \quad \beta = 0.95, \quad \gamma = 10
\]

with belief parameters \(\kappa_0 = 95.5\) and \(\kappa_1 = 0.95\) in (70.19).

Express the solution of the firm’s problem in the form (70.20) and give the values for each \(h_j\).

If there were a unit measure of identical competitive firms all behaving according to (70.20), what would (70.20) imply for the *actual* law of motion (70.8) for market supply?

Solution to Exercise 70.1

To map a problem into a discounted optimal linear control problem, we need to define

- a state vector \(x_t\) and control vector \(u_t\)
- matrices \(A, B, Q, R\) that define preferences and the law of motion for the state

For the state and control vectors, we choose

\[
x_t = \begin{bmatrix} y_t \\ Y_t \\ 1 \end{bmatrix},
\qquad
u_t = y_{t+1} - y_t
\]

For \(A, B, Q, R\) we set

\[
A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \kappa_1 & \kappa_0 \\ 0 & 0 & 1 \end{bmatrix},
\quad
B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\quad
R = \begin{bmatrix} 0 & a_1/2 & -a_0/2 \\ a_1/2 & 0 & 0 \\ -a_0/2 & 0 & 0 \end{bmatrix},
\quad
Q = \frac{\gamma}{2}
\]

By multiplying out you can confirm that

- \(x_t' R x_t + u_t' Q u_t = - r_t\)
- \(x_{t+1} = A x_t + B u_t\)

We’ll use the module `lqcontrol.py` to solve the firm’s problem at the stated parameter values.

This will return an LQ policy \(F\) with the interpretation \(u_t = - F x_t\), or

\[
y_{t+1} - y_t = - F_0 y_t - F_1 Y_t - F_2
\]

Matching parameters with \(y_{t+1} = h_0 + h_1 y_t + h_2 Y_t\) leads to

\[
h_0 = -F_2, \quad h_1 = 1 - F_0, \quad h_2 = -F_1
\]

Here’s our solution

```
# Model parameters
a0 = 100
a1 = 0.05
β = 0.95
γ = 10.0
# Beliefs
κ0 = 95.5
κ1 = 0.95
# Formulate the LQ problem
A = np.array([[1, 0, 0], [0, κ1, κ0], [0, 0, 1]])
B = np.array([1, 0, 0])
B.shape = 3, 1
R = np.array([[0, a1/2, -a0/2], [a1/2, 0, 0], [-a0/2, 0, 0]])
Q = 0.5 * γ
# Solve for the optimal policy
lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
F = F.flatten()
out1 = f"F = [{F[0]:.3f}, {F[1]:.3f}, {F[2]:.3f}]"
h0, h1, h2 = -F[2], 1 - F[0], -F[1]
out2 = f"(h0, h1, h2) = ({h0:.3f}, {h1:.3f}, {h2:.3f})"
print(out1)
print(out2)
```

```
F = [-0.000, 0.046, -96.949]
(h0, h1, h2) = (96.949, 1.000, -0.046)
```

The implication is that \(y_{t+1} = 96.949 + y_t - 0.046 \, Y_t\).

Recalling that the unit measure of identical firms implies \(Y_t = y_t\), combining this with the previous equation yields the actual law of motion \(Y_{t+1} = 96.949 + 0.954 \, Y_t\).

Consider the following \(\kappa_0, \kappa_1\) pairs as candidates for the aggregate law of motion component of a rational expectations equilibrium (see (70.19)).

Extending the program that you wrote for Exercise 70.1, determine which if any satisfy the definition of a rational expectations equilibrium:

- (i) \((94.0886298678, \; 0.923409232937)\)
- (ii) \((93.2119845412, \; 0.984323478873)\)
- (iii) \((95.0818452486, \; 0.952459076301)\)

Describe an iterative algorithm that uses the program that you wrote for Exercise 70.1 to compute a rational expectations equilibrium.

(You are not being asked actually to use the algorithm you are suggesting)

Solution to Exercise 70.2

To determine whether a \(\kappa_0, \kappa_1\) pair forms the aggregate law of motion component of a rational expectations equilibrium, we can proceed as follows:

1. Determine the corresponding firm law of motion \(y_{t+1} = h_0 + h_1 y_t + h_2 Y_t\).
2. Test whether the associated aggregate law \(Y_{t+1} = h(Y_t, Y_t)\) evaluates to \(Y_{t+1} = \kappa_0 + \kappa_1 Y_t\).

In the second step, we can use \(Y_t = y_t\), so that \(Y_{t+1} = h(Y_t, Y_t)\) becomes

\[
Y_{t+1} = h_0 + (h_1 + h_2) Y_t
\]

Hence to test the second step we can test \(\kappa_0 = h_0\) and \(\kappa_1 = h_1 + h_2\).

The following code implements this test

```
candidates = ((94.0886298678, 0.923409232937),
              (93.2119845412, 0.984323478873),
              (95.0818452486, 0.952459076301))

for κ0, κ1 in candidates:
    # Form the associated law of motion
    A = np.array([[1, 0, 0], [0, κ1, κ0], [0, 0, 1]])

    # Solve the LQ problem for the firm
    lq = LQ(Q, R, A, B, beta=β)
    P, F, d = lq.stationary_values()
    F = F.flatten()
    h0, h1, h2 = -F[2], 1 - F[0], -F[1]

    # Test the equilibrium condition
    if np.allclose((κ0, κ1), (h0, h1 + h2)):
        print(f'Equilibrium pair = {κ0}, {κ1}')
        print(f'(h0, h1, h2) = ({h0:.4f}, {h1:.4f}, {h2:.4f})')
        break
```

```
Equilibrium pair = 95.0818452486, 0.952459076301
(h0, h1, h2) = (95.0819, 1.0000, -0.0475)
```

The output tells us that the answer is pair (iii), which implies \((h_0, h_1, h_2) = (95.0819, 1.0000, -0.0475)\).

(Notice that we use `np.allclose` to test equality of floating-point numbers, since exact equality is too strict.)

Regarding the iterative algorithm, one could loop from a given \((\kappa_0, \kappa_1)\) pair to the associated firm law and then to a new \((\kappa_0, \kappa_1)\) pair.

This amounts to implementing the operator \(\Phi\) described in the lecture.

(There is in general no guarantee that this iterative process will converge to a rational expectations equilibrium)
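To make the operator \(\Phi\) concrete, here is a self-contained sketch that re-solves the firm’s problem with a plain iteration on the discounted Riccati equation in place of the `LQ` class (so the snippet needs only NumPy); the matrices are those from the solution to Exercise 70.1. Applying \(\Phi\) to candidate pair (iii) should return approximately the same pair, confirming that it is (close to) a fixed point.

```python
import numpy as np

a0, a1, β, γ = 100, 0.05, 0.95, 10.0

def solve_firm(κ0, κ1, tol=1e-10, max_iter=5_000):
    """Solve the firm's problem given the belief Y' = κ0 + κ1 Y.

    State x = (y, Y, 1), control u = y' - y, matrices as in the
    solution to Exercise 70.1.  Iterates on the discounted Riccati
    equation rather than calling quantecon's LQ class.
    """
    A = np.array([[1, 0, 0], [0, κ1, κ0], [0, 0, 1]])
    B = np.array([[1.0], [0.0], [0.0]])
    R = np.array([[0, a1/2, -a0/2], [a1/2, 0, 0], [-a0/2, 0, 0]])
    Q = np.array([[γ / 2]])
    P = np.zeros((3, 3))
    for _ in range(max_iter):
        BPB = Q + β * (B.T @ P @ B)
        BPA = B.T @ P @ A
        P_new = R + β * (A.T @ P @ A) \
                  - β**2 * (A.T @ P @ B) @ np.linalg.solve(BPB, BPA)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    F = (β * np.linalg.solve(Q + β * (B.T @ P @ B), B.T @ P @ A)).flatten()
    return -F[2], 1 - F[0], -F[1]          # h0, h1, h2

def Φ(κ0, κ1):
    """Map a perceived law of motion into the actual law of motion."""
    h0, h1, h2 = solve_firm(κ0, κ1)
    return h0, h1 + h2                     # impose Y = y after optimizing

# Candidate (iii) from above should be an approximate fixed point of Φ
print(Φ(95.0818452486, 0.952459076301))
```

With damping, iterating \(\kappa \leftarrow \phi \, \Phi(\kappa) + (1 - \phi) \kappa\) from an arbitrary initial pair is one way to implement the iterative algorithm described above.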

Recall the planner’s problem described above.

1. Formulate the planner’s problem as an LQ problem.
2. Solve it using the same parameter values as in Exercise 70.1, namely \(a_0 = 100, \; a_1 = 0.05, \; \beta = 0.95, \; \gamma = 10\).
3. Represent the solution in the form \(Y_{t+1} = \kappa_0 + \kappa_1 Y_t\).
4. Compare your answer with the results from Exercise 70.2.

Solution to Exercise 70.3

We are asked to write the planner problem as an LQ problem.

For the state and control vectors, we choose

\[
x_t = \begin{bmatrix} Y_t \\ 1 \end{bmatrix},
\qquad
u_t = Y_{t+1} - Y_t
\]

For the LQ matrices, we set

\[
A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
\quad
B = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
\quad
R = \begin{bmatrix} a_1/2 & -a_0/2 \\ -a_0/2 & 0 \end{bmatrix},
\quad
Q = \frac{\gamma}{2}
\]

By multiplying out you can confirm that

- \(x_t' R x_t + u_t' Q u_t = - s(Y_t, Y_{t+1})\)
- \(x_{t+1} = A x_t + B u_t\)

By obtaining the optimal policy and using \(u_t = - F x_t\), or

\[
Y_{t+1} - Y_t = -F_0 Y_t - F_1
\]

we can obtain the implied aggregate law of motion via \(\kappa_0 = -F_1\) and \(\kappa_1 = 1-F_0\).

The Python code to solve this problem is below:

```
# Formulate the planner's LQ problem
A = np.array([[1, 0], [0, 1]])
B = np.array([[1], [0]])
R = np.array([[a1 / 2, -a0 / 2], [-a0 / 2, 0]])
Q = γ / 2
# Solve for the optimal policy
lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
# Print the results
F = F.flatten()
κ0, κ1 = -F[1], 1 - F[0]
print(κ0, κ1)
```

```
95.08187459214827 0.9524590627039239
```

The output yields the same \((\kappa_0, \kappa_1)\) pair obtained as an equilibrium from the previous exercise.

A monopolist faces the industry demand curve (70.5) and chooses \(\{Y_t\}\) to maximize \(\sum_{t=0}^{\infty} \beta^t r_t\) where

\[
r_t := p_t Y_t - \frac{\gamma (Y_{t+1} - Y_t)^2}{2}
\]

Formulate this problem as an LQ problem.

Compute the optimal policy using the same parameters as Exercise 70.2.

In particular, solve for the parameters in

\[
Y_{t+1} = m_0 + m_1 Y_t
\]

Compare your results with Exercise 70.2 and comment.

Solution to Exercise 70.4

The monopolist’s LQ problem is almost identical to the planner’s problem from the previous exercise, except that \(R\) becomes

\[
R = \begin{bmatrix} a_1 & -a_0/2 \\ -a_0/2 & 0 \end{bmatrix}
\]

The problem can be solved as follows

```
A = np.array([[1, 0], [0, 1]])
B = np.array([[1], [0]])
R = np.array([[a1, -a0 / 2], [-a0 / 2, 0]])
Q = γ / 2
lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
F = F.flatten()
m0, m1 = -F[1], 1 - F[0]
print(m0, m1)
```

```
73.4729440350286 0.9265270559649703
```

We see that the law of motion for the monopolist is approximately \(Y_{t+1} = 73.4729 + 0.9265 Y_t\).

In the rational expectations case, the law of motion was approximately \(Y_{t+1} = 95.0818 + 0.9525 Y_t\).

One way to compare these two laws of motion is by their fixed points, which give long-run equilibrium output in each case.

For laws of the form \(Y_{t+1} = c_0 + c_1 Y_t\), the fixed point is \(c_0 / (1 - c_1)\).

If you crunch the numbers, you will see that the monopolist adopts a lower long-run quantity than obtained by the competitive market, implying a higher market price.
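Crunching those numbers for the two laws of motion computed above gives the long-run outputs directly:

```python
def fixed_point(c0, c1):
    """Long-run output for a law of motion Y_{t+1} = c0 + c1 * Y_t."""
    return c0 / (1 - c1)

Y_competitive = fixed_point(95.0818452486, 0.952459076301)
Y_monopoly = fixed_point(73.4729440350286, 0.9265270559649703)
print(Y_competitive, Y_monopoly)   # ≈ 2000 and ≈ 1000
```

The monopolist settles on roughly half the competitive long-run quantity, and hence a higher price.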

This is analogous to the elementary static-case results.

¹ A literature that studies whether models populated with agents who learn can converge to rational expectations equilibria features iterations on a modification of the mapping \(\Phi\) that can be approximated as \(\gamma \Phi + (1-\gamma) I\). Here \(I\) is the identity operator and \(\gamma \in (0,1)\) is a *relaxation parameter*. See [MS89] and [EH01] for statements and applications of this approach to establish conditions under which collections of adaptive agents who use least squares learning converge to a rational expectations equilibrium.