34. Job Search II: Search and Separation#

In addition to what’s in Anaconda, this lecture will need the following libraries:

!pip install quantecon

34.1. Overview#

Previously we looked at the McCall job search model [McCall, 1970] as a way of understanding unemployment and worker decisions.

One unrealistic feature of the model is that every job is permanent.

In this lecture, we extend the McCall model by introducing job separation.

Once separation enters the picture, the agent comes to view

  • the loss of a job as a capital loss, and

  • a spell of unemployment as an investment in searching for an acceptable job.

The other minor addition is a utility function, which makes worker preferences slightly more sophisticated.

We’ll need the following imports

import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (11, 5)  #set default figure size
import numpy as np
from numba import njit, float64
from numba.experimental import jitclass
from quantecon.distributions import BetaBinomial

34.2. The Model#

The model is similar to the baseline McCall job search model.

It concerns the life of an infinitely lived worker and

  • the opportunities he or she (let’s say he to save one character) has to work at different wages

  • exogenous events that destroy his current job

  • his decision making process while unemployed

The worker can be in one of two states: employed or unemployed.

He wants to maximize

(34.1)#\[{\mathbb E} \sum_{t=0}^\infty \beta^t u(y_t)\]

At this stage the only difference from the baseline model is that we’ve added some flexibility to preferences by introducing a utility function \(u\).

It satisfies \(u'> 0\) and \(u'' < 0\).
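One standard example, used in the code later in this lecture, is the CRRA specification

\[ u(y) = \frac{y^{1-\sigma} - 1}{1 - \sigma} \qquad (\sigma > 0, \; \sigma \neq 1) \]

for which \(u'(y) = y^{-\sigma} > 0\) and \(u''(y) = -\sigma y^{-\sigma - 1} < 0\).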

34.2.1. The Wage Process#

For now we will drop the separation between the state process and the wage process that we maintained in the baseline model.

In particular, we simply suppose that wage offers \(\{ w_t \}\) are IID with common distribution \(q\).

The set of possible wage values is denoted by \(\mathbb W\).

(Later we will go back to having a separate state process \(\{s_t\}\) driving random outcomes, since this formulation is usually convenient in more sophisticated models.)

34.2.2. Timing and Decisions#

At the start of each period, the agent can be either

  • unemployed or

  • employed at some existing wage level \(w_e\).

At the start of a given period, the current wage offer \(w_t\) is observed.

If currently employed, the worker

  1. receives utility \(u(w_e)\) and

  2. is fired with some (small) probability \(\alpha\).

If currently unemployed, the worker either accepts or rejects the current offer \(w_t\).

If he accepts, then he begins work immediately at wage \(w_t\).

If he rejects, then he receives unemployment compensation \(c\).

The process then repeats.

Note

We do not allow for job search while employed—this topic is taken up in a later lecture.
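The timing just described can be sketched as a one-period transition function. The accept rule below takes a reservation wage `w_bar` as given purely for illustration (the optimal rule is derived in the next section):

```python
import numpy as np

def step(state, wage_offer, w_bar, α, rng):
    """
    One period of the employment dynamics described above.

    state is ('employed', w_e) or ('unemployed', None).
    w_bar is a hypothetical reservation wage used as the accept rule.
    """
    kind, w_e = state
    if kind == 'employed':
        # Fired with probability α, otherwise keep the current job
        if rng.random() < α:
            return ('unemployed', None)
        return ('employed', w_e)
    # Unemployed: accept if and only if the offer meets the reservation wage
    if wage_offer >= w_bar:
        return ('employed', wage_offer)
    return ('unemployed', None)

# Simulate 50 periods with IID uniform offers (an assumption for this sketch)
rng = np.random.default_rng(0)
state = ('unemployed', None)
for _ in range(50):
    state = step(state, rng.uniform(10, 20), w_bar=15.0, α=0.2, rng=rng)
```

Iterating this function with draws from \(q\) generates an employment history of the kind the worker anticipates when valuing his options.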

34.3. Solving the Model#

In what follows, we drop time subscripts and use primes to denote next-period values.

Let

  • \(v(w_e)\) be total lifetime value accruing to a worker who enters the current period employed with existing wage \(w_e\)

  • \(h(w)\) be total lifetime value accruing to a worker who enters the current period unemployed and receives wage offer \(w\).

Here value means the value of the objective function (34.1) when the worker makes optimal decisions at all future points in time.

Our first aim is to obtain these functions.

34.3.1. The Bellman Equations#

Suppose for now that the worker can calculate the functions \(v\) and \(h\) and use them in his decision making.

Then \(v\) and \(h\) should satisfy

(34.2)#\[v(w_e) = u(w_e) + \beta \left[ (1-\alpha)v(w_e) + \alpha \sum_{w' \in \mathbb W} h(w') q(w') \right]\]

and

(34.3)#\[h(w) = \max \left\{ v(w), \, u(c) + \beta \sum_{w' \in \mathbb W} h(w') q(w') \right\}\]

Equation (34.2) expresses the value of being employed at wage \(w_e\) in terms of

  • current reward \(u(w_e)\) plus

  • discounted expected reward tomorrow, given the \(\alpha\) probability of being fired

Equation (34.3) expresses the value of being unemployed with offer \(w\) in hand as a maximum over the value of two options: accept or reject the current offer.

Accepting transitions the worker to employment and hence yields reward \(v(w)\).

Rejecting leads to unemployment compensation and unemployment tomorrow.

Equations (34.2) and (34.3) are the Bellman equations for this model.

They provide enough information to solve for both \(v\) and \(h\).

34.3.2. A Simplifying Transformation#

Rather than jumping straight into solving these equations, let’s see if we can simplify them somewhat.

(This process will be analogous to our second pass at the plain vanilla McCall model, where we simplified the Bellman equation.)

First, let

(34.4)#\[d := \sum_{w' \in \mathbb W} h(w') q(w')\]

be the expected value of unemployment tomorrow.

We can now write (34.3) as

\[ h(w) = \max \left\{ v(w), \, u(c) + \beta d \right\} \]

or, shifting time forward one period

\[ \sum_{w' \in \mathbb W} h(w') q(w') = \sum_{w' \in \mathbb W} \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w') \]

Using (34.4) again now gives

(34.5)#\[d = \sum_{w' \in \mathbb W} \max \left\{ v(w'), \, u(c) + \beta d \right\} q(w')\]

Finally, (34.2) can now be rewritten as

(34.6)#\[v(w) = u(w) + \beta \left[ (1-\alpha)v(w) + \alpha d \right]\]

In the last expression, we wrote \(w_e\) as \(w\) to make the notation simpler.
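Since (34.6) is linear in \(v(w)\), it can be solved in closed form given \(d\):

\[ v(w) = \frac{u(w) + \alpha \beta d}{1 - \beta(1-\alpha)} \]

This makes it clear that \(v\) inherits monotonicity from \(u\): a higher wage strictly raises the value of employment.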

34.3.3. The Reservation Wage#

Suppose we can use (34.5) and (34.6) to solve for \(d\) and \(v\).

(We will do this soon.)

We can then determine optimal behavior for the worker.

From (34.3), we see that an unemployed agent accepts current offer \(w\) if \(v(w) \geq u(c) + \beta d\).

This means precisely that the value of accepting is higher than the expected value of rejecting.

It is clear that \(v\) is (at least weakly) increasing in \(w\), since the agent is never made worse off by a higher wage offer.

Hence, we can express the optimal choice as accepting wage offer \(w\) if and only if

\[ w \geq \bar w \quad \text{where} \quad \bar w \text{ solves } v(\bar w) = u(c) + \beta d \]

34.3.4. Solving the Bellman Equations#

We’ll use the same iterative approach to solving the Bellman equations that we adopted in the first job search lecture.

Here this amounts to

  1. make guesses for \(d\) and \(v\)

  2. plug these guesses into the right-hand sides of (34.5) and (34.6)

  3. update the left-hand sides from this rule and then repeat

In other words, we are iterating using the rules

(34.7)#\[d_{n+1} = \sum_{w' \in \mathbb W} \max \left\{ v_n(w'), \, u(c) + \beta d_n \right\} q(w')\]
(34.8)#\[v_{n+1}(w) = u(w) + \beta \left[ (1-\alpha)v_n(w) + \alpha d_n \right]\]

starting from some initial conditions \(d_0, v_0\).

As before, the system always converges to the true solutions—in this case, the \(v\) and \(d\) that solve (34.5) and (34.6).

(A proof can be obtained via the Banach contraction mapping theorem.)
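As a minimal sketch of this iteration (using log utility and a uniform wage distribution purely for illustration, not the lecture's defaults):

```python
import numpy as np

# Illustrative primitives -- assumptions for this sketch only
α, β, c = 0.2, 0.98, 6.0
w = np.linspace(10, 20, 60)          # wage grid
q = np.full(len(w), 1 / len(w))      # uniform pmf over wage offers
u = np.log                           # satisfies u' > 0, u'' < 0

v, d = np.ones_like(w), 1.0
for _ in range(3000):
    # (34.8): value of employment at each wage on the grid
    v_new = u(w) + β * ((1 - α) * v + α * d)
    # (34.7): expected value of unemployment tomorrow
    d_new = np.sum(np.maximum(v, u(c) + β * d) * q)
    error = max(np.max(np.abs(v_new - v)), abs(d_new - d))
    v, d = v_new, d_new
    if error < 1e-6:
        break
```

The implementation below does the same thing with the lecture's default primitives, compiled with Numba.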

34.4. Implementation#

Let’s implement this iterative process.

In the code, you’ll see that we use a class to store the various parameters and other objects associated with a given model.

This helps to tidy up the code and provides an object that’s easy to pass to functions.

The default utility function is a CRRA utility function

@njit
def u(c, σ=2.0):
    return (c**(1 - σ) - 1) / (1 - σ)

Also, here’s a default wage distribution, based around the BetaBinomial distribution:

n = 60                                  # n possible outcomes for w
w_default = np.linspace(10, 20, n)      # wages between 10 and 20
a, b = 600, 400                         # shape parameters
dist = BetaBinomial(n-1, a, b)
q_default = dist.pdf()
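As a cross-check (assuming SciPy is available), the same pmf can be computed with `scipy.stats.betabinom`:

```python
import numpy as np
from scipy.stats import betabinom

n = 60
a, b = 600, 400
# Beta-binomial pmf on {0, ..., n-1}, i.e. n-1 trials with shape (a, b)
q_check = betabinom.pmf(np.arange(n), n - 1, a, b)
```

With shape parameters this large, the distribution is tightly concentrated around its mean index \((n-1)a/(a+b)\).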

Here’s our jitted class for the McCall model with separation.

mccall_data = [
    ('α', float64),      # job separation rate
    ('β', float64),      # discount factor
    ('c', float64),      # unemployment compensation
    ('w', float64[:]),   # list of wage values
    ('q', float64[:])    # pmf of random variable w
]

@jitclass(mccall_data)
class McCallModel:
    """
    Stores the parameters and functions associated with a given model.
    """

    def __init__(self, α=0.2, β=0.98, c=6.0, w=w_default, q=q_default):

        self.α, self.β, self.c, self.w, self.q = α, β, c, w, q


    def update(self, v, d):

        α, β, c, w, q = self.α, self.β, self.c, self.w, self.q

        v_new = np.empty_like(v)

        for i in range(len(w)):
            v_new[i] = u(w[i]) + β * ((1 - α) * v[i] + α * d)

        d_new = np.sum(np.maximum(v, u(c) + β * d) * q)

        return v_new, d_new

Now we iterate until successive realizations are closer together than some small tolerance level.

We then return the current iterate as an approximate solution.

@njit
def solve_model(mcm, tol=1e-5, max_iter=2000):
    """
    Iterates to convergence on the Bellman equations

    * mcm is an instance of McCallModel
    """

    v = np.ones_like(mcm.w)    # Initial guess of v
    d = 1                      # Initial guess of d
    i = 0
    error = tol + 1

    while error > tol and i < max_iter:
        v_new, d_new = mcm.update(v, d)
        error_1 = np.max(np.abs(v_new - v))
        error_2 = np.abs(d_new - d)
        error = max(error_1, error_2)
        v = v_new
        d = d_new
        i += 1

    return v, d

34.4.1. The Reservation Wage: First Pass#

The optimal choice of the agent is summarized by the reservation wage.

As discussed above, the reservation wage is the \(\bar w\) that solves \(v(\bar w) = h\) where \(h := u(c) + \beta d\) is the continuation value.

Let’s compare \(v\) and \(h\) to see what they look like.

We’ll use the default parameterizations found in the code above.

mcm = McCallModel()
v, d = solve_model(mcm)
h = u(mcm.c) + mcm.β * d

fig, ax = plt.subplots()

ax.plot(mcm.w, v, 'b-', lw=2, alpha=0.7, label='$v$')
ax.plot(mcm.w, [h] * len(mcm.w),
        'g-', lw=2, alpha=0.7, label='$h$')
ax.set_xlim(min(mcm.w), max(mcm.w))
ax.legend()

plt.show()
(Figure: \(v\) and \(h\) plotted against the wage grid)

The value \(v\) is increasing because higher \(w\) generates a higher wage flow conditional on staying employed.

34.4.2. The Reservation Wage: Computation#

Here’s a function compute_reservation_wage that takes an instance of McCallModel and returns the associated reservation wage.

@njit
def compute_reservation_wage(mcm):
    """
    Computes the reservation wage of an instance of the McCall model
    by finding the smallest w such that v(w) >= h.

    If no such w exists, then w_bar is set to np.inf.
    """

    v, d = solve_model(mcm)
    h = u(mcm.c) + mcm.β * d

    # Smallest index i with v[i] >= h; if none exists, never accept
    i = np.searchsorted(v, h, side='left')
    w_bar = mcm.w[i] if i < len(mcm.w) else np.inf

    return w_bar

Next we will investigate how the reservation wage varies with parameters.

34.5. Impact of Parameters#

In each instance below, we’ll show you a figure and then ask you to reproduce it in the exercises.

34.5.1. The Reservation Wage and Unemployment Compensation#

First, let’s look at how \(\bar w\) varies with unemployment compensation.

In the figure below, we use the default parameters in the McCallModel class, apart from c (which takes the values given on the horizontal axis)

(Figure: reservation wage \(\bar w\) as a function of unemployment compensation \(c\))

As expected, higher unemployment compensation causes the worker to hold out for higher wages.

In effect, the cost of continuing job search is reduced.

34.5.2. The Reservation Wage and Discounting#

Next, let’s investigate how \(\bar w\) varies with the discount factor.

The next figure plots the reservation wage associated with different values of \(\beta\)

(Figure: reservation wage \(\bar w\) as a function of the discount factor \(\beta\))

Again, the results are intuitive: More patient workers will hold out for higher wages.

34.5.3. The Reservation Wage and Job Destruction#

Finally, let’s look at how \(\bar w\) varies with the job separation rate \(\alpha\).

Higher \(\alpha\) translates to a greater chance that a worker will face termination in each period once employed.

(Figure: reservation wage \(\bar w\) as a function of the separation rate \(\alpha\))

Once more, the results are in line with our intuition.

If the separation rate is high, then the benefit of holding out for a higher wage falls.

Hence the reservation wage is lower.

34.6. Exercises#

Exercise 34.1

Reproduce all the reservation wage figures shown above.

Regarding the values on the horizontal axis, use

grid_size = 25
c_vals = np.linspace(2, 12, grid_size)         # unemployment compensation
beta_vals = np.linspace(0.8, 0.99, grid_size)  # discount factors
alpha_vals = np.linspace(0.05, 0.5, grid_size) # separation rate
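As a starting point for the exercise, here is a dependency-free sketch for the \(c\) variation. It reuses the iteration (34.7)–(34.8), but with a uniform wage distribution and log utility as stand-ins for the lecture's defaults, so the resulting numbers will differ from the figure above:

```python
import numpy as np

def reservation_wage(c, α=0.2, β=0.98, tol=1e-6, max_iter=3000):
    # Illustrative primitives: uniform offer distribution and log utility
    w = np.linspace(10, 20, 60)
    q = np.full(len(w), 1 / len(w))
    u = np.log

    v, d = np.ones_like(w), 1.0
    for _ in range(max_iter):
        v_new = u(w) + β * ((1 - α) * v + α * d)
        d_new = np.sum(np.maximum(v, u(c) + β * d) * q)
        error = max(np.max(np.abs(v_new - v)), abs(d_new - d))
        v, d = v_new, d_new
        if error < tol:
            break

    h = u(c) + β * d                        # continuation value
    i = np.searchsorted(v, h, side='left')  # smallest i with v[i] >= h
    return w[i] if i < len(w) else np.inf

grid_size = 25
c_vals = np.linspace(2, 12, grid_size)
w_bars = [reservation_wage(c) for c in c_vals]
# Plot with e.g. plt.plot(c_vals, w_bars); vary β and α analogously
```

The \(\beta\) and \(\alpha\) figures follow the same pattern: fix the other parameters at their defaults and loop over the grids above.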