Rational Expectations (RE)
Macroeconomics (M8674), March 2026
Vivaldo Mendes, ISCTE
vivaldo.mendes@iscte-iul.pt
1. Introduction
The Lucas-Sargent revolution in the 1970s
- In the early 1970s, Thomas Sargent and Robert Lucas launched a revolution in macroeconomics
- They argued that macroeconomics at the time had a big flaw: it was entirely based on … adaptive expectations
- For them, adaptive expectations suffered from two tremendous problems:
- They were not rational: why use only information from the past?
- They were not consistent: why use something that will lead to systematic mistakes?
- They proposed a different framework for expectations: rational expectations
- From then onwards, not a single major macroeconomic model was published without RE
What are Rational Expectations (RE)?
- In modern macroeconomics, the term rational expectations means three things:
- Agents efficiently use all publicly available information (past, present, future).
- Agents understand the structure of the model/economy and base their expectations on this knowledge.
- Therefore, agents can forecast everything with no systematic mistakes.
- The only thing they cannot forecast are the exogenous shocks that hit the economy. These shocks are unpredictable.
- Strong assumptions: the economy’s structure is complex, and nobody truly knows how everything works.
2. Some examples
Example 1: a financial investment
Consider a financial asset:
- Bought today at price \(P_t\), it pays a dividend of \(D_t\) per period.
- Assume a close substitute asset (e.g., a bank deposit with interest) that yields a safe rate of return given by \(r\).
A risk-neutral investor buys the asset if both returns have the same expected value: \[\frac{D_{t}+ \mathbb{E}_{t} P_{t+1}}{P_{t}}=1+r\]
Solve for \(P_t\), and to simplify define \(\phi = 1/(1 + r)\), to get \[P_{t}= \phi D_{t}+ \phi \mathbb{E}_{t} P_{t+1}\]
How do we solve such an equation? \(P_t\) is called a forward-looking variable.
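A minimal numerical sketch of the forward solution (the values of \(D\) and \(r\) are illustrative, and the expected dividend is assumed constant): iterating \(P = \phi D + \phi P\) converges, since \(\phi<1\), to the fundamental price \(D/r\).

```python
# Hypothetical numbers, not from the text: D = 5, r = 5%.
D = 5.0      # constant expected dividend per period
r = 0.05     # safe rate of return on the substitute asset
phi = 1.0 / (1.0 + r)

# Iterate P = phi*D + phi*P; with |phi| < 1 this is a contraction,
# so it converges to the forward (fundamental) solution P = D / r.
P = 0.0
for _ in range(2000):
    P = phi * D + phi * P

print(round(P, 6))   # fundamental price
print(D / r)         # closed form: 100.0
```

The fixed point is \(P=\phi D/(1-\phi)=D/r\): the price equals the present value of all expected future dividends.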
Example 2: The Cagan Model
In 1956, Phillip Cagan published a very famous paper with the title “The Monetary Dynamics of Hyperinflation”.
The model involves the money demand \((m^d)\): \[p_{t}=\frac{\beta}{1+\beta} r_{t}+\frac{\beta}{1+\beta} \mathbb{E}_{t} p_{t+1}+\frac{1}{1+\beta} m_{t}^{d}\]
\(\{p_t,r_t\}\) are the price level and the real interest rate. The supply of money \((m^s)\) is: \[m_{t}^{s}=\phi+\theta m_{t-1}^{s}+\epsilon_{t} \quad, \quad|\theta|<1\]
The central bank sets the supply of money.
How can we solve such a model, having \(\mathbb{E}_{t} p_{t+1}\) in one equation? \(p_t\) is a forward-looking variable.
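Under RE, the Cagan equation can also be solved forward. A minimal sketch (illustrative numbers; \(r_t\) set to zero for simplicity) compares the truncated forward sum with its closed form, using \(\mathbb{E}_t m_{t+i}=\overline{m}+\theta^i(m_t-\overline{m})\) for the AR(1) money supply.

```python
# A sketch, assuming r_t = 0; all parameter values are illustrative.
beta = 4.0
lam = beta / (1.0 + beta)       # coefficient on E_t p_{t+1}
phi_m, theta = 1.0, 0.8         # money supply: m_t = phi_m + theta*m_{t-1} + eps
m_bar = phi_m / (1.0 - theta)   # unconditional mean of money (here 5.0)
m_t = 6.0                       # current money stock, above its mean

# Forward solution: p_t = (1/(1+beta)) * sum_i lam^i * E_t m_{t+i}
p_t = sum(lam**i * (m_bar + theta**i * (m_t - m_bar))
          for i in range(500)) / (1.0 + beta)

# Closed form of the same sum
closed = m_bar + (m_t - m_bar) / ((1.0 + beta) * (1.0 - lam * theta))
print(round(p_t, 6), round(closed, 6))
```

Note that in the steady state (\(m_t=\overline{m}\)) the solution gives \(p_t=\overline{m}\): the price level equals the money stock.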
3. Solving a RE model
The standard idea behind RE
- Lots of models in economics take the form:
\[y_{t}=\alpha +\beta \cdot \mathbb{E}_{t} y_{t+1}+ \theta x_{t} \tag{1}\]
\(\{\alpha, \beta, \theta\}\) are constants; \(\{y_t,x_t\}\) are the two variables of the model.
Eq. (1) says that today’s \(y\) is determined:
- by today’s \(x\)
- and by the expected value of tomorrow’s \(y\),
- where \(\mathbb{E}\) is the expectations operator.
But what determines that expected value of \(y\):
- \(\mathbb{E}_t y_{t+1}=?\)
RE as the mathematical expectation of the model
- If our model takes the form of eq. (1)
\[y_{t}=\alpha +\beta \cdot \mathbb{E}_{t} y_{t+1}+ \theta x_{t} \tag{1a}\]
- Under the RE hypothesis, the agents understand what happens in that process (equation) and formulate expectations in a way that is consistent with it:
\[ \mathbb{E}_{t} y_{t+1}= \alpha +\beta \cdot \mathbb{E}_{t} y_{t+2} + \theta \cdot \mathbb{E}_{t} x_{t+1} \tag{2}\]
- Eq. (2) follows from applying \(\mathbb{E}_t\) to eq. (1) shifted one period forward, using the "law of iterated expectations" \((\mathbb{E}_t \mathbb{E}_{t+1} y_{t+2} = \mathbb{E}_t y_{t+2})\).
- Notice that as \(\{\alpha, \beta, \theta\}\) are constants, then \(\mathbb{E}_t \alpha = \alpha\), etc.
A solution to Rational Expectations
To solve eq. (1), we must iterate forward by inserting eq. (2) into (1). Jump to Appendix A to see how this is done.
At the \(n\)-th iteration, we will get: \[y_{t} =\color{teal}\sum_{i=0}^{n-1} \beta^{i} \alpha+ \color{blue} \beta^{n} \mathbb{E}_{t} y_{t+n}+ \color{red} \sum_{i=0}^{n-1} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \tag{3}\]
To avoid explosive behavior (secure a stable equilibrium), impose the condition: \[|\beta|<1\]
Which implies that: \[\quad \lim _{n \rightarrow \infty} {\color{blue}\beta^{n} \mathbb{E}_{t} y_{t+n}}=0 \tag{4}\]
A solution to Rational Expectations (cont.)
Taking the limit \(n \rightarrow \infty\) in eq. (3) and using eq. (4), we finally get the solution for the stable equilibrium: \[y_{t} =\color{teal}\sum_{i=0}^{\infty} \beta^{i} \alpha+ \color{red} \sum_{i=0}^{\infty} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \tag{5}\]
But what determines \[\color{red}{\mathbb{E}_{t} x_{t+i}}\]
It depends on the nature of the process \(x_t\) and on the type of information we have about \(x_t\).
We discuss this point next.
4. Conditional vs unconditional expectations
What determines \(\mathbb{E}_{t} x_{t+i}\)?
If \(x_t\) is a deterministic process with a steady state given by \(\overline{x}\), then, by assumption:
\[\mathbb{E}_{t} x_{t+i}=\overline{x}\]
- If \(x_t\) is a stochastic process, we can compute \(\mathbb{E}_{t} x_{t+i}\) under two different perspectives:
- Unconditional expectations of \(x_t\): we only care about the mean of \(x_t\) when formulating expectations.
- Conditional expectations of \(x_t\): we have specific information about \(x_t\) and we will use it.
- Next, we show how to compute these two expected values.
Unconditional expectations
- Suppose that \(x_t\) is given by the following stochastic process:
\[x_{t}=\phi + \rho x_{t-1}+\varepsilon_{t} \ \ , \quad \varepsilon_t \sim \mathcal{N}\left(0, \sigma^2\right) \tag{6}\]
The unconditional mean is given by the (deterministic) steady-state value of \(x_t\), obtained by setting \(\varepsilon_t = 0\) and \(x_{t}= x_{t-1}= \overline{x}\):
\[\overline{x}=\phi + \rho \overline{x} \Rightarrow \overline{x}=\frac{\phi}{ 1-\rho} \quad , \qquad \rho \neq 1\]
Therefore, the expected (unconditional) value of \(\mathbb{E}_t x_{t+i}\) is given by: \[\mathbb{E}_t x_{t+i}=\overline{x}=\frac{\phi}{ 1-\rho} \tag{7}\]
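A quick simulation check (parameter values are illustrative): the sample mean of a long simulated path of eq. (6) should be close to \(\overline{x}=\phi/(1-\rho)\).

```python
import random

random.seed(0)
phi, rho, sigma = 1.0, 0.9, 0.5    # illustrative AR(1) parameters
x_bar = phi / (1.0 - rho)           # unconditional mean: 10.0

# Simulate x_t = phi + rho*x_{t-1} + eps_t and average the path
x, total, n = x_bar, 0.0, 200_000
for _ in range(n):
    x = phi + rho * x + random.gauss(0.0, sigma)
    total += x
print(round(total / n, 2))  # close to x_bar = 10.0
```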
Conditional expectations
- Consider the same stochastic process as in eq. (6):
\[x_{t}=\phi + \rho x_{t-1}+\varepsilon_{t} \ \ , \quad \varepsilon_t \sim \mathcal{N}\left(0, \sigma^2\right) \tag{6a}\]
The conditional mean is given by (for details Jump to Appendix B): \[\mathbb{E}_t x_{t+i}=\sum_{k=0}^{i-1} \phi \rho^k +\rho^i x_t=\frac{\phi\left(1-\rho^i\right)}{1-\rho}+\rho^{i} x_t\tag{8}\]
But as \(\overline{x}=\frac{\phi}{1-\rho}\), assuming that \(|\rho|<1\), we can rewrite (8) as: \[\mathbb{E}_t x_{t+i}=\underbrace{\overline{x}}_{\text{mean}}+\underbrace{\rho^i \left(x_t-\overline{x}\right)}_{\text{deviation}}=\overline{x}+ \rho^i x_t^\varepsilon \tag{9}\]
Where \(x_t^\varepsilon \equiv x_t-\overline{x}\) is the random component affecting \(x_t\).
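As a quick numerical check (all parameter values illustrative), iterating \(\mathbb{E}_t x_{t+i+1}=\phi+\rho\,\mathbb{E}_t x_{t+i}\) reproduces the closed form \(\overline{x}+\rho^i(x_t-\overline{x})\) at every horizon.

```python
phi, rho = 1.0, 0.9        # illustrative AR(1) parameters
x_bar = phi / (1.0 - rho)  # unconditional mean: 10.0
x_t = 12.0                 # current observation, above the mean

# Iterate the conditional expectation one horizon at a time
e = x_t
for i in range(1, 9):
    e = phi + rho * e                            # recursion
    closed = x_bar + rho**i * (x_t - x_bar)      # eq. (9)
    print(i, round(e, 6), round(closed, 6))      # identical columns
```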
5. RE solution with conditional expectations
\(y_t\) solution with conditional expectations
- Eq. (5) gives the general solution for \(y_t\) under the RE hypothesis:
\[y_{t} =\color{teal}\sum_{i=0}^{\infty} \beta^{i} \alpha+ \color{red} \sum_{i=0}^{\infty} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \tag{5a}\]
- Eq. (9) gives us the conditional expectation of \(x_{t+i}\):
\[\mathbb{E}_t x_{t+i}=\overline{x}+ \rho^i x_t^\varepsilon \ , \qquad x_t^\varepsilon \equiv x_t - \overline{x} \tag{9a}\]
- The solution of \(y_t\) under conditional expectations is given by inserting (9a) into (5a):
- Details in the next slide
\(y_t\) solution with conditional expectations (cont.)
- Inserting (9a) into (5a):
\[y_{t} = {\color{teal}\sum_{i=0}^{\infty} \beta^{i} \alpha} + \color{red} \sum_{i=0}^{\infty} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \tag{5a}\]
\[y_{t}= {\color{teal}{\frac{\alpha}{1-\beta}}} + \color{red}{ \sum_{i=0}^{\infty} \theta \beta^{i}\left(\overline{x}+ \rho^i x_t^\varepsilon\right) } \]
\[y_{t}= {\color{teal}{\frac{\alpha}{1-\beta}}} + \color{red}{ \sum_{i=0}^{\infty} \left( \theta \beta^{i}\overline{x}+ \theta \beta^{i} \rho^i x_t^\varepsilon \right) } = {\color{teal}{\frac{\alpha}{1-\beta}}} + \frac{\theta}{1-\beta} \overline{x} + \frac{\theta}{1- \beta \rho}x_t^\varepsilon \ \tag{10}\]
Summary: RE with conditional expectations
- The solution of \(y_t\) under conditional expectations is given by:
\[y_{t}= \underbrace{\frac{\alpha}{1-\beta} + \frac{\theta}{1-\beta} \overline{x}}_{\overline{y}} + \frac{\theta}{1- \beta \rho}x_t^\varepsilon \tag{10a}\]
- In the solution, the value of \(y_t\) is affected by three elements:
- The constant term: \(\frac{\alpha}{1-\beta}\)
- The deterministic steady state of \(x_t\): \(\frac{\theta}{1-\beta} \overline{x}\)
- The shocks that \(x_t\) may suffer, given by the term: \(\color{red}{\frac{\theta}{1- \beta \rho}x_t^\varepsilon}\)
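To confirm that (10a) is indeed a solution, a small numerical sketch (with illustrative parameter values) can check that it satisfies eq. (1), using \(\mathbb{E}_t x_{t+1}^\varepsilon = \rho\, x_t^\varepsilon\).

```python
# Check that y_t from eq. (10a) satisfies y_t = a + b*E_t y_{t+1} + c*x_t.
# Parameter values are illustrative (a = alpha, b = beta, c = theta).
a, b, c = 2.0, 0.9, 1.5
phi, rho = 1.0, 0.8             # AR(1) process for x_t
x_bar = phi / (1.0 - rho)
y_bar = a / (1.0 - b) + c * x_bar / (1.0 - b)

def y(x_eps):
    """Eq. (10a): y as a function of the random component of x."""
    return y_bar + c / (1.0 - b * rho) * x_eps

x_eps = 0.7                     # today's deviation of x from its mean
x_t = x_bar + x_eps
Ey_next = y(rho * x_eps)        # since E_t x_{t+1}^eps = rho * x_t^eps
lhs = y(x_eps)
rhs = a + b * Ey_next + c * x_t
print(round(lhs - rhs, 10))     # difference is numerically zero
```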
6. Empirical Relevance of RE
Empirical Relevance of RE
- Rational expectations implies that people do not make systematic mistakes.
- How well does such a concept perform when confronted with evidence?
- Let us check whether people, by using all relevant information, make systematic mistakes.
- We will use the two most widely used surveys on inflation expectations in the USA:
Michigan Survey on Inflation Expectations
The most cited survey in macroeconomics: Michigan Survey
Michigan Survey on Inflation Expectations (cont.)
Most people make systematic mistakes about inflation expectations: MICH performs quite poorly.
Survey of Professional Forecasters (SPF)
The SPF is another major survey on inflation expectations; the data are collected by the Philadelphia Fed.
Survey of Professional Forecasters (cont.)
The SPF produces unbiased expectations, lending support to RE.
People who use all relevant information do not make systematic mistakes.
7. RE and Stability Conditions
RE models: stability and the computer
As already seen above, a forward-looking process such as:
\[y_{t}= \alpha +\beta \cdot \mathbb{E}_{t} y_{t+1}+ \theta x_t\]
has its dynamics at the \(n\)-th iteration expressed by: \[y_{t} =\color{teal}\sum_{i=0}^{n-1} \beta^{i} \alpha+ \color{blue} \beta^{n} \mathbb{E}_{t} y_{t+n}+ \color{red} \sum_{i=0}^{n-1} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \tag{15}\]
Stability requires that \(\beta^{n} \mathbb{E}_{t} y_{t+n} \to 0\) as \(n \to \infty\). This is true only if:
- \(|\beta|<1\).
Models with RE are difficult (if not impossible) to solve with pencil and paper:
- We have to use a computer to solve them.
RE variables: a twist for the computer
- To use a computer, we must have a state-space representation of the model.
- We have to write the model with all variables at \(t+1\) on the left-hand side of the system’s state-space representation, and those at \(t\) on its right-hand side.
- In this case, instead of using the equation:
\[y_{t}=\alpha +\beta \cdot \mathbb{E}_{t} y_{t+1} + \theta x_{t}\]
- We should use instead the equation:
\[\mathbb{E}_{t} y_{t+1} = -(\alpha/\beta) + (1/\beta)y_t - (\theta/\beta)x_t\]
- So, if \(|\beta|<1\), then \(|1/\beta| >1\): the stability condition becomes the inverse of the original one.
- If the model is written in this way, stability requires: \(|1/\beta| >1.\)
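A minimal sketch of why the inverted condition matters (numbers illustrative): iterating the rearranged equation, any starting value other than the RE solution explodes at the rate \(1/\beta\), while the RE value (here the steady state, with \(x_t\) fixed at \(\overline{x}\)) stays bounded.

```python
# Iterate y_{t+1} = -(a/b) + y_t/b - (c/b)*x with |1/b| > 1.
# Illustrative values (a = alpha, b = beta, c = theta).
a, b, c = 2.0, 0.9, 1.5
x_bar = 5.0
y_bar = (a + c * x_bar) / (1.0 - b)   # steady state: 95.0

def iterate(y0, periods=50):
    """Run the 'computer form' of the model forward from y0."""
    y = y0
    for _ in range(periods):
        y = -(a / b) + y / b - (c / b) * x_bar
    return y

print(iterate(y_bar))          # stays (numerically) at the steady state
print(iterate(y_bar + 0.01))   # a 0.01 deviation is blown up by (1/b)^50
```

This is the saddle-path logic: because the root \(1/\beta\) is explosive, the forward-looking variable \(y_t\) must jump to the unique bounded (RE) solution.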
8. Readings
- There is no compulsory reading for this session. We hope that the slides will be sufficient to provide a good grasp of the rational expectations approach in macroeconomics.
- Many textbooks deal with this subject in a way that is not very useful for our course: they either treat it in an elementary way or offer a very sophisticated presentation, usually extremely mathematical but short on content.
- There is a textbook that is well suited to our level: Patrick Minford and David Peel (2019). Advanced Macroeconomics: Primer, Second Edition, Edward Elgar, Cheltenham.
- Chapter 2 deals extensively with adaptive and rational expectations. However, this chapter is quite long (40 pages), making it more suitable to be used as complementary material rather than as compulsory reading. But it is by far the best treatment of this subject at this level.
- Another excellent treatment of rational expectations can be found in the textbook: Ben J. Heijdra (2017). Foundations of Modern Macroeconomics, Third Edition, Oxford UP, Oxford.
- Chapter 5 deals with this topic at great length (40 pages), but at a relatively more advanced level than the one we follow in our course.
- Another source of information is the book by:
- George W. Evans and Seppo Honkapohja (2009). Learning and Expectations in Macroeconomics, Second Edition, Princeton UP, Princeton.
- Chapter 1 (Expectations and the learning approach) deals with this topic at an elementary level, because their idea is to focus on what they call “learning”.
Appendix A
A step-by-step derivation of equation (3) in the next slide
Solution: forward iteration
We will solve the following equation by forward iteration: \[y_{t} = \alpha+\beta \mathbb{E}_{t} y_{t+1}+\theta x_{t}\]
Like this, when \(n \rightarrow \infty\):
\(\underbrace{t \rightarrow (t+1)}_{1\text{st iteration}} \rightarrow \underbrace{(t+1) \rightarrow (t+2)}_{2\text{nd iteration}} \rightarrow \underbrace{(t+2) \rightarrow (t+3)}_{3\text{rd iteration}} \rightarrow \dots \rightarrow \underbrace{(t+(n-1)) \rightarrow (t+n)}_{n\text{th iteration}}\)
Forward iteration, step by step
Start from the original equation (1st iteration: \(t \rightarrow t+1\)):
\(y_{t} = \alpha+\beta \mathbb{E}_{t} y_{t+1}+\theta x_{t}\)
Going 1 period forward, under RE agents compute:
\(\mathbb{E}_{t} y_{t+1} = \alpha+\beta \mathbb{E}_{t} y_{t+2}+\theta \mathbb{E}_{t} x_{t+1}\)
Insert this expectation into the original equation and simplify, to get the result of the 2nd iteration (\(t+1 \rightarrow t+2\)):
\(y_{t} =\alpha+\beta\left[\alpha+\beta \mathbb{E}_{t} y_{t+2}+\theta \mathbb{E}_{t} x_{t+1}\right]+\theta x_{t}\)
\(y_{t} = \alpha+\beta \alpha+\beta^{2} \mathbb{E}_{t} y_{t+2}+\beta \theta \mathbb{E}_{t} x_{t+1}+\theta x_{t}\)
Going 2 periods forward:
\(\mathbb{E}_{t} y_{t+2}=\alpha+\beta \mathbb{E}_{t} y_{t+3}+\theta \mathbb{E}_{t} x_{t+2}\)
Insert again and simplify, to get the result of the 3rd iteration (\(t+2 \rightarrow t+3\)):
\(y_{t} = \color{teal}\beta^0 \alpha+\beta^{1} \alpha+ \beta^{2} \alpha+ \color{blue}\beta^{3} \mathbb{E}_{t} y_{t+3}+ \color{red} \theta \beta^{2} \mathbb{E}_{t} x_{t+2}+ \theta \beta^{1} \mathbb{E}_{t} x_{t+1}+\theta \beta^{0} \mathbb{E}_{t} x_{t}\color{black}\)
\[y_{t} = \color{teal}\sum_{i=0}^{3-1} \beta^{i} \alpha+ \color{blue}\beta^{3} \mathbb{E}_{t} y_{t+3}+ \color{red} \sum_{i=0}^{3-1} \theta \beta^{i} \mathbb{E}_{t} x_{t+i} \color{black}\]
Generalize to the \(n\)-th iteration
In the previous slide, we iterated forward 3 times.
The result was: \[y_{t} = \color{teal}\sum_{i=0}^{3-1} \beta^{i} \alpha+ \color{blue}\beta^{3} \mathbb{E}_{t} y_{t+3}+ \color{red} \sum_{i=0}^{3-1} \theta \beta^{i} \mathbb{E}_{t} x_{t+i}\]
Now, it is easy to see that if we iterate \(n\)-times forward, instead of 3, we will get:
\[y_{t} =\color{teal}\sum_{i=0}^{n-1} \beta^{i} \alpha+ \color{blue} \beta^{n} \mathbb{E}_{t} y_{t+n}+ \color{red} \sum_{i=0}^{n-1} \theta \beta^{i} \mathbb{E}_{t} x_{t+i}\]
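Eq. (3) can also be checked numerically in a simple deterministic case (illustrative numbers): with \(x_t\) fixed at \(\overline{x}\) and \(y\) at its steady state \(\overline{y}\), the right-hand side of the \(n\)-th iteration equals \(\overline{y}\) for every \(n\).

```python
# Verify the n-th iteration identity in a deterministic case.
# Illustrative values (a = alpha, b = beta, c = theta).
a, b, c = 2.0, 0.9, 1.5
x_bar = 5.0
y_bar = (a + c * x_bar) / (1.0 - b)   # steady state: 95.0

for n in (1, 5, 25):
    # Right-hand side of eq. (3), with E_t y_{t+n} = y_bar, E_t x_{t+i} = x_bar
    rhs = sum(b**i * a for i in range(n)) \
        + b**n * y_bar \
        + sum(c * b**i * x_bar for i in range(n))
    print(n, round(rhs, 10))          # equals y_bar for every n
```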
Appendix B
A step-by-step derivation of equation (8)
Conditional expectations: iterating forward
Apply the expectations operator, iteration by iteration, to:
\(\qquad x_t= \phi+\rho x_{t-1}+\varepsilon_t\)
1st iteration:
\(\mathbb{E}_t x_{t+1}=\phi+\rho \mathbb{E}_t x_t+\mathbb{E}_t \varepsilon_{t+1}=\phi+\rho x_t+0=\phi+\rho x_t\)
2nd iteration:
\(\mathbb{E}_t x_{t+2}=\phi+\rho \mathbb{E}_t x_{t+1}+\mathbb{E}_t \varepsilon_{t+2}=\phi+\rho\left[\phi+\rho x_t\right]+0=\phi+\rho \phi+\rho^2 x_t\)
3rd iteration:
\(\mathbb{E}_t x_{t+3}=\phi+\rho \mathbb{E}_t x_{t+2}+\mathbb{E}_t \varepsilon_{t+3}=\phi+\rho\left[\phi+\rho \phi+\rho^2 x_t\right] +0 = \underbrace{\phi+\rho \phi+\rho^2 \phi}_{=\sum_{k=0}^{3-1} \phi \rho^k}+ \rho^3x_t\)
Then, generalize to the \(i\)th iteration: \[\mathbb{E}_t x_{t+i}=\sum_{k=0}^{i-1} \phi \rho^k +\rho^i x_t=\frac{\phi\left(1-\rho^i\right)}{1-\rho}+\rho^{i} x_t=\overline{x}+\rho^i\left(x_t-\overline{x}\right)\]
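The generalization can be verified numerically (illustrative values): the truncated geometric sum plus \(\rho^i x_t\) coincides with the closed form \(\overline{x}+\rho^i(x_t-\overline{x})\) at every horizon \(i\).

```python
phi, rho, x_t = 1.0, 0.9, 12.0   # illustrative AR(1) parameters and observation
x_bar = phi / (1.0 - rho)        # unconditional mean: 10.0

for i in (1, 3, 10):
    direct = sum(phi * rho**k for k in range(i)) + rho**i * x_t
    closed = x_bar + rho**i * (x_t - x_bar)
    print(i, round(direct, 10), round(closed, 10))  # two identical columns
```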