
Commit

Added week 4 material
wmutschl committed Nov 4, 2024
1 parent d954b40 commit 8c7b2da
Showing 11 changed files with 245 additions and 146 deletions.
10 changes: 10 additions & 0 deletions .github/workflows/dynare-6.2-matlab-r2024b-macos.yml
@@ -64,3 +64,13 @@ jobs:
lawOfLargeNumbers;
lawOfLargeNumbersAR1;
centralLimitDependentData;
- name: Run week 4 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("Dynare-6.2-arm64/matlab");
cd("progs/matlab");
AR4OLS;
AR4ML;
AR1MLLaplace;
10 changes: 10 additions & 0 deletions .github/workflows/dynare-6.2-matlab-r2024b-ubuntu.yml
@@ -93,3 +93,13 @@ jobs:
lawOfLargeNumbers;
lawOfLargeNumbersAR1;
centralLimitDependentData;
- name: Run week 4 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("dynare/matlab");
cd("progs/matlab");
AR4OLS;
AR4ML;
AR1MLLaplace;
10 changes: 10 additions & 0 deletions .github/workflows/dynare-6.2-matlab-r2024b-windows.yml
@@ -55,3 +55,13 @@ jobs:
lawOfLargeNumbers;
lawOfLargeNumbersAR1;
centralLimitDependentData;
- name: Run week 4 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("D:\hostedtoolcache\windows\dynare-6.0\matlab");
cd("progs/matlab");
AR4OLS;
AR4ML;
AR1MLLaplace;
7 changes: 5 additions & 2 deletions README.md
@@ -87,7 +87,6 @@ Please feel free to use this for teaching or learning purposes; however, taking

</details>

<!---

<details>
<summary>Week 4: Ordinary Least Squares (OLS) and Maximum Likelihood (ML) estimation of the autoregressive process</summary>
@@ -101,13 +100,17 @@ Please feel free to use this for teaching or learning purposes; however, taking

* [x] review the solutions of [last week's exercises](https://github.com/wmutschl/Quantitative-Macroeconomics/releases/latest/download/week_3.pdf) and write down all your questions
* [x] read Lütkepohl (2004); make note of all the aspects and concepts that you are not familiar with or that you find difficult to understand
* [x] TRY (!!!) to do exercise sheet 4; particularly, create your own ARpOLS.m and ARpML.m functions
* [x] do exercise 1; particularly, create your own ARpOLS.m function (see the sketch below); feel free to send it to me via Mattermost for review
* [x] we will do exercises 2 and 3 in class
* [x] participate in the Q&A sessions with all your questions and concerns
* [x] for immediate help: [schedule a meeting](https://schedule.mutschler.eu)
* [x] (optionally) checkout the short [Advanced Git Video Tutorials from GitKraken](https://www.gitkraken.com/learn/git/tutorials#advanced)

</details>
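
A minimal sketch of what the ARpOLS.m task above involves (illustrative only: variable names and struct fields are assumptions, not the course's reference solution; `tcdf` requires the Statistics and Machine Learning Toolbox):

function OLS = ARpOLS(y,p,const,alpha)
% OLS estimation of an AR(p) model (sketch)
% y: data vector, p: number of lags,
% const: 1 = constant, 2 = constant and linear trend
% alpha: significance level (would be used for confidence intervals, omitted here)
T = size(y,1);
Y = ones(T-p,1);                      % constant
if const == 2
    Y = [Y, (p+1:T)'];                % linear trend
end
for j = 1:p
    Y = [Y, y(p+1-j:T-j)];            % lags y_{t-1},...,y_{t-p}
end
yeff  = y(p+1:T);                     % effective sample
thetahat = (Y'*Y)\(Y'*yeff);          % OLS estimator
uhat  = yeff - Y*thetahat;            % residuals
k     = size(Y,2);
sig2u = uhat'*uhat/(T-p-k);           % unbiased variance estimate
se    = sqrt(diag(sig2u*inv(Y'*Y)));  % standard errors
tstat = thetahat./se;
pval  = 2*(1-tcdf(abs(tstat),T-p-k)); % two-sided p-values
OLS = struct('theta',thetahat,'se',se,'tstat',tstat,'pval',pval,'siguhat',sqrt(sig2u));
end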


<!---
<details>
<summary>Week 5: Information Criteria, Specification Tests, and Bootstrap</summary>
69 changes: 37 additions & 32 deletions exercises/ml_ar_p.tex
@@ -1,12 +1,12 @@
\section[Maximum Likelihood Estimation Of Gaussian AR(p)]{Maximum Likelihood Estimation Of Gaussian AR(p)\label{ex:MaximumLikelihoodEstimationGaussianARp}}
Consider an AR(p) model with a constant and linear trend:
\section[Maximum Likelihood Estimation of Gaussian AR{(p)}]{Maximum Likelihood Estimation of Gaussian AR{(p)}\label{ex:MaximumLikelihoodEstimationGaussianARp}}
Consider an AR{(p)} model with a constant and linear trend:
\begin{align*}
y_t = c + d\cdot t + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} +u_{t}=Y_{t-1}\theta + u_t
\end{align*}
where \(Y_{t-1}=(1,t, y_{t-1},\ldots,y_{t-p})\) is the matrix of regressors,
\(\theta = (c,d,\phi_1,\ldots,\phi_p)\) the parameter vector
and the error terms \(u_t\) are white noise and normally distributed,
i.e.\ \( u_t\sim N(0,\sigma_u^2) \) and \(E[u_t u_s]=0\) for \(t\neq s\).
and the error terms \(u_t\) are white noise and normally distributed, i.e.\
\( u_t\sim N(0,\sigma_u^2) \) and \(E[u_t u_s]=0\) for \(t\neq s\).
If the sample distribution is known to have probability density function \(f(y_1,\ldots,y_T)\),
an estimation with Maximum Likelihood (ML) is possible.
To this end, we decompose the joint distribution by
@@ -27,13 +27,15 @@
\begin{align*}
I_a(\theta,\sigma_u^2) = \lim\limits_{T\rightarrow \infty}-\frac{1}{T} E
\begin{pmatrix}
\frac{\partial^2 \log l}{\partial \theta^2} & \frac{\partial^2 \log l}{\partial \theta \partial \sigma_u^2} \\
\frac{\partial^2 \log l}{\partial \theta^2} & \frac{\partial^2 \log l}{\partial \theta \partial \sigma_u^2}
\\
\frac{\partial^2 \log l}{\partial \sigma_u^2 \partial \theta} & \frac{\partial^2 \log l}{\partial {(\sigma_u^2)}^2}
\end{pmatrix}
\end{align*}
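
This matrix is what makes the ML standard errors in the exercises below operational: under standard regularity conditions (see Lütkepohl, 2004) the ML estimator is asymptotically normal,
\begin{align*}
\sqrt{T}
\begin{pmatrix}
\widehat{\theta}_{ML}-\theta \\ \widehat{\sigma}_{u,ML}^2-\sigma_u^2
\end{pmatrix}
\overset{d}{\rightarrow} N\left(0,\, {I_a(\theta,\sigma_u^2)}^{-1}\right)
\end{align*}
so the inverse of (an estimate of) \(I_a\) provides the asymptotic covariance matrix.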

\begin{enumerate}
\item First consider the case of \(p=1\)
\item
First consider the case of \(p=1\)
\begin{enumerate}
\item Derive the exact log-likelihood function for the \(AR(1)\) model with \(|\theta|<1\) and \(d=0\):
\begin{align*}
@@ -42,47 +44,50 @@
\item Why do we often look at the log-likelihood function instead of the actual likelihood function?
\item Regard the value of the first observation as deterministic or, equivalently,
note that its contribution to the log-likelihood disappears asymptotically.
Maximize analytically the conditional log-likelihood to get the ML estimators for \(\theta\) and \(\sigma_u\).
Maximize analytically the conditional log-likelihood to get the ML estimators for \(\theta \) and \(\sigma_u\).
Compare these to the OLS estimators.
\end{enumerate}
\item Now consider the general \(AR(p)\) model.
\begin{enumerate}
\item Write a function \texttt{logLikeARpNorm(\(x\),\(y\),\(p\),\(const\))}
that computes the value of the log-likelihood
conditional on the first \(p\) observations of a Gaussian \(AR(p)\) model, i.e.
\begin{align*}
\log l(\theta,\sigma_u)= -\frac{T-p}{2}\log(2\pi)-\frac{T-p}{2}\log(\sigma_u^2)-\sum_{t=p+1}^{T}\frac{u_t^2}{2\sigma_u^2}
\end{align*}
where \(x=(\theta',\sigma_u)'\), \(y\) denotes the data vector,
\(p\) the number of lags and \(const\) is equal to 1 if there is a constant,
and equal to 2 if there is a constant and linear trend in the model.
\item Write a \texttt{function ML = ARpML(\(y\),\(p\),\(const\),\( \alpha \))}
\item
Now consider the general \(AR(p)\) model.
\begin{enumerate}
\item Write a function \texttt{logLikeARpNorm{(\(x\),\(y\),\(p\),\(const\))}}
that computes the log-likelihood value
conditional on the first \(p\) observations of a Gaussian \(AR(p)\) model:
\begin{align*}
\log l(\theta,\sigma_u)= -\frac{T-p}{2}\log(2\pi)-\frac{T-p}{2}\log(\sigma_u^2)-\sum_{t=p+1}^{T}\frac{u_t^2}{2\sigma_u^2}
\end{align*}
where \(x=(\theta',\sigma_u)'\), \(y\) denotes the data vector,
\(p\) the number of lags and \(const\) is equal to 1 if there is a constant,
and equal to 2 if there are both a constant and linear trend in the model.
\item Write a \texttt{function ML = ARpML{(\(y\),\(p\),\(const\),\( \alpha \))}}
that takes as inputs a data vector \(y\), number of lags \(p\)
and \(const=1\) if the model has a constant term
or \(const=2\) if the model has a constant term and linear trend.
\(\alpha\) denotes the significance level.
The function computes
\begin{itemize}
\item the maximum likelihood estimates of an AR(p) model by numerically minimizing the negative conditional log-likelihood function using e.g. \texttt{fminunc}
\item the standard errors by means of the asymptotic covariance matrix, i.e.\ the inverse of the hessian of the negative log-likelihood function
or \(const=2\) if the model has both a constant term and linear trend.
\(\alpha \) denotes the significance level.
The function computes
\begin{itemize}
\item the maximum likelihood estimates of an AR{(p)} model
by numerically minimizing the negative conditional log-likelihood function using e.g.\ \texttt{fminunc}
\item the standard errors by means of the asymptotic covariance matrix, i.e.\
the inverse of the hessian of the negative log-likelihood function
(hint: gradient-based optimizers also output the hessian)
\end{itemize}
Save all results into a structure \enquote{ML} containing the estimates of \(\theta\),
\end{itemize}
Save all results into a structure \enquote{ML} containing the estimates of \(\theta \),
its standard errors, t-statistics and p-values as well as the ML estimate of \(\sigma_u\).

\item Load simulated data given in the CSV file \texttt{AR4.csv} and estimate an AR(4) model with constant term.
Compare your results with the OLS estimators from the previous exercise.
\end{enumerate}
\item Load simulated data given in the CSV file \texttt{AR4.csv} and estimate an AR{(4)} model with constant term.
Compare your results with the OLS estimators from the previous exercise.
\end{enumerate}
\end{enumerate}
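
For reference, the exact log-likelihood asked for in item 1(a) has a standard closed form: with \(|\theta|<1\) the first observation is drawn from the stationary distribution \(y_1 \sim N\bigl(\tfrac{c}{1-\theta},\, \tfrac{\sigma_u^2}{1-\theta^2}\bigr)\), so that
\begin{align*}
\log l(c,\theta,\sigma_u^2)
= &-\frac{T}{2}\log(2\pi) - \frac{T}{2}\log(\sigma_u^2) + \frac{1}{2}\log(1-\theta^2) \\
  &- \frac{(1-\theta^2){\left(y_1-\frac{c}{1-\theta}\right)}^2}{2\sigma_u^2}
   - \sum_{t=2}^{T} \frac{{(y_t-c-\theta y_{t-1})}^2}{2\sigma_u^2}
\end{align*}
Dropping the terms involving the marginal density of \(y_1\) yields the conditional log-likelihood used in item 2.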

\paragraph{Readings}
\begin{itemize}
\item \textcite{Lutkepohl_2004_UnivariateTimeSeries}.
\item \textcite{Lutkepohl_2004_UnivariateTimeSeries}
\end{itemize}
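
A minimal sketch of logLikeARpNorm from item 2(a), implementing the formula above (the regressor construction mirrors the ARpOLS sketch; variable names are illustrative):

function loglik = logLikeARpNorm(x,y,p,const)
% conditional log-likelihood of a Gaussian AR(p) (sketch)
% x = [theta; sig_u], theta stacked as (constant[, trend], AR coefficients)'
theta = x(1:end-1);
sigu  = x(end);
T = size(y,1);
Y = ones(T-p,1);                  % constant
if const == 2
    Y = [Y, (p+1:T)'];            % linear trend
end
for j = 1:p
    Y = [Y, y(p+1-j:T-j)];        % lags y_{t-1},...,y_{t-p}
end
u = y(p+1:T) - Y*theta;           % residuals u_{p+1},...,u_T
loglik = -(T-p)/2*log(2*pi) - (T-p)/2*log(sigu^2) - sum(u.^2)/(2*sigu^2);
end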


\begin{solution}\textbf{Solution to \nameref{ex:MaximumLikelihoodEstimationGaussianARp}}
\ifDisplaySolutions
\ifDisplaySolutions%
\input{exercises/ml_ar_p_solution.tex}
\fi
\newpage
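
The optimization step in ARpML (item 2(b) above) could then look as follows (a sketch: starting values, optimizer options, and the erfc-based normal p-values are choices of this illustration, not prescribed by the exercise):

function ML = ARpML(y,p,const,alpha)
% ML estimation of a Gaussian AR(p) by minimizing the negative
% conditional log-likelihood (sketch); alpha would be used when
% reporting confidence intervals, omitted here
negloglike = @(x) -logLikeARpNorm(x,y,p,const);
x0 = [0.1*ones(const+p,1); 1];                     % naive starting values (assumption)
opts = optimoptions('fminunc','Display','off');
[xhat,~,~,~,~,hess] = fminunc(negloglike,x0,opts); % hessian of the negative log-likelihood
se    = sqrt(diag(inv(hess)));                     % asymptotic standard errors
tstat = xhat./se;
pval  = erfc(abs(tstat)/sqrt(2));                  % = 2*(1-normcdf(|t|)), base MATLAB only
ML = struct('theta',xhat(1:end-1),'se',se(1:end-1),'tstat',tstat(1:end-1), ...
            'pval',pval(1:end-1),'siguhat',xhat(end));
end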
33 changes: 19 additions & 14 deletions exercises/ml_ar_p_laplace.tex
@@ -1,34 +1,39 @@
\section[Maximum Likelihood Estimation Of Laplace AR(p)]{Maximum Likelihood Estimation Of Laplace AR(p)\label{ex:MaximumLikelihoodEstimationLaPlaceARp}}
Consider the AR(1) model with constant
\section[Maximum Likelihood Estimation of Laplace AR{(p)}]{Maximum Likelihood Estimation of Laplace AR{(p)}\label{ex:MaximumLikelihoodEstimationLaPlaceARp}}
Consider the AR{(1)} model with constant
\begin{align*}
y_t = c + \phi y_{t-1} + u_t
\end{align*}
Assume that the error terms \(u_t\) are i.i.d. Laplace distributed with known density
Assume that the error terms \(u_t\) are {i.i.d.}\ Laplace distributed with known density
\begin{align*}
f_{u_{t}}(u)=\frac{1}{2}\exp \left( -|u|\right)
f_{u_{t}}(u) = \frac{1}{2} \exp{\left( -|u|\right)}
\end{align*}
Note that for the above parametrization of the Laplace distribution
we have that \(E(u_t)=0\) and \(Var(u_t)=2\),
so we are only interested in estimating \(c\) and \(\phi\)
and not the standard deviation of \(u_t\) as it is fixed.
so we are only interested in estimating \(c\) and \(\phi \)
and not the variance of \(u_t\), as it is fixed to 2.
\begin{enumerate}
\item Derive the log-likelihood function conditional on the first observation.
\item Write a function that calculates the conditional log-likelihood of \(c\) and \(\phi\).
\item Load the dataset given in the CSV file \texttt{LaPlace.csv}.
Numerically find the maximum likelihood estimates of \(c\) and \(\phi\)
\item
Derive the log-likelihood function conditional on the first observation.
\item
Write a function that calculates the conditional log-likelihood of \(c\) and \(\phi \).
\item
Load the dataset given in the CSV file \texttt{LaPlace.csv}.
Numerically find the maximum likelihood estimates of \(c\) and \(\phi \)
by minimizing the negative conditional log-likelihood function.
\item Compare your results with the maximum likelihood estimate under the assumption of Gaussianity.
That is, redo the estimation by minimizing the negative Gaussian log-likelihood function.
\item
Compare your results with the maximum likelihood estimate under the assumption of Gaussianity.
That is, redo the estimation by minimizing the negative Gaussian log-likelihood function
using the dataset given in the CSV file \texttt{LaPlace.csv}.
\end{enumerate}
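
A minimal sketch of the conditional Laplace log-likelihood from parts 1 and 2 (the file name logLikeARpLaplace.m appears in the solution listings below; everything else here is illustrative):

function loglik = logLikeARpLaplace(x,y)
% conditional log-likelihood of the Laplace AR(1) with constant (sketch)
% x = [c; phi]; the variance of u_t is fixed at 2 by the parametrization
c   = x(1);
phi = x(2);
T = size(y,1);
u = y(2:T) - c - phi*y(1:T-1);    % residuals conditional on y_1
loglik = -(T-1)*log(2) - sum(abs(u));
end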

\paragraph{Readings}
\begin{itemize}
\item \textcite{Lutkepohl_2004_UnivariateTimeSeries}
\item \textcite{Lutkepohl_2004_UnivariateTimeSeries}
\end{itemize}


\begin{solution}\textbf{Solution to \nameref{ex:MaximumLikelihoodEstimationLaPlaceARp}}
\ifDisplaySolutions
\ifDisplaySolutions%
\input{exercises/ml_ar_p_laplace_solution.tex}
\fi
\newpage
30 changes: 19 additions & 11 deletions exercises/ml_ar_p_laplace_solution.tex
@@ -1,9 +1,12 @@
\begin{enumerate}
\item Computation of the conditional expectation and variance:
\begin{itemize}
\item \(E[y_{t}|y_{t-1}] = c + \phi y_{t-1} \)
\item \(Var[y_{t}|y_{t-1}] = var(u_t) = 2\)
\end{itemize}

\item
Computation of the conditional expectation and variance:
\begin{gather*}
E[y_{t}|y_{t-1}] = c + \phi y_{t-1}
\\
Var[y_{t}|y_{t-1}] = var(u_t) = 2
\end{gather*}
Hence the conditional density is
\begin{align*}
f_t(y_{t}|y_{t-1}; c, \phi) = \frac{1}{2} \cdot e^{-|y_{t} -(c + \phi y_{t-1})|} = \frac{1}{2} \cdot e^{-|u_t|}
@@ -13,11 +13,16 @@
\log L(y_{2}, \dots, y_{T};c, \phi) =-(T-1) \cdot \log(2) - \sum_{t=2}^{T} |u_{t}|
\end{align*}

\item \lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/logLikeARpLaplace.m}
\item \lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/ARpMLLaPlace.m}
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/AR1MLLaPlace.m}
\item Note that the values are very close to each other.
Maximizing the Gaussian likelihood, even though the underlying distribution is not Gaussian,
\item
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/logLikeARpLaplace.m}

\item
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/ARpMLLaPlace.m}
\lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/AR1MLLaPlace.m}

\item
Note that the values are very close to each other.
Maximizing the Gaussian likelihood, even though the underlying distribution is not Gaussian,
is also known as pseudo-maximum likelihood or quasi-maximum likelihood.
It usually performs surprisingly well if you cannot pin down the underlying distribution.
It usually performs surprisingly well if you cannot pin down the underlying distribution.
\end{enumerate}
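
One practical point for the numerical minimization step: the \(|u_t|\) terms make the Laplace objective non-differentiable wherever a residual is zero, so a derivative-free optimizer is the safer default (a sketch; the data path and starting values are assumptions):

y = importdata('LaPlace.csv');    % file name from the exercise; adjust the path as needed
T = size(y,1);
f = @(x) (T-1)*log(2) + sum(abs(y(2:T) - x(1) - x(2)*y(1:T-1)));  % negative log-likelihood
xhat = fminsearch(f,[0; 0.5]);    % Nelder-Mead; x = [c; phi]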
