From 5df75bd802ef078ef1bd2084c29f94dc4e16dd63 Mon Sep 17 00:00:00 2001 From: nickzachariou Date: Thu, 7 Sep 2023 00:09:40 +0200 Subject: [PATCH] Add Lecture 9 on Extracting Physical Observables (#2) --- .gitignore | 1 + _toc.yml | 1 + lecture9.ipynb | 3186 ++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 3188 insertions(+) create mode 100644 lecture9.ipynb diff --git a/.gitignore b/.gitignore index e6eed19..568d385 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ *.csv .ipynb_checkpoints/ _build/ +ExtendedLLexample.txt diff --git a/_toc.yml b/_toc.yml index 830a65d..d54d058 100644 --- a/_toc.yml +++ b/_toc.yml @@ -5,3 +5,4 @@ chapters: - file: lecture6-nbarp - file: lecture6-gammap - file: lecture6-gammap-solution + - file: lecture9 diff --git a/lecture9.ipynb b/lecture9.ipynb new file mode 100644 index 0000000..1b21180 --- /dev/null +++ b/lecture9.ipynb @@ -0,0 +1,3186 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "# Lecture 9 - Observables" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [ + "remove-cell" + ] + }, + "outputs": [], + "source": [ + "%pip install -q gdown lmfit" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "mystnb": { + "code_prompt_show": "Import Python libraries" + }, + "tags": [ + "hide-cell" + ] + }, + "outputs": [], + "source": [ + "import math\n", + "import random\n", + "\n", + "import gdown\n", + "import matplotlib\n", + "import matplotlib.cm as cm\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "import pylab as py\n", + "import scipy.optimize as opt\n", + "import scipy.stats as stats\n", + "from lmfit import Model\n", + "from lmfit.models import ExpressionModel\n", + "from matplotlib.colors import BoundaryNorm\n", + "from matplotlib.ticker import MaxNLocator\n", + "from numpy import exp, loadtxt, pi, sqrt\n", + "from scipy.stats import chi2, norm" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Estimator Definition" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "An estimator is a procedure applied to the data sample which gives a numerical value for a property of the parent population or a property/parameter of the parent distribution. Suppose that the quantity we want to measure is called $a$. $\\hat{a}$ is an estimator." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Let's consider an estimator $\\hat{a}$ to find the average height $a$ of all students of the University on the basis of a sample N.\n", + "Let's consider the different estimators:\n", + "\n", + "\n", + "1. Add up all the heigths and divide by N\n", + "2. Add up the first 10 heights and divide by 10. Ignore the rest\n", + "3. Add up all the heigths and divide by N-1\n", + "4. Trow away the data and give 1.8 as answer\n", + "5. Add up the second, fourth, sixth,... 
heights and divide by N/2 for N even and (N-1)/2 for N odd\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Consistent" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Consistent: $\\lim_{N\\to\\infty} \\hat{a} = a$, that is you can get as close to the true value as you want, as long as you have a large enough data set.\n", + "In the previous example 1 is consistent:\n", + "\n", + "$$\n", + "\\hat{\\mu}=\\frac{x_1...x_N}{N}=\\bar{x}\n", + "$$\n", + "\n", + "for $N$ going to infinity $\\bar{x}\\rightarrow\\mu$: law of big numbers.\n", + "3 and 5 are also consistent since N-1 or N/2 make little difference when $N\\rightarrow\\infty$.\n", + "On the contrary 2 and 4 are not consistent." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Unbiased" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Unbiased: $\\langle\\hat{a}\\rangle = a\\ \\forall\\ N$, that is however large or small your data set may be, you should on average expect to get the right answer.\n", + "The expectation value of the estimator is equal to the true value.\n", + "For 1 we have\n", + "\n", + "$$\n", + "\\langle \\hat{\\mu} \\rangle =\\langle \\frac{x_1\\dots x_N}{N}\\rangle =\\frac{1}{N}\\left( \\langle x_1 \\rangle +\\cdots+ \\langle x_N \\rangle \\right)=\\frac{N \\langle x \\rangle }{N}=\\mu\n", + "$$\n", + "\n", + "For 3 we have\n", + "\n", + "$$\n", + "\\langle \\hat{\\mu} \\rangle =\\langle \\frac{x_1\\dots x_N}{N-1}\\rangle =\\frac{1}{N-1}\\left( \\langle x_1 \\rangle +\\cdots+ \\langle x_N \\rangle \\right)=\\frac{N \\langle x \\rangle }{N-1}\n", + "$$\n", + "\n", + "so 3 is biased.\n", + "\n", + "While for 5\n", + "\n", + "$$\n", + "\\langle \\hat{\\mu} \\rangle =\\langle \\frac{x_2\\dots x_N}{N/2}\\rangle =\\frac{1}{N/2}\\left( \\langle x_2 \\rangle +\\cdots+ \\langle x_N \\rangle \\right)=\\frac{N/2 \\langle x \\rangle }{N/2}=\\mu\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Efficient" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "$V(\\hat{a})$ is small, that is the fluctuations around the true value is for a given size of the data set smaller than for less efficient estimators.\n", + "In general if the variance is smaller we prefer the estimator, so the main difference between 1 and 5 is that 5 uses only half of the data set thus its variance is $\\sqrt{2}$ larger." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Example: Estimating the variance" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We consider the ideal case where the true mean is known $\\mu$. 
The estimator of the variance is thus\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "\\widehat{V(x)} & = & \\frac{1}{N}\\sum_i \\left(x_i-\\mu\\right)^2\n", + "\\end{eqnarray}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can show that it is consistent and unbiased\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left\\langle\\widehat{V(x)}\\right\\rangle & = & \\frac{1}{N}N\\left\\langle \\left(x-\\mu\\right)^2 \\right\\rangle \\\\\n", + "\t\t& = & \\left\\langle \\left(x-\\mu\\right)^2 \\right\\rangle = V(x)\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's consider the case where $\\mu$ is not known.\n", + "An obvious remedy is to use $\\bar{x}$ so that\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\widehat{V(x)}\t& = & \\frac{1}{N}\\sum_i \\left(x_i-\\hat{x}\\right)^2\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can prove that such an estimator is biased\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\widehat{V(x)}\t& = & \\frac{1}{N}\\sum_i \\left(x_i-\\overline{x}\\right)^2= \\frac{1}{N}\\sum_i \\left(x_i^2-2x_i\\overline{x}+\\overline{x}^2\\right) \\\\\n", + "\t\t\t& = & \\frac{1}{N}\\left(\\sum_i x_i^2-2\\overline{x}\\sum_i x_i+\\sum_i\\overline{x}^2\\right) \\\\\n", + "\t\t\t& = & \\frac{1}{N}\\left(\\sum_i x_i^2-2\\sum_i\\overline{x}^2+\\sum_i\\overline{x}^2\\right) \\\\\n", + "\t\t\t& = & \\frac{1}{N}\\sum_i\\left( x_i^2-\\overline{x}^2\\right) \\\\\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now take the expectation value of the estimator\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left\\langle\\widehat{V(x)}\\right\\rangle & = & \\left\\langle x^2 - \\overline{x}^2 \\right\\rangle \\\\\n", + "\t\t& = & \\left\\langle x^2 \\right\\rangle - \\left\\langle\\overline{x}^2 \\right\\rangle \\\\\n", + "\t\t& = & \\left\\langle x^2 \\right\\rangle - \\left\\langle x \\right\\rangle^2 + \\left\\langle \\overline{x} \\right\\rangle^2 - \\left\\langle\\overline{x}^2 \\right\\rangle\\hspace{0.5cm} [\\mathrm{CLT}] \\\\\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "Where we used\n", + "\n", + "$$\n", + "\\langle x \\rangle =\\langle \\overline{x} \\rangle\n", + "$$\n", + "\n", + "thanks to the central limit theorem (CLT)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So we have\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left\\langle\\widehat{V(x)}\\right\\rangle \t\t& = & V(x) - V(\\overline{x}) \\\\\n", + "\t\t& = & V(x) - \\frac{1}{N}V(x)\\hspace{0.5cm} [\\mathrm{CLT}] \\\\\n", + "\t\t& = & \\frac{N-1}{N}V(x)\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The bias fall as $1/N$ so for large data set this can be neglected.\n", + "A way to correct the bias is to defined\n", + "\n", + "$$\n", + "s^2 \\equiv \\widehat{V(x)} \\equiv \\frac{1}{N-1}\\sum_i\\left(x_i-\\overline{x}\\right)^2\n", + "$$\n", + "\n", + "where the multiplication factor $\\frac{N}{N-1}$ is known as Bessel's correction." 
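+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a side note, NumPy exposes both conventions through the `ddof` argument of `np.var`, so the two estimators can be compared directly (a minimal sketch with made-up numbers):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# np.var divides by N by default (ddof=0); ddof=1 applies Bessel's correction\n",
+ "heights = np.array([1.82, 1.75, 1.69, 1.90, 1.78])  # made-up sample\n",
+ "print(\"biased (1/N):       \", np.var(heights))\n",
+ "print(\"unbiased (1/(N-1)): \", np.var(heights, ddof=1))"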
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Example: Gaussian" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Let's consider the variance estimator to evaluate the variance of a given population.\n", + "\n", + "We assume that the PDF governing the distribution of ages of York student is a Gaussian with a given mean and variance.\n", + "\n", + "To estimate the mean and variance of the population we take samples of 5 students.\n", + "For each sample we calculate mean and variance." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# Selected properties of the population M1-> Mean S1**2 is the variance\n", + "M1 = 25\n", + "S1 = 5" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "random.seed(42)\n", + "students = 5\n", + "sample = []\n", + "for i in range(students):\n", + " sample.append(random.gauss(M1, S1))\n", + "\n", + "sample = np.array(sample)\n", + "print(sample)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def variance1(array, m):\n", + " som = 0\n", + " l = len(array)\n", + " for i in range(l):\n", + " som = som + (array[i] - m) ** 2 / l\n", + "\n", + " return som\n", + "\n", + "\n", + "def variance2(array, m):\n", + " som = 0\n", + " l = len(array)\n", + " for i in range(l):\n", + " som = som + (array[i] - m) ** 2 / (l - 1)\n", + "\n", + " return som" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "av = sample.mean()\n", + "\n", + "print(\"The mean of the sample is =\", av)\n", + "print(\"The Variance (with bias) is=\", variance1(sample, av))\n", + "print(\"The Variance (without bias) is=\", variance2(sample, av))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### How can we check if estimator is biased?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "In order for an estimator to be unbiased, its expected value must exactly equal the value of the population parameter. The bias of an estimator is the difference between the expected value of the estimator and the actual parameter value. Thus, if this difference is non-zero, then the estimator has bias." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**Monte Carlo (MC) simulation methods are commonly to test the performance of an estimator or its test statistic.** The steps of MC are as follows:\n", + "1. Use a data generating process, to replicate population estimator and its properties.\n", + "2. Set the sample size of estimation, to generate sample estimators\n", + "3. Set the number of simulations to generate several sample estimators\n", + "4. Compare the properties of sample estimators with population values\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Lets redo our previous example by repeating our previous test 10000 times. Each time we calculate the mean and variances (using both definitions we have seen)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "NNN = 100000\n", + "\n", + "ave = []\n", + "var1 = []\n", + "var2 = []\n", + "\n", + "for j in range(NNN):\n", + " students = 5\n", + " sample = []\n", + " for i in range(students):\n", + " sample.append(random.gauss(M1, S1))\n", + "\n", + " sample = np.array(sample)\n", + " av = sample.mean()\n", + " ave.append(av)\n", + " var1.append(variance1(sample, av))\n", + " var2.append(variance2(sample, av))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plotting a graph\n", + "plt.hist(ave)\n", + "plt.ylabel(\"Counts\")\n", + "plt.axvline(M1, color=\"k\", linestyle=\"dashed\", linewidth=1)\n", + "plt.xlabel(\"Average\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The mean (as expected) works very well to estimate the 'mean' of the population.\n", + "what about the 2 definitions of variance?\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.hist(var1, bins=30)\n", + "plt.hist(var2, bins=30, alpha=0.6)\n", + "plt.ylabel(\"Counts\")\n", + "plt.axvline(S1**2, color=\"k\", linestyle=\"dashed\", linewidth=1)\n", + "plt.xlabel(\"Average\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "var1 = np.array(var1)\n", + "var2 = np.array(var2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that the variance follows a $\\chi^2$ distribution. We can calculate the average values of these 2 distributions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"The average value of the variance (biased)is \", var1.mean())\n", + "print(\"The average value of the variance (unbiased)is \", var2.mean())" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**Monte Carlo techniques allow us to study in detail given estimators characteristics and evaluate whether they are consistent, biased, and efficient.**" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Method of Moments (MoM)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The method of moments involves equating sample moments with theoretical moments. So, let's start by making sure we recall the definitions of theoretical moments, as well as learn the definitions of sample moments." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Definitions" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "- **Theoretical moment**: calculated from theoretical distributions
\n", + " E($X^k$) is the $k^{th}$ (theoretical) moment of the distribution (about the origin), for $k=1,2,3...$. This is given by E($X^k$)=$\\int_{-\\infty}^\\infty x^k f(x) dx$, where $f(x)$ is the probability density distribution of our population.\n", + "\n", + "- **Sample moment**: calculated from sampled data
\n", + " $M_k=\\frac{1}{n}\\sum_{i=1}^{n} x_i^k$ is the $k^{th}$ sample moment for $k=1,2,3...$\n", + "\n", + "- **One Form of the Method**
\n", + " The basic idea behind this form of the method is to:\n", + " 1. Equate the first sample moment about the origin $M_1=\\frac{1}{n}\\sum_{i=1}^{n} X_i$ to the first theoretical moment E($X$)\n", + " 2. Equate the second sample moment about the origin $M_2=\\frac{1}{n}\\sum_{i=1}^{n} X_i^2$ to the second theoretical moment E($X^2$)\n", + " 3. Continue equating sample moments about the origin, $M_k=\\frac{1}{n}\\sum_{i=1}^{n} X_i^k$ with the corresponding theoretical moments E($X^k$) until you have as many equations as you have parameters.\n", + " 4. Solve for the parameters.\n", + "\n", + " The resulting values are called method of moments estimators. It seems reasonable that this method would provide good estimates, since the empirical distribution converges in some sense to the probability distribution. Therefore, the corresponding moments should be about equal." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Example 1" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "> Let $X_1, X_2, X_3... X_n$ be normal random variables with mean and variance. What are the method of moments estimators of the mean $\\mu$ and variance $\\sigma$?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Recall that\n", + "\n", + "$$\n", + "E(X)=\\int_{-\\infty}^\\infty X \\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}} dx=\\mu\n", + "$$\n", + "\n", + "and\n", + "\n", + "$$\n", + "E(X^2)=\\int_{-\\infty}^\\infty X^2 \\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}} dx=\\sigma^2+\\mu^2\n", + "$$\n", + "\n", + "from this we have\n", + "\n", + "$$\n", + "M_1=\\frac{1}{n}\\sum_{i=1}^{n} X_i=E(X)=\\mu\n", + "$$\n", + "\n", + "and\n", + "\n", + "$$\n", + "M_2=\\frac{1}{n}\\sum_{i=1}^{n} X_i^2=E(X^2)=\\sigma^2+\\mu^2.\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Solving form $\\mu$ and $\\sigma$ we get that\n", + "\n", + "$$\n", + "\\mu=M_1=\\frac{1}{n}\\sum_{i=1}^{n} X_i\n", + "$$\n", + "\n", + "and\n", + "\n", + "$$\n", + "\\sigma^2=M_2-M_1^2=\\frac{1}{n}\\sum_{i=1}^{n} X_i^2-\\left(\\frac{1}{n}\\sum_{i=1}^{n} X_i\\right)^2\n", + "$$\n", + "\n", + "which is the definition of **variance**\n", + "\n", + "$$\n", + "\\sigma^2=\\frac{1}{n}\\sum_{i=1}^{n} (X_i-\\bar{X})^2.\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Example 2" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "> The decay of a muon into a positron ($e^+$), an electron neutrino ($\\nu_e$), and a muon antineutrino ($\\bar{\\nu}_\\mu$)\n", + "> \n", + "> $$\\mu^+\\to e^+ +\\nu_e+\\bar{\\nu}_\\mu$$\n", + "> \n", + "> has a distribution angle $t$ with density given by\n", + "> \n", + "> $$f(t|\\alpha) = \\frac{1}{2\\pi}(1 + \\alpha \\cos(t)),$$\n", + "> \n", + "> where $-\\pi with $t$ the angle between the positron trajectory and the $\\mu^+$-spin. The anisometry parameter $\\alpha \\in [1/3, 1/3]$ depends the polarization of the muon beam and positron energy. Based on the measurement $t_1,...t_n$, give the method of moments estimate $\\hat{\\alpha}$ for $\\alpha$. 
(Note: In this case the mean is 0 for all values of $\\alpha$, so we will have to compute the second moment to obtain an estimator.)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The theoretical moment 1 is given by\n", + "\n", + "$$\n", + "E(T)=\\int_{-\\pi}^{\\pi}t\\frac{1}{2\\pi}(1 + \\alpha cos(t)) dt=0,\n", + "$$\n", + "\n", + "so it doesnt provide any info.\n", + "The theoretical moment 2 is given by\n", + "\n", + "$$\n", + "E(T^2)=\\int_{-\\pi}^{\\pi}t^2\\frac{1}{2\\pi}(1 + \\alpha cos(t)) dt=\\frac{\\pi^2}{3}-2\\alpha.\n", + "$$\n", + "\n", + "Equating the second sample moment $M_2=\\frac{1}{n}\\sum_{i=1}^{n} t_i^2$ with the theoretical moment $E(T^2)$ and solving for $\\alpha$ we get the MoM estimator $\\hat{\\alpha}$\n", + "\n", + "$$\n", + "\\hat{\\alpha}=\\frac{1}{2}\\left(M_2-\\frac{\\pi^2}{3}\\right)\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased.\n", + "\n", + "It is an alternative to the method of maximum likelihood. Due to easy computability, method-of-moments estimates may be used as the first approximation to the solutions of the likelihood equations!\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Maximum Likelihood" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data.\n", + "\n", + "This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let’s start with the Probability Density function (PDF) for the Normal distribution\n", + "\n", + "\n", + "\\begin{equation}\n", + "P(x;\\mu,\\sigma)=\\frac{1}{\\sqrt{2\\pi \\sigma^2}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}\n", + "\\end{equation}\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let’s say we take one sample from a population that follows this distribuition and our sample is 5. What is the probability it comes from a distribution of μ = 5 and σ = 3?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + } + }, + "outputs": [], + "source": [ + "print(\"the probability is=\", norm.pdf(5, 5, 3))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What if it came from a distribution with μ = 7 and σ = 3?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"the probability is=\", norm.pdf(5, 7, 3))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Consider this sample: `x = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]` and let’s compare these values to both PDF ~ N(5, 3) and PDF ~ N(7, 3). Our sample could be drawn from a variable that comes from these distributions, so let’s take a look." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]\n", + "\n", + "x = np.linspace(0, 20, 100)\n", + "plt.subplot(2, 1, 1)\n", + "plt.plot(x, stats.norm.pdf(x, 5, 3), label=\"N(5,3)\")\n", + "plt.plot(x, stats.norm.pdf(x, 7, 3), label=\"N(7,3)\")\n", + "plt.legend()\n", + "plt.subplot(2, 1, 2)\n", + "plt.hist(data, bins=20, range=(0, 20), label=\"data\")\n", + "plt.legend()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Which is the value of $\\mu$ and $\\sigma$ that most likely give rise to our data?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Definition" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Given a data sample $X = \\{x_1,x_2,\\dots,x_N\\}$ one applies an estimator $\\hat{a}$ for the quantity $a$.\n", + "The data values $x_i$ are drawn from some probability density function $P(x,a)$ which depends on $a$. The form of $P$ is given and $a$ specified.\n", + "The probability of a data set is the product of the individual probabilities.\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "L(x_1,x_2,\\dots,x_N;a)&=&P(x_1;a)P(x_2;a)\\dots P(x_N;a)\\\\\n", + "&=&\\Pi_i P(x_i;a)\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "This product is called **likelihood**." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The ML estimator for a parameter $a$ is the procedure which evaluates the parameter value $\\hat{a}$ which makes the actual observations $X$ as likely as possible, that is the set of parameters $\\hat{a}$ which maximises $L(X;a)$. In practice, the logarithm of $L$ is more practical to work with computationally (numerical stability):\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\ln L(X;a)\t& = & \\ln \\left( \\prod_i P(x_i;a) \\right) \\\\\n", + "\t\t& = & \\sum_i \\ln P(x_i;a),\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "The ML estimator $\\hat{a}$ is then the value of $a$ which maximises $\\ln L(X;a)$ (or minimises $-\\ln L(X;a)$). This can be found (in some cases analytically) by:\n", + "\n", + "$$\n", + "\\left.\\frac{\\mathrm{d}\\,\\ln L}{\\mathrm{d}\\,a}\\right|_{a=\\hat{a}}=0\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**Why log?** Simply because it helps with numerical stability, i.e. multiplying thousands of small values (probabilities, likelihoods, etc..) can cause an underflow in the system’s memory, and the log is a perfect solution because it transforms multiplications to additions and transforms small positive numbers into non-small negative numbers." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We now apply the definition of log(L) to the previous example. For simplicity we assume that $\\sigma=3$ is known.\n", + "We will see later on more complex examples." 
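+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a first check (a small sketch reusing the data above), we can compare the summed log-probabilities of the sample under the two candidate distributions N(5, 3) and N(7, 3); the full scan over a range of $\mu$ follows below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]\n",
+ "for mu_test in (5, 7):\n",
+ "    ll = np.sum(np.log(norm.pdf(data, mu_test, 3)))\n",
+ "    print(\"ln L for N(\", mu_test, \", 3) =\", ll)"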
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]\n", + "\n", + "\n", + "def likelihood(mu, x):\n", + " # Compare the likelihood of the random samples to the two distributions\n", + " ll = 0\n", + " sd = 3\n", + " for i in x:\n", + " ll += np.log(norm.pdf(i, mu, sd))\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "mu = np.linspace(0, 12, 100)\n", + "fun = likelihood(mu, data)\n", + "\n", + "plt.plot(mu, fun, label=\"Likelihood\")\n", + "plt.xlabel(\"mu\")\n", + "plt.legend()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It is better to look for minima than for maxima so let's put a - to the likelihood\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# Compare the likelihood of the random samples to the two # distributions\n", + "data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]\n", + "\n", + "\n", + "def llog(mu, x):\n", + " ll = 0\n", + " sd = 3\n", + " for i in x:\n", + " ll += -np.log(norm.pdf(i, mu, 3))\n", + " return ll" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "So the routine found that the best value for mu is. actually 6.21 !!!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "opt.fmin(llog, 1, args=(data,))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### (Even more) Examples" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Lifetime" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "For decays with a lifetime $\\tau$, the (normalised) probability distribution as a function of time $t$ is: $P(t;\\tau)=\\frac{1}{\\tau}\\exp(-t/\\tau)$. We calculate the function $\\ln L$ for this distribution in analytic way\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\ln L & = & \\sum_i \\ln\\left(\\frac{1}{\\tau}\\exp(-t_i/\\tau)\\right) \\\\\n", + "\t& = & \\sum_i \\left( -\\ln \\tau - t_i/\\tau \\right) \\\\\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Differentiating with respect to $\\tau$ and setting it to zero we obtain the estimator $\\hat{\\tau}$\n", + "The maximum can be found as:\n", + "\n", + "\\begin{equation}\n", + "\\left.\\frac{\\mathrm{d}\\,\\ln L}{\\mathrm{d}\\,\\tau}\\right|_{\\tau=\\hat\\tau} = 0 \\Leftrightarrow \\sum_i\\left(t_i-\\hat\\tau\\right) = 0 \\Leftrightarrow \\hat\\tau = \\frac{1}{N}\\sum_i t_i\n", + "\\end{equation}\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Discrete variable" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Suppose that $X$ is a discrete random variable following the pdf\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "P(X,\\theta)=\\left\\{ \\begin{array}{cc}\n", + "2\\theta/3 & x=0\\\\\n", + "\\theta/3 & x=1\\\\\n", + "2(1-\\theta)/3 & x=2\\\\\n", + "(1-\\theta)/3 & x=3\n", + "\\end{array}\\right.\n", + "\\end{eqnarray}\n", + "$$\n", + "\n", + "where $0\\le \\theta \\le1$." 
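+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A quick numerical sanity check (a sketch over a few arbitrary values of $\theta$) shows that the four probabilities always sum to one; the analytic check comes next."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def discrete_pdf(theta):\n",
+ "    # probabilities of x = 0, 1, 2, 3 for a given theta\n",
+ "    return np.array([2 * theta / 3, theta / 3, 2 * (1 - theta) / 3, (1 - theta) / 3])\n",
+ "\n",
+ "for theta in (0.0, 0.25, 0.5, 1.0):\n",
+ "    print(\"theta =\", theta, \" sum of probabilities =\", discrete_pdf(theta).sum())"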
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "\n", + "First of all we check that the PDF is normalised\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "\\sum_i P(x_i)&=& 2\\frac{\\theta}{3}+\\frac{\\theta}{3}+\\frac{2}{3}(1-\\theta)+\\frac{1}{3}(1-\\theta)=1\n", + "\\end{eqnarray}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Suppose we have the following sequence : $(3,0,2,1,3,2,1,0,2,1)$. What is the value of $\\hat{\\theta}$?\n", + "\n", + "We calculate $\\ln L$:\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\ln L&=& \\sum_i \\ln P\\\\\n", + "&=& \\ln P(x=3)+ \\ln P(x=0)+ \\ln P(x=2)+ \\ln P(x=1)+ \\ln P(x=3)+ \\ln P(x=2)\\\\\n", + "&&+ \\ln P(x=1)+ \\ln P(x=0)+ \\ln P(x=2)+ \\ln P(x=1)\\\\\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + ".... continue....\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "&=&2 \\ln P(x=3)+3 \\ln P(x=2)+3 \\ln P(x=1)+2 \\ln P(x=0)\\\\\n", + "&=&2\\ln \\left( \\frac{2}{3}\\theta\\right)+3\\ln \\left( \\frac{1}{3}\\theta\\right)+3\\ln \\left( \\frac{2}{3}(1-\\theta)\\right)+2\\ln \\left( \\frac{1}{3}(1-\\theta)\\right)\\\\\n", + "&=&2 \\ln \\frac{2}{3}+2 \\ln \\theta+3\\ln \\frac{1}{3}+3\\ln \\theta\\\\\n", + "&&+3\\ln \\frac{2}{3}+3\\ln(1-\\theta)+2\\ln\\frac{1}{3}+2\\ln(1-\\theta)\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We need to calculate $\\frac{d \\ln L}{d\\theta}=0$ so all terms nodepending on $\\theta$ can be neglected.\n", + "\n", + "$$\n", + "\\frac{d \\ln L}{d\\theta}=5\\frac{1}{\\theta}-\\frac{5}{1-\\theta}=0\\\\\n", + "1-\\theta -\\theta=0\\\\\n", + "\\theta=1/2\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Multiple dimensions" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Let's go back to the example involving the Gaussian PDF\n", + "\n", + "\n", + "\\begin{equation}\n", + "P(x;\\mu,\\sigma)=\\frac{1}{\\sqrt{2\\pi \\sigma^2}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}\n", + "\\end{equation}\n", + "\n", + "Now assume that we want to esimtate both $\\mu$ and $\\sigma$ from a given data-set" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# Same data as before\n", + "data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We define the loglikelihood function (with the - sign!)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def llog2(mu, sigma, x):\n", + " ll = 0\n", + " sd = sigma\n", + " for i in x:\n", + " ll += -np.log(norm.pdf(i, mu, sd))\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "mu = np.linspace(1, 10, 100)\n", + "sigma = np.linspace(1, 4, 100)\n", + "\n", + "X, Y = np.meshgrid(mu, sigma)\n", + "Z = llog2(X, Y, data)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "fig, ax = plt.subplots(constrained_layout=True)\n", + "levels = MaxNLocator(nbins=55).tick_values(Z.min(), Z.max())\n", + "cmap = plt.get_cmap(\"RdGy\")\n", + 
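"# BoundaryNorm maps the likelihood-surface values onto the discrete colour levels defined above\n",
+ 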
"normB = BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n", + "pc = ax.contourf(X, Y, Z, cmap=cmap, norm=normB)\n", + "CS = plt.contour(X, Y, Z, 100, linewidths=0.5, colors=\"k\")\n", + "cbar = fig.colorbar(CS)\n", + "cbar.ax.set_ylabel(\"LL\")\n", + "\n", + "plt.ylabel(\"$\\sigma$\")\n", + "plt.xlabel(\"$\\mu$\")\n", + "ax.grid()\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def llog3(params):\n", + " ll = 0\n", + " a, b = params\n", + " data = [4, 5, 7, 8, 8, 9, 10, 5, 2, 3, 5, 4, 8, 9]\n", + " for i in data:\n", + " ll += -np.log(norm.pdf(i, a, b))\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "x0 = [6, 3]\n", + "\n", + "res = opt.minimize(llog3, x0)\n", + "print(res.x)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Properties of the ML estimator" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**So how good are ML estimators?**\n", + "\n", + "1. Consistency: Usually, ML estimators are consistent. That is for increasing data sets, the estimator approaches the true value of the parameter.\n", + "2. Efficiency\n", + "3. Biaseness" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Efficiency" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "No such thing as a generally efficent estimator. Efficiency depends on the case considered." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**Limit of Accuracy: Minimum Variance Bound (MVB)**\n", + "\n", + "$$\n", + "V(\\hat{a})\\ge\\frac{-1}{d^2 \\log{L}/da^2}\n", + "$$\n", + "\n", + "For efficient estimators $V(\\hat{a})=MVB$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**For a Gaussian distribution:**\n", + "\n", + "$$\n", + "LL=-\\sum\\frac{(x_i-\\mu)^2}{2\\sigma^2}-N\\ln{\\sigma\\sqrt{2\\pi}}\n", + "$$\n", + "\n", + "_What is the MVB?_" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The MVB for $\\mu$ is given by\n", + "\n", + "$$\n", + "V(\\hat{\\mu})=\\frac{-1}{d^2\\log L/d\\mu^2}.\n", + "$$\n", + "\n", + "The first derivative of $LL$ with respct to $\\mu$ is just\n", + "\n", + "$$\n", + "\\frac{d LL}{d\\mu}=\\sum\\frac{(x_i-\\mu)}{\\sigma^2},\n", + "$$\n", + "\n", + "and the second derivative\n", + "\n", + "$$\n", + "\\frac{d^2LL}{d\\mu^2}=-\\sum\\frac{1}{\\sigma^2}=-\\frac{N}{\\sigma^2}.\n", + "$$\n", + "\n", + "From this the standard error for the ML estimator of $\\mu$ is\n", + "\n", + "$$\n", + "\\sigma_\\mu=\\sqrt(V(\\hat{\\mu}))=\\frac{\\sigma}{\\sqrt{N}}.\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Biaseness" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "However, the ML estimator is often biased. 
This can be seen by calculating the likelihood function for a data set $\\{x_i\\}$ with a common mean $\\mu$ and uncertainty $\\sigma$:\n", + "\n", + "$$\n", + "P(x_i;\\mu,\\sigma_i) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\exp\\left(-\\frac{(x_i-\\mu)^2}{2\\sigma^2}\\right)\n", + "$$\n", + "\n", + "and find the maximum, where the partial derivatives are zero:\n", + "\n", + "$$\n", + "\\frac{\\partial\\,\\ln L}{\\partial\\,\\mu} = 0\\ \\frac{\\partial\\,\\ln L}{\\partial\\,\\sigma} = 0\n", + "$$\n", + "\n", + "The ML estimator for the spread of a gaussian data set is $\\hat\\sigma^2 = \\frac{1}{N}\\sum_i(x_i-\\hat{\\mu})^2$ which as we know is biased (does not include Bessel's correction)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Uncertainty on ML estimators" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The uncertainty on a ML estimator can be found as discussed in relation to the efficiency:\n", + "\n", + "$$\n", + "\\widehat{\\sigma}_{\\hat{a}}^2 = V(\\hat{a}) = \\mathrm{MVB} = \\frac{-1}{\\left\\langle \\left(\\frac{\\mathrm{d}^2\\,\\ln L}{\\mathrm{d}a^2}\\right) \\right\\rangle}\n", + "$$\n", + "\n", + "for unbiased, efficient, ML estimators." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "For several variables ($N$), the vector of estimated parameters $\\widehat{\\vec{a}}$ is found by minimising the $N$-dimensional function $-\\ln\\,L$. The inverse of the covariance matrix is then given by:\n", + "\n", + " \n", + "$\\mathrm{cov}^{-1}(a_i,a_j) = -\\left.\\frac{\\partial^2\\,\\ln\\,L}{\\partial\\,a_i\\partial\\,a_j}\\right|_{\\vec{a}=\\widehat{\\vec{a}}}$\n", + "\n", + "Inverse of the [Hessian matrix](https://en.wikipedia.org/wiki/Hessian_matrix)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "There is some intuition in the plot above. The precision of the\n", + "estimate $\\hat{a}$ can be measured by the curvature of the $\\ln L(a)$ function\n", + "around its peak.\n", + "\n", + " The easy way to think about this is to recognize\n", + "that the curvature of the likelihood function tells us how certain we are about our estimate of our\n", + "parameters. A flatter curve has more uncertainty. The second derivative of the likelihood function is a measure of the\n", + "likelihood function’s curvature - this is why it provides our estimate of the uncertainty with which\n", + "we have estimated our parameters." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Exercise" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "> Show that the uncertainty of the estimator of $\\mu$ in the example above can be obtained by taking the difference between $\\hat{\\mu}-\\mu_1$, where $\\mu_1$ is the value of the parameter at the point where the log likelihood differs by 0.5 from its maximum point. $\\hat{\\mu}$ is the value of $\\mu$ that maximises the log likelyhood." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "$$LL(\\hat{\\mu})-LL(\\mu_1)=0.5.$$\n", + "\n", + "For the Gaussian PDF we have shown before that $LL(\\mu)=-\\sum\\frac{(x_i-\\mu)^2}{2\\sigma^2}-N\\ln{\\sigma\\sqrt{2\\pi}}$. 
From this, we get that\n", + "\n", + "$$\n", + "LL(\\hat{\\mu})-LL(\\mu_1)=-\\sum\\frac{(x_i-\\hat{\\mu})^2}{2\\sigma^2}+\\sum\\frac{(x_i-\\mu_1)^2}{2\\sigma^2}=0.5.\n", + "$$\n", + "\n", + "This can be rewritten as\n", + "\n", + "$$\n", + "\\sum(x_i-\\mu_1)^2-\\sum(x_i-\\hat{\\mu})^2=\\sigma^2.\n", + "$$\n", + "\n", + "$$\n", + "\\sum(x_i^2+\\mu_1^2-2x_i\\mu_1-x_i^2-\\hat{\\mu}^2+2x_i\\hat{\\mu})=\\sigma^2.\n", + "$$\n", + "\n", + "Using the fact that $\\sum(x_i)=N\\hat{\\mu}$, we get\n", + "\n", + "$$\n", + "N \\mu_1^2 -2 \\mu_1 \\hat{\\mu}-N\\hat{\\mu}^2+2N\\hat{\\mu} \\hat{\\mu}=\\sigma^2\\\\=N(\\mu_1^2+\\hat{\\mu}^2-2 \\mu_1 \\hat{\\mu})=\\sigma^2\\\\=N(\\hat{\\mu}-\\mu_1)^2=\\sigma^2.\n", + "$$\n", + "\n", + "Therefore\n", + "\n", + "$$\n", + "\\hat{\\mu}-\\mu_1=\\frac{\\sigma}{\\sqrt{N}},\n", + "$$\n", + "\n", + "which is consistend with the MVB." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "_MLE provides us with efficient and unbiased estimation when N is large. There is no loss of information through binning as all experimental information is used. It provides errors on its estimates. At small N, however, estimators CAN be biased. You need to make assumptions about the parent PDF and there is no way of estimating a \"Goodness of fit\"._" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Fitting data – Method of least squares" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Introduction" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "For a function $f(x_i;a)$ and a data set $\\{x_i,y_i,\\sigma_i\\}$, assuming for each data point $y_i$ is drawn from a Gaussian distribution of mean $f(x_i;a)$ and spread $\\sigma_i$, we know that the likelihood function must obey:\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\ln L \t& = & \\sum_i \\ln \\left( \\frac{1}{\\sigma_i\\sqrt{2\\pi}}\\exp\\left(-\\frac{\\left(y_i-f(x_i;a)\\right)^2}{2\\sigma_i^2}\\right)\\right)\\\\\n", + "\t\t& = & -\\sum_i \\ln \\left(\\sigma_i\\sqrt{2\\pi}\\right) -\\sum_i \\frac{\\left(y_i-f(x_i;a)\\right)^2}{2\\sigma_i^2}\\\\\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "The first part does not depent on our parameters." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Instead of minimising $-\\ln L$ we may therefore equivalently define and minimise:\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "\\chi^2 & \\equiv & \\sum_i \\frac{\\left(y_i-f(x_i;a)\\right)^2}{\\sigma_i^2}\n", + "\\end{eqnarray}\n", + "$$\n", + "\n", + "I.e.$\\chi$-squared minimisation, as you know it, is in fact the maximum-likelihood estimator of the function parameters $a$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This means that it comes with the nice properties of the ML estimator:\n", + "\n", + " \n", + "1. It is consistent (at least typically).\n", + "2. The bias is most often small.\n", + "3. 
It is efficient asymptotically ($N\\to\\infty$), and $\\pm1\\sigma$ can be found by identifying where $\\chi^2$ changes by 1 from it's minimum:\n", + "\n", + "$$\n", + "V\\left(\\hat{a}\\right) = -\\frac{1}{\\frac{\\partial^2\\ln L}{\\partial a^2}} = \\frac{2}{\\frac{\\partial^2\\chi^2}{\\partial a^2}}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### χ² PDF" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The $\\chi^2$ is a statistical distribution which is built as a sum of squares of $k$-gaussian (random) variables\n", + "\n", + "$$\n", + " \\chi^2 =\\sum_{i=1}^k z_i^2\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Chi-square ($\\chi^2$) distributions are a family of continuous probability distributions. They’re widely used in hypothesis tests, including the chi-square goodness of fit test and the chi-square test of independence.\n", + "\n", + "The shape of a chi-square distribution is determined by the parameter k, which represents the degrees of freedom.\n", + "\n", + "Very few real-world observations follow a chi-square distribution. The main purpose of chi-square distributions is hypothesis testing, not describing real-world distributions." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "\n", + "The corresponding PDF for the case of a continuous variable can be expressed as\n", + "\n", + "$$\n", + "\\chi^2=\\frac{1}{2^{k/2}\\Gamma \\left( \\frac{k}{2}\\right)}e^{-x/2}x^{k/2-1}\n", + "$$\n", + "\n", + "We can calculate the mean $\\mu$ and the variance $V$ and we get\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "\\mu=k\\\\\n", + "V=2k\n", + "\\end{eqnarray}\n", + "$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# ------------------------------------------------------------\n", + "# Define the distribution parameters to be plotted\n", + "# We define 4 distributions with 4 different means\n", + "k_values = [1, 2, 5, 7]\n", + "linestyles = [\"-\", \"--\", \":\", \"-.\"]\n", + "mu = 0\n", + "x = np.linspace(-1, 20, 1000)\n", + "# ------------------------------------------------------------" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "fig, ax = plt.subplots(figsize=(5, 3.75))\n", + "fig.subplots_adjust(bottom=0.12)\n", + "for k, ls in zip(k_values, linestyles):\n", + " dist = chi2(k)\n", + " res = plt.plot(x, dist.pdf(x), ls=ls, c=\"black\", label=r\"$k=%i$\" % k)\n", + "plt.xlim(0, 10)\n", + "plt.ylim(0, 0.5)\n", + "plt.xlabel(\"$x$\")\n", + "plt.title(r\"$\\chi^2\\ \\mathrm{Distribution}$\")\n", + "plt.legend()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Linear least squares" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Example: Fitting a straight line" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "For a straight line fit to a data set $\\{x_i,y_i\\}$ with common uncertainty $\\sigma$ we have $f(x_i;m,c)=m\\cdot x_i + c$, and:\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\chi^2 & = & \\sum_i \\frac{\\left(y_i-f(x_i;\\vec{a})\\right)^2}{\\sigma^2} = \\sum_i 
\\frac{\\left(y_i-mx_i-c\\right)^2}{\\sigma^2}\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We now differentiate and equate to zero as\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left.\\frac{\\partial \\chi^2}{\\partial c}\\right|_{m=\\widehat{m},c=\\widehat{c}} & = & \\frac{1}{\\sigma^2}\\sum_i -2\\left(y_i - \\widehat{m}x_i -\\widehat{c}\\right) = 0 \\\\\n", + "&\\Rightarrow& \\\\\n", + "0 & = & \\overline{y} - \\widehat{m}\\overline{x} -\\widehat{c}\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "and\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left.\\frac{\\partial \\chi^2}{\\partial m}\\right|_{m=\\widehat{m},c=\\widehat{c}} & = & \\frac{1}{\\sigma^2}\\sum_i -2\\left(y_i - \\widehat{m}x_i -\\widehat{c}\\right)x_i = 0 \\\\\n", + "&\\Rightarrow& \\\\\n", + "0 & = & \\overline{xy} - \\widehat{m}\\overline{x^2} -\\widehat{c}\\overline{x}\\\\\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Resolving the 2 equations we get\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "\\hat{m}&=&\\frac{\\bar{xy}-\\bar{x}\\bar{y}}{\\bar{x^2}-\\bar{x}^2}=\\frac{cov(x,y)}{V(x)}\\\\\n", + "\\hat{c}&=&\\bar{y}-\\hat{m}\\bar{x}\n", + "\\end{eqnarray}\n", + "$$\n", + "\n", + "We thereby have established that $\\widehat{m}$ and $\\widehat{c}$ are linear in $y_i$\n", + "\n", + "Covariance is a measure of the relationship between two random variables. The metric evaluates how much – to what extent – the variables change together (but not the strength of the relationship)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Select a random seed so we can all generate the same sample\n", + "np.random.seed(123)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Choose the \"true\" parameters.\n", + "m_true = -1\n", + "c_true = 4\n", + "\n", + "# Generate some synthetic data from the model.\n", + "N = 50\n", + "x = np.sort(10 * np.random.rand(N))\n", + "y = m_true * x + c_true\n", + "y += np.random.randn(N)\n", + "plt.errorbar(x, y, fmt=\".k\", capsize=0)\n", + "x0 = np.linspace(0, 10, 500)\n", + "plt.xlim(0, 10)\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"y\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We apply the formulas:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "xave = x.mean()\n", + "yave = y.mean()\n", + "cov = np.cov(x, y)[0][1]\n", + "var = np.cov(x, y)[0][0]\n", + "\n", + "mfit = cov / var\n", + "cfit = yave - mfit * xave\n", + "\n", + "print(\"Least-squares estimates:\")\n", + "print(\"m = \", mfit)\n", + "print(\"c = \", cfit)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.errorbar(x, y, fmt=\".k\", capsize=0)\n", + "plt.plot(x0, m_true * x0 + c_true, \"k\", alpha=0.3, lw=3, label=\"truth\")\n", + "plt.plot(x0, mfit * x0 + cfit, \"r\", lw=3, label=\"LS\")\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"y\")\n", + "plt.legend(fontsize=14)\n", + "plt.show()\n", + "# plt.xlim(0, 10)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Uncertainties and covariances for straight-line fit" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We can now calculate the 
errors on previous quantities by acting with derivatives over the ML.\n", + "\n", + "$$\n", + "cov^{-1}(a_i,a_j)=\\left. \\frac{1}{2}\\frac{\\partial^2 \\chi^2}{\\partial a_i \\partial a_j }\\right|_{\\vec{a}=\\hat{a}}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{eqnarray*}\n", + "\\frac{1}{2}\\left.\\frac{\\partial^2 \\chi^2}{\\partial c^2}\\right|_{m=\\widehat{m},c=\\widehat{c}}\n", + " & = & \\frac{1}{\\sigma^2}\\sum_i 1 = \\frac{N}{\\sigma^2}\\\\\n", + "\\frac{1}{2}\\left.\\frac{\\partial^2 \\chi^2}{\\partial m \\partial c}\\right|_{m=\\widehat{m},c=\\widehat{c}}\n", + " & = & \\frac{1}{\\sigma^2}\\sum_i x_i = \\frac{N}{\\sigma^2}\\overline{x}\\\\\n", + "\\frac{1}{2}\\left.\\frac{\\partial^2 \\chi^2}{\\partial m^2}\\right|_{m=\\widehat{m},c=\\widehat{c}}\n", + " & = & \\frac{1}{\\sigma^2}\\sum_i x_i^2 = \\frac{N}{\\sigma^2}\\overline{x^2}.\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We therefore get the inverse of the covariance matrix for $(m,c)$ to be:\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "\\left(V_{cm}\\right)^{-1} &=&\n", + "\\frac{N}{\\sigma^2} \\left(\n", + "\\begin{array}{ccc}\n", + "\\overline{x^2} & \\overline{x} \\\\[0.2cm]\n", + "\\overline{x} & 1 \\\\[0.2cm]\n", + "\\end{array}\n", + "\\right).\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "By inverting the matrix we get\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "V_{cm} &=&\n", + "\\frac{\\sigma^2}{N\\cdot \\left(\\overline{x^2} - \\overline{x}^2\\right)} \\left(\n", + "\\begin{array}{ccc}\n", + "1 & -\\overline{x} \\\\[0.2cm]\n", + "-\\overline{x} & \\overline{x^2} \\\\[0.2cm]\n", + "\\end{array}\n", + "\\right) =\n", + "\\frac{\\sigma^2}{N\\cdot V(x)} \\left(\n", + "\\begin{array}{ccc}\n", + "1 & -\\overline{x} \\\\[0.2cm]\n", + "-\\overline{x} & \\overline{x^2} \\\\[0.2cm]\n", + "\\end{array}\n", + "\\right),\n", + "\\end{eqnarray*}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$\\sigma$ is the (common) uncertainty for the \\{$y_i$\\} measurements and $V(x)$ is the variance in the \\{$x_i$\\} data. This means the variances (and covariance) in the fitted parameters $c$ and $m$ scales with the square of the uncertainty in \\{$y_i$\\}, as it should, and inversely with both $N$ and $V(x)$, such that a data set with many measurements and an extended measurement range has a reduced uncertainty (and variance) on the fitted parameters.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "xave = x.mean()\n", + "xave2 = (x * x).mean()\n", + "var = np.cov(x, y)[0][0]\n", + "N = len(x)\n", + "\n", + "Vcm = [[1, -xave], [-xave, xave2]]\n", + "Vcm = np.array(Vcm / (N * var))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Having access to the error matrix we can now read on the diagonal the error on m and c. While on the off diagonal we can read the correlation coefficiet\n", + "\n", + "$$\n", + "\\rho_{m,c}=\\frac{cov(m,c)}{\\sqrt{V_cV_m}}\n", + "$$\n", + "\n", + "In statistics, the Pearson correlation coefficient (PCC) is a value that measures linear correlation between two sets of data. 
It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "rho = Vcm[0][1] / (np.sqrt(Vcm[0][0] * Vcm[1][1]))\n", + "\n", + "print(\"Least-squares estimates:\")\n", + "print(\"m = \", mfit, \"+/-\", Vcm[0][0])\n", + "print(\"b = \", cfit, \"+/-\", Vcm[1][1])\n", + "print(\"rho=\", rho)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Linear models" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "In this section, we discuss models that are linear in parameter space, i.e. they can be written as\n", + "\n", + "\\begin{eqnarray}\n", + "f(x;a) = \\sum_r c_r(x)a_r\n", + "\\end{eqnarray}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For example given the two functions\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + "f(x,\\mathbf{a})=&a_1+a_2x^4+a_3\\cos(x)+a_4\\exp(-x^4)\\\\\n", + "g(x,\\mathbf{a})=&a_1+\\cos(a_2+x)+a_3x+a_4x^3\n", + "\\end{eqnarray}\n", + "$$\n", + "\n", + "the function $g(x,\\mathbf{a})$ is not linear in its parameters!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can write a linear model in a matrix form as\n", + "\n", + "$$\n", + "\\begin{eqnarray*}\n", + "C\\cdot a &=&\n", + " \\left(\n", + "\\begin{array}{ccc}\n", + " c_1(x_1) & c_2(x_1) & c_3(x_1) \\\\[0.2cm]\n", + " c_1(x_2) & c_2(x_2) & c_3(x_2) \\\\[0.2cm]\n", + "\\vdots & \\vdots & \\vdots \\\\[0.2cm]\n", + " c_1(x_N) & c_2(x_N) & c_3(x_N)\n", + "\\end{array}\n", + "\\right)\n", + "\\cdot\n", + " \\left(\n", + "\\begin{array}{c}\n", + " a_1 \\\\[0.2cm]\n", + " a_2 \\\\[0.2cm]\n", + " a_3\n", + "\\end{array}\n", + "\\right)\n", + "=\n", + " \\left(\n", + "\\begin{array}{c}\n", + " f(x_1) \\\\[0.2cm]\n", + " f(x_2) \\\\[0.2cm]\n", + "\\vdots \\\\[0.2cm]\n", + " f(x_N)\n", + "\\end{array}\n", + "\\right).\n", + "\\end{eqnarray*}\n", + "$$\n", + "\n", + "Here the function $f$ has been evaluated over the $x_N$ points that we use to fit the coupling constants." 
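+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make this concrete, here is a minimal sketch (with invented data and unit uncertainties) that builds the matrix $C$ for the linear model $f(x,\mathbf{a})=a_1+a_2x^4+a_3\cos(x)+a_4\exp(-x^4)$ above and recovers the parameters with an ordinary (unweighted) least-squares solve; the general weighted solution is derived next."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.random.seed(1)\n",
+ "a_true = np.array([1.0, 0.5, 2.0, -1.0])\n",
+ "xs = np.linspace(0, 2, 40)\n",
+ "# each column of C is one basis function c_r(x) evaluated on the x points\n",
+ "C = np.column_stack([np.ones_like(xs), xs**4, np.cos(xs), np.exp(-(xs**4))])\n",
+ "ys = C @ a_true + np.random.normal(0, 0.1, len(xs))  # invented noisy data\n",
+ "a_hat, *_ = np.linalg.lstsq(C, ys, rcond=None)\n",
+ "print(\"true parameters:  \", a_true)\n",
+ "print(\"fitted parameters:\", a_hat)"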
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The general form of $\\chi^2$ is\n", + "\n", + "$$\n", + "\\chi^2=\\sum_i \\frac{(y_i-\\sum_j c_j (x_i)a_j)^2}{\\sigma_i^2}\n", + "$$\n", + "\n", + "To obtain $a_j$ we need to take the respective derivatives to them and set to 0 as standard MLE\n", + "\n", + "$$\n", + "\\sum_i c_j(x_i) \\left[\\frac{y_i-\\sum_j c_j (x_i)a_j}{\\sigma_i^2} \\right]=0\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "if $V_y$ is the (known) covariance matrix for the measured values of $y$ (assuming no error on $x$), we can therefore express previous form in matrix form as\n", + "\n", + "$$\n", + "\\begin{eqnarray}\n", + " 0=C^T V^{-1}y-C^TV^{-1}C a\\\\\n", + " C^T V^{-1}y=C^TV^{-1}C a\\\\\n", + " a=(C^TV^{-1}C)^{-1}C^T V^{-1}y\n", + "\\end{eqnarray}\n", + "$$\n", + "\n", + "which gives the following $\\chi^2$ estimator (also ML estimator) of the parameter vector $a$\n", + "\n", + "$$\n", + "\\widehat{a} = \\left(C^T V(y)^{-1}C\\right)^{-1}C^T V_y^{-1}\\cdot y = M\\cdot y\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The $M$-matrix described above is used to calculate the covariance matrix for the estimator\n", + "\n", + "$$\n", + "V(\\widehat{a}) = MV_yM^T\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "> _**Important remark on Unweighted Fitting**_
\n", + "> _For an unweighted least-squares function, the covariance matrix should be multiplied by the variance of the residuals about the best-fit to give the variance-covariance matrix . This estimates the statistical error on the best-fit parameters from the scatter of the underlying data._" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Non-linear model" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Consider the model $f(x)=\\sin(ax)$, where $a$ is the parameter. If we fit on 5 data points. The number of degrees of freedom is 4." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We build some noisy data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "def func(x, a):\n", + " return np.sin(a * x)\n", + "\n", + "\n", + "np.random.seed(123)\n", + "x = np.linspace(0, 20, 50)\n", + "\n", + "y = (np.sin(x / 3) + np.random.normal(np.sin(x / 3), 0.2)) / 2\n", + "ery = []\n", + "for i in range(len(x)):\n", + " ery.append(np.random.normal(0.1, 0.02))\n", + "plt.errorbar(x, y, ery, ls=\"\", marker=\".\")\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"Data\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "gmodel = Model(func)\n", + "result = gmodel.fit(y, x=x, a=1)\n", + "\n", + "print(result.fit_report())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def chi2Unc(x, y, ey, a):\n", + " z = func(x, a)\n", + " return sum(((y - z) / ey) ** 2)\n", + "\n", + "\n", + "av = np.linspace(0.1, 4, 100)\n", + "\n", + "z = np.zeros(len(av))\n", + "for i in range(len(av)):\n", + " a = av[i]\n", + " z[i] = chi2Unc(x, y, ery, a)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "plt.plot(av, z, \"-\")\n", + "plt.xlabel(\"a\")\n", + "plt.ylabel(\"chi2Unc\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "eryrec = []\n", + "for i in range(len(x)):\n", + " eryrec.append(1 / ery[i])\n", + "\n", + "gmodel = Model(func)\n", + "result = gmodel.fit(y, a=1.2, x=x, weights=eryrec)\n", + "print(result.fit_report())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "plt.errorbar(x, y, ery, ls=\"\", marker=\".\")\n", + "plt.plot(x, result.best_fit, \"r-\", label=\"best fit\")\n", + "plt.legend(loc=\"best\")\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"y\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "gmodel = Model(func)\n", + "result = gmodel.fit(y, a=0.2, x=x, weights=eryrec)\n", + "print(result.fit_report())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "plt.errorbar(x, y, ery, ls=\"\", marker=\".\")\n", + "plt.plot(x, result.best_fit, \"r-\", label=\"best fit\")\n", + "plt.legend(loc=\"best\")\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"y\")\n", + 
"plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "k = len(x) - 1\n", + "print(\"number of degrees of freedom=\", k)\n", + "\n", + "r1 = np.sqrt(2 * chi2Unc(x, y, ery, 1.41084859)) / np.sqrt(2 * k - 1)\n", + "r2 = np.sqrt(2 * chi2Unc(x, y, ery, 0.33495704)) / np.sqrt(2 * k - 1)\n", + "\n", + "print(r1, r2)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Residuals" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Residuals are the difference between the fitted model and the data.\n", + "\n", + "Residuals are important when determining the quality of a model. You can examine residuals in terms of their magnitude and/or whether they form a pattern.\n", + "\n", + "We use the previous example. We take the best fit and we calculate the residuals" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "residual = y - result.best_fit\n", + "# plt.plot(x, residual, 'bo', label='Residuals')\n", + "fig, ax = plt.subplots()\n", + "ax.stem(x, residual, markerfmt=\" \", use_line_collection=True)\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"residual\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "residualUnc = (y - result.best_fit) / ery\n", + "# plt.plot(x, residual, 'bo', label='Residuals')\n", + "fig, ax = plt.subplots()\n", + "ax.stem(x, residualUnc, markerfmt=\" \", use_line_collection=True)\n", + "plt.xlabel(\"x\")\n", + "plt.ylabel(\"Normalise residual (Pull distributions)\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "counts, xbins, patches = plt.hist(residualUnc, bins=10, range=(-5, 5))\n", + "plt.xlabel(\"residual\")\n", + "plt.ylabel(\"counts\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "# One can get the bin content and xvalues from the hist -- This would be necessary when one\n", + "# might want to fit the histogram\n", + "\n", + "print(counts)\n", + "errcounts = []\n", + "for i in range(len(xbins) - 1):\n", + " if counts[i] > 0:\n", + " errcounts.append(np.sqrt(counts[i]))\n", + " else:\n", + " errcounts.append(10)\n", + "## Why do we do the above, i.e. setting zero values with large uncertainties?\n", + "xvals = []\n", + "for i in range(len(xbins) - 1):\n", + " xvals.append((xbins[i] + xbins[i + 1]) / 2)\n", + "plt.errorbar(xvals, counts, errcounts, ls=\"\", marker=\".\")\n", + "plt.xlabel(\"residual\")\n", + "plt.ylabel(\"counts\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The residuals should be Gaussianly distributed with mean=0 and std=1. 
Deviations from this indicate issues with underfitting/overfitting/problems with uncertainties.\n", + "\n", + "Where the average residual is not 0, it implies that the model is systematically biased (i.e., consistently over- or under-predicting).\n", + "\n", + "Where residuals contain patterns, it implies that the model is qualitatively wrong, as it is failing to explain some property of the data. The existence of patterns invalidates most statistical tests." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Fitting Binned data: why a LSE might be biased here..." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Lets consider an example of measuring the lifetime of a particle using a PDF $f(t;A, \\tau)=A e^{t/\\tau}$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "np.random.seed(32)\n", + "\n", + "x = np.linspace(1, 10, 20)\n", + "y = np.linspace(0, 0, 20)\n", + "erry = []\n", + "for i in range(len(x)):\n", + " y[i] = np.random.poisson((13 * np.exp(-x[i] / 3)))\n", + " if y[i] < 0:\n", + " y[i] = 0\n", + "y = y.astype(int)\n", + "datax = []\n", + "datay = []\n", + "dataey = []\n", + "eryrec = []\n", + "for i in range(len(x)):\n", + " if y[i] > 0:\n", + " datax.append(x[i])\n", + " datay.append(y[i])\n", + " dataey.append(np.sqrt(y[i]))\n", + " eryrec.append(1.0 / np.sqrt(y[i]))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "datax = np.array(datax)\n", + "datay = np.array(datay)\n", + "dataey = np.array(dataey)\n", + "eryrec = np.array(eryrec)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# We generate noisy data on integer values of x to mimick the idea of bins and low-statistics...\n", + "plt.errorbar(datax, datay, dataey, ls=\"\", marker=\".\")\n", + "# plt.plot(x,y,'o',label=\"data\")\n", + "plt.xlabel(\"t\")\n", + "plt.ylabel(\"counts\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's start as usual with the $\\chi^2$ function exactly as we did before" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def func(x, a, b):\n", + " return a * np.exp(b * x)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "gmodel = Model(func)\n", + "gmodel.nan_policy = \"propagate\"\n", + "result = gmodel.fit(datay, x=datax, weights=eryrec, a=12, b=-0.3)\n", + "\n", + "print(result.fit_report())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.errorbar(datax, datay, dataey, ls=\"\", marker=\".\")\n", + "plt.plot(datax, result.best_fit, \"-\", label=\"Fit\")\n", + "plt.legend()\n", + "plt.xlabel(\"t\")\n", + "plt.ylabel(\"counts\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "At this stage we just want to plot the $\\chi^2$ surface to see how it looks like. We can do a better analysis of the fit, but I leave it to you." 
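+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One such further check (an added sketch, not part of the original notebook) is to convert the fit $\\chi^2$ into a $p$-value using the $\\chi^2$ distribution for the corresponding number of degrees of freedom. Note that this assumes Gaussian uncertainties on the bin contents, which is questionable for low counts, as discussed below."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Goodness-of-fit check for the chi-square fit above (sketch):\n",
+    "ndf = len(datax) - 2  # number of fitted points minus the two free parameters\n",
+    "print(\"chi2 =\", result.chisqr, \" ndf =\", ndf, \" reduced chi2 =\", result.redchi)\n",
+    "print(\"p-value =\", stats.chi2.sf(result.chisqr, ndf))"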
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def chi2(aaa, bbb):\n", + " ll = 0\n", + " for i in range(len(datax)):\n", + " ll += ((datay[i] - func(datax[i], aaa, bbb)) ** 2) * eryrec[i] ** 2\n", + " # print(datay[i])\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Ax = np.linspace(0, 16, 500)\n", + "Bx = np.linspace(-0.7, -0.15, 500)\n", + "\n", + "Axgrid, Bxgrid = np.meshgrid(Ax, Bx)\n", + "Zgrid = chi2(Axgrid, Bxgrid)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fig, ax = plt.subplots(constrained_layout=True)\n", + "levels = MaxNLocator(nbins=55).tick_values(Zgrid.min(), Zgrid.max())\n", + "cmap = plt.get_cmap(\"PiYG\")\n", + "normB = BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n", + "pc = ax.contourf(Axgrid, Bxgrid, Zgrid, cmap=cmap, norm=normB)\n", + "CS = plt.contour(Axgrid, Bxgrid, Zgrid, 100, linewidths=0.5, colors=\"k\")\n", + "\n", + "plt.ylabel(\"-tau\")\n", + "plt.xlabel(\"A\")\n", + "ax.grid()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We have used here the standard $\\chi^2$ to perform the fit, but remeber this is based on the hypothesis that the underlying distribution is a Gaussian.\n", + "What happens if we change the underlying distribution? What if instead of a Gaussian we use a Poisson, since here we are dealing with binned data and low-statistics?\n", + "\n", + "Let's now start again from the likelihood assuming a Poisson" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$$\n", + "{L}=\\Pi_i P_{Poisson}(y_i,f(x_i|a))=\\Pi_i \\frac{f(x_i|a)^{y_i}}{y_i!}exp(-f(x_i|a))\n", + "$$\n", + "\n", + "or after some maths\n", + "\n", + "$$\n", + "-2\\ln L=2\\sum_i (f(x_i|a)-y_i \\ln f(x_i|a) +\\ln y_i!)\n", + "$$\n", + "\n", + "The term $+\\ln y_i!$ does not depend on parameters so during minimisation we can forget it!" 
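+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For completeness, the intermediate step is just taking the logarithm term by term,\n",
+    "\n",
+    "$$\n",
+    "\\ln L=\\sum_i \\left(y_i \\ln f(x_i|a)-f(x_i|a)-\\ln y_i!\\right),\n",
+    "$$\n",
+    "\n",
+    "and multiplying by $-2$."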
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def like(param):\n", + " aaa, bbb = param\n", + " ll = 0\n", + " for i in range(len(datax)):\n", + " ll += 2 * (\n", + " func(datax[i], aaa, bbb) - datay[i] * np.log(func(datax[i], aaa, bbb))\n", + " )\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Ax = np.linspace(0, 16, 500)\n", + "Bx = np.linspace(-0.7, -0.15, 500)\n", + "\n", + "Axgrid, Bxgrid = np.meshgrid(Ax, Bx)\n", + "paramz = [Axgrid, Bxgrid]\n", + "Zgrid = like(paramz)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fig, ax = plt.subplots(constrained_layout=True)\n", + "levels = MaxNLocator(nbins=55).tick_values(Zgrid.min(), Zgrid.max())\n", + "cmap = plt.get_cmap(\"PiYG\")\n", + "normB = BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n", + "pc = ax.contourf(Axgrid, Bxgrid, Zgrid, cmap=cmap, norm=normB)\n", + "CS = plt.contour(Axgrid, Bxgrid, Zgrid, 100, linewidths=0.5, colors=\"k\")\n", + "\n", + "plt.ylabel(\"-tau\")\n", + "plt.xlabel(\"$A$\")\n", + "ax.grid()\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "p0 = [10, -0.3]\n", + "opt.fmin(like, p0, args=())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you compare the results we get different values for A and $\\tau$ when the underlying distribution is Gaussian or a Poisson." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.errorbar(datax, datay, dataey, ls=\"\", marker=\".\")\n", + "plt.plot(datax, result.best_fit, \"-\", label=\"Fit Gaussian\")\n", + "plt.plot(datax, func(datax, 10.35008492, -0.26507234), \"-\", label=\"Fit Poisson\")\n", + "plt.legend()\n", + "plt.xlabel(\"t\")\n", + "plt.ylabel(\"counts\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that the fit both reproduce \"reasonably\" the data but depending on which underlying distribution you assume you may get very different results!\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Exercise" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The photoproduction of pions $\\gamma p \\to p \\pi^0$ using linearly polarised photons is associated with a probability function:\n", + "\n", + "$$\n", + "f(\\phi;A,\\Sigma)=A(1-\\Sigma\\cos(2\\phi)),\n", + "$$\n", + "\n", + "where A encompasses the overall normalisation, and $\\phi$ is the azimuthal distribution of the outgoing pion (the variable we measure in the laboratory), and takes values in the range from $-\\pi$ to $\\pi$. $\\Sigma$ is the observable of interest that we are asked to determine from a data se of measurements of $\\phi_i=[\\phi_1,\\phi_2,\\phi_3,...,\\phi_N]$\n", + "\n", + " (a) i. What are the three criteria we need to fulfill for the function above to be classified as a PDF? What is the value of A and the allaowed values of $\\Sigma$ for these criteria to be met?\n", + "\n", + " (a) ii. 
Show that the MLE for $\\Sigma$ is\n", + "\n", + "$$\n", + "\\hat{\\Sigma}=-\\frac{\\sum_i^N\\cos(2\\phi_i)}{\\sum_i^N\\cos^2(2\\phi_i)}\n", + "$$\n", + "\n", + "(Hint: You can assume that $\\Sigma$ is small enough and apply the Taylor expansion $\\frac{1}{1-x}=1+x$)\n", + "\n", + " (a) iii. Show that the MLE corresponds to a maximum point and show that the uncertainty of $\\hat{\\Sigma}$ is\n", + "\n", + "$$\n", + "\\sigma_\\Sigma=\\frac{1}{\\sqrt{\\sum_i^N\\cos^2(2\\phi_i)}}\n", + "$$\n", + "\n", + " (b) i. Given the data in Sigma.dat (these correspond to different measurements of $\\phi$ determine the $\\hat{\\Sigma}$ and its uncertainty using your answers from part (a).\n", + "\n", + " (b) ii. Plot the $\\phi$ distribution of the data and discuss your choice of binning. Discuss the uncertainties associated with your yields (or counts) per bin and ensure these are illustrated in your plot.\n", + "\n", + " (b) iii. Determine $\\Sigma$, its uncertainty, and the reduced $\\chi^2$ using a least squares fit. Discuss the values you obtained.\n", + "\n", + " (b) iv. Plot the $\\chi^2$ surface." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "(a) i." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "(a) ii." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(a) iii.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(b) i." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(b) ii." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(b) iii." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "(b) iv." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Extended Max Likelihood" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "\"This differs from the standard method of maximum likelihood in that the normalisation of the probability distribution function is allowed to vary. It is thus applicable to problems in which the number of samples obtained is itself a relevant measurement. If the function is such that its size and shape can be independently varied, then the estimates given by the extended method are identical to the standard maximum likelihood estimators, though the errors require care of interpretation. If the function does not have this property, then extended maximum likelihood can give better results.\" Barlow https://doi.org/10.1016/0168-9002(90)91334-8" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In conventional sampling theory an experiment usually consists of taking a predetermined number of samples.\n", + "\n", + "In experimental physics, the collected number of events fluctuate following a Poisson distribution.\n", + "\n", + "The number of events observed may be relevant to the quantities being estimated, and the incorporation of the\n", + "fact that the number observed has the actual value $\\nu$ improves the estimates of the parameters $\\hat{\\theta}$. 
To account for this, we \"extend\" the maximum likelihood by relaxing the normalisation of the PDF:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The likelihood function:\n", + "\n", + "$$\n", + "L=\\prod_{i=1}^n f(x_i;\\theta)\n", + "$$\n", + "\n", + "becomes\n", + "\n", + "$$\n", + "L=Pois(n;\\nu)\\prod_{i=1}^n f(x_i;\\theta)=\\frac{\\nu^n e^{-\\nu}}{n!}\\prod_{i=1}^n f(x_i;\\theta),\n", + "$$\n", + "\n", + "where $n\\sim Pois(n|\\nu)$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**If $\\nu$ is independent of $\\theta$**, the likelihood function becomes:\n", + "\n", + "$$\n", + "L=\\frac{\\nu^n e^{-\\nu}}{n!}\\prod_{i=1}^n f(x_i;\\theta)\\\\\n", + "= \\frac{e^{-\\nu}}{n!}\\prod_{i=1}^n \\nu f(x_i;\\theta),\n", + "$$\n", + "\n", + "and the Log likelihood\n", + "\n", + "$$\n", + "\\ln L=-\\nu - \\ln n! +n\\ln\\nu+\\sum_{i=1}^n \\left(\\ln f(x_i;\\theta)\\right).\n", + "$$\n", + "\n", + "The constant $\\ln n!$ does not depend on our observables, so it wont affect the maximum position and can thus be neglected." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "To find the estimator for $\\nu$ we look for the minimum position in the negative loglikelihood:\n", + "\n", + "$$\n", + "\\frac{\\partial (-\\ln L)}{\\partial \\nu}=1-\\frac{n}{\\nu}=0\n", + "$$\n", + "\n", + "which gives the maximum likelihood estimator $\\hat{\\nu}=n$. The estimator for $\\theta$ results in the same estimator from our normal likelihood function\n", + "\n", + "$$\n", + "\\frac{\\partial (-\\ln L)}{\\partial \\theta}=-\\sum_{i=1}^n\\frac{\\partial f(x_i;\\theta) }{\\partial \\theta}\\frac{1}{f(x_i;\\theta)}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**if $\\nu$ is dependent of $\\theta$ in the instance that $\\nu$ is a function of $\\theta$**, $\\nu(\\theta)$, then\n", + "\n", + "$$\n", + "-\\ln L=\\nu(\\theta)-\\sum_{i=1}^n\\ln\\left(\\nu(\\theta)f(x_i;\\theta)\\right)\n", + "$$\n", + "\n", + "and the resultand estimators exploit information from both $n$ and $x$, which will look to smaller variations in $\\hat{\\theta}$." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### (trivial) Example" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "We are interested in determining the amount of events observed alongside with the characteristics of the PDF that describe our signal shape. 
Since the amount of signal collected follows a Poisson distribution, we can use the extended likelihood approach.\n", + "\n", + "Lets first look at the data provided in ExtendedLLexample.txt:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "values = []\n", + "url = \"https://drive.google.com/uc?id=1p3kLa9XZAZXiLIszq7SxMmoiNZ_qLYXf\"\n", + "output = \"ExtendedLLexample.txt\"\n", + "gdown.download(url, output, quiet=False)\n", + "f = open(output)\n", + "for line in f.readlines():\n", + " values.append(float(line))\n", + "f.close()\n", + "\n", + "bins = int(math.sqrt(len(values))) - 1\n", + "minim = min(values)\n", + "maxim = max(values)\n", + "print(\"Number of bins:\", bins, \" with bounds: \", minim, \" and \", maxim)\n", + "binnedData, lowboundBin, patches = plt.hist(values, bins, range=(minim, maxim))\n", + "plt.show()\n", + "print(\"number of entries\", len(values))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "From this our Signal seems to be described by a Gaussian function.\n", + "\n", + "Using the equation from the previous slide the negative log likelihood is the given by:\n", + "\n", + "$$\n", + "-\\ln L=\\nu(\\theta)-\\sum_{i=1}^n\\ln\\left(\\nu(\\theta)f(x_i;\\theta)\\right)\n", + "$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def llog(params):\n", + " ll = 0\n", + " s, mu, sigma = params\n", + " n = len(values)\n", + " for i in values:\n", + " ll += -np.log(s * norm.pdf(i, mu, sigma))\n", + " ll = ll + s\n", + " return ll" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "x0 = [2000, 1.12, 0.01]\n", + "res = opt.minimize(llog, x0)\n", + "print(res.x)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Using the extended maximum likelihood we get that the number of signal events is 500." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "## We can plot the Gaussian function on top of our histogram. We need to scale the Gaussian function\n", + "## to our bin width to properly vizualise it.\n", + "x = np.linspace(1.05, 1.20, 1000)\n", + "\n", + "\n", + "def Gaus(x, params):\n", + " s, mu, sigma = params\n", + " Gausfun = s * norm.pdf(x, mu, sigma)\n", + " return Gausfun * (1.2 - 1.05) / 50" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "params = res.x\n", + "plt.hist(values, 50, range=(1.05, 1.2))\n", + "plt.plot(x, Gaus(x, params), \"-\", label=\"Fit Poisson\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "This was a trivial example since one can also get the number of events from the entries. However, this approach can be very useful when we are dealing with signal and background contributions. In this case, we can use a two component fit, with a PDF that describes the background and a PDF that described our signal:\n", + "\n", + "Normalised PDF:\n", + "\n", + "$$\n", + "f(x;r,\\vec{\\theta})=rf_s(\\vec{\\theta})+(1-r)f_b(\\vec{\\theta}),\n", + "$$\n", + "\n", + "where $r=\\frac{s}{s+b}$ is the ratio of signal to total. 
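+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To make this concrete, here is a minimal toy sketch of such a two-component extended fit (an added example, not part of the original lecture). It uses the $\\nu$-dependent form of the extended likelihood given earlier, with $\\nu=s+b$ and $\\nu f(x)=s f_s(x)+b f_b(x)$. The Gaussian signal shape, the flat background, the fit range and the starting values are all assumptions made only for illustration."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: two-component extended ML fit on toy data (assumed shapes).\n",
+    "np.random.seed(7)\n",
+    "xlo, xhi = 0.0, 10.0\n",
+    "sig_true, bkg_true = 300, 700\n",
+    "toy = np.concatenate(\n",
+    "    [\n",
+    "        np.random.normal(5.0, 0.5, np.random.poisson(sig_true)),  # signal\n",
+    "        np.random.uniform(xlo, xhi, np.random.poisson(bkg_true)),  # background\n",
+    "    ]\n",
+    ")\n",
+    "\n",
+    "\n",
+    "def nll_ext(params):\n",
+    "    s, b, mu, sigma = params\n",
+    "    # nu*f(x) = s*f_s(x) + b*f_b(x), with a flat background normalised on [xlo, xhi]\n",
+    "    dens = s * norm.pdf(toy, mu, sigma) + b / (xhi - xlo)\n",
+    "    return (s + b) - np.sum(np.log(dens))\n",
+    "\n",
+    "\n",
+    "# In a real analysis one would also constrain s, b >= 0 and sigma > 0.\n",
+    "start = [200, 800, 5.2, 0.4]\n",
+    "res_ext = opt.minimize(nll_ext, start, method=\"Nelder-Mead\")\n",
+    "print(\"s, b, mu, sigma =\", res_ext.x)\n",
+    "print(\"total events generated:\", len(toy))"
+   ]
+  },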
The prediction of the total number of events also follows a Poisson distribution with $\\nu=s+b$. From this we can write the negative log likelihood:\n", + "\n", + "$$\n", + "-LL=s+b-n\\ln(s+b)-\\sum_i^n\\ln(f(x;r,\\vec{\\theta})).\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The extended ML method provides reliable values for obtained signal and background events and their corresponding uncertainties (Poisson fluctuations and uncertainty in proportion of signal to background)\n", + "\n", + "\n", + "\n", + "Alternatively, one can also fit the normalised PDF:\n", + "\n", + "$$\n", + "f(x;r,\\vec{\\theta})=rf_s(\\vec{\\theta})+(1-r)f_b(\\vec{\\theta}),\n", + "$$\n", + "\n", + "and estimate the signal events by $s=r\\cdot n$. This however, does not provide us with reliable determination of uncertainties as $\\sigma_r\\cdot n$ is not a good estimated of the variation of signal events as it ignores fluctuations in n." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "**Extended ML method is very useful when the determination of yield is needed -- for example determination of cross section**\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Cross section describes the probability that two particles will collide and interact in a certain way, or for a specific reaction to occur. For example, the Rutherford cross-section is a measure of probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus.\n", + "\n", + "When a cross section is specified as the differential limit of a function of some final-state variable, such as particle angle or energy, it is called a differential cross section. When a cross section is integrated over all scattering angles (and possibly other variables), it is called a total cross section or integrated total cross section" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The probability that a reaction takes places can be written as:\n", + "\n", + "$$\n", + "\\sigma=\\frac{Y}{L},\n", + "$$\n", + "\n", + "where $Y$ denotes the Yield of produced events and $L$ denotes the luminosity." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "\n", + "The luminosity $L$ is a measure of the colliding frequency between beam and target, and thus accounts for characteristics of the initial state of the reaction. Specifically, (for a fixed target experiments) the luminosity is proportional to the number of target centers and the incident number of beam particles. Lets take as an example the photoproduction of single pions $\\gamma p \\to p \\pi^0$. Here the luminosity is:\n", + "\n", + "$$\n", + "L=\\frac{\\Phi \\rho l N_A}{A_r},\n", + "$$\n", + "\n", + "where $\\Phi$ is the number of photons incident on the target, $\\rho$ is the target density, $l$ is the target length, $A_r$ is the atomic weight of the target, and $N_A$ is Avocadrons number.\n", + "\n", + "(Note: Things can get more complicated in collider experiments. See https://cds.cern.ch/record/941318/files/p361.pdf for a summary on luminosity determination).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "The yield of produced events $Y$ can be determined in experiments by identifying the final state particles of the reaction (using for example detectors). 
In this approach, one needs to account for limitations in the detector acceptance (detector blind spots) and detector efficiency:\n",
+    "\n",
+    "$$\n",
+    "Y=\\frac{Y_{det}}{A\\times \\epsilon},\n",
+    "$$\n",
+    "\n",
+    "where $Y_{det}$ is the number of events for our reaction of interest that were detected in our system, $A$ is the detector acceptance, and $\\epsilon$ is the detector efficiency. The denominator, which is a correction factor applied to the detected yields, can be established using sophisticated Monte Carlo approaches and realistic detector simulations. This approach also allows us to account for inefficiencies\n",
+    "in our analysis that result in lost events, in addition to detector inefficiencies."
+   ]
+  }
+ ],
+ "metadata": {
+  "colab": {},
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.12"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}