Update tutorial notebook; Add imgs
cr-xu committed Feb 4, 2024
1 parent 0a8a21d commit 5c6f8a3
Showing 3 changed files with 45 additions and 22 deletions.
Binary file added img/learn_to_learn.png
Binary file added img/mdp_distribution.png
67 changes: 45 additions & 22 deletions tutorial.ipynb
@@ -186,7 +186,19 @@
"<h3 style=\"color: #b51f2a\">Actions</h3>\n",
"The actuators are the strengths of 10 corrector magnets that can steer the beam.\n",
"They are normalized to [-1, 1]. \n",
"In this tutorial, we apply the action by adding a delta change $\\Delta a$ to the current magnet strengths .\n",
"In this tutorial, we apply the action by adding a delta change $\\Delta a$ to the current magnet strengths.\n"
]
},
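{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of how such a delta action could be applied with NumPy. The names `magnet_strengths` and `delta_a` are illustrative, and clipping to the normalized range is an assumption here, not necessarily the environment's exact behaviour:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"magnet_strengths = np.zeros(10)        # current normalized corrector settings\n",
"delta_a = rng.uniform(-0.1, 0.1, 10)   # delta action proposed by the agent\n",
"\n",
"# Apply the delta change and keep the settings inside the normalized range [-1, 1]\n",
"magnet_strengths = np.clip(magnet_strengths + delta_a, -1.0, 1.0)\n",
"```\n"
]
},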
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"<h2 style=\"color: #b51f2a\">Formulating the RL problem</h2>\n",
"\n",
"\n",
"<h3 style=\"color: #b51f2a\">States/Observations</h3>\n",
"The observations are the readings of ten beam position monitors (BPMs), which read the position of the beam at a particular point in the beamline. The states are also normalized to [-1,1], corresponding to $\\pm$ 100 mm in the real accelerator.\n",
@@ -210,10 +222,10 @@
"The reward is the negative RMS value of the distance to the target trajectory. \n",
"\n",
"$$\n",
"r(x) = - \\sqrt{ \\frac{1}{10} \\sum_{i=1}^{10} (x_{i} - x^{\\text{target}}_{i})^2},\n",
"r(x) = - \\sqrt{ \\frac{1}{10} \\sum_{i=1}^{10} \\Delta x_{i}^2} \\,, \\ \\ \\ \\Delta x_{i} = x_{i} - x^{\\text{target}}_{i}\n",
"$$\n",
"\n",
"where $x^{\\text{target}}=\\vec{0}$ for a centered orbit.\n",
"where $x^{\\text{target}}=\\vec{0}$ for a centered trajectory.\n",
"\n",
"<center>\n",
"<img src=\"img/steering_problem.png\" style=\"width:70%; margin:auto;\"/>\n",
@@ -230,17 +242,21 @@
"source": [
"<h2 style=\"color: #b51f2a\">Formulating the RL problem</h2>\n",
"\n",
"<h3 style=\"color: #b51f2a\">Convergence condition</h3>\n",
"<h3 style=\"color: #b51f2a\">Successful termination condition</h3>\n",
"\n",
"If a threshold RMS (-10 mm in our case, 0.1 in normalized scale) is surpassed,\n",
"the episode ends successfully. \n",
"the episode ends successfully. We cannot measure _exactly_ 0 because of the resolution of the BPMs.\n",
"\n",
"<h3 style=\"color: #b51f2a\">Unsucessful termination (safety) condition</h3>\n",
"\n",
"<h3 style=\"color: #b51f2a\">Termination (safety) condition</h3>\n",
"If the beam hits the wall (any state ≤ -1 or ≥ 1 in normalized scale, 10 cm), the episode is terminated unsuccessfully. \n",
"If the beam hits the wall (any state ≤ -1 or ≥ 1 in normalized scale, 10 cm), the episode is terminated unsuccessfully. In this case, the agent receives a large negative reward (all BPMs afterwards are set to the largest value) to discourage the agent.\n",
"\n",
"<h3 style=\"color: #b51f2a\">Episode initialization</h3>\n",
"\n",
"All episodes are initialised such that the RMS of the distance to the target trajectory is large. This ensures that the task is not too easy and relatively close to the boundaries to probe the safety settings.\n",
"\n",
"<h3 style=\"color: #b51f2a\">Agents</h3>\n",
"\n",
"In this tutorial we will use:\n",
"\n",
"- PPO (Proximal Policy Optimization)\n",
@@ -261,21 +277,24 @@
"\n",
"<p>\n",
"<center> \n",
" <font size=\"8\"> \n",
" 1 task / 1 environment = 1 set of fixed quadrupole strengths\n",
" <font size=\"4\"> \n",
" 1 task or 1 environment = 1 set of fixed quadrupole strengths = 1 MDP\n",
" </font>\n",
"</center>\n",
"</p>\n",
"\n",
"<img src=\"img/learn_to_learn.png\" style=\"float: left; width: 30%; margin-right: 8%; margin-bottom: 0.5em;\">\n",
"\n",
"\n",
"<p> <br></p>\n",
"\n",
"In this tutorial we will use a variety of environments or tasks:\n",
"- <p style=\"color:blue\">Fixed tasks for evaluation &#x2757;</p>\n",
"- <p style=\"color:blue\">Randomly sampled tasks from a task distribution for meta-training &#x2757;</p>\n",
"\n",
"We generate them from the original, nominal optics, adding a random scaling factor to the quadrupole strengths.\n",
"\n",
"<center>\n",
"<img src=\"img/awake_lattice.png\" style=\"width:70%; margin:auto;\"/>\n",
" </center>"
"<img src=\"img/mdp_distribution.png\" style=\"width: 30%; margin:auto;\">\n"
]
},
{
@@ -323,7 +342,7 @@
" \\Delta a &= \\mathbf{R}^{-1}\\Delta s\n",
"\\end{align}\n",
"\n",
"$\\implies$ Actions from **RL policy**:\n",
"$\\implies$ Actions from deep **RL policy**:\n",
"With the policy we get the actions:\n",
"<center>\n",
"<img src=\"img/policy.png\" style=\"width:20%; margin:auto;\"/>\n",
@@ -364,7 +383,7 @@
"- This will be performed for different evaluation tasks, just to assess how the policy performs in different lattices.\n",
"\n",
"Side note:\n",
"- The benchmark policy will not immediately find the settings for the target trajectory, because the actions are scaled down for safety reasons so that the maximum step is within $[-1,1]$ in the normalized space.\n",
"- The benchmark policy will not immediately find the settings for the target trajectory, because the actions are limited so that the maximum step is within $[-1,1]$ in the normalized space.\n",
"- We can then compare the metrics of both policies.\n",
"<center>\n",
"<img src=\"img/steering_problem.png\" style=\"width:50%; margin:auto;\"/>\n",
@@ -455,7 +474,7 @@
"<p style=\"color:#038aa1;\">$\\implies$ What is the difference in episode length between the benchmark policy and PPO? </p> \n",
"<p style=\"color:#038aa1;\">$\\implies$ Look at the cumulative episode length, which policy takes longer?</p>\n",
"<p style=\"color:#038aa1;\">$\\implies$ Compare both cumulative rewards, which reward is higher and why?</p>\n",
"<p style=\"color:#038aa1;\">$\\implies$ Look at the final reward (-10*RMS(BPM readings)) and consider the convergence (in red) and termination conditions mentioned before. What can you say about how the episode was ended?</p>"
"<p style=\"color:#038aa1;\">$\\implies$ Look at the final reward (-10*RMS(BPM readings)) and consider the sucessful (in red) and unsuccessful termination conditions mentioned before. What can you say about how the episode was ended?</p>"
]
},
{
@@ -555,10 +574,14 @@
"- We have a <b>meta policy</b> $\\phi(\\theta)$, where $\\theta$ are the weights of a neural network. The meta policy starts untrained $\\phi_0$.\n",
"\n",
"<h3 style=\"color: #b51f2a\">Step 1: outer loop</h3>\n",
"We randomly sample a number of tasks $i$ (in our case $i\\in \\{1,\\dots,8\\}$ different lattices, called <code>meta-batch-size</code> in the code) from a task distribution, each one with its particular initial <b>task policy</b> $\\varphi_{0}^i=\\phi_0$.\n",
"\n",
"We randomly sample a number of tasks $i$ (in our case $i\\in \\{1,\\dots,8\\}$ different lattices, called `meta-batch-size` in the code) from a task distribution, each one with its particular initial <b>task policy</b> $\\varphi_{0}^i=\\phi_0$.\n",
"\n",
"<h3 style=\"color: #b51f2a\">Step 2: inner loop (adaptation)</h3>\n",
"For each task, we gather experience for several episodes, store the experience, and use it to perform gradient descent and update the weights of each task policy $\\varphi_{0}^i \\rightarrow \\varphi_{k}^i$ for $k$ gradient descent steps."
"\n",
"For each task, we gather experience for several episodes, store the experience, and use it to perform gradient descent and update the weights of each task policy $\\varphi_{0}^i \\rightarrow \\varphi_{1}^i$\n",
"\n",
"This is repeated for $k$ gradient descent steps to generate $\\varphi_{k}^i$."
]
},
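{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal, first-order sketch of this inner-loop adaptation in PyTorch (full MAML also differentiates through these updates; `collect_episodes` and `pg_loss` are hypothetical helpers, not the tutorial's actual functions):\n",
"\n",
"```python\n",
"import copy\n",
"\n",
"import torch\n",
"\n",
"def adapt(meta_policy, task_env, collect_episodes, pg_loss, k_steps=1, inner_lr=0.1):\n",
"    # Each task policy varphi_0 starts as a copy of the meta policy phi\n",
"    task_policy = copy.deepcopy(meta_policy)\n",
"    for _ in range(k_steps):\n",
"        episodes = collect_episodes(task_policy, task_env)  # gather experience on this task\n",
"        loss = pg_loss(task_policy, episodes)               # inner-loop policy-gradient objective\n",
"        loss.backward()\n",
"        with torch.no_grad():\n",
"            for p in task_policy.parameters():\n",
"                if p.grad is not None:\n",
"                    p -= inner_lr * p.grad                  # one gradient step: varphi_k -> varphi_k+1\n",
"                    p.grad = None\n",
"    return task_policy\n",
"```\n"
]
},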
{
@@ -573,7 +596,7 @@
"\n",
"<h3 style=\"color: #b51f2a\">Step 3: outer loop (meta training)</h3>\n",
"\n",
"We sum the losses calculated for each **task policy** and perform gradient descent on the **meta policy**\n",
"We generate episodes with the adapted **task policies** $\\varphi_{k}^i$. We sum the losses calculated for each task $\\tau_{i}$ and perform gradient descent on the **meta policy**\n",
"$\\phi_0 \\rightarrow \\phi_1$\n",
"\n",
"<center>\n",
@@ -600,10 +623,10 @@
"We start with a random meta policy, and we initialize the task policies with it: $\\phi_0 = \\varphi_{0}^i$\n",
"\n",
"```python\n",
"meta_step 0:\n",
"1 meta_step: # Outer loop\n",
" sample 8 tasks\n",
" for t in tasks:\n",
" for i in num_steps:\n",
" for task in tasks:\n",
" for fast_step in num_steps: # Inner loop\n",
" for fast_batch in fast_batch_size:\n",
" rollout 1 episode:\n",
" reset corrector_strength\n",
@@ -751,7 +774,7 @@
}
},
"source": [
"<h3 style=\"color: #b51f2a\">We can observe that the meta policy can solve the problem for different tasks (i.e. lattices)!</h3>\n",
"<h3 style=\"color: #b51f2a\">We can observe that the pre-trained meta policy can solve the problem for different tasks (i.e. lattices) within a few adaptation steps!</h3>\n",
"\n",
"<br>\n",
"\n",
