diff --git a/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Readme.md b/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Readme.md
index ba0a9916..af94d706 100644
--- a/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Readme.md
+++ b/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Readme.md
@@ -10,7 +10,7 @@ LSTMs and GRUs can help to overcome the vanishing gradient problem, but even tho
2. In a conventional encoder-decoder architecture, the model would again take T timesteps to compute the translation.
-## Transformers - Basics
+## RNNs vs. Transformers
```
TLDR:
1. In RNNs, computation is inherently sequential, so it is difficult to parallelize across timesteps.
@@ -30,3 +30,26 @@ TLDR:
6. Unlike the recurrent layer, the multi-head attention layer computes the output for each input in the sequence independently, which allows us to parallelize the computation. However, it cannot by itself model the order of the sequence; that is why the positional encoding stage is incorporated into the transformer model (see the sketch below).
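+
+To make point 6 concrete, here is a minimal PyTorch sketch of the sinusoidal positional encoding from "Attention Is All You Need". It is illustrative only: the function name, sequence length, and `d_model` below are assumptions for the example, not code from this repo.
+
+```python
+import math
+import torch
+
+def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
+    """Return a (seq_len, d_model) matrix of sinusoidal position encodings."""
+    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # (seq_len, 1)
+    # Wavelengths grow geometrically across the embedding dimensions.
+    div_term = torch.exp(
+        torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10000.0) / d_model)
+    )
+    pe = torch.zeros(seq_len, d_model)
+    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
+    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
+    return pe
+
+# Self-attention alone is permutation-invariant, so order information is
+# injected by adding these encodings to the token embeddings.
+embeddings = torch.randn(10, 512)                 # 10 tokens, d_model = 512
+inputs = embeddings + positional_encoding(10, 512)
+```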
+
+## Applications of Transformers
+
+Some of the applications of Transformers include:
+1. Text summarization.
+2. Auto-complete.
+3. Named entity recognition (NER).
+4. Automatic question-answering.
+5. Neural machine translation (NMT).
+6. Chatbots.
+7. Other NLP tasks:
+ * Sentiment analysis.
+ * Market intelligence.
+ * Text classification.
+ * Character recognition.
+ * Spell checking.
+
+## State-of-the-art Transformers
+
+1. *GPT-2*: Generative Pre-trained Transformer 2.
+2. *BERT*: Bidirectional Encoder Representations from Transformers.
+3. *T5*: Text-To-Text Transfer Transformer (see the sketch below).
+
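+To make T5's text-to-text idea concrete, here is an illustrative Python sketch (an assumption for illustration, not code from this repo): every task is cast as string-to-string generation by prepending a task prefix to the input, following the convention in the T5 paper.
+
+```python
+# Illustrative only: T5 frames every NLP task as text-to-text by
+# prepending a task-specific prefix; the model then generates the
+# answer as ordinary text.
+article = "The transformer architecture replaces recurrence with self-attention ..."
+sentence = "That is a beautiful house."
+
+examples = [
+    "summarize: " + article,                     # target: a short summary
+    "translate English to German: " + sentence,  # target: the German translation
+    "cola sentence: " + sentence,                # target: "acceptable" / "unacceptable"
+]
+```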
\ No newline at end of file
diff --git a/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/images/6. T5 model.png b/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/images/6. T5 model.png
new file mode 100644
index 00000000..95e40590
Binary files /dev/null and b/Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/images/6. T5 model.png differ