CHANGES.txt
2019
v0.8.0 Mon Aug 19th
- BERT low memory mode
- RoBERTa base model support
v0.7.1 Thu Jun 27th
- Fix for cached_predict() on long sequences
v0.7.0 Thu Jun 27th
- GPT2-medium support
- Support for adapter finetuning strategy
- Add DeploymentModel class
v0.6.7 Thu May 30th
- Refactor sequence labeling functions
- Fix learning rate / tqdm issue for long documents
v0.6.5 Thu May 23rd
- Fix subtoken_predictions flag
- Lazily download only the required subset of model files
v0.6.4 Tue May 14th
- Add BERT support
v0.6.3 Thu May 2nd
- Option to control maximum percentage of memory used
- Attention weight visualization for GPT
- Assorted bug fixes
v0.6.2 Thu Apr 4th
- Cached predict fix
- Model download fix
v0.6.1 Fri Mar 29th
- Multi-task learning (MTL) bug fix
- Validation / early-stopping bug fix
v0.6.0 Thu Mar 28th
- Support for new base models
- GPT2 support
- Multi-task learning support
- Word CNN baseline
- Code base refactor
2018
v0.5.9 Wed Oct 24th
- Resolve issues with running multiple concurrent finetune models
v0.5.1 Fri Oct 19th
- Fix lazy encoder install
v0.5.0 Wed Oct 17th
- Move core finetune infrastructure to the tf.Estimator API for a major model speedup!