
TensorFlow Addons v0.11.0

Released 6 August 2020

Release Notes

  • Built against TensorFlow 2.3
  • CUDA kernels are compiled with CUDA 10.1
  • API docs are available on the website

Changelog

  • Support building against CUDA 11 and cuDNN 8 (#1950)

tfa.activations

  • Add Snake layer and activation (#1967; usage sketch below)
  • Deprecate gelu (#2048)
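
A minimal usage sketch for the new Snake activation (#1967). Snake computes x + (1/frequency) * sin^2(frequency * x) and is designed for periodic data; the exact keyword name frequency is an assumption about the API:

```python
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.linspace(-3.0, 3.0, 7)
# Snake: x + (1/frequency) * sin^2(frequency * x).
y = tfa.activations.snake(x, frequency=1.0)  # `frequency` keyword name assumed
print(y.numpy())
```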

tfa.image

  • Set shape for dense image warp (#1993; sketch below)
  • Drop data_format argument (#1980)
  • Enable half and double for resampler GPU ops (#1852)
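
A sketch of what the dense image warp shape fix (#1993) buys you: the output now carries a fully defined static shape. A zero flow field is used here so the warp is an identity:

```python
import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.uniform([1, 8, 8, 3])  # [batch, height, width, channels]
flow = tf.zeros([1, 8, 8, 2])            # zero flow leaves the image unchanged
warped = tfa.image.dense_image_warp(image, flow)
print(warped.shape)                      # (1, 8, 8, 3), static shape preserved
```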

tfa.layers

  • Add Spectral Normalization layer (#1244; usage sketch below)
  • Add CRF layer (#1999)
  • Add Snake layer and activation (#1967)
  • Add Spatial Pyramid Pooling layer (#1745)
  • Add Echo State Network (ESN) layer (#1862)
  • Incorporate low-rank techniques into DCN (#1795)
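
A minimal sketch of the new SpectralNormalization wrapper (#1244), which constrains the spectral norm of a wrapped layer's kernel (a common stabilizer for GAN discriminators):

```python
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    # Wrap a layer to normalize its kernel by its largest singular value.
    tfa.layers.SpectralNormalization(tf.keras.layers.Conv2D(32, 3, activation="relu")),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
model.summary()
```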

tfa.metrics

  • Add geometric mean metric (#2031; usage sketch below)
  • Fix R_Square shape issue in model.evaluate (#2034)
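
A minimal sketch of the new geometric mean metric (#2031), assuming the standard Keras metric interface with update_state and result:

```python
import tensorflow_addons as tfa

metric = tfa.metrics.GeometricMean()
metric.update_state([1.0, 3.0, 9.0])
print(metric.result().numpy())  # ~3.0, the cube root of 1 * 3 * 9 = 27
```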

tfa.losses

  • Change the default distance metric for tfa.losses.triplet_semihard_loss and tfa.losses.triplet_hard_loss from squared Euclidean norm to Euclidean norm. To restore the old behavior, set distance_metric to "squared-L2" (see the sketch below).
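
A sketch of the behavior change. The new default measures non-squared Euclidean distance between embeddings; passing distance_metric="squared-L2" recovers the pre-0.11 behavior:

```python
import tensorflow as tf
import tensorflow_addons as tfa

y_true = tf.constant([0, 0, 1, 1])
y_pred = tf.nn.l2_normalize(tf.random.uniform([4, 16]), axis=1)  # embeddings

# New default: (non-squared) Euclidean distance.
loss_l2 = tfa.losses.triplet_semihard_loss(y_true, y_pred)
# Old behavior: squared Euclidean distance.
loss_sq = tfa.losses.triplet_semihard_loss(y_true, y_pred, distance_metric="squared-L2")
```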

tfa.optimizers

  • Add ProximalAdagrad optimizer (#1976; usage sketch below)
  • Add support for scheduled weight decays in RectifiedAdam (#1974)
  • Fix learning rate and weight decay schedules for DecoupledWeightDecayExtension running on GPU (#2053) (#2029)
  • Fix sparse NovoGrad (#1970)
  • MovingAverage: add dynamic decay and weight swapping (#1726)
  • Remove optional float total_steps from RectifiedAdam (#1871)
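
A minimal sketch of the new ProximalAdagrad optimizer (#1976) in a Keras model; the regularization strengths shown are illustrative values, not recommendations:

```python
import tensorflow as tf
import tensorflow_addons as tfa

optimizer = tfa.optimizers.ProximalAdagrad(
    learning_rate=0.01,
    l1_regularization_strength=0.001,  # proximal L1 term encourages sparse weights
)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizer, loss="mse")
```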

tfa.rnn

  • Move tf.keras.layers.PeepholeLSTMCell to TFA (#1944)
  • Add echo state network (ESN) recurrent cell (#1811; usage sketch below)
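
A minimal sketch of the new ESN cell (#1811), wrapped in a standard Keras RNN layer. Echo state networks keep the recurrent weights fixed at initialization; typically only a readout on top is trained:

```python
import tensorflow as tf
import tensorflow_addons as tfa

cell = tfa.rnn.ESNCell(units=64)
layer = tf.keras.layers.RNN(cell)
outputs = layer(tf.random.uniform([8, 10, 3]))  # [batch, time, features]
print(outputs.shape)                            # (8, 64)
```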

tfa.seq2seq

  • Improve support of global dtype policy in seq2seq layers (#1981)
  • Add a Python alternative to seq2seq.gather_tree (#1925; sketch below)
  • Allow resetting embedding_fn when calling BeamSearchDecoder (#1917)
  • Fix returned cell state structure in BasicDecoder (#1905)
  • Fix returned cell state structure in BeamSearchDecoder (#1904)
  • Fix AttentionWrapper type annotation for multiple attention mechanisms (#1872)
  • Ensure cell state structure is unchanged on first AttentionWrapper call (#1861)
  • Remove sequential_update from AverageWrapper (#1807)
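
A sketch of seq2seq.gather_tree, whose new Python fallback (#1925) keeps the same call signature as the custom op. Inputs are shaped [max_time, batch_size, beam_width]; the op backtracks the beam search parent pointers to recover full sequences. The toy ids below are illustrative:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# max_time=3, batch_size=1, beam_width=2
step_ids = tf.constant([[[1, 2]], [[3, 4]], [[5, 6]]], dtype=tf.int32)
parent_ids = tf.constant([[[0, 0]], [[0, 1]], [[1, 0]]], dtype=tf.int32)
max_sequence_lengths = tf.constant([3], dtype=tf.int32)

# end_token=11 does not appear in step_ids, so full sequences are returned.
beams = tfa.seq2seq.gather_tree(step_ids, parent_ids, max_sequence_lengths, end_token=11)
print(beams.numpy())
```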

Thanks to our Contributors

@AakashKumarNain, @AntPeixe, @JakeTheWise, @MHStadler, @PRUBHTEJ, @Smankusors, @Squadrick, @Susmit-A, @WindQAQ, @autoih, @bhack, @brunodoamaral, @cgarciae, @charlielito, @csachs, @failure-to-thrive, @feyn-aman, @fsx950223, @gabrieldemarmiesse, @gugarosa, @guillaumekln, @jaeyoo, @jaspersjsun, @jlsneto, @ksachdeva, @lc0, @leandro-gracia-gil, @marload, @nluehr, @pedrolarben, @qlzh727, @seanpmorgan, @tanzhenyu, @tf-marissaw and @xvr-hlt