Releases: tensorflow/addons
TensorFlow Addons v0.15.0
Release Notes
- Built against TensorFlow 2.7
- CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
- API docs found on the website
Changelog
- Use multipython image for dev container (#2598)
- Add support for publishing macOS M1 ARM64 wheels for tfa-nightly (#2559)
Tutorials
- Update optimizers_cyclicallearningrate.ipynb (#2538)
tfa.activations
- Correct documentation for Snake activation to match literature and return statement (#2572) @fliptrail
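The corrected docs describe Snake as in the literature: snake(x) = x + sin²(f·x)/f. A minimal sketch, assuming the frequency argument name on tfa.activations.snake:

```python
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.linspace(-3.0, 3.0, 7)
f = 2.0

# Snake from the literature: x + (1/f) * sin^2(f * x)
y = tfa.activations.snake(x, frequency=f)

# Reference computation to check against the documented formula
y_ref = x + tf.sin(f * x) ** 2 / f
```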
tfa.image
- Fix euclidean distance transform float16 kernel (#2568)
tfa.layers
- Fix using NoisyNet with .fit() or .train_on_batch() (#2486)
- Fix spectral norm mixed precision (#2576)
Thanks to our Contributors
@MarkDaoust, @bhack, @eli-osherovich, @fliptrail, @fsx950223, @howl-anderson, @juntang-zhuang, @jvishnuvardhan, @lgeiger, @markub3327, @seanpmorgan, @szutenberg and @vtjeng
TensorFlow Addons v0.14.0
Release Notes
- Built against TensorFlow 2.6
- CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
- API docs found on the website
Changelog
- Remove compatibility code for TensorFlow < 2.4 (#2545)
- Modify configure.py to recognize 'aarch64' for 64-Bit Raspberry Pi OS (#2540)
- Apple silicon support (#2504)
- Fix build for Raspberry Pi 4 Linux ARM64 (#2487)
tfa.layers
- Add EmbeddingBag GPU op and layer (#2352) (#2517) (#2505)
- Fix StochasticDepth layer error in training mixed_float16 (#2450)
tfa.optimizers
- Adding a tutorial on CyclicalLearningRate (#2463)
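A minimal sketch of the schedule the tutorial covers; the constant scale_fn (the "triangular" policy) is an illustrative choice:

```python
import tensorflow as tf
import tensorflow_addons as tfa

clr = tfa.optimizers.CyclicalLearningRate(
    initial_learning_rate=1e-4,
    maximal_learning_rate=1e-2,
    step_size=2000,          # half a cycle, measured in optimizer steps
    scale_fn=lambda x: 1.0,  # constant amplitude: the "triangular" policy
    scale_mode="cycle",
)
optimizer = tf.keras.optimizers.SGD(learning_rate=clr)
```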
Thanks to our Contributors
@HeatfanJohn, @Rocketknight1, @RyanGoslingsBugle, @fsx950223, @kaoh, @leondgarse, @lgeiger, @maxhgerlach, @sayakpaul, @seanpmorgan, @singhsidhukuldeep and @tetsuyasu
TensorFlow Addons v0.13.0
Release Notes
- Built against TensorFlow 2.5
- CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
- API docs found on the website
Changelog
tfa.activations
- Clean up legacy code for activations (#2394)
tfa.image
- Add python fallback for adjust_hsv_in_yiq (#2392)
- Remove ImageProjectiveTransform kernel (#2395)
- Fix EDT float16 and float64 kernels (#2412)
- Optimize EDT (#2402)
- Update cutout_ops.py (#2416)
tfa.metrics
- Add streaming Kendall's Tau metric (#2423)
- Fix F1Score docs (#2462)
- Fix MatthewsCorrelationCoefficient metric (#2406)
- Fix RSquare serialization (#2390)
- Make RSquare.reset_states to be able to run in tf.function (#2445)
tfa.optimizers
- Add COntinuous COin Betting (COCOB) Backprop optimizer (#2063)
- Fix NovoGrad optimizer to work with float64 layers (#2467)
- Update cyclical_learning_rate.py (#2286)
- RectifiedAdam: Store 'total_steps' hyperparameter as float (#2369)
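For the total_steps change above, a hedged usage sketch of RectifiedAdam's warmup schedule (parameter values are illustrative):

```python
import tensorflow_addons as tfa

opt = tfa.optimizers.RectifiedAdam(
    learning_rate=1e-3,
    total_steps=10000,      # length of the whole schedule; stored as float per #2369
    warmup_proportion=0.1,  # first 10% of steps warm the learning rate up
    min_lr=1e-5,            # floor after decay
)
```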
tfa.text
- Fix wrong type hint for crf_log_likelihood (#2471)
Thanks to our Contributors
@0x0badc0de, @bhack, @DragonPG2000, @Harsh188, @WindQAQ, @ashutosh1919, @fsx950223, @jeongukjae, @jonpsy, @juliangilbey, @lucasdavid, @lum4chi, @m-a-r-o-u, @nickswalker, @nleastaugh, @npanpaliya, @olesalscheider, @rehanguha, @seanpmorgan, @shubhanshu02, @sorensenjs, @whatwilliam and @xiedeping
TensorFlow Addons v0.12.1
Release Notes
- Built against TensorFlow 2.4.1
- CUDA kernels are compiled with CUDA 11
- API docs found on the website
Changelog
- Remove AVX2 compilation by default (aligns with https://github.com/tensorflow/tensorflow/releases/tag/v2.4.1)
tfa.image
- Fix sparse_image_warp with unknown batch size (#2311)
TensorFlow Addons v0.12.0
Release Notes
- Built against TensorFlow 2.4
- CUDA kernels are compiled with CUDA 11
- API docs found on the website
Changelog
- Add AVX2 support (#2299)
- Drop TF2.2 compatibility (#2224)
- Drop python3.5 support (#2204)
- Expose tfa.types doc (#2162)
- Rename "Arguments:" to "Args:" (#2267)
- Add support for ARM architecture build from source (#2182)
tfa.image
- Speedup gaussian kernel generation (#2149)
- Support fill_mode for transform (#2153) (see the sketch after this list)
- Use ImageProjectiveTransformV3 for TF >= 2.4.0 (#2293)
- Support unknown rank image (#2300)
- Fix sparse_image_warp partially unknown shape (#2308)
- Make cutout compatible with keras layer (#2302)
- Remove unsupported data_format (#2296)
- Refactor sharpness (#2287)
- Fix random cutout (#2276) (#2285)
- Remove tf.function decorator in tfa.image.equalize (#2264)
- Support empty batches in ResamplerOp (#2219)
- Make cutout op compatible with non-eager mode (#2190)
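The fill_mode support added in #2153 (see above) controls how transform fills pixels sampled from outside the input; a minimal sketch:

```python
import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.uniform([1, 64, 64, 3])
# 8-parameter projective transform: shift content 10 pixels right and down
transform = [1.0, 0.0, -10.0, 0.0, 1.0, -10.0, 0.0, 0.0]
shifted = tfa.image.transform(image, transform, fill_mode="reflect")
```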
tfa.layers
- Add stochastic depth layer (#2154) (see the sketch after this list)
- Add MaxUnpooling2D layer (#2272)
- Add noisy dense layers (#2099)
- Add discriminative layer training (#969)
- Make MultiHeadAttention agnostic to dtype (float32 vs. float16) (#2253)
- Change CRF layer dtype (#2270)
- Change GroupNormalization default groups to 32 (#2241)
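For the stochastic depth layer above, a minimal residual-branch sketch, assuming survival_probability as the argument name:

```python
import tensorflow as tf
import tensorflow_addons as tfa

inputs = tf.keras.Input(shape=(32,))
residual = tf.keras.layers.Dense(32)(inputs)
# Randomly drops the residual branch during training and scales it
# by the survival probability at inference time.
outputs = tfa.layers.StochasticDepth(survival_probability=0.9)([inputs, residual])
model = tf.keras.Model(inputs, outputs)
```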
tfa.optimizers
- Standardized Testing Module (#2233)
- Fix LazyAdam resource variable ops performance issue (#2274)
- Add experimental_aggregate_gradients support (#2137)
tfa.rnn
- Fix conflicting variable names in layernorm cells (#2284)
tfa.seq2seq
- Graduate _BaseAttentionMechanism to a public base class (#2209)
- Add a doctest example for BasicDecoder (#2214)
- Add a doctest example for AttentionWrapper (#2215)
- Improve sampler documentation, use doctest (#2213)
- Beam search decoding procedure added to seq2seq_nmt tutorial (#2140)
Thanks to our Contributors
@DanBmh, @DavidWAbrahams, @Harsh188, @JulianRodert, @LeonShams, @MHStadler, @MarkDaoust, @SamuelMarks, @WindQAQ, @aaronmondal, @abhishek-niranjan, @albertz, @bhack, @crccw, @edend10, @fsx950223, @gabrieldemarmiesse, @guillaumekln, @HMPH, @hp77-creator, @hwaxxer, @hyang0129, @kaixih, @lamberta, @marksandler2, @matwilso, @napsternxg, @nataliyah123, @perfinion, @qlzh727, @rmlarsen, @rushabh-v, @rybakov, @seanpmorgan, @stephengmatthews, @tgaddair and @thaink
TensorFlow Addons v0.11.2
TensorFlow Addons v0.11.1
Release Notes
- Update TF compatibility warning to include all of 2.3.x as acceptable.
TensorFlow Addons v0.11.0
Release Notes
- Built against TensorFlow 2.3
- CUDA kernels are compiled with CUDA 10.1
- API docs found on the website
Changelog
- Support building against CUDA 11 and cuDNN 8 (#1950)
tfa.image
- Set shape for dense image warp (#1993)
- Drop data_format argument (#1980)
- Enable half and double for resampler GPU ops (#1852)
tfa.layers
- Add Spectral Normalization layer (#1244) (see the sketch after this list)
- Add CRF layer (#1999)
- Add Snake layer and activation (#1967)
- Add Spatial Pyramid Pooling layer (#1745)
- Add Echo State Network (ESN) layer (#1862)
- Incorporate low-rank techniques into DCN. (#1795)
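A minimal sketch of the spectral normalization wrapper noted above; power_iterations is shown with its assumed default:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Wrap any layer with a kernel to constrain its spectral norm
conv = tfa.layers.SpectralNormalization(
    tf.keras.layers.Conv2D(32, 3, padding="same"),
    power_iterations=1,
)
outputs = conv(tf.random.uniform([1, 28, 28, 3]))
```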
tfa.losses
- Change the default distance metric for tfa.losses.triplet_semihard_loss and tfa.losses.triplet_hard_loss from squared Euclidean norm to Euclidean norm. Users must set distance_metric to "squared-L2" to recover the old behavior.
tfa.optimizers
- Add ProximalAdagrad optimizer (#1976)
- Add support for scheduled weight decays in RectifiedAdam. (#1974)
- Fixed lr/wd schedules for DecoupledWeightDecayExtension running on GPU (#2053) (#2029)
- Fixed sparse novograd (#1970)
- MovingAverage: add dynamic decay and swap weights (#1726) (see the sketch after this list)
- Remove RAdam optional float total steps (#1871)
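For the MovingAverage change above, a hedged sketch; the dynamic_decay flag and assign_average_vars helper are assumed per #1726:

```python
import tensorflow as tf
import tensorflow_addons as tfa

opt = tfa.optimizers.MovingAverage(
    tf.keras.optimizers.SGD(0.01),
    average_decay=0.99,
    dynamic_decay=True,  # ramp the decay up during early training steps
)
# ...train with `opt`, then copy the averaged weights into the model:
# opt.assign_average_vars(model.variables)
```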
tfa.rnn
- Move the tf.keras.layers.PeepholeLSTMCell to TFA (#1944)
- Added echo state network (ESN) recurrent cell (#1811)
tfa.seq2seq
- Improve support of global dtype policy in seq2seq layers (#1981)
- Add a Python alternative to seq2seq.gather_tree (#1925)
- Allow resetting embedding_fn when calling BeamSearchDecoder (#1917)
- Fixup returned cell state structure in BasicDecoder (#1905)
- Fixup returned cell state structure in BeamSearchDecoder (#1904)
- Fix AttentionWrapper type annotation for multiple attention mechanisms (#1872)
- Ensure cell state structure is unchanged on first AttentionWrapper call (#1861)
- Remove sequential_update from AverageWrapper (#1807)
Thanks to our Contributors
@AakashKumarNain, @AntPeixe, @JakeTheWise, @MHStadler, @PRUBHTEJ, @Smankusors, @Squadrick, @Susmit-A, @WindQAQ, @autoih, @bhack, @brunodoamaral, @cgarciae, @charlielito, @csachs, @failure-to-thrive, @feyn-aman, @fsx950223, @gabrieldemarmiesse, @gugarosa, @guillaumekln, @jaeyoo, @jaspersjsun, @jlsneto, @ksachdeva, @lc0, @leandro-gracia-gil, @marload, @nluehr, @pedrolarben, @qlzh727, @seanpmorgan, @tanzhenyu, @tf-marissaw and @xvr-hlt
TensorFlow Addons v0.10.0
Release Notes
- Built against TensorFlow 2.2
- CUDA kernels are compiled with CUDA 10.1
- API docs found on the website
Changelog
- Enable ppc64le build (#1672)
tfa.activations
- Added a DeprecationWarning for the custom-op versions of activation functions (#1791)
tfa.image
- Fix condition tracing in scale_channel (#1830)
- Expose sharpness and equalize image op (#1827)
- Clarify flow definition for dense_image_warp (#1817)
- Added gaussian_blur_op (#1450)
tfa.metrics
- Add sample_weight support to FScore metrics (#1816)
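A minimal sketch of the new sample_weight support, using F1Score as the FScore example:

```python
import tensorflow as tf
import tensorflow_addons as tfa

metric = tfa.metrics.F1Score(num_classes=3, average="macro", threshold=0.5)
y_true = tf.constant([[1, 0, 0], [0, 1, 0]], dtype=tf.float32)
y_pred = tf.constant([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]], dtype=tf.float32)
weights = tf.constant([1.0, 2.0])  # weight the second example twice as much
metric.update_state(y_true, y_pred, sample_weight=weights)
print(metric.result())
```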
tfa.losses
- Added angular distance option to triplet loss (#1730)
- Enable npairs loss on windows (#1742)
- Added float16 and bfloat16 support for TripletSemiHardLoss, TripletHardLoss and LiftedStructLoss (#1683)
- Add Soft Weighted Kappa Loss (#762)
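For the weighted kappa loss above, a minimal sketch assuming num_classes and weightage as the argument names:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Ordinal targets (e.g. ratings 0-4) suit quadratic weighting
loss = tfa.losses.WeightedKappaLoss(num_classes=5, weightage="quadratic")
y_true = tf.one_hot([0, 2, 4], depth=5)
y_pred = tf.nn.softmax(tf.random.uniform([3, 5]))  # class probabilities
print(loss(y_true, y_pred))
```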
tfa.optimizers
- Fixed serialization bug in Yogi (#1728)
Thanks to our Contributors
@Dagamies, @HauserA, @MarkDaoust, @Squadrick, @Susmit-A, @WindQAQ, @ageron, @amascia, @ashutosh1919, @autoih, @ben-arnao, @bhack, @fsx950223, @gabrieldemarmiesse, @ghosalsattam, @guillaumekln, @henry-eigen, @jharmsen, @olesalscheider, @seanpmorgan, @shun-lin, @terrytangyuan and @wenmin-wu
TensorFlow Addons v0.9.1
Release Notes
- Include CUDA kernels missing from 0.9.0
- Fix serialization for cyclical learning rate (#1623)