Releases · sylvaticus/BetaML.jl
v0.12.1
v0.12.0
BetaML v0.12.0
- Added `FeatureRanker`, a flexible feature ranking estimator using multiple feature importance metrics
- New functions `kl_divergence` and `sobol_index`
- Added an option to tree-based models to ignore specific variables in prediction, by following both branches of the splits occurring on those dimensions, via the `ignore_dims` keyword of the `predict` function
- Added the `sampling_share` option to the `RandomForestEstimator` model
- DOC: added Benchmarks (but then temporarily removed due to SystemBenchmark not being installable, see this issue)
- DOC: added a `FeatureRanker` tutorial
- Bugfix on `l2loss_by_cv` for unsupervised models
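The new v0.12.0 features can be sketched as below. Note this is an illustrative sketch, not the package's documented usage: the `model` option of `FeatureRanker`, its return value, and the data are assumptions; only `FeatureRanker`, `sampling_share`, and the `ignore_dims` keyword of `predict` are named in the notes above.

```julia
using BetaML

x = rand(100, 3)
y = 2 .* x[:, 1] .+ x[:, 2] .+ rand(100)

# Rank the 3 features by importance using a tree-based model
# (constructor options here are assumptions)
fr = FeatureRanker(model=RandomForestEstimator(sampling_share=0.8))
rank = fit!(fr, x, y)

# Predict while ignoring a specific dimension: the trees follow both
# branches of any node split occurring on that dimension
m = RandomForestEstimator()
fit!(m, x, y)
ŷ = predict(m, x, ignore_dims=[3])
```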
v0.11.4
BetaML v0.11.4
- Bugfix: solved an issue in `cosine_distance` (the similarity was actually being computed instead of the distance)
v0.11.3
BetaML v0.11.3
- Bugfixes (removed old, undocumented, unused, type-piracy `findfirst` and `findall` functions)
v0.11.2
v0.11.1
BetaML v0.11.1
- Changed some keyword arguments of `AutoEncoder` and `PCAEncoder`: `outdims` => `encoded_size` and `innerdims` => `layers_size`.
This shouldn't be breaking, as I tweaked the constructors to accept the older names (until the next breaking version, 0.12)
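A minimal sketch of the renamed keywords (the values are illustrative; only the keyword names come from the notes above):

```julia
using BetaML

# New keyword names, formerly `outdims` and `innerdims`
m = AutoEncoder(encoded_size=2, layers_size=10)
```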
v0.11.0
BetaML v0.11.0
Attention: many breaking changes in this version!!
- Experimental new `ConvLayer` and `PoolLayer` for convolutional networks. BetaML neural networks work only on CPU, and even on CPU the convolutional layers (but not the dense ones) are 2-3 times slower than Flux. Still, they have some quite unique characteristics, like working with any number of dimensions and not requiring AD in most cases, so they may still be useful in some corner situations. Then, if you want to help in porting to GPU... ;-)
- Isolated MLJ interface models into their own `Bmlj` submodule
- Renamed many models in a congruent way
- Shortened the hyper-parameter and learnable-parameter struct names
- Corrected many doc bugs
- Several bugfixes
v0.10.4
BetaML v0.10.4
- Added the `AutoEncoder` model and its MLJ wrapper `AutoEncoderMLJ`, with a `m=AutoEncoder(hp); fit!(m,x); x_latent = predict(m,x); x̂ = inverse_predict(m,x_latent)` interface. Users can optionally specify the number of dimensions to shrink the data to (`outdims`), the number of neurons of the inner layers (`innerdims`), or the full details of the encoding and decoding layers and all the underlying NN options, but this remains optional.
- Adapted the `l2loss_by_cv` function to unsupervised models with `inverse_predict`
- Several bugfixes
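The interface quoted above can be expanded into a minimal usage sketch (the data and dimensions are illustrative; the calls follow the `fit!`/`predict`/`inverse_predict` pattern stated in the notes):

```julia
using BetaML

x = rand(100, 5)                  # some data with 5 features
m = AutoEncoder(outdims=2)        # shrink to 2 latent dimensions
fit!(m, x)
x_latent = predict(m, x)          # encoded representation (100×2)
x̂ = inverse_predict(m, x_latent)  # reconstruction in the original 5-dim space
```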
Merged pull requests:
- CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#61) (@github-actions[bot])
- Correct typo in `AbstractTrees.printnode` (#62) (@roland-KA)
Closed issues:
- Deprecation warning from ProgressMeter.jl (#58)
v0.10.3
BetaML v0.10.3
v0.10.2
BetaML v0.10.2
Merged pull requests:
- CompatHelper: add new compat entry for DelimitedFiles at version 1, (keep existing compat) (#55) (@github-actions[bot])