From 362e60f06bd15778a9814b0e83a6771ce3972bf2 Mon Sep 17 00:00:00 2001
From: Logan Bhamidipaty
Date: Mon, 18 Nov 2024 14:28:08 -0800
Subject: [PATCH] logexp citation

---
 paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/paper.md b/paper.md
index 0773d76..754b119 100644
--- a/paper.md
+++ b/paper.md
@@ -41,7 +41,7 @@ EPCA is used in reinforcement learning [@Roy], sample debiasing [@debiasing], an
 
 The absence of a general EPCA library likely stems from the limited interoperability between fast symbolic differentiation and optimization libraries in popular languages like Python and C. Julia, by contrast, uses multiple dispatch which promotes high levels of generic code reuse [@dispatch]. Multiple dispatch allows `ExpFamilyPCA.jl` to integrate fast symbolic differentiation [@symbolics], optimization [@optim], and numerically stable computation [@stable_exp] without requiring costly API conversions.[^1] As a result, `ExpFamilyPCA.jl` delivers speed, stability, and flexibility, with built-in support for most common distributions (§ [Supported Distributions](#supported-distributions)) and flexible constructors for custom distributions (§ [Custom Distributions](#supported-distributions)).
 
-[^1]: Symbolic differentiation is essential for flexibly specifying the EPCA objective (see [documentation](https://sisl.github.io/ExpFamilyPCA.jl/stable/math/objectives/#2.-Using-F-and-f)). While numeric differentiation is faster, symbolic differentiation is performed only once to generate a closed form for the optimizer (e.g., `Optim.jl` [@optim]), making it more efficient in practice. `LogExpFunctions.jl` (which implements ideas from @stable_exp) mitigates overflow and underflow in exponential and logarithmic operations.
+[^1]: Symbolic differentiation is essential for flexibly specifying the EPCA objective (see [documentation](https://sisl.github.io/ExpFamilyPCA.jl/stable/math/objectives/#2.-Using-F-and-f)). While numeric differentiation is faster, symbolic differentiation is performed only once to generate a closed form for the optimizer (e.g., `Optim.jl` [@optim]), making it more efficient in practice. `LogExpFunctions.jl` [@logexp] (which implements ideas from @stable_exp) mitigates overflow and underflow in exponential and logarithmic operations.
 
 ## Principal Component Analysis
 