Commit 9303105

updated regularization section of paper

FlyingWorkshop committed Nov 19, 2024
1 parent c45df45 commit 9303105
Showing 2 changed files with 5 additions and 4 deletions.
2 changes: 1 addition & 1 deletion paper.bib
@@ -174,7 +174,7 @@ @article{Bregman
doi = {10.1016/0041-5553(67)90040-7},
url = {https://www.sciencedirect.com/science/article/pii/0041555367900407},
author = {L.M. Bregman},
- abstract = {IN this paper we consider an iterative method of finding the common point of convex sets. This method can be regarded as a generalization of the methods discussed in [1–4]. Apart from problems which can be reduced to finding some point of the intersection of convex sets, the method considered can be applied to the approximate solution of problems in linear and convex programming.}
+ abstract = {In this paper we consider an iterative method of finding the common point of convex sets. This method can be regarded as a generalization of the methods discussed in [1–4]. Apart from problems which can be reduced to finding some point of the intersection of convex sets, the method considered can be applied to the approximate solution of problems in linear and convex programming.}
}

@misc{logexp,
7 changes: 4 additions & 3 deletions paper.md
@@ -138,7 +138,7 @@ $$

### Regularization

- To ensure the optimum converges, we introduce a regularization term
+ Following @EPCA, we introduce a regularization term to ensure the optimum converges

$$\begin{aligned}
& \underset{\Theta}{\text{minimize}}
@@ -147,8 +147,9 @@ $$\begin{aligned}
& & \mathrm{rank}\left(\Theta\right) = k
\end{aligned}$$

- where $\epsilon > 0$ and $\mu_0 \in \mathrm{range}(g)$.
+ where $\epsilon > 0$ and $\mu_0 \in \mathrm{range}(g)$.[^2]
+
+ [^2]: In practice, we allow $\epsilon \geq 0$, because special cases of EPCA like traditional PCA are well-known to converge without regularization.
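As a rough illustration of how a regularized objective of this shape can be evaluated, the sketch below assumes the Poisson case, where the Bregman divergence induced by the link $g(\theta) = e^{\theta}$ is the generalized KL divergence. All names here (`poisson_epca_loss`, the factors `A` and `V`, the defaults for `mu0` and `eps`) are illustrative, not the package's API, and the rank constraint is imposed implicitly by factoring $\Theta = AV$:

```python
import numpy as np

def poisson_epca_loss(X, A, V, mu0=1.0, eps=1e-2):
    """Sketch of a regularized Poisson EPCA objective.

    Theta = A @ V enforces rank(Theta) <= k implicitly, where k is
    the inner dimension of A and V.  The data term and the
    regularizer use the same generalized KL divergence, with the
    regularizer pulling the means g(Theta) toward mu0.
    """
    Theta = A @ V            # low-rank natural parameters
    G = np.exp(Theta)        # mean parameters g(Theta) for Poisson

    def gkl(x, g):
        # generalized KL divergence: sum x log(x/g) - x + g
        x = np.asarray(x, dtype=float)
        return float(np.sum(np.where(x > 0, x * np.log(x / g), 0.0) - x + g))

    return gkl(X, G) + eps * gkl(mu0, G)
```

At `Theta = 0` the Poisson means are all one, so the loss vanishes for all-ones data; a gradient-based optimizer over `A` and `V` would then minimize this quantity.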

### Example: Poisson EPCA

@@ -158,7 +159,7 @@ This is useful in applications like belief compression in reinforcement learning

![Left - KL Divergence for Poisson EPCA versus PCA. Right - Reconstructions from the models.](./scripts/combo.png)

- For a larger environment with $200$ states, PCA struggles even with $10$ basis.
+ For a larger environment with $200$ states, PCA struggles even with $10$ basis components.

# API
