BZZ decoding of a block divides into two stages: first the arithmetically coded data is decoded (the ZP-coder pass), then the inverse Burrows-Wheeler transform is applied.

Neither stage is really parallelizable on its own, and the first stage has to be completed before the second can begin. But with several threads available one could do some pipelining, where stage 2 of an earlier block overlaps with stage 1 of a later block. This potentially improves throughput. The same idea should also be applicable to BZZ encoding.
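As a rough sketch of the pipelining idea (with placeholder stage functions standing in for the real ZP-decoding and inverse-BWT passes, which are not part of this example), one thread can run stage 1 over the blocks in order while the receiving side runs stage 2, so stage 2 of block *i* overlaps with stage 1 of block *i + 1*:

```rust
use std::sync::mpsc;
use std::thread;

// Placeholder for stage 1 (in real BZZ decoding, the ZP-coder pass).
fn stage1(block: Vec<u8>) -> Vec<u8> {
    block.iter().map(|b| b.wrapping_add(1)).collect()
}

// Placeholder for stage 2 (in real BZZ decoding, the inverse BWT).
fn stage2(intermediate: Vec<u8>) -> Vec<u8> {
    intermediate.iter().rev().cloned().collect()
}

fn decode_pipelined(blocks: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    // Producer thread runs stage 1 on each block in order and hands
    // the intermediate result off; meanwhile this thread consumes
    // results and runs stage 2, overlapping the two stages.
    let producer = thread::spawn(move || {
        for block in blocks {
            tx.send(stage1(block)).unwrap();
        }
        // tx is dropped here, which ends the receiver's iteration.
    });
    let out: Vec<Vec<u8>> = rx.into_iter().map(stage2).collect();
    producer.join().unwrap();
    out
}

fn main() {
    let blocks = vec![vec![1u8, 2, 3], vec![10, 20]];
    let decoded = decode_pipelined(blocks);
    // stage1 increments each byte, stage2 reverses the block.
    assert_eq!(decoded, vec![vec![4u8, 3, 2], vec![21, 11]]);
    println!("ok");
}
```

The channel gives in-order delivery for free, which matters because blocks must be emitted in document order even though the two stages run concurrently.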
Some reasons this might not be fruitful to implement:
- unclear whether BZZ blobs in realistic documents are big enough to benefit (and bulk BZZ compression outside the context of a DjVu document is not an application anyone cares about)
- there's a tradeoff: larger blocks give better compression but also cut into the benefit from pipelining
- if one stage takes much longer than the other (which might be true of BZZ encoding), the gain from pipelining tends to zero
- possible negative consequences for latency
- multiple scratch buffers and output buffers are needed, so memory use increases
Still, I'd like to experiment with this. Note that the low-level interface of `sndjvu_codec::bzz::dec` already has everything needed to implement pipelining.