AdaptiveDetector works by calculating a running average of the output score of fast cut detectors. In theory, the scores produced by both new detectors (histogram and perceptual hash) could be normalized and used as inputs for AdaptiveDetector.
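To make the running-average idea concrete, here is a minimal sketch of the general technique: compare each frame's score against the average of its neighbors, so a score only counts as a cut if it spikes relative to nearby frames. This is an illustrative standalone function, not PySceneDetect's actual implementation; the function name, window size, and `min_score` floor are assumptions for the example.

```python
def adaptive_ratios(scores, window=2, min_score=1e-5):
    """Return each frame's score divided by the rolling average of its
    neighbors (illustrative sketch of adaptive-style detection)."""
    ratios = []
    for i, score in enumerate(scores):
        # Average scores in a symmetric window around frame i,
        # excluding the frame itself.
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        neighbors = [s for j, s in enumerate(scores[lo:hi], start=lo) if j != i]
        avg = sum(neighbors) / len(neighbors) if neighbors else 0.0
        # Guard against division by zero on flat (all-black) sections.
        ratios.append(score / max(avg, min_score))
    return ratios
```

Because the comparison is relative, any detector whose per-frame scores are on a roughly consistent scale could feed this function, which is what makes normalizing the histogram and hash scores attractive.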
We could change AdaptiveDetector's configuration options to allow specifying which detector supplies the frame scores, rather than duplicating the options it shares with ContentDetector, which is currently the only supported "input".
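A hypothetical sketch of what that could look like: the adaptive logic takes any score-producing callable rather than hard-coding ContentDetector, so histogram or hash scores could feed it without duplicated options. All names here (`AdaptiveWrapper`, `score_fn`) are invented for illustration and do not exist in PySceneDetect.

```python
class AdaptiveWrapper:
    """Illustrative wrapper: adaptive cut logic over any score source."""

    def __init__(self, score_fn, adaptive_threshold=3.0, window=2):
        self.score_fn = score_fn          # any callable mapping frame -> score
        self.adaptive_threshold = adaptive_threshold
        self.window = window
        self.scores = []

    def process_frame(self, frame):
        # Delegate scoring to the underlying detector.
        self.scores.append(self.score_fn(frame))

    def detect_cuts(self):
        cuts = []
        for i, s in enumerate(self.scores):
            lo, hi = max(0, i - self.window), min(len(self.scores), i + self.window + 1)
            neighbors = [x for j, x in enumerate(self.scores[lo:hi], start=lo) if j != i]
            avg = sum(neighbors) / len(neighbors) if neighbors else 0.0
            # Flag frames whose score spikes relative to their neighbors.
            if avg > 0 and s / avg >= self.adaptive_threshold:
                cuts.append(i)
        return cuts
```

The design point is that only the adaptive-specific parameters (`adaptive_threshold`, `window`) live here; everything detector-specific stays with the detector that produces the scores.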
Ideally, the methods used (rolling average vs. flash suppression) could be defined globally, or as a separate command; that work is tracked separately. In the meantime, allowing AdaptiveDetector to specify where the underlying frame scores come from might be workable. This would also avoid a lot of duplication of options.
Another option: we could integrate AdaptiveDetector into the new flash suppression filter by adding a new suppression method which accepts the required parameters. This would be more extensible and future-proof, and would allow easier switching between and comparison of methods. With this approach, AdaptiveDetector would be removed entirely, and instead each detector would specify the "mode" used to suppress flashes.
This is preferable because the appropriate adaptive ratio may differ between detectors: frame scores aren't yet normalized across all detectors, especially those with nonlinear responses to change. It would also let us provide better defaults and define them in the detectors themselves, rather than the current tight coupling, where AdaptiveDetector requires a lot of knowledge about ContentDetector.
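A rough sketch of how per-detector suppression modes and defaults could be expressed. The enum, mode names, and the specific default ratios are all assumptions for illustration; the point is only that each detector owns its defaults instead of AdaptiveDetector hard-coding ContentDetector's score scale.

```python
from enum import Enum, auto

class FlashSuppressionMode(Enum):
    NONE = auto()             # report every cut as-is
    MERGE = auto()            # merge rapid consecutive cuts into one
    ROLLING_AVERAGE = auto()  # adaptive-style ratio vs. neighboring frames

# Hypothetical per-detector defaults; the ratios below are placeholders,
# not tuned values, and would differ because score scales differ.
DETECTOR_DEFAULTS = {
    "ContentDetector":   {"mode": FlashSuppressionMode.ROLLING_AVERAGE, "ratio": 3.0},
    "HistogramDetector": {"mode": FlashSuppressionMode.ROLLING_AVERAGE, "ratio": 2.0},
    "HashDetector":      {"mode": FlashSuppressionMode.MERGE},
}
```

Under this scheme, the suppression filter reads the mode and parameters from the detector it is attached to, so adding a new detector never requires touching the adaptive logic.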