As of now our tests only check whether the resulting file is a valid GIF image.
However, we should add tests that verify the output image data matches the input image data.
To achieve this, we would need to run the generated GIF through an actual GIF decoder.
Steps:

- Find a common, well-tested decoder (e.g. ImageMagick might be an option)
- Add a compare routine to all current tests (note: multi-frame / transparent GIFs might be tricky); see the sketch after this list
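Such a compare routine could look roughly like the sketch below. This is only an illustration, assuming Pillow as the reference decoder (ImageMagick would work just as well); `assert_frames_match` and its arguments are hypothetical names, not part of the current test suite.

```python
# Sketch: decode the generated GIF and compare it frame by frame with the
# input. Assumes Pillow (pip install Pillow); all names are illustrative.
from PIL import Image, ImageSequence

def assert_frames_match(gif_path, expected_frames):
    """expected_frames: the list of PIL.Image objects the encoder was fed."""
    with Image.open(gif_path) as gif:
        # Convert to RGBA so palette indices and transparency are normalized.
        decoded = [f.convert("RGBA") for f in ImageSequence.Iterator(gif)]
    assert len(decoded) == len(expected_frames), "frame count mismatch"
    for i, (got, want) in enumerate(zip(decoded, expected_frames)):
        assert got.tobytes() == want.convert("RGBA").tobytes(), f"frame {i} differs"
```

One caveat: Pillow may composite multi-frame GIFs according to each frame's disposal method, so the expected frames might need the same compositing before comparison; that is exactly where the tricky part mentioned above comes in.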
A possible solution for verifying actual vs. expected output is to store the expected output in the repo and assert that the compressed binary data of both match exactly.
If a change to the logic alters the actual output, then the expected output must change too. This means a human will review both the logic change and the expected-output change at the same time (e.g. GitHub provides image diff tooling as part of the PR process).
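A rough sketch of that golden-file check; the paths and the `UPDATE_GOLDEN` regeneration flag are illustrative, not something the repo has today:

```python
# Sketch: byte-for-byte comparison against a checked-in "golden" GIF.
import os
from pathlib import Path

def assert_matches_golden(actual_path, golden_path):
    actual = Path(actual_path).read_bytes()
    if os.environ.get("UPDATE_GOLDEN") == "1":
        # Regenerate the golden file; the reviewer then sees the image
        # diff in the PR alongside the logic change.
        Path(golden_path).write_bytes(actual)
        return
    assert actual == Path(golden_path).read_bytes(), \
        f"{actual_path} differs from checked-in {golden_path}"
```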
Thanks for the hint; image diff tooling sounds interesting, and I will have a look into it.
For now I would start by adding simple MD5 hashes for all test result GIFs.
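For reference, a minimal sketch of such an MD5 check; the digest table is a placeholder and would be filled with hashes of real encoder output:

```python
# Sketch: compare the MD5 of each generated GIF against a known digest.
import hashlib
from pathlib import Path

EXPECTED_MD5 = {
    # Placeholder entry; replace with the digest of the real test output.
    "single_frame.gif": "d41d8cd98f00b204e9800998ecf8427e",
}

def assert_md5(gif_path):
    digest = hashlib.md5(Path(gif_path).read_bytes()).hexdigest()
    expected = EXPECTED_MD5[Path(gif_path).name]
    assert digest == expected, f"MD5 mismatch for {gif_path}: got {digest}"
```

The trade-off versus the golden-file approach is that a bare hash cannot be image-diffed in a PR, so a failing hash says only that something changed, not what.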