Reduce test runtimes #633
Comments
I'm fine with dropping Ubuntu 12.04, RHEL 5, and Debian 6.
Parallelization does seem to help, though the output is quite different:
#634 shows the reduced runtime: ~10 minutes vs. ~16, and hopefully less on average, since one of the jobs seemed to hang longer than the others. It gets 32 parallel threads on the Travis VMs, which makes it FAST. That's the positive. The negative is that the logs are broken up into clumps per thread. We could decrease the parallelism with some overrides, but that would simply make the logs less obfuscated, not unobfuscated. When there's a failure, finding it is difficult. (1532 when you give up) There's definitely a tradeoff here that would need to be evaluated to see if it's worthwhile.
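The mechanism behind the speedup (and the clumped logs) can be sketched in plain Ruby. This is a hypothetical illustration of what file-based parallelization, as done by the parallel_tests gem, roughly amounts to: the spec file names and worker count below are made up for the example, and this is not the gem's actual implementation.

```ruby
# Illustrative sketch: split spec files into N buckets, one per worker.
# File names and worker count are invented for this example.
spec_files = %w[
  spec/classes/collectd_spec.rb
  spec/classes/plugin_a_spec.rb
  spec/classes/plugin_b_spec.rb
  spec/classes/plugin_c_spec.rb
]
workers = 2

# Round-robin assignment keeps bucket sizes balanced.
buckets = Array.new(workers) { [] }
spec_files.each_with_index { |file, i| buckets[i % workers] << file }

# Each worker runs its bucket independently, which is why each worker's
# output arrives as its own clump in the combined log.
buckets.each_with_index do |files, n|
  puts "worker #{n}: #{files.join(' ')}"
end
```

Because each worker writes its own output stream, the combined log naturally interleaves per-worker clumps, which is the readability tradeoff described above.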
As discussed in the modulesync_config PR, I'm fine with parallel_rspec.
@rnelson0 Is there any issue with simplecov/coveralls?
The biggest problem is running all of the RSpec unit tests against every supported operating system for no good reason, especially when the code has no logic to do anything differently for specific operating systems. For instance, there are no (or few) conditional statements that generate a different configuration file if a certain fact is detected, so there is little value in running the same test case with different mocked facts. A textbook example of an increase in the number of tests not actually providing more confidence that things work as expected.
Oh I forgot about this issue. I implemented a hash with operating systems we like: https://github.com/voxpupuli/puppet-collectd/blob/master/spec/spec_helper_methods.rb#L18 |
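The whitelist idea above can be sketched in plain Ruby. This is a hedged illustration only: the hash contents and the `test_matrix` helper name are invented for the example; the real helper lives in the linked spec_helper_methods.rb.

```ruby
# Illustrative data: the full supported-OS matrix (contents invented).
SUPPORTED = {
  'Debian' => %w[6 7 8],
  'RedHat' => %w[5 6 7],
  'Ubuntu' => %w[12.04 14.04 16.04],
}

# OS releases we still want to exercise after dropping EOL platforms.
WHITELIST = {
  'Debian' => %w[7 8],
  'RedHat' => %w[6 7],
  'Ubuntu' => %w[14.04 16.04],
}

# Hypothetical helper: intersect the supported matrix with the whitelist,
# dropping any OS family with no remaining releases.
def test_matrix(supported, whitelist)
  supported.each_with_object({}) do |(os, releases), matrix|
    keep = releases & whitelist.fetch(os, [])
    matrix[os] = keep unless keep.empty?
  end
end

p test_matrix(SUPPORTED, WHITELIST)
```

Filtering the matrix in one shared helper keeps the per-spec loops unchanged while cutting the number of mocked-fact permutations each spec runs against.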
It has been pointed out that tests on collectd take a long time to run, approx 15+ minutes on Travis CI and possibly longer. This makes adjusting a PR a test of patience. Average test runtimes for VP modules are in the 3-5 minute range making this 3-5x longer. We need to investigate a way to reduce the time.
We would like to solicit some solutions. Two that have already been discussed as low-hanging fruit are:

- Parallelizing the test runs (e.g. with parallel_rspec)
- Reducing the supported OS matrix by dropping end-of-life operating systems
We are investigating the first already. The second is a bit more controversial. Per the current metadata.json, most OSes are still supported. Only Debian 6 looks to be fully past end of life. However, the Enterprise Linux 5 family is coming up on its End of Life (2017-03-31) and some others may be nearing it as well. We loop over each supported OS in most of the tests, so removing even a single OS from the matrix should have some effect on the tests.
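As a back-of-the-envelope check on "some effect", here is a small sketch. Every number in it is an assumed, illustrative value: the matrix size, the per-OS share of the runtime, and the drop count are not measured figures for this module.

```ruby
# Rough scaling model (all inputs are illustrative assumptions): if the
# per-OS unit tests dominate the runtime and each supported OS contributes
# roughly equally, dropping k of n OSes scales that portion by (n - k) / n.
total_minutes  = 16.0  # assumed current Travis runtime
per_os_share   = 0.8   # assumed fraction spent in the per-OS loops
supported_oses = 10    # assumed size of the OS matrix
dropped        = 2     # e.g. Debian 6 plus the EL5 family

per_os_time = total_minutes * per_os_share
fixed_time  = total_minutes - per_os_time
new_total   = fixed_time +
              per_os_time * (supported_oses - dropped).fdiv(supported_oses)

puts format('estimated runtime: %.1f minutes', new_total)
# prints "estimated runtime: 13.4 minutes"
```

Under these assumptions, dropping two OSes shaves only a couple of minutes, which is why the OS matrix alone is unlikely to get us to the 3-5 minute range without the other changes.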
We are proposing to drop Debian 6 support now. We also plan to drop the EL5 family at the end of March.
That leads us to the more difficult target: refactoring the tests themselves to eliminate slow or redundant cases.
We would like help from frequent contributors and users with this step. Specifically, highlighting some of the slowest tests, the easiest tests to refactor, and any unnecessary tests.