Tests/CI/Tooling #181

Open
AdrianDeWinter opened this issue Nov 11, 2024 · 4 comments
@AdrianDeWinter

I thought having a proper issue for this might be better than continuing to annoy Sim on Discord.

I took a look at building up a proper testing toolchain, and from what I see, there are a couple of potential goals for this:

  • Compatibility testing with new Blender versions
    Find out early and easily what is or isn't broken, without relying on the apparently lacklustre change-notes from Blender
  • Testing fixes and new features
    Classic TDD; how far to take that is a matter of philosophy
  • Regression testing
    Make sure nothing breaks (unnoticed)

And two rather different environments to do this in:

  • Running tests interactively in Blender from VSCode
    Most easily accomplished by just running through a list of function references (see the first sketch after this list).
    Would be great if this could happen through VSCode's testing interface, but getting that into Blender's bundled Python looks like a gigantic can of worms...
  • Running tests in an automated fashion, maybe even a proper CI
    The blender_addon_tester package can download any Blender version, set up pytest inside it, and then run through the tests like any other pytest CLI invocation. Wrapped in a short script (see the second sketch below), this is an easy way to invoke the test suite without needing to attach a debugger to a Blender instance. I even tested it on a headless Linux container :D
    This could also be automated via GitHub actions (limited to the main branch to conserve minutes, for example)
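
Here's a minimal sketch of the interactive approach, assuming tests are plain functions collected in a list; every name here is a hypothetical placeholder, and `bpy` is only available because the script runs inside Blender:

```python
# Minimal in-Blender test runner sketch: walk a list of function
# references, run each, and report pass/fail. All names are placeholders.
import traceback

def test_default_cube_exists():
    import bpy
    # Example assertion against Blender's default startup scene.
    assert "Cube" in bpy.data.objects

TESTS = [test_default_cube_exists]  # extend with more function references

def run_all(tests):
    failures = 0
    for test in tests:
        try:
            test()
            print(f"PASS: {test.__name__}")
        except Exception:
            failures += 1
            print(f"FAIL: {test.__name__}\n{traceback.format_exc()}")
    print(f"{len(tests) - failures}/{len(tests)} passed")

run_all(TESTS)
```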
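
And a sketch of the wrapper script for the automated route, following the usage pattern from the blender_addon_tester README; the addon file name and Blender revision are assumptions to be replaced with the project's real values:

```python
# Hypothetical wrapper around blender-addon-tester: downloads the
# requested Blender build, installs pytest into its bundled Python,
# and runs the test suite against the addon.
import sys
import blender_addon_tester as BAT

def main():
    addon = "my_addon.py"     # placeholder: path to the addon under test
    blender_revision = "4.2"  # placeholder: Blender version to test against
    try:
        exit_code = BAT.test_blender_addon(addon=addon, blender_revision=blender_revision)
    except Exception as e:
        print(e)
        exit_code = 1
    sys.exit(exit_code)

if __name__ == "__main__":
    main()
```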

I'll open a Proof-of-concept PR and link it to this Issue.

@Simarilius-uk
Contributor

You don't have to worry about annoying me, I'm happy to discuss whatever. This sounds pretty promising. Do you know your way around GitHub Actions then? I managed to set up one to zip up the build for each commit, but getting one to do it right for releases has evaded me.

@AdrianDeWinter
Author

AdrianDeWinter commented Nov 13, 2024

Alright :)
Not with GitHub Actions, but I work with GitLab CI/CD every day. Same thing, similar syntax, different name.

I took a look at the actions you have defined, and tried them on my fork.
The simple_bk and simple_release actions (the one with your customized version of blender-addon-release) both ran successfully; the latter also successfully created a draft release (although GitHub's UX for finding that draft later on is terrible).
The simple_build_and_release action is missing its triggers.

Looking at the resulting .zips and comparing them to the actual releases here, it seems the structure is wildly different, and what is and isn't included also differs. Should I treat the current releases as the golden sample and get the GH Actions to build the same thing?

Regarding the tests, would you prefer to have simple scripts you can run manually inside Blender, or the full testing suite?
Having pytest available via the latter option simplifies testing significantly, and gets us code coverage too (and can be automated via GH Actions :) )
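
For illustration, a test under that setup might look something like this; it's a sketch assuming tests execute inside Blender's bundled Python (so `bpy` is importable there), and the addon module name is a placeholder:

```python
# Hypothetical pytest test file (e.g. tests/test_addon.py) as it might
# run under blender-addon-tester, where bpy is importable.
import bpy
import addon_utils
import pytest

@pytest.fixture
def clean_scene():
    # Reset to an empty factory scene before each test.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    yield

def test_addon_enables(clean_scene):
    # Placeholder: replace "my_addon" with the actual addon module name.
    mod = addon_utils.enable("my_addon")
    assert mod is not None
```

Coverage could then come from something like pytest-cov layered on top.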

@Simarilius-uk
Contributor

Simarilius-uk commented Nov 13, 2024

The zips that simple_bk builds on every commit are right. I had all kinds of trouble getting the release one to build the same thing, was buggering it up every time, so I gave up and have just been doing it manually, copying the one from the last commit.

@AdrianDeWinter
Author

Alright, I'll take a crack at it on the weekend.
