Alpha v0.16.0
This is a pretty big update with some noticeable changes and improvements, including a design overhaul and many quality-of-life features.
Major Changes
- Major UI improvements (#1305, #1326, #1302, #1313, #1314, #1330, #1340)
- The first thing you might notice in this version is a bit of a redesign. Nodes have been majorly overhauled to be much more compact. While how different it looks might be a bit of a shock at first, I guarantee you'll quickly get used to it and prefer this new look. The compactness of the nodes gives you much more space to fill out your chains, and there is just a lot less wasted space overall. Check out this comparison of the space used by the old vs. the new nodes.
- With this compactness change also came a change to sliders. In my opinion, they are now a lot easier to see and interact with, and are closer to something you'd see in Blender.
- There have also been changes to pop-up boxes. Settings had a bit of an overhaul to fix some scrolling issues, and all pop-ups now have a darker color. Alerts also got icons representing the type of alert.
- Node-on-Connection insertion (#1231, #1341)
- Something we have all always wanted in chaiNNer was the ability to insert a compatible node in between two connected nodes, without having to delete the connection and make two new ones. Well, now this is a reality. All you have to do is hold `alt` while dragging a node in the editor, and both the node and the connection line will visually show you that the node can be connected this way. It's really convenient to be able to insert nodes into existing chains like this, and it can save a lot of time.
- Informing you about invalid connections (#1299, #1306, #1308)
- The UI will now inform you when a connection you are trying to make is invalid. Before, we would tell you why a node was invalid after it had been connected and made invalid, but we never explained why some connections were not possible to begin with. This confused many users: they would try to do things like connect a GFPGAN model to Upscale Image and think something was wrong, rather than realizing the restriction was intentional. While the UI would not let them make that connection, it would not tell them why. This update adds a tooltip that explains why a connection can't be made as you attempt to drop it on a handle. This should hopefully improve the user experience and lead to less confusion.
- Increased View Image preview resolution (#1290, #1342, #1351)
- View Image just became a lot more useful. Now, instead of seeing only a 512x512 preview of your image, you will see up to 2K resolution when zoomed in all the way. This still isn't ideal for extremely large images (larger than 2K) where you need to zoom in even more than chaiNNer allows, but a feature for that will be coming in the future. For now, this is a huge improvement and makes in-line previewing much, much better. We also improved the performance of the node, so you may notice previews loading a bit faster than they used to. (A small sketch of this kind of preview downscaling follows this list.)
- Better DDS Support (Reading & writing, Windows only) (#1266, #1356)
- With this update, previously unsupported DDS texture files can now be read and written on Windows. We make use of Texconv, a small texture utility that we now bundle with chaiNNer. Unfortunately, Texconv is Windows-only, meaning Linux & macOS users will not be able to take advantage of this, but we figured it was still better to have it than not. (A sketch of driving Texconv from Python follows this list.)
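For those curious how a preview cap like the new 2K limit can work, here is a minimal sketch that downscales an image so its longest side is at most 2048 px. This is not chaiNNer's actual implementation; the 2048 px cap, function name, and OpenCV usage are just assumptions for illustration.

```python
import cv2
import numpy as np

MAX_PREVIEW_SIZE = 2048  # assumed "2K" cap, for illustration only


def downscale_for_preview(img: np.ndarray, max_size: int = MAX_PREVIEW_SIZE) -> np.ndarray:
    """Downscale an image so its longest side is at most `max_size` pixels.

    Images that are already small enough are returned unchanged.
    """
    h, w = img.shape[:2]
    longest = max(h, w)
    if longest <= max_size:
        return img
    scale = max_size / longest
    new_w, new_h = round(w * scale), round(h * scale)
    # INTER_AREA gives good quality when shrinking images
    return cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_AREA)


# Example: a 6000x4000 image becomes 2048x1365 for the preview
preview = downscale_for_preview(np.zeros((4000, 6000, 3), dtype=np.uint8))
print(preview.shape)  # (1365, 2048, 3)
```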
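And here is a rough sketch of how a texture tool like Texconv can be driven from Python to read a DDS file. The exact flags and file layout chaiNNer uses may differ; `-ft`, `-o`, and `-y` are standard Texconv options, and the `texconv.exe` path here is assumed to be on PATH (Windows only).

```python
import subprocess
import tempfile
from pathlib import Path

import cv2
import numpy as np


def read_dds_via_texconv(dds_path: str, texconv_exe: str = "texconv.exe") -> np.ndarray:
    """Convert a DDS texture to PNG with Texconv, then load it with OpenCV."""
    with tempfile.TemporaryDirectory() as tmp:
        # -ft png: output file type, -o: output directory, -y: overwrite without asking
        subprocess.run(
            [texconv_exe, "-ft", "png", "-y", "-o", tmp, dds_path],
            check=True,
            capture_output=True,
        )
        out_file = Path(tmp) / (Path(dds_path).stem + ".png")
        img = cv2.imread(str(out_file), cv2.IMREAD_UNCHANGED)
    if img is None:
        raise RuntimeError(f"Texconv did not produce a readable image for {dds_path}")
    return img
```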
Minor Changes
- CodeFormer support (#1327)
- This face upscaling architecture was a highly requested addition, and you can now use it in chaiNNer with the Face Upscale node.
- NCNN Optimizer (#1259)
- NCNN models now get optimized on load as well as on conversion, which can save a little bit of time when upscaling. For most models the savings will be tiny, but for batch upscaling it can add up to quite a lot of time saved overall.
- Show current overall execution progress on taskbar (#1343, #1365)
- Made amount input a slider in High Boost Filter (#1288)
- Added amount and threshold inputs for Unsharp Mask (#1293, #1294) (see the sketch after this list)
- Added the ability to cache ONNX TensorRT conversions (#1287)
- Added a context menu for multi-node selections (#1289)
- Added Ko-fi donation button to header (#1300)
- Improved Convert Colorspace node to be more generalized for alpha modes (#1322)
- Add .avs support to the video selection menu (#1345)
- Allow users to select alert text (#1349)
- Support node-search context menu in iterators (#1369)
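As a side note on the new Unsharp Mask inputs, this is roughly what `amount` and `threshold` control in a classic unsharp mask. It is a simplified sketch of the textbook algorithm, assuming an OpenCV/NumPy pipeline, not the node's exact code.

```python
import cv2
import numpy as np


def unsharp_mask(img: np.ndarray, radius: float = 2.0, amount: float = 1.0, threshold: float = 0.0) -> np.ndarray:
    """Sharpen `img` by adding back a scaled high-frequency component.

    amount    - how strongly the detail (img - blurred) is added back
    threshold - detail weaker than this (0-255 scale) is left untouched
    """
    img_f = img.astype(np.float32)
    blurred = cv2.GaussianBlur(img_f, (0, 0), sigmaX=radius)
    detail = img_f - blurred
    sharpened = img_f + amount * detail
    if threshold > 0:
        # Only sharpen pixels where the detail is strong enough
        mask = np.abs(detail) >= threshold
        sharpened = np.where(mask, sharpened, img_f)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```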
New Nodes
- Create Color (#1285)
- Three new nodes that allow you to create single-color images in RGB, RGBA, or Grayscale (see the sketch after this list).
- Surface Blur (#1292)
- This node blurs an image using a bilateral blur filter, also known as "surface blur."
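As a quick illustration of both new nodes, here is a minimal sketch, assuming an OpenCV/NumPy backend, that builds a single-color image and then applies a bilateral ("surface") blur. This is not the exact node implementation, just the underlying idea.

```python
import cv2
import numpy as np

# Create Color: a 256x256 single-color image (OpenCV uses BGR channel order)
color_img = np.full((256, 256, 3), (60, 120, 200), dtype=np.uint8)

# Add some noise so the blur has something to smooth out
noise = np.random.randint(0, 30, color_img.shape, dtype=np.uint8)
noisy = cv2.add(color_img, noise)

# Surface Blur: a bilateral filter smooths flat regions while preserving edges.
# 9 is the pixel neighborhood diameter; 75/75 are the color and spatial sigmas.
smoothed = cv2.bilateralFilter(noisy, 9, 75, 75)
```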
Bug Fixes
- Fixed picking a custom python path (#1283, #1284, #1315)
- Fixed FaceSR (GFPGAN, RestoreFormer) and Transformer-model (SwinIR, HAT, Swin2SR) interpolation (#1281)
- Fixed the file select window prompting you to create files when opening directories that do not exist (#1282)
- Fixed the integrated FFmpeg download error to just be a warning (#1279)
- Fixed blurry checkers in image preview (#1320)
- Fixed being able to select text in file and directory inputs (#1331)
- Fixed opening a file with chaiNNer while an existing chaiNNer instance is already open (#1344)
- Fixed exact-size ONNX models not being properly run (#1336)
- Disallow negative crop values (#1350)
Experimental Features
- While not enabled or disabled via the experimental features setting, both Apple MPS (via pytorch-nightly) and Microsoft DML (via pytorch-directml) support should theoretically work with a proper environment set up through your system Python. People who have tried to set this up have had issues, though, and ultimately I need more people to test this before I can officially say we support these things. However, if you want to test it yourself and know how to set these things up, feel free to give it a go (see the sketch below) and report back to us. (#1280, #1359)
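If you want to experiment in your own scripts, device selection in PyTorch looks roughly like this. This is a hedged sketch, not chaiNNer's code path, and it assumes you have installed pytorch-nightly (for MPS) or the separate torch-directml package (for DML) yourself.

```python
import torch


def pick_experimental_device() -> torch.device:
    """Prefer Apple MPS, then DirectML, then CUDA, falling back to CPU."""
    # Apple Silicon via the Metal Performance Shaders backend (recent PyTorch builds)
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return torch.device("mps")
    # Microsoft DirectML via the torch-directml package
    try:
        import torch_directml
        return torch_directml.device()
    except ImportError:
        pass
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")


device = pick_experimental_device()
model = torch.nn.Conv2d(3, 3, 3).to(device)  # move your model to the chosen device the same way
```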
As always, thanks to @joeyballentine, @RunDevelopment, and @theflyingzamboni.