
Releases: vmstan/gravity-sync

v4.0.7

15 Jan 15:24
50c786e

Fixes bug that prevented properly calculating hashes of the DNS CNAME file, resulting in always considering there are changes to sync (#431) by @alexperrault

Full Changelog: v4.0.6...v4.0.7

v4.0.6

07 Jan 18:35
ccb8962

Updates to respect configured options (#413) by @emroch

  • remote pihole detection uses configured docker/podman binaries
  • file permission updates use configured local/remote file owners

v4.0.5

10 Jul 16:46
ad783f1


Full Changelog: v4.0.4...v4.0.5

v4.0.4

03 May 15:53
15867c0

404 DHCP Now Found

Gravity Sync will now sync static DHCP assignments. This is handled by managing the 04-pihole-static-dhcp.conf file. No additional work is required: if you're using your Pi-hole as a DHCP server, this file will exist, and Gravity Sync will see it and act accordingly.
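For reference, the file holds standard dnsmasq dhcp-host entries. A minimal look at it, assuming Pi-hole's usual /etc/dnsmasq.d location and with illustrative MAC/IP/hostname values:

    # Static DHCP assignments as Pi-hole writes them (example values only):
    cat /etc/dnsmasq.d/04-pihole-static-dhcp.conf
    # dhcp-host=AA:BB:CC:DD:EE:FF,192.168.1.50,nas
    # dhcp-host=DE:AD:BE:EF:00:01,192.168.1.51,printer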

See the link above for more information on scoping for these changes.

v4.0.3

29 Apr 15:04
53bc721

🐞 #339 Fixes a bug in the config script where inputting a blank value wasn't properly handled, reported by @chris2fourlaw
🪲 #335 Fixes a bug in the importer where non-standard SSH keyfiles were ignored, reported by @zzen

🪵 The hash log for CNAME replication has been renamed. After upgrading, your first replication event will include CNAMEs even if nothing has changed. This does not indicate a problem.

v4.0.2

13 Apr 14:46
eec49c9

A couple more bug fixes and a tweak to automation:

  • #319 adds some customization features to the gravity-sync auto function, with the help of @greggitter

Please see https://github.com/vmstan/gravity-sync/wiki/Automation for more.

v4.0.1

12 Apr 19:34
73a6e00

A few minor issues fixed up, as reported by early adopters. Please see the linked issues for more information.

  • #324 adds ability to specify target SSH port in gravity-sync config command
  • #322 fixed by checking the status of gravity-sync.timer rather than the service
  • #321 fixed by checking the status of gravity-sync.timer rather than the service
  • #320 fixes version output
  • #318 fixes SSH key permissions after migration
  • #326 adds custom SSH port to target info

Thanks to @jackc94 @Ksdmg @bakerboy448 for your feedback!

Full Changelog: v4.0.0...v4.0.1

v4.0.0

11 Apr 22:10
41ecce4

The Re-Release

Gravity Sync 4 is an extensive rewrite of the application. There is not a single function, variable, or chunk of code that has gone untouched.

In the past I've always tried to make sure that anyone who wanted to update would not have to do much more than occasionally reapply automation jobs to take advantage of new features. The effect was that I stayed reliant on flawed or shortsighted decisions I'd made two years ago, and every new feature had to be built on top of (or within) the structure of those decisions.

Gravity Sync 4 is a fresh start.

Beta Testers

  1. Thank you to everyone who tested 4.0 and provided feedback.
  2. To move from the beta track to the production version, run gravity-sync dev to toggle back over.

Rethought Installation

The Gravity Sync installer has been rewritten to accommodate both new installations and upgrades. (Upgrades only after a successful migration to 4.0; see below.) Previously it would detect existing installs and refuse to complete. There is also a dedicated update script that can be used to fix a broken deployment by executing bash /etc/gravity-sync/.gs/update.sh, but it should only be needed if gravity-sync update has failed for some reason.

Rearchitected Deployment

The architecture and installation have been simplified so that Gravity Sync is installed on the pair of Pi-hole boxes you want to use it with. There is no longer a primary or secondary Pi-hole; both sides run replication jobs as peers. When either side detects changes to your Pi-hole configuration, they are sent to the peer.

Automation jobs are set to run at rotating, randomized times to avoid collisions. To prevent continuous re-syncing, the hashes that were previously saved to only one instance are now also sent to the peer at the end of a successful replication event.

With automation fully enabled, changes to either Pi-hole will usually be reflected on both sides within 5 minutes. However, if you're making large changes that you want to monitor, or are just feeling impatient, you can still manually run a replication job at any time.
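As a sketch, assuming the bare command still kicks off a smart sync the way ./gravity-sync.sh did in 3.x:

    # Run a replication job on demand from either peer:
    gravity-sync
    # Or check whether the peers differ without replicating:
    gravity-sync compare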

If you only want to run Gravity Sync on one Pi-hole, that's still fine too. An optional "peerless" mode will detect the absence of a configuration at the remote side and bypass the hash sync. A warning will appear in the Gravity Sync console output suggesting you configure the remote side for full functionality, but it can be ignored. (This should also allow folks who replicate between more than two Pi-hole instances to have multiple nodes syncing to a single source of truth.)

Relocated Executables

Gravity Sync previously required itself to live in the user's $HOME directory. Alternatively, if it was installed as the root user, Gravity Sync could live in any self-contained folder on the system.

In 4.0, Gravity Sync now lives in /usr/local/bin (alongside pihole) and the configuration files and job logs are contained in the /etc/gravity-sync folder. This change to the configuration folder should make containerizing Gravity Sync easier in the future. (hint, hint)

The main script has been renamed from gravity-sync.sh to gravity-sync and, due to its relocation, has the nice effect of letting you execute just gravity-sync from anywhere on the system. Said another way, you no longer have to run ./gravity-sync.sh from the Gravity Sync folder in $HOME anytime you want to run it manually.
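A quick way to confirm the new layout after upgrading (the version subcommand is assumed here, based on the 4.0.1 notes):

    # The executable now lives alongside pihole:
    ls -l /usr/local/bin/gravity-sync
    # Configuration and job logs are kept here:
    ls /etc/gravity-sync/
    # And it can be invoked from anywhere on the system:
    gravity-sync version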

Resimplified Backups

Along with the removal of the dedicated backup and restore features in 3.5, the backup functions in 4.0 are simplified further: backup files are no longer timestamped, so a job that stops completing successfully will not keep writing new backup files on every run until the hard drive fills up.

  • All data backup operations have moved to the /tmp folders on both the local and remote Pi-hole.
  • All backup files have the extension .gsb appended to them during processing. This replaces the various .backup, .pull, .push files of previous versions.

The amount of write activity, network traffic, and overall execution time has been reduced by cutting down on the number of backup data copies that are created, especially during push operations.
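If you want to watch the transient backup files during a replication event, something like this works (exact file names aren't guaranteed):

    # List any .gsb backup files present while a job runs:
    ls -l /tmp/*.gsb 2>/dev/null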

Reduced Dependence

The dependency on a standalone copy of SQLite3 has been removed. Gravity Sync now uses the SQLite3 instance that is built into Pi-hole's FTL component (pihole-FTL sql), which also removes an impediment for folks wanting to install Gravity Sync on systems that couldn't install SQLite3 on their own.
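For example, the embedded instance can be queried directly, much as Gravity Sync now does internally (the database path and query below are illustrative):

    # Query Pi-hole's gravity database via FTL's built-in SQLite3,
    # with no standalone sqlite3 package required:
    pihole-FTL sql /etc/pihole/gravity.db "SELECT COUNT(*) FROM adlist;"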

All cron-related automation functions have been removed. The systemd-based replication function introduced in 3.7 carries forward, with further enhancements. This means packages like 'cronie' are no longer needed on systems that lack a built-in crontab.

Really New

In addition to everything above there are a few new functions.

  • gravity-sync purge now uninstalls and completely removes Gravity Sync from the system. It previously only reset the installation to new.
  • gravity-sync disable will fully stop and remove all automation jobs in systemd.
  • gravity-sync monitor will let you watch the real time status of replication jobs.

If DNS Records and CNAME Records are detected on either side, they are replicated. Previously these functions were opt-in, only because they were added after the initial release of Gravity Sync in 2020.

Required to Upgrade

There are two ways to upgrade to Gravity Sync 4.

Fresh Install

curl -sSL http://gravity.vmstan.com | bash

Using the curl-to-bash one-liner above, you will deploy a new install of Gravity Sync 4 to both of your existing Pi-holes. Any current Gravity Sync configuration will be ignored.

You will need to generate a new configuration file using gravity-sync config but because the configuration workflow has been completely redone, in most cases you will only need to provide the IP address & username/password for your remote Pi-hole. Gravity Sync will automatically detect if you have a containerized instance of Pi-hole and prompt for the name of the instance(s). Once this is provided it will automatically detect the bind mounts in use for Pi-hole's two /etc directories. It's possible to have your new configuration generated in less than a minute.
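The bind mount detection is roughly equivalent to asking Docker for the container's mounts yourself, for example (assuming a container named pihole):

    # Show source -> destination for each mount of the "pihole" container:
    docker inspect pihole --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}'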

Gravity Sync now uses a dedicated SSH key for connections to remote Pi-hole instances. This key is stored as /etc/gravity-sync/gravity-sync.rsa and should be used only for Gravity Sync. The configuration utility will create the new key file. Existing default id_rsa key associations with remote servers can be safely removed unless they're associated with some other process.
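To verify the dedicated key is in place and check its fingerprint:

    # The key created by gravity-sync config:
    ls -l /etc/gravity-sync/gravity-sync.rsa
    ssh-keygen -l -f /etc/gravity-sync/gravity-sync.rsa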

Migrate Existing Configuration

From your existing Gravity Sync folder, running any version from 3.1 to 3.7, run ./gravity-sync.sh update and the version 4.0 code will temporarily be pulled into the legacy folder. At this point ./gravity-sync.sh becomes the migration tool that moves you to the version 4.0 format.

Run ./gravity-sync.sh again and a new Gravity Sync 4 installation will be laid down for you; your existing configuration, logs, SSH key, and hashes will be migrated to the fresh install. Legacy backup files are not retained. At the end of a successful script execution the legacy Gravity Sync folder is deleted, so you may need to run cd ~ to get out of the now-deleted folder.
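Condensed, the migration path looks like this (assuming the legacy 3.x install lives in ~/gravity-sync; yours may differ):

    cd ~/gravity-sync         # your legacy Gravity Sync folder
    ./gravity-sync.sh update  # pulls the 4.0 code into the legacy folder
    ./gravity-sync.sh         # now acts as the migration tool
    cd ~                      # the legacy folder is deleted on success
    gravity-sync compare      # confirm the migration worked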

Many of the advanced configuration options that were termed "Hidden Figures" have been removed, as they're either no longer supported or the logic that made them necessary has changed. Only the configuration settings relevant to 4.0 will be transferred.

Your automation settings are reset to disabled by default. You will want to run gravity-sync compare to make sure your migration was successful.

At this point you will want to perform a fresh install of Gravity Sync on your other Pi-hole to enable peer mode and hash sharing. Once that is successful enable automation on both nodes using gravity-sync auto.

v3.7.0

29 Mar 21:11
ff32465

The Daemon Release

This release features a different method of managing automated replication: crontab has been deprecated in favor of scheduled tasks using systemd timers. To switch to the new replication method, just run ./gravity-sync.sh auto again, and it will remove the existing crontab settings in favor of systemd.

Using systemd allows more options in timing replication jobs, and is especially useful for viewing the logs of previous automation jobs via standard journalctl -u gravity-sync commands. Whereas automation jobs previously ran at 5, 10, 15, or 30 minute intervals within an hour, the new systemd tasks are configured by default to run every 5-10 minutes after being started. (This is 5 minutes + a random timer < 5 minutes.) Replication is attempted for the first time 2 minutes after the system is powered on. This bit of randomization should let users with multiple secondary Pi-holes syncing from a single primary stagger the load on the primary.

  • Users wishing to adjust the timing should review the /etc/systemd/system/gravity-sync.timer file and adjust the values for OnBootSec, OnUnitInactive, and RandomizedDelaySec accordingly (see the sketch after this list). There is currently no method within the Gravity Sync interface to adjust these.
  • Edits made to your templates/gravity-sync.timer file are not persistent after updating Gravity Sync, but the timers applied to your individual system in /etc/systemd/system/gravity-sync.timer will not change unless you run ./gravity-sync.sh auto again after updating.
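For reference, the stanza to adjust looks something like the following; systemd's actual directive name is OnUnitInactiveSec, and the values shown simply restate the defaults described above:

    # /etc/systemd/system/gravity-sync.timer -- [Timer] values to adjust:
    #   OnBootSec=120            first attempt, 2 minutes after boot
    #   OnUnitInactiveSec=300    rerun 5 minutes after the last run ends
    #   RandomizedDelaySec=300   plus up to 5 random minutes
    sudo systemctl edit --full gravity-sync.timer  # edit the unit in place
    sudo systemctl daemon-reload                   # pick up the change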

Previously, running ./gravity-sync.sh auto and setting the value to 0 would disable the cronjob, and running it again would then prompt for a new value. Now the AUTOMATE function will only reapply the latest automation job settings. If you would like to temporarily pause automated replication, you can use systemctl stop gravity-sync.timer and then systemctl start gravity-sync.timer to begin again.

  • If you would like to completely disable automated replication, you can use systemctl disable gravity-sync for this task. Running ./gravity-sync.sh auto again will enable and start replication again.
  • If you would like to continue using or managing automation through crontab, these functions have been moved to ./gravity-sync.sh cron which was previously used to display the output of the previous cronjob.
  • Users can now just manually cat logs/gravity-sync.cron to view the output of the previous cronjob.
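For example, a few standard journalctl invocations for reviewing past automation runs:

    # Replication logs since yesterday:
    journalctl -u gravity-sync --since yesterday
    # Follow replication output live:
    journalctl -u gravity-sync -f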

v3.6.3

28 Mar 19:18
0875507
  • Adds logic to the SMART task to only restart the Pi-hole DNS service if replication of Local DNS or CNAME records takes place within a replication task.
  • Adds the local environment's $PATH to the crontab when automation tasks are enabled, to make sure that Pi-hole services can properly restart.
  • If you are running into issues with your Pi-hole storage filling up, it's likely due to the Pi-hole DNS service not restarting and the script failing. Please update to 3.6.3 and recreate your automation task using ./gravity-sync.sh auto