Add Bucket Lock immutable object support for Google Cloud Storage. #776

Merged: 15 commits, Sep 30, 2024
31 changes: 31 additions & 0 deletions docs/usage/immutable_snapshots.md
@@ -0,0 +1,31 @@
# Immutable Snapshots

## Overview of Immutable Objects

Several cloud providers offer functionality to create immutable objects within their storage services. Once such an object is uploaded, it cannot be modified or deleted for a set period, known as the **immutability period**. Objects protected in this way are referred to as **immutable objects**.

Currently, etcd-backup-restore supports the use of immutable objects on the following cloud platforms:

- Google Cloud Storage

## Enabling and using Immutable Snapshots with etcd-backup-restore

Etcd-backup-restore supports immutable objects at what cloud providers typically call the "bucket level": when a bucket is created, it can be configured so that every uploaded object remains immutable for a specific duration from the moment of its upload. This feature can be enabled through:

- **Google Cloud Storage**: [Bucket Lock](https://cloud.google.com/storage/docs/bucket-lock)

It is also possible to enable immutability retroactively by making appropriate API calls to your cloud provider, allowing the immutable snapshots feature to be used with existing buckets. For information on such configurations, please refer to your cloud provider's documentation.
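
For example, with Google Cloud Storage a retention policy can be added to an existing bucket and, optionally, locked. The snippet below is a minimal sketch using the official `cloud.google.com/go/storage` client; the bucket name and the 24-hour retention period are illustrative placeholders, and locking the policy makes it permanent:

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("failed to create GCS client: %v", err)
	}
	defer client.Close()

	// Placeholder bucket name; replace it with your backup bucket.
	bucket := client.Bucket("my-etcd-backup-bucket")

	// Add a retention policy so that every object stays immutable for
	// 24 hours after it is uploaded (illustrative duration).
	attrs, err := bucket.Update(ctx, storage.BucketAttrsToUpdate{
		RetentionPolicy: &storage.RetentionPolicy{RetentionPeriod: 24 * time.Hour},
	})
	if err != nil {
		log.Fatalf("failed to set retention policy: %v", err)
	}

	// Optionally lock the retention policy (Bucket Lock). Locking is
	// permanent and requires a metageneration precondition.
	err = bucket.If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).
		LockRetentionPolicy(ctx)
	if err != nil {
		log.Fatalf("failed to lock retention policy: %v", err)
	}
}
```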

The behaviour of objects uploaded before a bucket is made immutable varies among storage providers. Etcd-backup-restore manages these objects as well, garbage collecting them according to the configured garbage collection policy and each object's immutability expiry.

> Note: If immutable snapshots are not enabled, the object's immutability expiry is treated as zero, which leaves current functionality unaffected.

## Current Capabilities

Etcd-backup-restore requires no configuration for the immutability period of a bucket's objects, as this information is derived from the bucket's existing immutability settings. It also verifies an object's immutability expiry time before initiating its garbage collection.

Therefore, it is advisable to configure your garbage collection policies based on the duration you want your objects to remain immutable.
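
For illustration, the deletability check performed by the garbage collector before removing a snapshot is conceptually equivalent to the sketch below; the function and parameter names are hypothetical stand-ins rather than the actual etcd-backup-restore identifiers:

```go
package main

import (
	"fmt"
	"time"
)

// isDeletable reports whether a snapshot may be garbage collected.
// A zero expiry means immutability is not enabled, so the snapshot is
// always deletable; otherwise the immutability period must have elapsed.
func isDeletable(immutabilityExpiry, now time.Time) bool {
	if immutabilityExpiry.IsZero() {
		return true
	}
	return now.After(immutabilityExpiry)
}

func main() {
	now := time.Now().UTC()
	fmt.Println(isDeletable(time.Time{}, now))         // true: immutability not enabled
	fmt.Println(isDeletable(now.Add(time.Hour), now))  // false: still within the immutability period
	fmt.Println(isDeletable(now.Add(-time.Hour), now)) // true: immutability period has expired
}
```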

## Storage Considerations

Making objects immutable for extended periods can increase storage costs, since those objects cannot be removed before their immutability period expires. Retaining outdated snapshots beyond their usefulness does not meaningfully improve recovery capabilities. Therefore, weigh all factors before enabling immutability on a bucket, as the setting is typically irreversible once applied by the cloud provider.
4 changes: 2 additions & 2 deletions pkg/compactor/compactor_test.go
@@ -129,7 +129,7 @@ var _ = Describe("Running Compactor", func() {
Expect(err).ShouldNot(HaveOccurred())

// Check if the compacted full snapshot is really present
snapList, err := store.List()
snapList, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())

compactedSnapshot = snapList[len(snapList)-1]
@@ -210,7 +210,7 @@ var _ = Describe("Running Compactor", func() {
Expect(err).ShouldNot(HaveOccurred())

// Check if the compacted full snapshot is really present
snapList, err := store.List()
snapList, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())

compactedSnapshot = snapList[len(snapList)-1]
4 changes: 2 additions & 2 deletions pkg/miscellaneous/miscellaneous.go
@@ -62,7 +62,7 @@ func GetLatestFullSnapshotAndDeltaSnapList(store brtypes.SnapStore) (*brtypes.Sn
fullSnapshot *brtypes.Snapshot
deltaSnapList brtypes.SnapList
)
snapList, err := store.List()
snapList, err := store.List(false)
if err != nil {
return nil, nil, err
}
@@ -97,7 +97,7 @@ type backup struct {
// GetFilteredBackups returns sorted by date (new -> old) SnapList. It will also filter the snapshots that should be included or not using the filter function.
// If the filter is nil it will return all snapshots. Also, maxBackups can be used to target only the last N snapshots (-1 = all).
func GetFilteredBackups(store brtypes.SnapStore, maxBackups int, filter func(snaps brtypes.Snapshot) bool) (brtypes.SnapList, error) {
snapList, err := store.List()
snapList, err := store.List(false)
if err != nil {
return nil, err
}
2 changes: 1 addition & 1 deletion pkg/miscellaneous/miscellaneous_test.go
@@ -817,7 +817,7 @@ func NewDummyStore(snapList brtypes.SnapList) DummyStore {
return DummyStore{SnapList: snapList}
}

func (ds *DummyStore) List() (brtypes.SnapList, error) {
func (ds *DummyStore) List(_ bool) (brtypes.SnapList, error) {
return ds.SnapList, nil
}

2 changes: 1 addition & 1 deletion pkg/snapshot/copier/copier.go
@@ -123,7 +123,7 @@ func (c *Copier) copyBackups() error {

// Get destination snapshots and build a map keyed by name
c.logger.Info("Getting destination snapshots...")
destSnapshots, err := c.destSnapStore.List()
destSnapshots, err := c.destSnapStore.List(false)
if err != nil {
return fmt.Errorf("could not get destination snapshots: %v", err)
}
22 changes: 18 additions & 4 deletions pkg/snapshot/snapshotter/garbagecollector.go
@@ -41,7 +41,8 @@ func (ssr *Snapshotter) RunGarbageCollector(stopCh <-chan struct{}) {

total := 0
ssr.logger.Info("GC: Executing garbage collection...")
snapList, err := ssr.store.List()
// List all (tagged and untagged) snapshots to garbage collect them according to the garbage collection policy.
snapList, err := ssr.store.List(true)
if err != nil {
metrics.SnapshotterOperationFailure.With(prometheus.Labels{metrics.LabelError: err.Error()}).Inc()
ssr.logger.Warnf("GC: Failed to list snapshots: %v", err)
@@ -62,6 +63,7 @@
} else {
// chunksDeleted stores the no of chunks deleted in the current iteration of GC.
var chunksDeleted int
// GarbageCollectChunks returns a filtered SnapList which does not contain chunks.
chunksDeleted, snapList = ssr.GarbageCollectChunks(snapList)
ssr.logger.Infof("GC: Total number garbage collected chunks: %d", chunksDeleted)
}
@@ -140,6 +142,10 @@ func (ssr *Snapshotter) RunGarbageCollector(stopCh <-chan struct{}) {
}

if deleteSnap {
if !nextSnap.IsDeletable() {
ssr.logger.Infof("GC: Skipping the snapshot: %s, since its immutability period hasn't expired yet", nextSnap.SnapName)
continue
}
ssr.logger.Infof("GC: Deleting old full snapshot: %s %v", nextSnap.CreatedOn.UTC(), deleteSnap)
if err := ssr.store.Delete(*nextSnap); err != nil {
ssr.logger.Warnf("GC: Failed to delete snapshot %s: %v", path.Join(nextSnap.SnapDir, nextSnap.SnapName), err)
@@ -199,8 +205,8 @@ func getSnapStreamIndexList(snapList brtypes.SnapList) []int {
}

// GarbageCollectChunks removes obsolete chunks based on the latest recorded snapshot.
// It eliminates chunks associated with snapshots that have already been uploaded.
// Additionally, it avoids deleting chunks linked to snapshots currently being uploaded to prevent the garbage collector from removing chunks before the composite is formed.
// It eliminates chunks associated with snapshots that have already been uploaded, and returns a SnapList which does not include chunks.
// Additionally, it avoids deleting chunks linked to snapshots currently being uploaded to prevent the garbage collector from removing chunks before the composite is formed. This chunk garbage collection is required only for GCS.
func (ssr *Snapshotter) GarbageCollectChunks(snapList brtypes.SnapList) (int, brtypes.SnapList) {
var nonChunkSnapList brtypes.SnapList
chunksDeleted := 0
@@ -216,6 +222,10 @@ func (ssr *Snapshotter) GarbageCollectChunks(snapList brtypes.SnapList) (int, br
}
// delete the chunk object
snapPath := path.Join(snap.SnapDir, snap.SnapName)
if !snap.IsDeletable() {
ssr.logger.Infof("GC: Skipping the snapshot: %s, since its immutability period hasn't expired yet", snap.SnapName)
continue
}
ssr.logger.Infof("GC: Deleting chunk for old snapshot: %s", snapPath)
if err := ssr.store.Delete(*snap); err != nil {
ssr.logger.Warnf("GC: Failed to delete chunk %s: %v", snapPath, err)
@@ -246,9 +256,13 @@ func (ssr *Snapshotter) GarbageCollectDeltaSnapshots(snapStream brtypes.SnapList
cutoffTime := time.Now().UTC().Add(-ssr.config.DeltaSnapshotRetentionPeriod.Duration)
for i := len(snapStream) - 1; i >= 0; i-- {
if (*snapStream[i]).Kind == brtypes.SnapshotKindDelta && snapStream[i].CreatedOn.Before(cutoffTime) {

snapPath := path.Join(snapStream[i].SnapDir, snapStream[i].SnapName)
ssr.logger.Infof("GC: Deleting old delta snapshot: %s", snapPath)

if !snapStream[i].IsDeletable() {
ssr.logger.Infof("GC: Skipping the snapshot: %s, since its immutability period hasn't expired yet", snapPath)
continue
}
if err := ssr.store.Delete(*snapStream[i]); err != nil {
ssr.logger.Warnf("GC: Failed to delete snapshot %s: %v", snapPath, err)
metrics.SnapshotterOperationFailure.With(prometheus.Labels{metrics.LabelError: err.Error()}).Inc()
42 changes: 21 additions & 21 deletions pkg/snapshot/snapshotter/snapshotter_test.go
@@ -134,7 +134,7 @@ var _ = Describe("Snapshotter", func() {
defer cancel()
err = ssr.Run(ctx.Done(), true)
Expect(err).Should(HaveOccurred())
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(BeZero())
})
@@ -172,7 +172,7 @@ var _ = Describe("Snapshotter", func() {
})

It("should not take any snapshot", func() {
list, err := store.List()
list, err := store.List(false)
count := 0
for _, snap := range list {
if snap.Kind == brtypes.SnapshotKindFull {
@@ -225,7 +225,7 @@ var _ = Describe("Snapshotter", func() {
defer cancel()
err = ssr.Run(ctx.Done(), true)
Expect(err).ShouldNot(HaveOccurred())
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).ShouldNot(BeZero())
for _, snapshot := range list {
@@ -286,7 +286,7 @@ var _ = Describe("Snapshotter", func() {
ssrCtx := utils.ContextWithWaitGroup(testCtx, wg)
err = ssr.Run(ssrCtx.Done(), false)
Expect(err).ShouldNot(HaveOccurred())
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).ShouldNot(BeZero())
Expect(list[0].Kind).Should(Equal(brtypes.SnapshotKindDelta))
@@ -320,7 +320,7 @@ var _ = Describe("Snapshotter", func() {
err = ssr.Run(ssrCtx.Done(), true)

Expect(err).ShouldNot(HaveOccurred())
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).ShouldNot(BeZero())
Expect(list[0].Kind).Should(Equal(brtypes.SnapshotKindFull))
@@ -373,7 +373,7 @@ var _ = Describe("Snapshotter", func() {
defer cancel()
ssr.RunGarbageCollector(gcCtx.Done())

list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(len(expectedSnapList)))

@@ -403,7 +403,7 @@ var _ = Describe("Snapshotter", func() {
defer cancel()
ssr.RunGarbageCollector(gcCtx.Done())

list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())

incr := false
@@ -449,7 +449,7 @@ var _ = Describe("Snapshotter", func() {
Context("with all delta snapshots older than retention period", func() {
It("should delete all delta snapshots", func() {
store := prepareStoreWithDeltaSnapshots(testDir, deltaSnapshotCount)
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(deltaSnapshotCount))

@@ -460,7 +460,7 @@
Expect(err).NotTo(HaveOccurred())
Expect(deleted).To(Equal(deltaSnapshotCount))

list, err = store.List()
list, err = store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(BeZero())
})
@@ -469,7 +469,7 @@
Context("with no delta snapshots", func() {
It("should not delete any snapshots", func() {
store := prepareStoreWithDeltaSnapshots(testDir, 0)
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(BeZero())

@@ -481,7 +481,7 @@
Expect(err).NotTo(HaveOccurred())
Expect(deleted).Should(BeZero())

list, err = store.List()
list, err = store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(BeZero())
})
@@ -490,7 +490,7 @@
Context("with all delta snapshots younger than retention period", func() {
It("should not delete any snapshots", func() {
store := prepareStoreWithDeltaSnapshots(testDir, deltaSnapshotCount)
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(6))

@@ -502,7 +502,7 @@
Expect(err).NotTo(HaveOccurred())
Expect(deleted).Should(BeZero())

list, err = store.List()
list, err = store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(deltaSnapshotCount))
})
@@ -511,7 +511,7 @@
Context("with a mix of delta snapshots, some older and some younger than retention period", func() {
It("should delete only the delta snapshots older than the retention period", func() {
store := prepareStoreWithDeltaSnapshots(testDir, deltaSnapshotCount)
list, err := store.List()
list, err := store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(6))

@@ -523,7 +523,7 @@
Expect(err).NotTo(HaveOccurred())
Expect(deleted).To(Equal(3))

list, err = store.List()
list, err = store.List(false)
Expect(err).ShouldNot(HaveOccurred())
Expect(len(list)).Should(Equal(3))
})
@@ -565,7 +565,7 @@ var _ = Describe("Snapshotter", func() {
Expect(err).NotTo(HaveOccurred())
Expect(chunkCount).To(BeZero())

list, err := store.List()
list, err := store.List(false)
Expect(err).NotTo(HaveOccurred())
Expect(len(list)).To(BeZero())

@@ -585,7 +585,7 @@
Expect(err).NotTo(HaveOccurred())
Expect(chunkCount).To(Equal(4))

list, err := store.List()
list, err := store.List(false)
Expect(err).NotTo(HaveOccurred())
Expect(len(list)).To(Equal(4))

@@ -621,7 +621,7 @@ var _ = Describe("Snapshotter", func() {
Expect(err).NotTo(HaveOccurred())
Expect(chunkCount).To(Equal(9))

list, err := store.List()
list, err := store.List(false)
Expect(err).NotTo(HaveOccurred())
Expect(len(list)).To(Equal(10))

@@ -658,7 +658,7 @@ var _ = Describe("Snapshotter", func() {
Expect(err).NotTo(HaveOccurred())
Expect(chunkCount).To(Equal(9))

list, err := store.List()
list, err := store.List(false)
Expect(err).NotTo(HaveOccurred())
Expect(len(list)).To(Equal(10))

@@ -695,7 +695,7 @@ var _ = Describe("Snapshotter", func() {
Expect(err).NotTo(HaveOccurred())
Expect(chunkCount).To(BeZero())

list, err := store.List()
list, err := store.List(false)
Expect(err).NotTo(HaveOccurred())
Expect(len(list)).To(Equal(3))

@@ -1266,7 +1266,7 @@ func addObjectsToStore(store brtypes.SnapStore, objectType string, kind string,

// getObjectCount returns counts of chunk and composite objects in the store
func getObjectCount(store brtypes.SnapStore) (int, int, error) {
list, err := store.List()
list, err := store.List(false)
if err != nil {
return 0, 0, err
}
2 changes: 1 addition & 1 deletion pkg/snapstore/abs_snapstore.go
@@ -284,7 +284,7 @@ func (a *ABSSnapStore) Fetch(snap brtypes.Snapshot) (io.ReadCloser, error) {
}

// List will return sorted list with all snapshot files on store.
func (a *ABSSnapStore) List() (brtypes.SnapList, error) {
func (a *ABSSnapStore) List(_ bool) (brtypes.SnapList, error) {
prefixTokens := strings.Split(a.prefix, "/")
// Last element of the tokens is backup version
// Consider the parent of the backup version level (Required for Backward Compatibility)
2 changes: 1 addition & 1 deletion pkg/snapstore/failed_snapstore.go
@@ -32,7 +32,7 @@ func (f *FailedSnapStore) Save(snap brtypes.Snapshot, rc io.ReadCloser) error {
}

// List will list the snapshots from store
func (f *FailedSnapStore) List() (brtypes.SnapList, error) {
func (f *FailedSnapStore) List(_ bool) (brtypes.SnapList, error) {
var snapList brtypes.SnapList
return snapList, fmt.Errorf("failed to list the snapshots")
}