Implement partition store restore-from-snapshot #2353

Open · wants to merge 7 commits into base: feat/snapshot-upload

Conversation

@pcholakov (Contributor) commented Nov 22, 2024

With this change, Partition Processor startup now checks the snapshot repository
for a partition snapshot before creating a blank store database. If a recent
snapshot is available, we will restore that instead of replaying the log from
the beginning.

This PR builds on #2310

Closes: #2000
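
At a high level, the new startup path looks roughly like this. A minimal sketch: get_latest and open_partition_store_from_snapshot appear in the diff discussed below, but the surrounding control flow and the open_partition_store fallback name are illustrative assumptions, not the PR's exact code.

let partition_store = match snapshot_repository.get_latest(partition_id).await? {
    Some(snapshot) => {
        // Restore the store from the snapshot; the processor then replays
        // the log only from snapshot.min_applied_lsn + 1 onwards.
        partition_store_manager
            .open_partition_store_from_snapshot(partition_id, snapshot)
            .await?
    }
    // No snapshot found: create a blank store and replay the log from the
    // beginning (hypothetical helper name).
    None => partition_store_manager.open_partition_store(partition_id).await?,
};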


Testing

Created a snapshot by running restatectl create-snapshot -p 0, then dropped the partition column family with rocksdb_ldb drop_column_family --db=./restate-data/.../db data-0.

Running restate-server correctly restores the most recent available snapshot:

2024-11-27T18:43:40.042527Z TRACE restate_worker::partition_processor_manager::spawn_processor_task: Looking for snapshot to bootstrap partition store
2024-11-27T18:43:40.042670Z DEBUG on_asynchronous_event: restate_worker::partition_processor_manager: Partition processor was successfully created. target_run_mode=Leader partition_id=0 event=Started
2024-11-27T18:43:40.044494Z  INFO get_latest: aws_config::profile::credentials: constructed abstract provider from config file chain=ProfileChain { base: Sso { sso_session_name: Some("restate"), sso_region: "eu-central-1", sso_start_url: "https://d-99671f0c4b.awsapps.com/start", sso_account_id: Some("663487780041"), sso_role_name: Some("EngineerAccess") }, chain: [] } partition_id=0
2024-11-27T18:43:40.048798Z DEBUG on_asynchronous_event: restate_worker::partition_processor_manager::processor_state: Instruct partition processor to run as leader. leader_epoch=e56 partition_id=0 event=NewLeaderEpoch
2024-11-27T18:43:40.948928Z  INFO get_latest: aws_config::profile::credentials: loaded base credentials creds=Credentials { provider_name: "SSO", access_key_id: "ASIAZU6XUCTE3QRJBHL4", secret_access_key: "** redacted **", expires_after: "2024-11-28T02:43:39Z" } partition_id=0
2024-11-27T18:43:41.494631Z TRACE get_latest: restate_worker::partition::snapshots::repository: Latest snapshot metadata: LatestSnapshot { version: V1, partition_id: PartitionId(0), cluster_name: "localcluster", node_name: "Pavels-MacBook-Pro.local", created_at: Timestamp(SystemTime { tv_sec: 1732731695, tv_nsec: 510557000 }), snapshot_id: snap_14QnEfyWDrRj0GBbOz0XFiV, min_applied_lsn: Lsn(1212), path: "lsn_00000000000000001212-snap_14QnEfyWDrRj0GBbOz0XFiV" } partition_id=0
2024-11-27T18:43:42.390392Z  INFO get_latest: aws_config::profile::credentials: loaded base credentials creds=Credentials { provider_name: "SSO", access_key_id: "ASIAZU6XUCTE5UO2I244", secret_access_key: "** redacted **", expires_after: "2024-11-28T02:43:41Z" } partition_id=0
2024-11-27T18:43:42.556935Z DEBUG get_latest: restate_worker::partition::snapshots::repository: Getting snapshot data snapshot_id=snap_14QnEfyWDrRj0GBbOz0XFiV path="/Users/pavel/restate/restate/restate-data/snap_14QnEfyWDrRj0GBbOz0XFiV-OClIZB" partition_id=0
2024-11-27T18:43:43.400784Z  INFO aws_config::profile::credentials: loaded base credentials creds=Credentials { provider_name: "SSO", access_key_id: "ASIAZU6XUCTEW7NOHHGS", secret_access_key: "** redacted **", expires_after: "2024-11-28T02:43:42Z" }
2024-11-27T18:43:43.402483Z  INFO aws_config::profile::credentials: loaded base credentials creds=Credentials { provider_name: "SSO", access_key_id: "ASIAZU6XUCTEYPZHMLQY", secret_access_key: "** redacted **", expires_after: "2024-11-28T02:43:42Z" }
2024-11-27T18:43:43.607012Z TRACE restate_worker::partition::snapshots::repository: Downloaded snapshot data file to "/Users/pavel/restate/restate/restate-data/snap_14QnEfyWDrRj0GBbOz0XFiV-OClIZB/000398.sst" key=test-cluster-snapshots/0/lsn_00000000000000001212-snap_14QnEfyWDrRj0GBbOz0XFiV/000398.sst size=1144
2024-11-27T18:43:43.803874Z TRACE restate_worker::partition::snapshots::repository: Downloaded snapshot data file to "/Users/pavel/restate/restate/restate-data/snap_14QnEfyWDrRj0GBbOz0XFiV-OClIZB/000405.sst" key=test-cluster-snapshots/0/lsn_00000000000000001212-snap_14QnEfyWDrRj0GBbOz0XFiV/000405.sst size=1269
2024-11-27T18:43:43.804296Z  INFO get_latest: restate_worker::partition::snapshots::repository: Downloaded partition snapshot snapshot_id=snap_14QnEfyWDrRj0GBbOz0XFiV path="/Users/pavel/restate/restate/restate-data/snap_14QnEfyWDrRj0GBbOz0XFiV-OClIZB" partition_id=0
2024-11-27T18:43:43.804403Z TRACE restate_worker::partition_processor_manager::spawn_processor_task: Restoring partition snapshot partition_id=0
2024-11-27T18:43:43.804912Z  INFO restate_partition_store::partition_store_manager: Importing partition store snapshot partition_id=0 min_lsn=1212 path="/Users/pavel/restate/restate/restate-data/snap_14QnEfyWDrRj0GBbOz0XFiV-OClIZB"
2024-11-27T18:43:43.821260Z  INFO run: restate_worker::partition: Starting the partition processor. partition_id=0
2024-11-27T18:43:43.821502Z DEBUG run: restate_worker::partition: PartitionProcessor creating log reader last_applied_lsn=1212 current_log_tail=1220 partition_id=0
2024-11-27T18:43:43.821548Z DEBUG run: restate_worker::partition: Replaying the log from lsn=1213, log tail lsn=1220 partition_id=0
2024-11-27T18:43:43.821653Z  INFO run: restate_worker::partition: PartitionProcessor starting event loop. partition_id=0

github-actions bot commented Nov 22, 2024

Test Results

  7 files  ±0    7 suites  ±0   4m 21s ⏱️ -1s
 47 tests ±0   46 ✅ ±0  1 💤 ±0  0 ❌ ±0 
182 runs  ±0  179 ✅ ±0  3 💤 ±0  0 ❌ ±0 

Results for commit 08c1ad2. ± Comparison against base commit ff6d9ce.

♻️ This comment has been updated with latest results.

@tillrohrmann (Contributor) left a comment:

Thanks for creating this PR @pcholakov. The changes look really nice! I had a few minor questions. It would be great to add the streaming write before merging. Once this is resolved, +1 for merging :-)

));
let file_path = snapshot_dir.path().join(filename);
let file_data = self.object_store.get(&key).await?;
tokio::fs::write(&file_path, file_data.bytes().await?).await?;
Contributor:

Yes, it would indeed be great to write the file to disk in streaming fashion, especially once our SSTs grow.

Contributor:

Maybe something like

let mut file_data = self.object_store.get(&key).await?.into_stream();
let mut snapshot_file = tokio::fs::File::create_new(&file_path).await?;
while let Some(data) = file_data.next().await {
    snapshot_file.write_all(&data?).await?;
}

might already be enough. Do you know how large the chunks of the stream returned by self.object_store.get(&key).await?.into_stream() will be?

Comment on lines 138 to 140
let partition_store = if !partition_store_manager
    .has_partition_store(pp_builder.partition_id)
    .await
Contributor:

Out of scope for this PR: what is the plan for handling a PP that has some data but is lagging too far behind, such that starting it would run into a trim gap? Would we then drop the respective column family and restart it?

Contributor Author:

I haven't tracked down all the places that would need to be updated yet, but I believe we can shut down the PP, drop the existing CF (or just move it out of the way), and follow the bootstrap path from there.
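
Purely for illustration, that recovery flow might look something like the sketch below; every helper name here is hypothetical, not from the PR.

async fn recover_from_trim_gap(partition_id: PartitionId) -> anyhow::Result<()> {
    // Stop the lagging partition processor first (hypothetical helper).
    shut_down_partition_processor(partition_id).await?;
    // Drop - or archive - the stale column family (hypothetical helper).
    drop_partition_column_family(partition_id).await?;
    // Re-enter the normal bootstrap path, which finds and restores the
    // latest snapshot (hypothetical helper).
    bootstrap_partition_store(partition_id).await
}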

Comment on lines +246 to +327
/// Discover and download the latest snapshot available. Dropping the returned
/// `LocalPartitionSnapshot` will delete the local snapshot data files.
Contributor:

Is it because the files are stored in a temp directory? On LocalPartitionSnapshot itself I couldn't find how the files are deleted when dropping it.

Contributor:

Is the temp dir also the mechanism to clean things up if downloading it failed?

Contributor:

It seems that TempDir::with_prefix_in takes care of it since it deletes the files when it gets dropped. This is a nice solution!

Contributor Author:

This is exactly right! I'm not 100% in love with it - it works, but it's a bit magical, as deletion happens implicitly when LocalPartitionSnapshot is dropped, and that could be quite far removed from the find_latest call. Something that's hard to do with this approach is retaining the snapshot files if importing them fails, which could be useful for troubleshooting.

Contributor:

The description seems off, though. If I understand the code correctly, dropping LocalPartitionSnapshot won't delete the files. What happens via TempDir is that if an error occurs before we call snapshot_dir.into_path(), the snapshot_dir will be deleted. Afterwards it's the responsibility of whoever owns LocalPartitionSnapshot to clean things up.
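
A minimal sketch of those TempDir semantics, assuming the tempfile crate; the download_into helper is hypothetical.

use tempfile::TempDir;

async fn download_snapshot(base_dir: &std::path::Path) -> anyhow::Result<std::path::PathBuf> {
    // Everything under snapshot_dir is deleted when the TempDir value is
    // dropped - for example, if the download below returns an error...
    let snapshot_dir = TempDir::with_prefix_in("snap-", base_dir)?;
    download_into(snapshot_dir.path()).await?; // hypothetical download helper

    // ...unless ownership is relinquished: into_path() disables the automatic
    // cleanup, making deletion the caller's responsibility from here on.
    Ok(snapshot_dir.into_path())
}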

"Found snapshot to bootstrap partition, restoring it",
);
partition_store_manager
    .open_partition_store_from_snapshot(
Contributor:

In the linked code, we seem to copy the snapshot files to keep them intact. What is the reason for this? Wouldn't it be more efficient to move the files, because that wouldn't inflict any I/O cost if the snapshot directory is on the same filesystem as the target directory?

Contributor Author:

RocksDB will create hard-links if possible - so no real I/O happens as long as the snapshot files and the DB are on the same filesystem. I kept it this way because it was useful for my initial testing, but I think it would be better to set move_files = true now. In the future, we may add a config option to retain the snapshot files on import/export, and even then maybe only if an error occurs.
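
For reference, the move-vs-copy behavior is controlled through IngestExternalFileOptions; a sketch using the rust-rocksdb API, with an assumed column family name:

use rocksdb::{IngestExternalFileOptions, DB};

fn ingest_snapshot_ssts(db: &DB, ssts: Vec<std::path::PathBuf>) -> Result<(), rocksdb::Error> {
    let cf = db.cf_handle("data-0").expect("column family exists"); // assumed CF name
    let mut opts = IngestExternalFileOptions::default();
    // With move_files enabled, RocksDB moves or hard-links the SSTs into the
    // DB directory instead of copying them byte for byte.
    opts.set_move_files(true);
    db.ingest_external_file_cf_opts(cf, &opts, ssts)
}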

Contributor:

Yeah, moving by default and copying only if explicitly configured sounds like the right setting.

@pcholakov pcholakov force-pushed the feat/snapshot-upload branch 3 times, most recently from defc6ee to 7291ede Compare November 26, 2024 21:35
@pcholakov pcholakov marked this pull request as draft November 27, 2024 12:29
@pcholakov pcholakov changed the title Introduce SnapshotRepository find_latest and wire up partition restore Add SnapshotRepository::find_latest and wire up partition restore Nov 27, 2024
@pcholakov pcholakov changed the title Add SnapshotRepository::find_latest and wire up partition restore Implement partition store restore-from-snapshot Nov 27, 2024
@pcholakov pcholakov marked this pull request as ready for review November 27, 2024 18:47
@pcholakov (Contributor Author) commented:
The next revision is up and ready for review:

  • RocksDB files now moved on import
  • S3 downloads are streamed to disk
  • S3 downloads are parallel with bounded concurrency
  • Partition key range check on import
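
Taken together, the streaming and bounded-concurrency items above amount to roughly the following; a condensed sketch based on the diff excerpts quoted in the reviews below, where the limit of 8 and the helper's signature are assumptions.

use std::{path::PathBuf, sync::Arc};

use object_store::{path::Path as ObjectPath, ObjectStore};
use tokio::{io, sync::Semaphore, task::JoinSet};
use tokio_util::io::StreamReader;

async fn download_all(
    store: Arc<dyn ObjectStore>,
    keys: Vec<ObjectPath>,
    target_dir: PathBuf,
) -> anyhow::Result<()> {
    let limiter = Arc::new(Semaphore::new(8)); // illustrative concurrency bound
    let mut downloads = JoinSet::new();
    for key in keys {
        let (store, limiter, dir) = (store.clone(), limiter.clone(), target_dir.clone());
        downloads.spawn(async move {
            // The permit is held for the task's lifetime, capping in-flight downloads.
            let _permit = limiter.acquire().await?;
            let filename = key.filename().map(str::to_owned).unwrap_or_default();
            let mut reader = StreamReader::new(store.get(&key).await?.into_stream());
            let mut file = tokio::fs::File::create_new(dir.join(filename)).await?;
            // Stream chunks straight to disk rather than buffering whole objects.
            io::copy(&mut reader, &mut file).await?;
            anyhow::Ok(())
        });
    }
    // Surface both join errors (panics) and inner download errors.
    while let Some(result) = downloads.join_next().await {
        result??;
    }
    Ok(())
}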

@muhamadazmy (Contributor) left a comment:

Thanks @pcholakov for creating this PR. I left a few comments below; I hope they help.

let _permit = concurrency_limiter.acquire().await?;
let mut file_data = StreamReader::new(object_store.get(&key).await?.into_stream());
let mut snapshot_file = tokio::fs::File::create_new(&file_path).await?;
let size = io::copy(&mut file_data, &mut snapshot_file).await?;
Contributor:

Maybe add a sanity check that the downloaded file size matches what is expected from the snapshot metadata.
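
Something like the following would do it, assuming the per-file size is recorded in the snapshot metadata (expected_size is an assumed field):

let size = io::copy(&mut file_data, &mut snapshot_file).await?;
if size != expected_size {
    // Reject the download rather than ingest a truncated SST.
    anyhow::bail!(
        "downloaded {} bytes for {:?}, but snapshot metadata expects {}",
        size,
        file_path,
        expected_size
    );
}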

downloads.abort_all();
return Err(e.into());
}
Some(Ok(_)) => {}
Contributor:

Is it intentional not to handle errors returned by the download routine? The _ here is actually an anyhow::Result, which itself can be an error. See the suggestion below.

Contributor:

I think you are right and we need to handle the inner error case as well. Right now, we might accept an incomplete snapshot as complete if any of the file downloads fails with an error and not a panic.

Contributor:

We probably also want to include in the error message which file failed the download process.

Comment on lines +425 to +436
loop {
    match downloads.join_next().await {
        None => {
            break;
        }
        Some(Err(e)) => {
            downloads.abort_all();
            return Err(e.into());
        }
        Some(Ok(_)) => {}
    }
}
Contributor:

Suggested change
while let Some(result) = downloads.join_next().await {
    match result {
        Err(err) => {
            // join error (task panicked or was aborted)
        }
        Ok(Err(err)) => {
            // anyhow error from the download itself
        }
        Ok(Ok(_)) => {
            // download succeeded
        }
    }
}

@tillrohrmann (Contributor) left a comment:

Thanks for updating this PR @pcholakov. The changes look good to me. The one thing we need to fix is the handling of failed downloads, as pointed out by Azmy. Otherwise we might consider a snapshot completely downloaded while some SSTs are missing.

Comment on lines +169 to +179
if snapshot.key_range.start() > partition_key_range.start()
    || snapshot.key_range.end() < partition_key_range.end()
{
    warn!(
        %partition_id,
        snapshot_range = ?snapshot.key_range,
        partition_range = ?partition_key_range,
        "The snapshot key range does not fully cover the partition key range"
    );
    return Err(RocksError::SnapshotKeyRangeMismatch);
}
Contributor:

That's a good check :-)

};

let latest: LatestSnapshot =
    serde_json::from_slice(latest.bytes().await?.iter().as_slice())?;
Contributor:

What does iter().as_slice() do? Would &latest.bytes().await? work instead?
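
For what it's worth, bytes::Bytes derefs to [u8], so deref coercion should indeed make the simpler form equivalent:

// Equivalent to the iter().as_slice() version above.
let latest: LatestSnapshot = serde_json::from_slice(&latest.bytes().await?)?;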

trace!("Latest snapshot metadata: {:?}", latest);

let snapshot_metadata_path = object_store::path::Path::from(format!(
    "{prefix}{partition_id}/{path}/metadata.json",
Contributor:

These paths are probably used on both the write and the read path. Should we share them through a function? That would make it easier to keep the two paths in sync.
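
A shared key builder could do it; a sketch, where the function name and signature are assumptions:

fn snapshot_metadata_key(
    prefix: &str,
    partition_id: PartitionId,
    snapshot_path: &str,
) -> object_store::path::Path {
    // Single source of truth for the layout shared by upload and restore.
    object_store::path::Path::from(format!(
        "{prefix}{partition_id}/{snapshot_path}/metadata.json"
    ))
}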

Comment on lines +370 to +371
info!("Latest snapshot points to a snapshot that was not found in the repository!");
return Ok(None); // arguably this could also be an error
Contributor:

I am wondering whether this doesn't denote a "corruption" of our snapshots and therefore might even warrant a panic. Then again, we might still be lucky and not encounter a trim gap, because (a) we haven't trimmed yet, or (b) our applied index is still after the trim point - I guess this might have been the motivation for returning None here? That is actually more resilient than panicking in some cases. The downside is that we might be stuck in a retry loop if we do encounter a trim gap. Maybe raise the log level to warn so that this becomes more visible?

pub(crate) async fn get_latest(
    &self,
    partition_id: PartitionId,
) -> anyhow::Result<Option<LocalPartitionSnapshot>> {
Contributor:

Maybe something for the future: it feels as if callers might eventually be interested in why get_latest failed. I could imagine different errors being handled differently (e.g. retried because the connection to S3 failed vs. unsupported snapshot format), so anyhow::Result (while convenient) might not be the perfect fit here.
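
For example, a dedicated error type along these lines would let callers distinguish retryable from permanent failures; the variant names are illustrative, not from the PR.

#[derive(Debug, thiserror::Error)]
pub enum GetSnapshotError {
    #[error("snapshot repository unreachable: {0}")]
    Repository(#[from] object_store::Error), // likely transient: retry
    #[error("unsupported snapshot format version: {0}")]
    UnsupportedFormat(String), // permanent: fail fast
    #[error("snapshot is incomplete or corrupt: {0}")]
    Corrupt(String),
}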

let _permit = concurrency_limiter.acquire().await?;
let mut file_data = StreamReader::new(object_store.get(&key).await?.into_stream());
let mut snapshot_file = tokio::fs::File::create_new(&file_path).await?;
let size = io::copy(&mut file_data, &mut snapshot_file).await?;
Contributor:

I like this solution for copying the file in streaming fashion :-)

Comment on lines +431 to +432
downloads.abort_all();
return Err(e.into());
Contributor:

While I think it does not make a difference for correctness right now, I would still recommend draining downloads after aborting, because abort_all does not guarantee that tasks have completely finished (e.g. if something calls spawn_blocking).
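
Concretely, the error arm could drain the set before returning; a sketch of the suggested pattern:

Some(Err(e)) => {
    downloads.abort_all();
    // abort_all only requests cancellation; draining the JoinSet guarantees
    // every task (including any spawn_blocking work) has fully terminated.
    while downloads.join_next().await.is_some() {}
    return Err(e.into());
}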
