It would save a lot of time (and money) to run backups for multiple smaller prefixes on one cluster.
This functionality could be added by editing the step producer Python script to treat the filter argument as a comma-separated list of values, as sketched below.
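A minimal sketch of the idea, assuming the script parses its arguments with argparse and the flag is called `--filter` (both of which are assumptions, not the actual producer's interface):

```python
# Sketch only: treat a single --filter argument as a comma-separated
# list of prefixes instead of one prefix.
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag name; the real step producer's argument may differ.
parser.add_argument("--filter", required=True,
                    help="Comma-separated list of prefixes to back up")
args = parser.parse_args(["--filter", "prefix_a,prefix_b,prefix_c"])

# Split on commas and drop empty entries so a trailing comma is harmless.
filters = [f.strip() for f in args.filter.split(",") if f.strip()]
print(filters)  # ['prefix_a', 'prefix_b', 'prefix_c']
```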
For our use case, we cannot change the current format of the backups in S3. This is a problem because the S3 destination for the backups is currently passed as an argument per prefix.
A possible solution is to also pass the S3 destinations as a matching comma-separated list, but this would require bigger changes to the step producer, as well as changes to the invoking script, since the producer currently emits only a single s3path.info (see the sketch after this paragraph).
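A minimal sketch of how the matching-list approach could look, assuming a hypothetical comma-separated S3 destination argument alongside the filter list above and a simple per-prefix record in place of the single s3path.info (all assumptions, not the real producer's output format):

```python
# Sketch only: pair each prefix with its own S3 destination and emit
# one destination record per prefix instead of a single s3path.info.
import json


def produce_s3path_records(filter_csv, s3_dest_csv):
    """Return one {prefix, s3_path} record per comma-separated prefix."""
    filters = [f.strip() for f in filter_csv.split(",") if f.strip()]
    dests = [d.strip() for d in s3_dest_csv.split(",") if d.strip()]
    # The two lists must line up one-to-one; fail fast otherwise.
    if len(filters) != len(dests):
        raise ValueError("number of filters must match number of S3 destinations")
    return [{"prefix": p, "s3_path": d} for p, d in zip(filters, dests)]


# Example: two prefixes, each keeping its existing backup location in S3.
records = produce_s3path_records(
    "prefix_a,prefix_b",
    "s3://bucket/backups/a,s3://bucket/backups/b",
)
for record in records:
    print(json.dumps(record))
```

The invoking script would then need to iterate over these records rather than reading one s3path.info, which is the larger change mentioned above.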