9 changes: 9 additions & 0 deletions settings-sample.rb
@@ -17,6 +17,15 @@
# use SSL to transmit backups to S3 (a good idea)
USE_SSL = true

# LIMIT PRIVILEGES
# * When true, this script only writes new backups and creates buckets; it never
# attempts to delete data from S3. Set this to true when the credentials you
# provide lack deletion rights. This is a handy way to prevent disaster should
# someone malicious gain access to them: with correctly restricted IAM
# permissions, they won't be able to delete existing backups. Use a separate
# server and separate credentials to purge old backups.
# (default is false)
SKIP_DELETE = false

# CREATE AWS/S3 CONNECTION
AWS::S3::Base.establish_connection!(
:access_key_id => '*** YOUR CREDENTIALS HERE ***',
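The comment block above leans on "correctly restricted IAM permissions" doing the real work. As a rough illustration only (bucket name and action list are assumptions, not taken from this repo), an IAM policy for the backup credentials might grant write and list access while simply omitting `s3:DeleteObject`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:CreateBucket", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    }
  ]
}
```

Since IAM denies anything not explicitly allowed, leaving the delete action out of the policy is enough; `SKIP_DELETE = true` then keeps the script from issuing delete calls that would fail anyway.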
13 changes: 8 additions & 5 deletions simple-s3-backup.rb
Original file line number Diff line number Diff line change
Expand Up @@ -36,6 +36,7 @@
connection = Sequel.mysql nil, :user => MYSQL_USER, :password => MYSQL_PASS, :host => 'localhost', :encoding => 'utf8'
@databases = connection['show databases;'].collect { |db| db[:Database] }
@databases.delete("performance_schema") # Remove this db from the list, since it makes no sense to back up and causes some errors with --events.
@databases.delete("#mysql50#lost+found") # Skip this entry since dumping it would fail. It is not a real database, just an artifact of placing the MySQL datadir on its own ext3/4 volume.
elsif defined?(MYSQL_DBS)
@databases = MYSQL_DBS
end
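The filtering above can be sketched in isolation. This is a minimal example with a hard-coded array standing in for the live `show databases;` query (the database names are assumed sample data):

```ruby
# Stand-in for the result of `show databases;`.
databases = ["information_schema", "app_production",
             "performance_schema", "#mysql50#lost+found"]

# Drop schemas that should never be dumped: the performance_schema
# pseudo-database and the lost+found artifact from an ext3/4 datadir volume.
["performance_schema", "#mysql50#lost+found"].each do |skip|
  databases.delete(skip)
end

puts databases.inspect  # => ["information_schema", "app_production"]
```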
@@ -108,8 +109,10 @@
# Remove tmp directory
FileUtils.remove_dir full_tmp_path

# Now, clean up unwanted archives
cutoff_date = Time.now.utc.to_i - (DAYS_OF_ARCHIVES * 86400)
bucket.objects.select{ |o| o.last_modified.to_i < cutoff_date }.each do |f|
S3Object.delete(f.key, S3_BUCKET)
end
# Now, clean up unwanted archives, if allowed
unless SKIP_DELETE
cutoff_date = Time.now.utc.to_i - (DAYS_OF_ARCHIVES * 86400)
bucket.objects.select{ |o| o.last_modified.to_i < cutoff_date }.each do |f|
S3Object.delete(f.key, S3_BUCKET)
end
end
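The retention cutoff in the block above is plain epoch arithmetic: anything whose `last_modified` timestamp is older than `DAYS_OF_ARCHIVES` days gets purged. A minimal sketch, with an assumed `DAYS_OF_ARCHIVES` of 30:

```ruby
DAYS_OF_ARCHIVES = 30  # assumed value for illustration

now = Time.now.utc.to_i
cutoff_date = now - (DAYS_OF_ARCHIVES * 86_400)  # 86,400 seconds per day

# An archive 31 days old falls before the cutoff and would be deleted;
# one from yesterday survives.
old_archive   = now - (31 * 86_400)
fresh_archive = now - (1 * 86_400)
```

With `SKIP_DELETE = true` this whole comparison is skipped, so a second host holding delete-capable credentials has to run the purge instead.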