Storing Backups

One of the most common uses for storage services is storing backups. To facilitate this, Selectel Cloud Storage is compatible with a wide range of backup utilities.

To store a backup, open Cloud Storage in the control panel and create a private container (see our instructions). Private containers require authentication.

When uploading files to storage, it's best to create an additional user with minimal privileges; this acts as a fail-safe, since the primary user always has full access.

Cyberduck File Manager

Cyberduck is a popular file transfer client for Windows, macOS, and Linux that supports the most popular protocols, including FTP, SFTP, WebDAV, and Amazon S3. The software can also be used to manage various cloud storage services.

Installing Cyberduck GUI

Windows

We’ve created a Cyberduck profile especially for our Cloud Storage clients. This profile connects over Swift instead of FTP, which makes working with the storage faster and more reliable.

To load the profile:

  1. Download our profile.
  2. Install the client, copying the Selectel profile to the profiles folder (the default location for this is C:\Program Files (x86)\Cyberduck\profiles).
  3. Launch Cyberduck and click Open Connection.
  4. Choose the Selectel Cloud Storage profile. 
  5. Fill in the Selectel ID:Username field with:
    • Selectel ID - your login to the Selectel control panel
    • Username - your Cloud Storage user name

    Please note that the user name and password are set by the account owner and can be found in the control panel under Cloud Storage → Users.

  6. Enter your password in the Password field.
  7. Click Connect.

A connection to Cloud Storage will be established and a list of your available containers will be displayed in the new window.

MacOS

Cyberduck GUI for macOS can be installed from the App Store.

  1. Download our profile and choose Selectel Cloud Storage as your Cyberduck connection.
  2. Fill in the Selectel ID:Username field with:
    • Selectel ID - this is your login to the Selectel control panel
    • Username - this is your Cloud Storage user name
  3. Enter your password in the Password field.
  4. Click Connect.

This will open a window displaying your available containers.

Linux

There is no Cyberduck GUI available for Linux.

Installing Cyberduck Console Clients

Windows

Cyberduck CLI can be installed with the Chocolatey package manager:

choco install duck

Alternatively, download the latest version of the MSI installer from https://dist.duck.sh/. More information can be found at https://trac.cyberduck.io/wiki/help/en/howto/cli#Windows.


To install the Selectel profile, copy the profile to the profiles folder (the default location for this is C:\Program Files (x86)\Cyberduck CLI\profiles).

MacOS

Cyberduck CLI for MacOS is installed using the Homebrew package manager:

brew install duck

Linux

Here we’ll describe the installation process for Ubuntu 16.04; for other distributions, refer to the official documentation.

To install Cyberduck:

1. Add the necessary repositories:

echo 'deb https://s3.amazonaws.com/repo.deb.cyberduck.io nightly main' | sudo tee -a /etc/apt/sources.list.d/cyberduck.list
echo 'deb https://s3.amazonaws.com/repo.deb.cyberduck.io stable main' | sudo tee -a /etc/apt/sources.list.d/cyberduck.list

2. Add a key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys FE7097963FEFBE72

3. Run the following commands:

sudo apt-get update
sudo apt-get install duck

Basic Console Operations

Download the Selectel profile:

wget https://static.selectel.ru/kb/selectel-storage.cyberduckprofile

Create a profiles directory:

mkdir -p ~/.duck/profiles

Move the Selectel profile to the profiles directory:

mv selectel-storage.cyberduckprofile ~/.duck/profiles/

All of the commands for working with Selectel Cloud Storage follow this general pattern:

duck [option] selectel://[container]/[path] --username [Selectel ID]:[username] --password [password]

To view a list of all possible arguments, enter:

duck --help

Retrieving a Container’s File List

To retrieve a list of files saved in a container, use the -l option (or --list):

duck -l selectel://[container] --username [Selectel ID]:[username] --password [password]

The printout will look like this:

1.pdf
2.jpg
3.png

Please note that some Linux systems won't display filenames with non-Latin characters.

Downloading a File

To download a file from the cloud, we use the command:

duck -d selectel://[container]/[file] [local path] --username [Selectel ID]:[username] --password [password]

Opening and Editing a File on the Local Machine

Using Cyberduck CLI, we can open files to edit on the local machine. When we finish editing, the updated file (with all of the changes made) will be uploaded to storage. This is done using the --edit argument:

duck --edit selectel://[container]/[file] --username [Selectel ID]:[username] --password [password]

The file will be opened by the system’s default application for that file type, and the updated file will be uploaded automatically.

This should be useful for users who host their static sites in our storage. To quickly edit a site’s text, you just have to run the command, make the desired changes, and save.

Uploading Objects to the Cloud

Objects can be uploaded using the following command:

duck --upload selectel://[container]/[path to file] [local file] --username [Selectel ID]:[username] --password [password]

Please note that the full storage path has to be given in order to upload. For example, if we want to save the file myimage.png to the container ‘images’, then we have to enter its path as: /images/myimage.png.
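
Continuing that example, the full command might look like this (account placeholders as above):

duck --upload selectel://images/myimage.png myimage.png --username [Selectel ID]:[username] --password [password]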

Cyberduck can upload large objects (bigger than 2 GB) to the cloud in segments.

Object Versions and Backups

Cyberduck CLI is a convenient tool for making backups and archiving data. We’ll look at a few examples to better understand these functions.

Let’s say we have a directory on a local machine whose contents should periodically be copied to the cloud. A special script has been written for this and a cron task has been added, which sends backups to the cloud every day at a given time.

The script will look like this:

#!/bin/bash
SWIFT_USERNAME=[username]
SWIFT_PASSWORD=[password for accessing storage]
SWIFT_AUTH_URL=auth.selcdn.ru
BACKUP_PATH=[path to backup in storage]
LOCAL_PATH=[path to folder on local machine]

duck --upload selectel://$SWIFT_USERNAME@$SWIFT_AUTH_URL$BACKUP_PATH $LOCAL_PATH --existing rename --password $SWIFT_PASSWORD -q

Look at the syntax of the duck command. In this example, the --existing option is used, which tells the program what to do with files that already exist in storage. The rename option renames existing copies by adding the date and time to their names.
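
To run this script once a day, add a line along these lines to the crontab (the script path here is hypothetical):

0 3 * * * /usr/local/bin/backup-to-cloud.sh

This runs the backup every day at 3:00 AM.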

Using Cyberduck, we can make and compare backups. This is done using the compare option:

duck --upload selectel://[container]/[path] [local file] --existing compare --username [Selectel ID]:[username] --password [password]

When executing this command, the program compares the uploaded copy to existing files by size, last modified date, and checksum. If the parameters are different, then the old version gets deleted and the new version is uploaded to storage.

By using the skip option, only new files (i.e. files that have been added to the local folder since the last upload) are uploaded to storage. Existing files won’t be uploaded, even if they were changed on the local machine.

Finally, the overwrite option simply deletes the existing backup from storage and uploads the new version.
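
Each mode is selected the same way; for example, to skip files that already exist in storage:

duck --upload selectel://[container]/[path] [local file] --existing skip --username [Selectel ID]:[username] --password [password]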

Synchronizing Local Files with Storage Files

File synchronization is a process that leaves you with two directories with identical sets of the latest files: one on the local machine and one in the cloud. If any files have been changed, added, or deleted on the local machine, then these files will be changed, added to, or deleted from storage, and vice versa.

Synchronization is performed using the command:

duck --synchronize selectel://[container]/[path] [local folder] --username [Selectel ID]:[username] --password [password]

Using the synchronization function, copies of data in storage can be kept up-to-date with those on the local machine.

An example of this script would be:

#!/bin/bash
SWIFT_USERNAME=[username]
SWIFT_PASSWORD=[password for accessing storage]
SWIFT_AUTH_URL=auth.selcdn.ru
BACKUP_PATH=[path to backup in storage]
LOCAL_PATH=[path to folder on local machine]

duck --synchronize selectel://$SWIFT_USERNAME@$SWIFT_AUTH_URL$BACKUP_PATH $LOCAL_PATH --password $SWIFT_PASSWORD -q

Just add the corresponding task to cron and data will automatically synchronize at the given interval.
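
As with the backup script, a crontab entry runs the synchronization on a schedule, for example once an hour (the path is hypothetical):

0 * * * * /usr/local/bin/sync-to-cloud.sh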

This function is useful for storing static sites in the cloud. To update a site, just make the appropriate changes to files on the local machine and then run the synchronization command.

Copying Files

To copy a file from one container to another, we run the following command:

duck --copy selectel://[source container]/[file] selectel://[destination container]/[file] --username [Selectel ID]:[username] --password [password]

Option -v

To print information in the console on all the HTTP requests and responses that occur while performing storage operations, we use the -v (or --verbose) option. This helps us understand how third-party applications interact with storage.
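
For example, to trace the requests made while listing a container's contents:

duck -l selectel://[container] --username [Selectel ID]:[username] --password [password] -v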

Installing Supload

Supload (available on GitHub) is a tool specially designed for uploading files to our Cloud Storage.

With supload, you can:

  • upload local files to storage
  • recursively upload all files in user-defined folders and subfolders
  • verify file uploads using checksums
  • upload only new and modified files
  • automatically delete files from storage

If you have a medium-sized site with a MySQL database, you can configure regular backups by downloading two scripts and configuring the necessary parameters.

To install supload, run:

wget https://raw.github.com/selectel/supload/master/supload.sh
sudo mv supload.sh /usr/local/bin/supload
sudo chmod +x /usr/local/bin/supload

Making Backups

To make a backup, download and configure the script:

wget https://raw.github.com/selectel/storage/master/utils/sbackup.sh
chmod +x sbackup.sh

Open the script sbackup.sh in any text editor and set the following parameters (a sample configuration is shown after this list):

  • SS_USER — username for accessing storage (the additional user we suggest creating)
  • SS_PWD — the corresponding user's password
  • SS_CONTAINER — name of the container backups will be uploaded to
  • TARGET_DIR — path to the site's files
  • BACKUP_DIR — path on the server where backups will temporarily be stored
  • EXCLUDE_LIST — list of files to exclude from the archives
  • DB_NAME — name of the MySQL database; to back up all existing databases, enter the value __ALL__
  • DB_USER and DB_PWD — username and password for accessing MySQL
  • EMAIL — email address where backup reports will be sent (leave blank to disable)
  • EMAIL_ONLY_ON_ERROR — if yes, reports will only be sent when a problem or error occurs
  • DELETE_BACKUPS_AFTER_UPLOAD — if yes, backups will be deleted from the temporary folder after a successful upload
  • STORAGE_EXPIRE — how many days a backup will be kept in storage before being deleted
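
As an illustration only, a filled-in configuration might look like this (every value below is hypothetical):

SS_USER="backup-user"
SS_PWD="secret-password"
SS_CONTAINER="site-backups"
TARGET_DIR="/var/www/mysite"
BACKUP_DIR="/var/backups/mysite"
EXCLUDE_LIST="*.tmp *.log"
DB_NAME="__ALL__"
DB_USER="mysite"
DB_PWD="db-password"
EMAIL="admin@example.com"
EMAIL_ONLY_ON_ERROR="yes"
DELETE_BACKUPS_AFTER_UPLOAD="yes"
STORAGE_EXPIRE=30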

To check your script or manually make a backup, run:

./sbackup.sh

The results will be printed in the console.

You can configure regular backups using cron. To do this, move the script to a special directory:

mv sbackup.sh /etc/cron.daily/50_sbackup

Afterwards, cron will automatically launch the archiving script once a day.

Retrieving Backups from Private Containers

Backups can be downloaded from private containers using Cyberduck or special links, without making the container public. After downloading a backup to your server, the data has to be extracted:

mkdir backup_files
# extract an archive to the folder backup_files
tar xvf backupname_2013-01-26_08h40m.tar.bz2 -C backup_files/
# restore a database (this operation may erase the current contents of the database)
bzcat mysql_backupname_ALL_2013-01-26_08h40m.bz2 | mysql

Uploading Files to a Container

To upload a local file (my.doc) to an existing container (files), run the command:

supload -u USERNAME -k USERKEY files my.doc

Data can be uploaded to a specific folder in the container:

supload -u USERNAME -k USERKEY files/docs/ my.doc

In this case, the file's checksum (MD5) is calculated before uploading and the upload will only be considered successful if the checksums match.

To upload all of a folder's contents, use the option -r:

supload -u USERNAME -k USERKEY -r files local/docs/

Checksums will be verified for every file.

Checksum verification offers another advantage: if you launch the utility again and the checksum of a file in storage matches the checksum of the local file, that file will not be uploaded. This lets you upload only new and modified files.

Deleting Files

Storage supports automatic file deletion, and supload lets you specify how long files should be saved for:

supload -u USERNAME -k USERKEY -d 7d files my.doc

The -d option indicates the time period in minutes (m), hours (h), or days (d) that files should be stored for before being automatically deleted. This option also works for recursive uploads. If a file has already been uploaded, then launching the command again will not change the file's lifespan.

Let's say your archiving system places backups in the folder /var/backups/site/ and assigns the file a lifespan. You can use supload to periodically upload all of your files to storage for a limited time, for example:

supload -u USERNAME -k USERKEY -d 31d -r backups /var/backups/sites

Each newly uploaded backup will be kept in storage for 31 days, and previously uploaded files will gradually expire; files are automatically deleted 31 days after they are uploaded. For this setup to work properly, your archiving system must keep local backups for a shorter period than the lifespan set with supload; otherwise, expired files that still exist locally will be uploaded again.
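
To automate this, a crontab entry along these lines could be used (schedule and paths are examples):

30 2 * * * supload -u USERNAME -k USERKEY -d 31d -r backups /var/backups/sites

This uploads the contents of /var/backups/sites every night at 2:30 AM, giving each new file a 31-day lifespan.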

Duplicity Installation and Setup

Duplicity supports various protocols for connecting to file servers: SSH/SCP, FTP, HSI, WebDAV, Tahoe-LAFS, and Amazon S3. It archives data and uploads it onto local or remote file servers and even encrypts it with GnuPG for added security. 

Duplicity is included in most official Linux repositories and is installed from the standard package manager. We will be looking at Duplicity in Ubuntu:

sudo apt-get install duplicity

To access Cloud Storage from a client machine, the python-swiftclient and librsync packages must be installed:

sudo apt-get install python-swiftclient
sudo apt-get install librsync-dev

Now we need to install the swiftbackend plugin. First we clone the appropriate repository from Launchpad (this requires Bazaar):

sudo apt-get install bzr
bzr branch lp:~mhu-s/duplicity/swiftbackend

Then we run the following command:

cd swiftbackend && sudo python dist/setup.py install

Once Duplicity has been installed, we can access Cloud Storage.

Open a text editor and write a small script for making backups:

#!/bin/bash
# Authorization data for connecting to the cloud
export SWIFT_USERNAME="[user name]"
export SWIFT_PASSWORD="[cloud storage password]"
export SWIFT_AUTHURL="https://auth.selcdn.ru"

# Archive
duplicity [/path/to/folder/on/client/machine] swift://[container name]

# Clear authorization data as a security measure
unset SWIFT_USERNAME
unset SWIFT_PASSWORD
unset SWIFT_AUTHURL

We’ll save this file as backup.sh and make it executable:

chmod +x backup.sh

Afterwards, we run the following command:

./backup.sh

Next, GnuPG will ask for a passphrase for accessing our files.

Then the backup process begins. Statistics will be displayed in the console:

----------------------[ Backup statistics ]----------------------
StartTime 1391068911.00 (Thu Jan 30 12:01:51 2014)
EndTime 1391068911.02 (Thu Jan 30 12:01:51 2014)
ElapsedTime 0.02 (0.02 seconds)
SourceFiles 5
SourceFileSize 190210 (186 KB)
NewFiles 5
NewFileSize 190210 (186 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 5
RawDeltaSize 186114 (182 KB)
TotalDestinationSizeChange 185217 (181 KB)
Errors 0
-----------------------------------------------------------------

New files will be added to the specified container in storage:

duplicity-full-signatures.20140130T073550Z.sigtar.gpg
duplicity-full.20140130T073550Z.manifest.gpg
duplicity-full.20140130T073550Z.vol1.difftar.gpg

To download the encrypted backup from storage onto a local machine, we need to write a script that has the same authorization data as above, but in a slightly different command:

duplicity swift://[container name] [/path/to/folder/on/local/machine]

We’ll save this script as restore.sh and make it an executable file.

When we run ./restore.sh, GnuPG will ask for the keyword. After entering it, all of the backed up files will be downloaded to the specified directory on the local machine.
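
Put together, a minimal restore.sh might look like this (values in brackets are placeholders):

#!/bin/bash
# Same authorization data as in backup.sh
export SWIFT_USERNAME="[user name]"
export SWIFT_PASSWORD="[cloud storage password]"
export SWIFT_AUTHURL="https://auth.selcdn.ru"

# Download and decrypt the backup to the local directory
duplicity swift://[container name] [/path/to/folder/on/local/machine]

unset SWIFT_USERNAME
unset SWIFT_PASSWORD
unset SWIFT_AUTHURL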

Rclone Installation and Setup

Rclone is a tool for synchronizing data between Cloud Storage and a local machine. It can be used for backups and static sites.

Download links can be found for various operating systems on the download page.

We’ll be looking at rclone for Linux. To install, we first download the archive from the download page and then run:

unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb

Once the installation is complete, we configure rclone for Selectel Cloud Storage:

rclone config

The following dialog will appear in the console:

No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q>

We choose n and press Enter. Next, we’ll need to enter a name for our remote storage connection after name>. We enter a name (like Selectel) and move on to the next step:

name> Selectel
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / SSH/SFTP Connection
\ "sftp"
13 / Yandex Disk
\ "yandex"

We enter 11 for Swift and press Enter.

Storage> 11

Afterwards, we'll be prompted for our username and password:

User name to log in.

user> [username]
API key or password.
key> [password]
Authentication URL for server.
Choose a number from below, or type in your own value
1 / Rackspace US
\ "https://auth.api.rackspacecloud.com/v1.0"
2 / Rackspace UK
\ "https://lon.auth.api.rackspacecloud.com/v1.0"
3 / Rackspace v2
\ "https://identity.api.rackspacecloud.com/v2.0"
4 / Memset Memstore UK
\ "https://auth.storage.memset.com/v1.0"
5 / Memset Memstore UK v2
\ "https://auth.storage.memset.com/v2.0"
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"

Selectel Cloud Storage isn’t listed, so we’ll have to enter the address manually:

auth> https://auth.selcdn.ru/v1.0

The next two prompts (tenant and region) are optional and can be skipped. The last prompt in the dialog will ask us to verify our configuration:

Remote config
--------------------
[selectel]
user = your_username
key = your_password
auth = https://auth.selcdn.ru/v1.0
tenant = user
region =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

If all of the information is correct, we select y and press Enter.

Command Syntax

View a list of containers in storage:

rclone lsd selectel:

Create a new container:

rclone mkdir selectel:[container name]

View a list of files in a container:

rclone ls selectel:[container name]

Copy files from the local machine to storage:

rclone copy /home/local/directory selectel:[container name]

Synchronize files on the local machine with storage:

rclone sync /home/local/directory selectel:[container name]

Synchronize files in storage with the local machine:

rclone sync selectel:[container name] /home/local/directory

When performing copy and synchronization operations, rclone compares each file's last modified date and MD5 checksum. Only modified files are transferred from the source directory.
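
Before running a real synchronization, it can be useful to preview what would change using rclone's --dry-run flag:

rclone sync --dry-run /home/local/directory selectel:[container name]

No files are transferred or deleted; rclone only reports what it would do.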

These are the only commands we'll list here, but anyone interested can look at the official documentation. You can also get information using the command:

rclone --help

The bulk of rclone’s functions are available in other tools for working with cloud storage. There is, however, one function that is missing from every other tool we know of: migrating data from one cloud service to another.

Example

We’ll look at the following practical use case: we have a folder with photos in Google Drive, and we have to migrate the contents to our Cloud Storage.

First we create a new connection. In the list of available clouds, we choose Google Drive. Afterwards, we have to enter two parameters: client_id and client_secret. We’ll leave these blank and press Enter.

Next, we’re asked the following question:

Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n>

We choose “no” (n). Rclone will generate a link where we can obtain the code:

If your browser doesn't open automatically go to the following link: https://accounts.google.com/o/oauth2/auth?client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=ac901aefe97aff8ce65fe593060d0b0c
Log in and authorize rclone for access

We open this link in our browser and grant rclone access to our files.

Afterwards, the Google Drive API will return a code that we’ll have to enter when prompted:

Enter verification code>

The connection to Google Drive has been set up.

To start migration, run the command:

rclone copy [connection name]:[directory name] selectel:[container name]
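
For example, if the Google Drive connection was named GoogleDrive and the photos are in a folder called photos, the command might look like this (both names are hypothetical):

rclone copy GoogleDrive:photos selectel:photos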

Backing Up WordPress Sites

UpdraftPlus is a simple and easy-to-use plugin for backing up WordPress sites with OpenStack Swift.

Before installing the plugin, you will have to make some preparations:

  • create a container in storage where you'll save your backups
  • create a special user with access to only this container
  • create a write-accessible folder for temporary backups in the wp-content directory on the server (an example is shown below)
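
For the last step, the temporary folder could be created like this (the folder name and web server user are hypothetical):

mkdir -p wp-content/updraft-backups
chown www-data:www-data wp-content/updraft-backups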

Now install UpdraftPlus:

  1. From the Plugins menu, click Add New.
  2. After activating the plugin, UpdraftPlus Backups will be added to the Settings menu.
  3. Select this item and open the Settings tab.
  4. From the Choose your remote storage list, select OpenStack (Swift).
  5. In the Authentication URI field, enter https://auth.selcdn.ru.
  6. In the Tenant field, enter your account number.
  7. Enter the username and password of the additional user in the respective fields.
  8. In the Container field, enter the name of the container where you will store your backups.
  9. After entering all of these parameters, click Test OpenStack Settings.
  10. If the test is a success, a popup will open saying the plugin could access the container and create files.
  11. Scroll down to the Expert settings section and click Show expert settings.
  12. In the Backup directory field, enter the name of the previously created folder for saving your temporary backups.

The plugin is now ready.

Under the Settings tab, you can also select which files should be included in the backups and how often backups should be made.

After making all your changes, click Save Changes and click the tab Current Status.

To start the backup process, click Backup Now.

Data can be restored from a backup by clicking the Restore button.