Installation and configuration of s3cmd (under Linux)


S3cmd is a fairly popular cross-platform tool for conveniently working with S3-compatible object stores.

With S3cmd you can:

  • manage data
  • download and upload files
  • configure access permissions
  • synchronize local storage with the cloud
  • perform automatic backups
  • organize file sharing
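
Once the client is configured (as shown below), each of these tasks maps to a short command. The following is a sketch with hypothetical bucket and file names; the `run` helper skips the calls when s3cmd is not installed, and errors are tolerated because the names are placeholders:

```shell
# Typical s3cmd operations for the tasks above. Bucket and file
# names are hypothetical -- substitute your own. The helper runs a
# command only if s3cmd is installed, and tolerates failures since
# the placeholder bucket does not exist.
run() {
  if command -v s3cmd >/dev/null 2>&1; then
    "$@" || true
  else
    echo "skipped: $*"
  fi
}

run s3cmd mb s3://my-bucket                                  # create a bucket
run s3cmd put backup.tar.gz s3://my-bucket/                  # upload a file
run s3cmd get s3://my-bucket/backup.tar.gz                   # download a file
run s3cmd sync ./photos s3://my-bucket/photos/               # sync a local directory
run s3cmd setacl s3://my-bucket/backup.tar.gz --acl-public   # share a file publicly
```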

In this tutorial, we’ll take a look at how to use S3cmd with our object storage.

Installation and configuration

Installation (CentOS 7)

There are several ways to install the S3cmd client; here is one of them:

yum install epel-release -y
yum install s3cmd
s3cmd --version
s3cmd version 2.1.0

Other methods to install S3cmd can be found here.


To configure S3cmd, use the command:

s3cmd --configure

The command will ask for values for the following parameters:

  • Access Key
  • Secret key
  • Default Region
  • S3 Endpoint
  • DNS-style bucket+hostname:port template for accessing a bucket

Leave the other parameters unchanged.

Creation of a configuration file

So that you don’t have to specify a configuration file every time you call the S3cmd utility, you can set up a default configuration file (stored as the hidden file ~/.s3cfg in the user’s home directory). This can be done with the command:

s3cmd --configure

NB! In our example we will not use the default configuration file; instead, we will create a separate file named example_config.cfg.

s3cmd --configure -c example_config.cfg

The script starts by requesting an Access Key and a Secret Key.
Replace YOUR_ACCESS_KEY and YOUR_SECRET_KEY with your credentials.
Let’s specify the region US.

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using
the env variables.
Access Key: YOUR_ACCESS_KEY
Secret Key: YOUR_SECRET_KEY
Default Region [US]: US

Now you need to specify the endpoint:

Use "" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint []:

We support DNS-style bucket access, so the parameter DNS-style bucket+hostname:port template for accessing a bucket must be set to %(bucket)s.

Use "%(bucket)" to the target Amazon S3. "%(bucket)s" and "%(location)s"
vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s]: %(bucket)s

The other parameters remain unchanged.

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

After that, the script will display all the set parameters and also provide an opportunity to test them:

New settings:
  Default Region: US
  S3 Endpoint:
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y

After the test succeeds, press Y to save the parameters.

Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] Y
Configuration saved to 'example_config.cfg'

The file was created successfully.
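
Since the configuration is stored under a non-default name, each subsequent s3cmd call must point to it with the -c option. A minimal sketch using the standard ls command (guarded so it degrades gracefully when s3cmd is absent):

```shell
# When the config is not at ~/.s3cfg, pass it explicitly with -c.
list_buckets() {
  if command -v s3cmd >/dev/null 2>&1; then
    # list buckets using the named config file
    s3cmd -c example_config.cfg ls || echo "listing failed (check the config)"
  else
    echo "s3cmd not installed"
  fi
}
list_buckets
```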

NB! If necessary, the configuration settings can be changed directly in the created file.
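
Since the result is a plain INI-style file, you can also create it without the interactive wizard. A minimal sketch, with placeholder credentials and a hypothetical endpoint s3.example.com (the key names are the ones s3cmd stores in its config file):

```shell
# Write a minimal s3cmd configuration without the interactive wizard.
# All values below are placeholders -- substitute your provider's
# endpoint and your own credentials.
cat > example_config.cfg <<'EOF'
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
bucket_location = US
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
use_https = True
EOF
echo "wrote example_config.cfg"
```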


We suggest you read the article “Using the S3cmd Utility”.
