Synapse S3 Storage Provider

This module can be used by Synapse as a storage provider, allowing it to fetch and store media in Amazon S3.

Usage

s3_storage_provider.py should be on the PYTHONPATH when starting Synapse.
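
For example, assuming the repository has been checked out to /opt/synapse-s3-storage-provider (a placeholder path; adjust it for your deployment), the directory can be added to the PYTHONPATH before Synapse is started (shown here with synctl; use whatever start method your deployment uses):

> export PYTHONPATH=/opt/synapse-s3-storage-provider:$PYTHONPATH
> synctl start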

Example of an entry in the Synapse config:

media_storage_providers:
- module: s3_storage_provider.S3StorageProviderBackend
  store_local: True
  store_remote: True
  store_synchronous: True
  config:
    bucket: <S3_BUCKET_NAME>
    # All of the options below are optional, for use with non-AWS
    # S3-compatible services, or to specify access keys here instead of
    # via an external method.
    region_name: <S3_REGION_NAME>
    endpoint_url: <S3_LIKE_SERVICE_ENDPOINT_URL>
    access_key_id: <S3_ACCESS_KEY_ID>
    secret_access_key: <S3_SECRET_ACCESS_KEY>

    # The object storage class used when uploading files to the bucket.
    # Default is STANDARD.
    #storage_class: "STANDARD_IA"

    # The maximum number of concurrent threads which will be used to connect
    # to S3. Each thread manages a single connection. Default is 40.
    #
    #threadpool_size: 20

This module uses boto3, so credentials may also be specified using any of the standard methods described in the boto3 credentials documentation.
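
For instance, rather than putting the keys in the provider config above, they can be supplied through the standard AWS environment variables that boto3 reads:

> export AWS_ACCESS_KEY_ID=<S3_ACCESS_KEY_ID>
> export AWS_SECRET_ACCESS_KEY=<S3_SECRET_ACCESS_KEY>
> export AWS_DEFAULT_REGION=<S3_REGION_NAME>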

Regular cleanup job

There is additionally a script at scripts/s3_media_upload which can be used in a regular job to upload content to S3 and then delete it from the local disk. This script can be used in combination with the storage provider configuration above, so that media is pulled from S3 on demand while uploads happen asynchronously.

Once the package is installed, the script should be run somewhat like the following. We suggest running it inside tmux or screen, as these operations can take a long time on larger servers.

database.yaml should contain the keys that would be passed to psycopg2 to connect to your database. They can be found in the contents of the database.args parameter in your homeserver.yaml.
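
As a sketch, a database.yaml for a typical PostgreSQL setup might look like this (all values are placeholders; copy the real ones from database.args):

user: <PG_USER>
password: <PG_PASSWORD>
database: <PG_DATABASE>
host: <PG_HOST>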

More options are available in the command help.
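
For instance, the following should print the full usage text:

> s3_media_upload --help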

> cd s3_media_upload
# cache.db will be created if absent. database.yaml is required to
# contain PG credentials
> ls
cache.db database.yaml
# Update cache from /path/to/media/store looking for files not used
# within 2 months
> s3_media_upload update /path/to/media/store 2m
Syncing files that haven't been accessed since: 2018-10-18 11:06:21.520602
Synced 0 new rows
100%|█████████████████████████████████████████████████████████████| 1074/1074 [00:33<00:00, 25.97files/s]
Updated 0 as deleted

> s3_media_upload upload /path/to/media/store matrix_s3_bucket_name --storage-class STANDARD_IA --delete
# prepare to wait a long time
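
To make this a truly regular job, the update and upload steps can be scheduled, e.g. from cron. The entry below is only a sketch: the schedule, working directory (which must contain cache.db and database.yaml), media store path, and bucket name are all placeholders to adapt.

# Nightly at 03:00: offload media not accessed within 2 months
0 3 * * * cd /path/to/s3_media_upload_dir && s3_media_upload update /path/to/media/store 2m && s3_media_upload upload /path/to/media/store matrix_s3_bucket_name --storage-class STANDARD_IA --delete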