For a couple of years now I have been hosting and maintaining a Mattermost instance used by me and my friends. We chose Mattermost because of the simple installation and the straightforward migration process from Slack.

The server runs on a DigitalOcean droplet (2 GiB RAM, 1 vCPU, 50 GiB SSD). The droplet is more than capable of handling our traffic (12 users). Unfortunately, we have started to experience issues with file storage, aka 50 GiB is not that much space in 2022 :).

The first action, a quick bandage, was to move the files to an S3 bucket. I chose Backblaze B2 because its storage is cheaper. Unfortunately, the solution did not work as anticipated: uploads and downloads became slow in general. Slow downloads can be fixed with a cache, but slow uploads are much more annoying from the user's perspective.

The first step I'm planning to take to improve the situation is to move the bucket from the US region to the EU. That should improve latency and time to first byte.
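A quick way to compare regions is to time the first byte with curl. Here is a sketch; the endpoint and object path are placeholders, not our real bucket:

```shell
# Measure time-to-first-byte and total time for a small test object.
# Replace the URL with a real object in the bucket you want to test.
curl -s -o /dev/null \
  -w 'ttfb: %{time_starttransfer}s total: %{time_total}s\n' \
  https://s3.eu-central-003.backblazeb2.com/BUCKET_NAME/test-object
```

Running this a few times against a US and an EU endpoint gives a rough feel for how much the region move buys you.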

The second step is to add a caching layer. For a long time that was easy: you would start MinIO as a cache and be done. But as you probably guessed, if it were that easy, I would not be writing a blog post about it :D.

MinIO has removed the cache option from the server, as it was rarely used. As a typical developer, I had to fight with myself for about a week not to start a new project.

The first attempt to solve this problem was to use s3fs-fuse. On the surface, it looked like a no-brainer: I would just mount the S3 bucket as a directory with a cache and be done. After reading the docs, my assumption was that writes are asynchronous, so the mount would feel fast and sync up under the hood. The first test revealed all the drawbacks of this solution: s3fs would not show the existing files, so there was no easy way to backport them; writes were still slow; and there was no cache cleaning, etc. After finding out that it solved none of the issues and just added new ones, I ditched the idea of mounting the bucket at all.
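For reference, this is roughly the kind of mount I tried; the bucket name, endpoint, mount point, and cache path below are placeholders:

```shell
# s3fs reads credentials from a file in ACCESS_KEY:SECRET_KEY format.
echo 'ACCESS_KEY:SECRET_KEY' > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs

# Mount the bucket with a local on-disk cache directory.
mkdir -p /mnt/mattermost-files /var/cache/s3fs
s3fs BUCKET_NAME /mnt/mattermost-files \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=https://s3.eu-central-003.backblazeb2.com \
  -o use_cache=/var/cache/s3fs
```

Even with `use_cache`, writes still go through to the bucket, which is why the upload problem did not go away.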

So, back to the drawing board. I checked the MinIO docs again and found that tiering would solve our issues: we can set up MinIO as our default bucket and configure a remote tier with a lifecycle management policy. That solution works correctly and was straightforward to set up. The longest part was backfilling the bucket from our US region to the MinIO server in Frankfurt and to our remote tier in the Netherlands.

Installation and configuration

First we have to start MinIO. I have chosen to use the Docker image:

mkdir -p ${HOME}/minio/data
docker run \
   -p 127.0.0.1:9000:9000 \
   -p 127.0.0.1:9090:9090 \
   --user $(id -u):$(id -g) \
   --name minio1 \
   -d \
   -e "MINIO_ROOT_USER=CHANGEME" \
   -e "MINIO_ROOT_PASSWORD=CHANGEME" \
   -v ${HOME}/minio/data:/data \
   quay.io/minio/minio server /data --console-address ":9090"

We bind the ports to localhost, as there is no need to expose MinIO to the internet.

Next we have to tunnel the connection. I'm using SSH for that purpose.

ssh -L 127.0.0.1:9000:127.0.0.1:9000 -L 127.0.0.1:9090:127.0.0.1:9090 MYSERVERURL

After successfully connecting to the dashboard at http://127.0.0.1:9090, we can proceed with configuration in the terminal. For the next steps we will need the MinIO client (mc); a detailed installation manual is available at https://min.io/docs/minio/linux/reference/minio-mc.html. Now we have to add an alias for our MinIO installation.

mc alias set minio http://127.0.0.1:9000 ACCESS_KEY SECRET_KEY

Then we can create a lifecycle policy:

wget -O - https://min.io/docs/minio/linux/examples/LifecycleManagementAdmin.json | \
mc admin policy add minio LifecycleAdminPolicy /dev/stdin
mc admin user add minio alphaLifecycleAdmin LongRandomSecretKey
mc admin policy set minio LifecycleAdminPolicy user=alphaLifecycleAdmin

The next step is to add our new tier:

mc ilm tier add s3 minio B2_EU_TIER --endpoint https://s3.eu-central-003.backblazeb2.com --access-key ACCESS_KEY --secret-key SECRET_KEY --bucket BUCKET_NAME

In the MinIO UI we can create a new bucket; in our case I named it mattermost. Then we can add a lifecycle rule to the bucket:

mc ilm rule add minio/mattermost --transition-tier B2_EU_TIER --transition-days 30

And that is it: MinIO is now ready to receive new files and move files older than 30 days to the remote bucket. For backfilling the data, I used the mc mirror command.
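The backfill can be sketched like this; the alias name, endpoint, and old bucket name are placeholders for our original US-region bucket:

```shell
# Add an alias for the old Backblaze bucket in the US region.
mc alias set b2-us https://s3.us-west-004.backblazeb2.com ACCESS_KEY SECRET_KEY

# Copy everything from the old bucket into the new MinIO bucket.
# --overwrite re-copies objects that differ on the target side.
mc mirror --overwrite b2-us/OLD_BUCKET_NAME minio/mattermost
```

Once the mirror finishes, the lifecycle rule takes over and transitions anything older than 30 days to the remote tier on its own.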

Summary

The MinIO server is working really well. Uploads are quick and snappy, and access time to remote files is acceptable. In terms of Mattermost, I would really like to see an option for asynchronous uploads to the bucket; maybe that is a good idea for a new PR?