With the final lists migrated to mailman3[1], the mailman2 server can
finally be killed.
When the mailman3 server was initially set up[2], it was done on a
separate server because the mailman and mailman3 packages conflicted,
and the traffic was routed over WireGuard (HTTP, LMTP and SMTP).
Instead of installing mailman3 on the original lists.al.org server and
transferring the data, it was easier just to install the missing pieces
(basically Postfix and some Nginx configuration adjustments) on the ml3
server and move the IPs (to keep the IP mail reputation).
So basically the following was done:
- The IPs for the original lists.al.org were moved to the mailman3.al.org
server
- The mailman2 datadir was transferred to mailman3.al.org server, so we
can keep the pipermail links alive, and import missing mails if needed
- The original lists.al.org server was decommissioned
- The mailman3.al.org server was renamed to lists.al.org
- The missing pieces were added to the mailman3 role (basically Postfix +
Nginx adjustments)
- The mailman role was deleted and the mailman3 role renamed to mailman
[1] 75ac7d09 ("mailman: Fourth and final batch of mailman3 migrated lists")
[2] 9294828f ("Setup mailman3 server")
Fix #59
Vagrant Cloud has been used for years by arch-boxes[1] for publishing
Vagrant boxes. Access to the organization[2] was handed out to a few
members of the DevOps team and the creator of the organization
(arch-boxes maintainer at the time).
With this commit the control of the organization is handed over to the
DevOps team through a new Vagrant Cloud account.
[1] https://gitlab.archlinux.org/archlinux/arch-boxes
[2] https://app.vagrantup.com/archlinux/
We want to migrate to mailman3 as mailman2 is basically unmaintained and
requires Python 2 which is EOL.
Because the mailman and mailman3 packages conflict and we don't want to
perform a big bang migration, mailman3 must be deployed on a separate
server. mailman-web (mailman3's web interface) hasn't been packaged yet,
so for now we are using my homebrewed PKGBUILD[1].
[1] https://gist.github.com/klausenbusk/5982063f95c503754a51ed2fefb8915e
Ref #59
We had a GeoIP mirror in the past based on nginx and its GeoIP module,
but it didn't perform very well due to the high latency (the client
asked a central server for the package and was then redirected to the
closest mirror).
One of the reasons for offering this service is so we can relieve
mirror.pkgbuild.com, which is burning a ton of traffic (50TB/month),
likely due to it being the default mirror in our Docker image. Another
reason is so we can offer a link to our arch-boxes images in libosinfo
(used by gnome-boxes, virt-install and virt-manager), with good enough
performance for most users.
This time we take a different approach and use a DNS based solution,
which means the latency penalty is only paid once (the first DNS
request). The downside is that the mirrors must have a valid certificate
for the same domain name, which makes using third-party mirrors a
challenge. So for now, we are just using the sponsored mirrors
controlled by the DevOps team.
Fix #101
Change docs/ssh-known_hosts.txt to be partially managed by Ansible, so
custom entries can be added to the top of the file. Use the new format
to write down the host keys of our two borg hosts.
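A partially managed file like this can be sketched with Ansible's
blockinfile module; the marker text, file path and variable name below
are assumptions for illustration, not the actual role contents:

```yaml
# Hypothetical sketch: hand-written entries stay above the managed
# block, Ansible maintains the generated host keys inside it.
- name: update managed section of ssh-known_hosts.txt
  blockinfile:
    path: docs/ssh-known_hosts.txt
    marker: "# {mark} ANSIBLE MANAGED ENTRIES"
    insertafter: EOF
    block: "{{ known_hosts_entries }}"
```

Anything outside the marker lines is left untouched by Ansible, which is
what allows the custom entries at the top of the file.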
Using GitLab's official backup tool takes too much time and, more
importantly, space; /srv/gitlab is a bit over 430G but backing it
up nearly exhausts its 1TB volume.
As we're creating btrfs snapshots and backing those up with borg, it
seems unnecessary to also create tarballs of the same data. GitLab's
documentation mentions snapshots as a viable backup strategy[1], and to
the restored system it should seem like recovering from a power loss.
[1] https://docs.gitlab.com/ee/raketasks/backup_restore#alternative-backup-strategies
Collects the SMART data using smartctl and outputs it in the
textcollector dir. This expects smartd to be configured to run
self-tests on a regular interval to detect broken disks.
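The collection step can be sketched roughly as below; the metric names
and JSON fields are assumptions based on `smartctl --json` output, not
the actual collector script:

```python
def smart_metrics(device: str, data: dict) -> list:
    """Turn (already parsed) `smartctl --json -a <device>` output into
    Prometheus text-format lines. Metric names are illustrative."""
    lines = []
    healthy = 1 if data.get("smart_status", {}).get("passed") else 0
    lines.append(f'smartmon_device_smart_healthy{{device="{device}"}} {healthy}')
    temp = data.get("temperature", {}).get("current")
    if temp is not None:
        lines.append(f'smartmon_temperature_celsius{{device="{device}"}} {temp}')
    return lines

if __name__ == "__main__":
    sample = {"smart_status": {"passed": True}, "temperature": {"current": 34}}
    # The real exporter would write these lines atomically to a .prom
    # file in the node_exporter textfile collector directory.
    print("\n".join(smart_metrics("/dev/sda", sample)))
```

node_exporter's textfile collector then picks the file up on its next
scrape, so no long-running exporter process is needed.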
These are already known (so no need to hide them) and are fairly static
(so variables are more of a hindrance), so it's better to use the actual
usernames in the documentation. Also, simplify the first example given.
Add a default rate limit of 20 req/s for the uwsgi endpoint and
automatically ban users who reach this limit. The nginx-limit-req rule
does not ban users who reach the rss limit, as these are not likely DoS
attempts.
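In nginx terms the limit is roughly the following; the zone name, zone
size, burst value and location are assumptions, not the deployed config:

```nginx
# Hypothetical sketch of the rate limit; the ban side watches the error
# log for limit_req rejections and blocks the offending IP.
limit_req_zone $binary_remote_addr zone=uwsgi:10m rate=20r/s;

location / {
    limit_req zone=uwsgi burst=40 nodelay;
    # ... uwsgi_pass etc.
}
```

Requests over the limit are rejected with 503 (or the status set via
limit_req_status) and logged, which is what the ban rule keys on.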