Merge branch 'spantaleev:master' into mautrix-wsproxy
commit a7fc572382

CHANGELOG.md (36 changed lines)
@@ -1,3 +1,39 @@
# 2021-10-23

## Hangouts bridge no longer updated, superseded by a Googlechat bridge

The mautrix-hangouts bridge is no longer receiving updates upstream and is likely to stop working in the future.
We still retain support for this bridge in the playbook, but you're encouraged to switch away from it.

There's a new [mautrix-googlechat](https://github.com/mautrix/googlechat) bridge that you can [install using the playbook](docs/configuring-playbook-bridge-mautrix-googlechat.md).
Your **Hangouts bridge data will not be migrated**, however. You need to start fresh with the new bridge.


# 2021-08-23

## LinkedIn bridging support via beeper-linkedin

Thanks to [Alexandar Mechev](https://github.com/apmechev), the playbook can now install the [beeper-linkedin](https://gitlab.com/beeper/linkedin) bridge for bridging to [LinkedIn](https://www.linkedin.com/) Messaging.

This brings the total number of bridges supported by the playbook up to 20. See all supported bridges [here](docs/configuring-playbook.md#bridging-other-networks).

To get started with bridging to LinkedIn, see [Setting up Beeper LinkedIn bridging](docs/configuring-playbook-bridge-beeper-linkedin.md).


# 2021-08-20

## Sygnal upgraded - ARM support and no longer requires a database

The [Sygnal](docs/configuring-playbook-sygnal.md) push gateway has been upgraded from `v0.9.0` to `v0.10.1`.

This is an optional component for the playbook, so most of our users wouldn't care about this announcement.

Since this feels like a relatively big (and untested, as of yet) Sygnal change, we're putting up this changelog entry.
The new version is also available for the ARM architecture. It also no longer requires a database.
If you need to downgrade to the previous version, changing `matrix_sygnal_version` or `matrix_sygnal_docker_image` will not be enough, as we've removed the `database` configuration completely. You'd need to switch to an earlier playbook commit.


# 2021-05-21

## Hydrogen support
README.md (16 changed lines)
@@ -45,17 +45,21 @@ Using this playbook, you can get the following services configured on your serve
- (optional, advanced) the [Matrix Corporal](https://github.com/devture/matrix-corporal) reconciliator and gateway for a managed Matrix server

- (optional) the [mautrix-telegram](https://github.com/tulir/mautrix-telegram) bridge for bridging your Matrix server to [Telegram](https://telegram.org/)
- (optional) the [mautrix-telegram](https://github.com/mautrix/telegram) bridge for bridging your Matrix server to [Telegram](https://telegram.org/)

- (optional) the [mautrix-whatsapp](https://github.com/tulir/mautrix-whatsapp) bridge for bridging your Matrix server to [WhatsApp](https://www.whatsapp.com/)
- (optional) the [mautrix-whatsapp](https://github.com/mautrix/whatsapp) bridge for bridging your Matrix server to [WhatsApp](https://www.whatsapp.com/)

- (optional) the [mautrix-facebook](https://github.com/tulir/mautrix-facebook) bridge for bridging your Matrix server to [Facebook](https://facebook.com/)
- (optional) the [mautrix-facebook](https://github.com/mautrix/facebook) bridge for bridging your Matrix server to [Facebook](https://facebook.com/)

- (optional) the [mautrix-hangouts](https://github.com/tulir/mautrix-hangouts) bridge for bridging your Matrix server to [Google Hangouts](https://en.wikipedia.org/wiki/Google_Hangouts)
- (optional) the [mautrix-hangouts](https://github.com/mautrix/hangouts) bridge for bridging your Matrix server to [Google Hangouts](https://en.wikipedia.org/wiki/Google_Hangouts)

- (optional) the [mautrix-instagram](https://github.com/tulir/mautrix-instagram) bridge for bridging your Matrix server to [Instagram](https://instagram.com/)
- (optional) the [mautrix-googlechat](https://github.com/mautrix/googlechat) bridge for bridging your Matrix server to [Google Chat](https://en.wikipedia.org/wiki/Google_Chat)

- (optional) the [mautrix-signal](https://github.com/tulir/mautrix-signal) bridge for bridging your Matrix server to [Signal](https://www.signal.org/)
- (optional) the [mautrix-instagram](https://github.com/mautrix/instagram) bridge for bridging your Matrix server to [Instagram](https://instagram.com/)

- (optional) the [mautrix-signal](https://github.com/mautrix/signal) bridge for bridging your Matrix server to [Signal](https://www.signal.org/)

- (optional) the [beeper-linkedin](https://gitlab.com/beeper/linkedin) bridge for bridging your Matrix server to [LinkedIn](https://www.linkedin.com/)

- (optional) the [matrix-appservice-irc](https://github.com/matrix-org/matrix-appservice-irc) bridge for bridging your Matrix server to [IRC](https://wikipedia.org/wiki/Internet_Relay_Chat)
@@ -26,14 +26,14 @@ The following repositories allow you to copy and use this setup:

Updates to this section are trailed here:

[GoMatrixHosting Matrix Docker Ansible Deploy](https://gitlab.com/GoMatrixHosting/gomatrixhosting-matrix-docker-ansible-deploy)
[GoMatrixHosting Matrix Docker Ansible Deploy](https://gitlab.com/GoMatrixHosting/matrix-docker-ansible-deploy)


## Do I need an AWX setup to use this? How do I configure it?

Yes, you'll need to configure an AWX instance; the [Create AWX System](https://gitlab.com/GoMatrixHosting/create-awx-system) repository makes it easy to do. Just follow the steps listed in ['/docs/Installation.md' of that repository](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation.md).
Yes, you'll need to configure an AWX instance; the [Create AWX System](https://gitlab.com/GoMatrixHosting/create-awx-system) repository makes it easy to do. Just follow the steps listed in ['/docs/Installation_AWX.md' of that repository](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation_AWX.md).

For simpler installation steps you can use to get started with this system, check out our minimal installation guide at ['/docs/Installation_Minimal.md' of that repository](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation_Minimal.md).
For simpler installation steps you can use to get started with this system, check out our minimal installation guide at ['/docs/Installation_Minimal_AWX.md' of that repository](https://gitlab.com/GoMatrixHosting/create-awx-system/-/blob/master/docs/Installation_Minimal_AWX.md).


## Do I need a front-end WordPress site? And a DigitalOcean account?
@@ -31,12 +31,12 @@ If you are using Cloudflare DNS, make sure to disable the proxy and set all reco

| Type  | Host                    | Priority | Weight | Port | Target                 |
| ----- | ----------------------- | -------- | ------ | ---- | ---------------------- |
| SRV   | `_matrix-identity._tcp` | 10       | 0      | 443  | `matrix.<your-domain>` |
| CNAME | `dimension` (*)         | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `jitsi` (*)             | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `stats` (*)             | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `goneb` (*)             | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `sygnal` (*)            | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `hydrogen` (*)          | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `dimension`             | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `jitsi`                 | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `stats`                 | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `goneb`                 | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `sygnal`                | -        | -      | -    | `matrix.<your-domain>` |
| CNAME | `hydrogen`              | -        | -      | -    | `matrix.<your-domain>` |

## Subdomains setup

@@ -68,4 +68,4 @@ This is an optional feature. See [ma1sd's documentation](https://github.com/ma1u

Note: This `_matrix-identity._tcp` SRV record for the identity server is different from the `_matrix._tcp` that can be used for Synapse delegation. See [howto-server-delegation.md](howto-server-delegation.md) for more information about delegation.

When you're done with the DNS configuration and ready to proceed, continue with [Configuring this Ansible playbook](configuring-playbook.md).
When you're done with the DNS configuration and ready to proceed, continue with [Getting the playbook](getting-the-playbook.md).
docs/configuring-playbook-bridge-beeper-linkedin.md (new file, 59 lines)
@@ -0,0 +1,59 @@
# Setting up Beeper Linkedin (optional)

The playbook can install and configure [beeper-linkedin](https://gitlab.com/beeper/linkedin) for you, for bridging to [LinkedIn](https://www.linkedin.com/) Messaging. This bridge is based on the mautrix-python framework and can be configured in a similar way to the other mautrix bridges.

See the project's [documentation](https://gitlab.com/beeper/linkedin/-/blob/master/README.md) to learn what it does and why it might be useful to you.

```yaml
matrix_beeper_linkedin_enabled: true
```

There are some additional things you may wish to configure about the bridge before you continue.

Encryption support is off by default. If you would like to enable encryption, add the following to your `vars.yml` file:

```yaml
matrix_beeper_linkedin_configuration_extension_yaml: |
  bridge:
    encryption:
      allow: true
      default: true
```

If you would like to be able to administrate the bridge from your account, it can be configured like this:

```yaml
matrix_beeper_linkedin_configuration_extension_yaml: |
  bridge:
    permissions:
      '@YOUR_USERNAME:YOUR_DOMAIN': admin
```

You may wish to look at `roles/matrix-bridge-beeper-linkedin/templates/config.yaml.j2` to find other things you would like to configure.


## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth

The bridge will automatically perform Double Puppeting if you enable [Shared Secret Auth](configuring-playbook-shared-secret-auth.md) for this playbook.

This is the recommended way of setting up Double Puppeting, as it's easier to accomplish, works for all your users automatically, and has less of a chance of breaking in the future.
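For reference, enabling Shared Secret Auth in `vars.yml` typically comes down to something like the sketch below. The variable names are assumptions based on the playbook's shared-secret-auth role; treat `configuring-playbook-shared-secret-auth.md` as the authoritative source.

```yaml
# Hypothetical sketch - verify the exact variable names against the shared-secret-auth role's defaults.
matrix_synapse_ext_password_provider_shared_secret_auth_enabled: true
matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret: "A_LONG_RANDOM_SECRET_OF_YOUR_CHOOSING"
```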
## Usage

You then need to start a chat with `@linkedinbot:YOUR_DOMAIN` (where `YOUR_DOMAIN` is your base domain, not the `matrix.` domain).

Send `login YOUR_LINKEDIN_EMAIL_ADDRESS` to the bridge bot to enable bridging for your LinkedIn account.

If you run into trouble, check the [Troubleshooting](#troubleshooting) section below.

After successfully enabling bridging, you may wish to [set up Double Puppeting](#set-up-double-puppeting), if you haven't already done so.


## Troubleshooting

### Bridge asking for 2FA even if you don't have 2FA enabled

If you don't have 2FA enabled and are logging in from a strange IP for the first time, LinkedIn will send an email with a one-time code. You can use this code to authorize the bridge session. In my experience, once the IP is authorized, you will not be asked again.
@@ -4,7 +4,7 @@

The playbook can install and configure [Heisenbridge](https://github.com/hifi/heisenbridge) - the bouncer-style [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat) bridge for you.

See the project's [README](https://github.com/hifi/heisenbridge/blob/master/README.md) to learn what it does and why it might be useful to you.
See the project's [README](https://github.com/hifi/heisenbridge/blob/master/README.md) to learn what it does and why it might be useful to you. You can also take a look at [this demonstration video](https://www.youtube.com/watch?v=nQk1Bp4tk4I).

## Configuration

@@ -33,4 +33,6 @@ After the bridge is successfully running just DM `@heisenbridge:your-homeserver`

Help is available for all commands with the `-h` switch.
If the bridge ignores you and a DM is not accepted, then the owner setting may be wrong.
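For reference, the owner is typically set in `vars.yml`; a minimal sketch follows. The variable name is an assumption based on the heisenbridge role's naming conventions, so check the role's defaults before relying on it.

```yaml
# Hypothetical sketch - confirm the variable name in roles/matrix-bridge-heisenbridge.
matrix_heisenbridge_owner: "@YOUR_USERNAME:YOUR_DOMAIN"
```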
You can also learn the basics by watching [this demonstration video](https://www.youtube.com/watch?v=nQk1Bp4tk4I).

If you encounter issues or feel lost you can join the project room at [#heisenbridge:vi.fi](https://matrix.to/#/#heisenbridge:vi.fi) for help.
@@ -1,8 +1,8 @@
# Setting up Mautrix Facebook (optional)

The playbook can install and configure [mautrix-facebook](https://github.com/tulir/mautrix-facebook) for you.
The playbook can install and configure [mautrix-facebook](https://github.com/mautrix/facebook) for you.

See the project's [documentation](https://github.com/tulir/mautrix-facebook/blob/master/ROADMAP.md) to learn what it does and why it might be useful to you.
See the project's [documentation](https://github.com/mautrix/facebook/blob/master/ROADMAP.md) to learn what it does and why it might be useful to you.

```yaml
matrix_mautrix_facebook_enabled: true
docs/configuring-playbook-bridge-mautrix-googlechat.md (new file, 58 lines)
@@ -0,0 +1,58 @@
# Setting up Mautrix Google Chat (optional)

The playbook can install and configure [mautrix-googlechat](https://github.com/mautrix/googlechat) for you.

See the project's [documentation](https://docs.mau.fi/bridges/python/googlechat/index.html) to learn what it does and why it might be useful to you.

To enable the [Google Chat](https://chat.google.com/) bridge, just use the following playbook configuration:

```yaml
matrix_mautrix_googlechat_enabled: true
```


## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth

The bridge will automatically perform Double Puppeting if you enable [Shared Secret Auth](configuring-playbook-shared-secret-auth.md) for this playbook.

This is the recommended way of setting up Double Puppeting, as it's easier to accomplish, works for all your users automatically, and has less of a chance of breaking in the future.

### Method 2: manually, by asking each user to provide a working access token

**Note**: This method for enabling Double Puppeting can be configured only after you've already set up bridging (see [Usage](#usage)).

When using this method, **each user** that wishes to enable Double Puppeting needs to follow these steps:

- retrieve a Matrix access token for yourself. You can use the following command:

```
curl \
--data '{"identifier": {"type": "m.id.user", "user": "YOUR_MATRIX_USERNAME" }, "password": "YOUR_MATRIX_PASSWORD", "type": "m.login.password", "device_id": "Mautrix-googlechat", "initial_device_display_name": "Mautrix-googlechat"}' \
https://matrix.DOMAIN/_matrix/client/r0/login
```

- send the access token to the bot. Example: `login-matrix MATRIX_ACCESS_TOKEN_HERE`

- make sure you don't log out the `Mautrix-googlechat` device some time in the future, as that would break the Double Puppeting feature


## Usage

Once the bot is enabled, you need to start a chat with the googlechat bridge bot with handle `@googlechatbot:YOUR_DOMAIN` (where `YOUR_DOMAIN` is your base domain, not the `matrix.` domain).

Send `login` to the bridge bot to receive a link to the portal from which you can enable the bridging. Open the link sent by the bot and follow the instructions.

Automatic login may not work. If it does not, reload the page and select the "Manual login" checkbox before starting. Manual login involves logging into your Google account normally and then manually getting the OAuth token from browser cookies with developer tools.

Once logged in, recent chats should show up as new conversations automatically. Other chats will get portals as you receive messages.

You can learn more about authentication from the bridge's [official documentation on Authentication](https://docs.mau.fi/bridges/python/googlechat/authentication.html).

After successfully enabling bridging, you may wish to [set up Double Puppeting](#set-up-double-puppeting), if you haven't already done so.
@@ -1,8 +1,10 @@
# The [Mautrix Hangouts Bridge](https://mau.dev/mautrix/hangouts) is no longer maintained. It has changed to a [Google Chat Bridge](https://github.com/mautrix/googlechat). Setup instructions for the Google Chat Bridge can be [found here](configuring-playbook-bridge-mautrix-googlechat.md).

# Setting up Mautrix Hangouts (optional)

The playbook can install and configure [mautrix-hangouts](https://github.com/tulir/mautrix-hangouts) for you.
The playbook can install and configure [mautrix-hangouts](https://github.com/mautrix/hangouts) for you.

See the project's [documentation](https://github.com/tulir/mautrix-hangouts/wiki#usage) to learn what it does and why it might be useful to you.
See the project's [documentation](https://docs.mau.fi/bridges/python/hangouts/index.html) to learn what it does and why it might be useful to you.

To enable the [Google Hangouts](https://hangouts.google.com/) bridge, just use the following playbook configuration:

@@ -14,7 +16,7 @@ matrix_mautrix_hangouts_enabled: true

## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://github.com/tulir/mautrix-hangouts/wiki/Authentication#double-puppeting) (hint: you most likely do), you have 2 ways of going about it.
If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth

@@ -52,7 +54,7 @@ Automatic login may not work. If it does not, reload the page and select the "Ma

Once logged in, recent chats should show up as new conversations automatically. Other chats will get portals as you receive messages.

You can learn more about authentication from the bridge's [official documentation on Authentication](https://github.com/tulir/mautrix-hangouts/wiki/Authentication).
You can learn more about authentication from the bridge's [official documentation on Authentication](https://docs.mau.fi/bridges/python/hangouts/authentication.html).

After successfully enabling bridging, you may wish to [set up Double Puppeting](#set-up-double-puppeting), if you haven't already done so.
@@ -1,6 +1,6 @@
# Setting up Mautrix Instagram (optional)

The playbook can install and configure [mautrix-instagram](https://github.com/tulir/mautrix-instagram) for you.
The playbook can install and configure [mautrix-instagram](https://github.com/mautrix/instagram) for you.

See the project's [documentation](https://docs.mau.fi/bridges/python/instagram/index.html) to learn what it does and why it might be useful to you.
@@ -1,8 +1,8 @@
# Setting up Mautrix Signal (optional)

The playbook can install and configure [mautrix-signal](https://github.com/tulir/mautrix-signal) for you.
The playbook can install and configure [mautrix-signal](https://github.com/mautrix/signal) for you.

See the project's [documentation](https://github.com/tulir/mautrix-signal/wiki) to learn what it does and why it might be useful to you.
See the project's [documentation](https://docs.mau.fi/bridges/python/signal/index.html) to learn what it does and why it might be useful to you.

**Note/Prerequisite**: If you're running with the Postgres database server integrated by the playbook (which is the default), you don't need to do anything special and can easily proceed with installing. However, if you're [using an external Postgres server](configuring-playbook-external-postgres.md), you'd need to manually prepare a Postgres database for this bridge and adjust the variables related to that (`matrix_mautrix_signal_database_*`).

@@ -12,9 +12,54 @@ Use the following playbook configuration:
matrix_mautrix_signal_enabled: true
```

There are some additional things you may wish to configure about the bridge before you continue.

The relay bot functionality is off by default. If you would like to enable the relay bot, add the following to your `vars.yml` file:

```yaml
matrix_mautrix_signal_relaybot_enabled: true
```

If you want to activate the relay bot in a room, use `!signal set-relay`.
Use `!signal unset-relay` to deactivate.
By default, any user on your homeserver will be able to use the bridge.
If you enable the relay bot functionality, it will relay every user's messages in a portal room - no matter which homeserver they're from.

Different levels of permission can be granted to users:

* relay - Allowed to be relayed through the bridge, no access to commands;
* user - Use the bridge with puppeting;
* admin - Use and administer the bridge.

The permissions follow the sequence: nothing < relay < user < admin.

The default permissions are set as follows:

```yaml
permissions:
  '*': relay
  YOUR_DOMAIN: user
```

If you want to augment the preset permissions, you might want to set the additional permissions with the following settings in your `vars.yml` file:

```yaml
matrix_mautrix_signal_configuration_extension_yaml: |
  bridge:
    permissions:
      '@YOUR_USERNAME:YOUR_DOMAIN': admin
```

This will add the admin permission to the specific user, while keeping the default permissions.

In case you want to replace the default permissions settings **completely**, populate the following item within your `vars.yml` file:

```yaml
matrix_mautrix_signal_bridge_permissions: |
  '@ADMIN:YOUR_DOMAIN': admin
  '@USER:YOUR_DOMAIN': user
```

You may wish to look at `roles/matrix-bridge-mautrix-signal/templates/config.yaml.j2` to find more information on the permissions settings and other options you would like to configure.

## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://github.com/tulir/mautrix-signal/wiki/Authentication#double-puppeting) (hint: you most likely do), you have 2 ways of going about it.
If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth
@@ -1,8 +1,8 @@
# Setting up Mautrix Telegram (optional)

The playbook can install and configure [mautrix-telegram](https://github.com/tulir/mautrix-telegram) for you.
The playbook can install and configure [mautrix-telegram](https://github.com/mautrix/telegram) for you.

See the project's [documentation](https://github.com/tulir/mautrix-telegram/wiki#usage) to learn what it does and why it might be useful to you.
See the project's [documentation](https://docs.mau.fi/bridges/python/telegram/index.html) to learn what it does and why it might be useful to you.

You'll need to obtain API keys from [https://my.telegram.org/apps](https://my.telegram.org/apps) and then use the following playbook configuration:
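The configuration block itself falls outside this hunk. As a rough sketch, it amounts to something like the following - `matrix_mautrix_telegram_api_hash` is taken from the hunk header below, while `matrix_mautrix_telegram_api_id` is assumed by analogy and should be verified against the role's defaults:

```yaml
# Sketch only - matrix_mautrix_telegram_api_id is an assumed name; api_hash appears in the next hunk header.
matrix_mautrix_telegram_enabled: true
matrix_mautrix_telegram_api_id: YOUR_TELEGRAM_API_ID
matrix_mautrix_telegram_api_hash: YOUR_TELEGRAM_API_HASH
```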
@@ -14,7 +14,7 @@ matrix_mautrix_telegram_api_hash: YOUR_TELEGRAM_API_HASH

## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://github.com/tulir/mautrix-telegram/wiki/Authentication#replacing-telegram-accounts-matrix-puppet-with-matrix-account) (hint: you most likely do), you have 2 ways of going about it.
If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth

@@ -45,7 +45,7 @@ https://matrix.DOMAIN/_matrix/client/r0/login

You then need to start a chat with `@telegrambot:YOUR_DOMAIN` (where `YOUR_DOMAIN` is your base domain, not the `matrix.` domain).

If you want to use the relay-bot feature ([relay bot documentation](https://github.com/tulir/mautrix-telegram/wiki/Relay-bot)), which allows anonymous users to chat with Telegram users, use the following additional playbook configuration:
If you want to use the relay-bot feature ([relay bot documentation](https://docs.mau.fi/bridges/python/telegram/relay-bot.html)), which allows anonymous users to chat with Telegram users, use the following additional playbook configuration:

```yaml
matrix_mautrix_telegram_bot_token: YOUR_TELEGRAM_BOT_TOKEN
@@ -1,8 +1,8 @@
# Setting up Mautrix Whatsapp (optional)

The playbook can install and configure [mautrix-whatsapp](https://github.com/tulir/mautrix-whatsapp) for you.
The playbook can install and configure [mautrix-whatsapp](https://github.com/mautrix/whatsapp) for you.

See the project's [documentation](https://github.com/tulir/mautrix-whatsapp/wiki) to learn what it does and why it might be useful to you.
See the project's [documentation](https://docs.mau.fi/bridges/go/whatsapp/index.html) to learn what it does and why it might be useful to you.

Use the following playbook configuration:

@@ -13,7 +13,7 @@ matrix_mautrix_whatsapp_enabled: true

## Set up Double Puppeting

If you'd like to use [Double Puppeting](https://github.com/tulir/mautrix-whatsapp/wiki/Authentication#replacing-whatsapp-accounts-matrix-puppet-with-matrix-account) (hint: you most likely do), you have 2 ways of going about it.
If you'd like to use [Double Puppeting](https://docs.mau.fi/bridges/general/double-puppeting.html) (hint: you most likely do), you have 2 ways of going about it.

### Method 1: automatically, by enabling Shared Secret Auth
@@ -13,8 +13,6 @@ playbook configuration:

```yaml
matrix_mx_puppet_discord_enabled: true
matrix_mx_puppet_discord_client_id: ""
matrix_mx_puppet_discord_client_secret: ""
```

@@ -11,8 +11,6 @@ playbook configuration:

```yaml
matrix_mx_puppet_groupme_enabled: true
matrix_mx_puppet_groupme_client_id: ""
matrix_mx_puppet_groupme_client_secret: ""
```

@@ -13,8 +13,6 @@ playbook configuration:

```yaml
matrix_mx_puppet_slack_enabled: true
matrix_mx_puppet_slack_client_id: ""
matrix_mx_puppet_slack_client_secret: ""
```

@@ -11,8 +11,6 @@ playbook configuration:

```yaml
matrix_mx_puppet_steam_enabled: true
matrix_mx_puppet_steam_client_id: ""
matrix_mx_puppet_steam_client_secret: ""
```
@@ -3,14 +3,12 @@

**[Dimension](https://dimension.t2bot.io) can only be installed after Matrix services are installed and running.**
If you're just installing Matrix services for the first time, please continue with the [Configuration](configuring-playbook.md) / [Installation](installing.md) flow and come back here later.

**Note**: enabling Dimension means that the `openid` API endpoints will be exposed on the Matrix Federation port (usually `8448`), even if [federation](configuring-playbook-federation.md) is disabled. It's something to be aware of, especially in terms of firewall whitelisting (make sure port `8448` is accessible).
**Note**: This playbook now supports running [Dimension](https://dimension.t2bot.io) in both federated and [unfederated](https://github.com/turt2live/matrix-dimension/blob/master/docs/unfederated.md) environments. This is handled automatically based on the value of `matrix_synapse_federation_enabled`. Enabling Dimension means that the `openid` API endpoints will be exposed on the Matrix Federation port (usually `8448`), even if [federation](configuring-playbook-federation.md) is disabled. It's something to be aware of, especially in terms of firewall whitelisting (make sure port `8448` is accessible).

## Prerequisites

This playbook now supports running [Dimension](https://dimension.t2bot.io) in both a federated and an [unfederated](https://github.com/turt2live/matrix-dimension/blob/master/docs/unfederated.md) environment. This is handled automatically based on the value of `matrix_synapse_federation_enabled`.

Other important prerequisite is the `dimension.<your-domain>` DNS record being set up correctly. See [Configuring your DNS server](configuring-dns.md) on how to set up DNS record correctly.
The `dimension.<your-domain>` DNS record must be created. See [Configuring your DNS server](configuring-dns.md) on how to set up the DNS record correctly.

## Enable

@@ -24,7 +22,7 @@ matrix_dimension_enabled: true

## Define admin users

These users can modify the integrations this Dimension supports. Admin interface is accessible by opening Dimension in Element and clicking the settings icon.
These users can modify the integrations this Dimension supports. Admin interface is accessible at `https://dimension.<your-domain>/riot-app/admin` after logging in to Element.
Add this to your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):

```yaml
@@ -45,11 +43,11 @@ To get an access token for the Dimension user, you can follow one of two options

*Through an interactive login*:

1. In a private browsing session (incognito window), open Element.
2. Log in with the `dimension` user and its password.
1. Log in with the `dimension` user and its password.
1. Set the display name and avatar, if required.
2. In the settings page choose "Help & About", scroll down to the bottom and click `Access Token: <click to reveal>`.
3. Copy the highlighted text to your configuration.
4. Close the private browsing session. **Do not log out**. Logging out will invalidate the token, making it not work.
1. In the settings page choose "Help & About", scroll down to the bottom and expand the `Access Token` section.
1. Copy the access token to your configuration.
1. Close the private browsing session. **Do not log out**. Logging out will invalidate the token, making it not work.

*With CURL*

@@ -81,6 +79,8 @@ After these variables have been set, please run the following command to re-run

ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start
```

After Dimension has been installed you may need to log out and log back in for it to pick up the new integrations manager. Then you can access integrations in Element by opening a room, clicking the Room info (`i`) button in the top right corner of the screen, and then clicking Add widgets, bridges & bots.


## Jitsi domain
@@ -26,7 +26,6 @@ matrix_jitsi_enabled: true

# Run `bash inventory/scripts/jitsi-generate-passwords.sh` to generate these passwords,
# or define your own strong passwords manually.
matrix_jitsi_jicofo_component_secret: ""
matrix_jitsi_jicofo_auth_password: ""
matrix_jitsi_jvb_auth_password: ""
matrix_jitsi_jibri_recorder_password: ""

@@ -129,7 +128,7 @@ Until this gets integrated into the playbook, we need to register new users / me

Please SSH into your matrix host machine and execute the following command targeting the `matrix-jitsi-prosody` container:

```bash
docker exec matrix-jitsi-prosody prosodyctl --config /config/prosody.cfg.lua register <USERNAME> matrix-jitsi-web <PASSWORD>
docker exec matrix-jitsi-prosody prosodyctl --config /config/prosody.cfg.lua register <USERNAME> meet.jitsi <PASSWORD>
```

Run this command for each user you would like to create, replacing `<USERNAME>` and `<PASSWORD>` accordingly. After you've finished, please exit the host.
@@ -71,7 +71,7 @@ After following the [Preparation](#preparation) guide above, you can take a loo

### Using another external webserver

Feel free to look at the [examples/apache](../examples/apache) directory, or the [template files in the matrix-nginx-proxy role](../roles/matrix-nginx-proxy/templates/conf.d/).
Feel free to look at the [examples/apache](../examples/apache) directory, or the [template files in the matrix-nginx-proxy role](../roles/matrix-nginx-proxy/templates/nginx/conf.d/).

## Method 2: Fronting the integrated nginx reverse-proxy webserver with another reverse-proxy

@@ -108,6 +108,9 @@ matrix_nginx_proxy_container_federation_host_bind_port: '127.0.0.1:8449'

# Since we don't obtain any certificates (`matrix_ssl_retrieval_method: none` above), it won't work by default.
# An alternative is to tweak some of: `matrix_coturn_tls_enabled`, `matrix_coturn_tls_cert_path` and `matrix_coturn_tls_key_path`.
matrix_coturn_enabled: false

# Trust the reverse proxy to send the correct `X-Forwarded-Proto` header as it is handling the SSL connection.
matrix_nginx_proxy_trust_forwarded_proto: true
```

With this, nginx would still be in use, but it would not bother with anything SSL related or with taking up public ports.
@@ -56,8 +56,72 @@ Name | Description
`matrix_nginx_proxy_proxy_synapse_metrics`|Set this to `true` to make matrix-nginx-proxy expose the Synapse metrics at `https://matrix.DOMAIN/_synapse/metrics`
`matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_enabled`|Set this to `true` to password-protect (using HTTP Basic Auth) `https://matrix.DOMAIN/_synapse/metrics` (the username is always `prometheus`, the password is defined in `matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key`)
`matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key`|Set this to a password to use for HTTP Basic Auth for protecting `https://matrix.DOMAIN/_synapse/metrics` (the username is always `prometheus` - it's not configurable)
`matrix_server_fqn_grafana`|Use this variable to override the domain at which the Grafana web user-interface is at (defaults to `stats.DOMAIN`).
`matrix_server_fqn_grafana`|Use this variable to override the domain at which the Grafana web user-interface is at (defaults to `stats.DOMAIN`)
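Taken together, exposing and password-protecting the metrics endpoint comes down to something like this `vars.yml` sketch. The variable names are the ones documented in the table above; the password value is just a placeholder:

```yaml
# Sketch based on the variables documented above; replace the password placeholder.
matrix_nginx_proxy_proxy_synapse_metrics: true
matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_enabled: true
matrix_nginx_proxy_proxy_synapse_metrics_basic_auth_key: "SOME_STRONG_PASSWORD"
```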
### Collecting worker metrics to an external Prometheus server

If you are using workers (`matrix_synapse_workers_enabled`) and have enabled `matrix_nginx_proxy_proxy_synapse_metrics` as described above, the playbook will also automatically proxy all worker threads' metrics to `https://matrix.DOMAIN/_synapse-worker-TYPE-ID/metrics`, where `TYPE` corresponds to the type and `ID` to the instanceId of a worker as exemplified in `matrix_synapse_workers_enabled_list`.

The playbook also generates an exemplary prometheus.yml config file (`matrix_base_data_path/external_prometheus.yml.template`) with all the correct paths, which you can copy to your Prometheus server and adapt to your needs. In particular, edit the specified `password_file` path and contents, and the path to your `synapse-v2.rules`.
It will look a bit like this:
```yaml
scrape_configs:
  - job_name: 'synapse'
    metrics_path: /_synapse/metrics
    scheme: https
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/password.pwd
    static_configs:
      - targets: ['matrix.DOMAIN:443']
        labels:
          job: "master"
          index: 1
  - job_name: 'synapse-generic_worker-1'
    metrics_path: /_synapse-worker-generic_worker-18111/metrics
    scheme: https
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/password.pwd
    static_configs:
      - targets: ['matrix.DOMAIN:443']
        labels:
          job: "generic_worker"
          index: 18111
```
### Collecting system and Postgres metrics to an external Prometheus server (advanced)

When you normally enable Prometheus and Grafana via the playbook, it will also show general system (via node-exporter) and Postgres (via postgres-exporter) stats. If you are instead collecting your metrics to an external Prometheus server, you can follow this advanced configuration example to also export these stats.

It would be possible to use `matrix_prometheus_node_exporter_container_http_host_bind_port` etc., but that is not always the best choice, for example because your server is on a public network.

Use the following variables in addition to the ones mentioned above:

Name | Description
-----|----------
`matrix_nginx_proxy_proxy_grafana_enabled`|Set this to `true` to make the stats subdomain (`matrix_server_fqn_grafana`) available via the Nginx proxy
`matrix_ssl_additional_domains_to_obtain_certificates_for`|Add `"{{ matrix_server_fqn_grafana }}"` to this list to have letsencrypt fetch a certificate for the stats subdomain
`matrix_prometheus_node_exporter_enabled`|Set this to `true` to enable the node (general system stats) exporter
`matrix_prometheus_postgres_exporter_enabled`|Set this to `true` to enable the Postgres exporter
`matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks`|Add locations to this list depending on which of the above exporters you enabled (see below)

```nginx
matrix_nginx_proxy_proxy_grafana_additional_server_configuration_blocks:
  - 'location /node-exporter/ {
      resolver 127.0.0.11 valid=5s;
      proxy_pass http://matrix-prometheus-node-exporter:9100/;
      auth_basic "protected";
      auth_basic_user_file /nginx-data/matrix-synapse-metrics-htpasswd;
    }'
  - 'location /postgres-exporter/ {
      resolver 127.0.0.11 valid=5s;
      proxy_pass http://matrix-prometheus-postgres-exporter:9187/;
      auth_basic "protected";
      auth_basic_user_file /nginx-data/matrix-synapse-metrics-htpasswd;
    }'
```
You can customize the `location`s to your liking, just point your Prometheus at them later (e.g. `stats.DOMAIN/node-exporter/metrics`). Nginx is very picky about the `proxy_pass` syntax: take care to follow the example closely and note the trailing slash as well as the absent use of variables. postgres-exporter uses the nonstandard port 9187.
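On the external Prometheus side, scraping those locations then follows the same pattern as the earlier `prometheus.yml` excerpt. A sketch for the node-exporter location - the job name and the password file path are illustrative only, and the basic-auth credentials are assumed to be the same `prometheus` user protected by the htpasswd file above:

```yaml
# Sketch mirroring the earlier scrape_configs example; adjust names and paths to your setup.
scrape_configs:
  - job_name: 'matrix-node-exporter'   # illustrative job name
    metrics_path: /node-exporter/metrics
    scheme: https
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/password.pwd
    static_configs:
      - targets: ['stats.DOMAIN:443']
```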
## More information
@@ -55,3 +55,22 @@ Certain Synapse administration tasks (managing users and rooms, etc.) can be per

## Synapse + OpenID Connect for Single-Sign-On

If you'd like to use OpenID Connect authentication with Synapse, you'll need some additional reverse-proxy configuration (see [our nginx reverse-proxy doc page](configuring-playbook-nginx.md#synapse-openid-connect-for-single-sign-on)).

In case you encounter errors regarding the parsing of the variables, you can try to add `{% raw %}` and `{% endraw %}` blocks around them. For example:

```
- idp_id: keycloak
  idp_name: "Keycloak"
  issuer: "https://url.ix/auth/realms/x"
  client_id: "matrix"
  client_secret: "{{ vault_synapse_keycloak }}"
  scopes: ["openid", "profile"]
  authorization_endpoint: "https://url.ix/auth/realms/x/protocol/openid-connect/auth"
  token_endpoint: "https://url.ix/auth/realms/x/protocol/openid-connect/token"
  userinfo_endpoint: "https://url.ix/auth/realms/x/protocol/openid-connect/userinfo"
  user_mapping_provider:
    config:
      display_name_template: "{% raw %}{{ user.given_name }}{% endraw %} {% raw %}{{ user.family_name }}{% endraw %}"
      email_template: "{% raw %}{{ user.email }}{% endraw %}"
```
@@ -98,12 +98,16 @@ When you're done with all the configuration you'd like to do, continue with [Ins

- [Setting up Mautrix Hangouts bridging](configuring-playbook-bridge-mautrix-hangouts.md) (optional)

- [Setting up Mautrix Google Chat bridging](configuring-playbook-bridge-mautrix-googlechat.md) (optional)

- [Setting up Mautrix Instagram bridging](configuring-playbook-bridge-mautrix-instagram.md) (optional)

- [Setting up Mautrix Signal bridging](configuring-playbook-bridge-mautrix-signal.md) (optional)

- [Setting up Appservice IRC bridging](configuring-playbook-bridge-appservice-irc.md) (optional)

- [Setting up Beeper LinkedIn bridging](configuring-playbook-bridge-beeper-linkedin.md) (optional)

- [Setting up Appservice Discord bridging](configuring-playbook-bridge-appservice-discord.md) (optional)

- [Setting up Appservice Slack bridging](configuring-playbook-bridge-appservice-slack.md) (optional)
@@ -69,7 +69,7 @@ It is, however, **a little fragile**, as future updates performed by this playbo

If you don't need the base domain (e.g. `example.com`) for anything else (hosting a website, etc.), you can point it to the Matrix server's IP address and tell the playbook to configure it.

This is the easiest way to set up well-known serving -- letting the playbook handle the whole base domain for you (including SSL certificates, etc.). However, if you need to use the base domain for other things (such as hosting some website, etc.), going with Option 1 or Option 2 might be more suitable.
This is the easiest way to set up well-known serving -- letting the playbook handle the whole base domain for you (including SSL certificates, etc.). However, if you need to use the base domain for other things (such as hosting some website, etc.), going with Option 1 or Option 3 might be more suitable.

See [Serving the base domain](configuring-playbook-base-domain-serving.md) to learn how the playbook can help you set it up.
@@ -40,17 +40,19 @@ These services are not part of our default installation, but can be enabled by [

- [zeratax/matrix-registration](https://hub.docker.com/r/devture/zeratax-matrix-registration/) - [matrix-registration](https://github.com/ZerataX/matrix-registration): a simple python application to have a token based matrix registration (optional)

- [tulir/mautrix-telegram](https://mau.dev/tulir/mautrix-telegram/container_registry) - the [mautrix-telegram](https://github.com/tulir/mautrix-telegram) bridge to [Telegram](https://telegram.org/) (optional)
- [mautrix/telegram](https://mau.dev/mautrix/telegram/container_registry) - the [mautrix-telegram](https://github.com/mautrix/telegram) bridge to [Telegram](https://telegram.org/) (optional)

- [tulir/mautrix-whatsapp](https://mau.dev/tulir/mautrix-whatsapp/container_registry) - the [mautrix-whatsapp](https://github.com/tulir/mautrix-whatsapp) bridge to [Whatsapp](https://www.whatsapp.com/) (optional)
- [mautrix/whatsapp](https://mau.dev/mautrix/whatsapp/container_registry) - the [mautrix-whatsapp](https://github.com/mautrix/whatsapp) bridge to [Whatsapp](https://www.whatsapp.com/) (optional)

- [tulir/mautrix-facebook](https://mau.dev/tulir/mautrix-facebook/container_registry) - the [mautrix-facebook](https://github.com/tulir/mautrix-facebook) bridge to [Facebook](https://facebook.com/) (optional)
- [mautrix/facebook](https://mau.dev/mautrix/facebook/container_registry) - the [mautrix-facebook](https://github.com/mautrix/facebook) bridge to [Facebook](https://facebook.com/) (optional)

- [tulir/mautrix-hangouts](https://mau.dev/tulir/mautrix-hangouts/container_registry) - the [mautrix-hangouts](https://github.com/tulir/mautrix-hangouts) bridge to [Google Hangouts](https://en.wikipedia.org/wiki/Google_Hangouts) (optional)
- [mautrix/hangouts](https://mau.dev/mautrix/hangouts/container_registry) - the [mautrix-hangouts](https://github.com/mautrix/hangouts) bridge to [Google Hangouts](https://en.wikipedia.org/wiki/Google_Hangouts) (optional)

- [tulir/mautrix-instagram](https://mau.dev/tulir/mautrix-instagram/container_registry) - the [mautrix-instagram](https://github.com/tulir/mautrix-instagram) bridge to [Instagram](https://instagram.com/) (optional)
- [mautrix/googlechat](https://mau.dev/mautrix/googlechat/container_registry) - the [mautrix-googlechat](https://github.com/mautrix/googlechat) bridge to [Google Chat](https://en.wikipedia.org/wiki/Google_Chat) (optional)

- [tulir/mautrix-signal](https://mau.dev/tulir/mautrix-signal/container_registry) - the [mautrix-signal](https://github.com/tulir/mautrix-signal) bridge to [Signal](https://www.signal.org/) (optional)
- [mautrix/instagram](https://mau.dev/mautrix/instagram/container_registry) - the [mautrix-instagram](https://github.com/mautrix/instagram) bridge to [Instagram](https://instagram.com/) (optional)

- [mautrix/signal](https://mau.dev/mautrix/signal/container_registry) - the [mautrix-signal](https://github.com/mautrix/signal) bridge to [Signal](https://www.signal.org/) (optional)

- [matrixdotorg/matrix-appservice-irc](https://hub.docker.com/r/matrixdotorg/matrix-appservice-irc) - the [matrix-appservice-irc](https://github.com/matrix-org/matrix-appservice-irc) bridge to [IRC](https://wikipedia.org/wiki/Internet_Relay_Chat) (optional)
@@ -121,7 +121,7 @@ This is similar to the [EMnify/matrix-synapse-auto-deploy](https://github.com/EM

- this one **can be executed more than once** without causing trouble

- works on various distros: **CentOS** (7.0+), Debian-based distributions (**Debian** 9/Stretch+, **Ubuntu** 16.04+), **Archlinux**
- works on various distros: **CentOS** (7.0+), Debian-based distributions (**Debian** 10/Buster+, **Ubuntu** 18.04+), **Archlinux**

- this one installs everything in a single directory (`/matrix` by default) and **doesn't "contaminate" your server** with files all over the place
@@ -82,8 +82,8 @@ Based on your setup, you have different ways to go about it:

#
# NOTE: these are in-container paths. `/matrix/ssl` on the host is mounted into the container
# at the same path (`/matrix/ssl`) by default, so if that's the path you need, it would be seamless.
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate: /matrix/ssl/config/live/matrix.<your-domain>/fullchain.pem
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key: /matrix/ssl/config/live/matrix.<your-domain>/privkey.pem
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate: /matrix/ssl/config/live/<your-domain>/fullchain.pem
matrix_nginx_proxy_proxy_matrix_federation_api_ssl_certificate_key: /matrix/ssl/config/live/<your-domain>/privkey.pem
```

If your files are not in `/matrix/ssl` but in some other location, you would need to mount them into the container:
@@ -23,12 +23,10 @@ To import, run this command (make sure to replace `<server-path-to-postgres-dump

```sh
ansible-playbook -i inventory/hosts setup.yml \
--extra-vars='postgres_default_import_database=synapse server_path_postgres_dump=<server-path-to-postgres-dump.sql>' \
--extra-vars='server_path_postgres_dump=<server-path-to-postgres-dump.sql>' \
--tags=import-postgres
```

We specify the `synapse` database as the default import database. If your dump is a single-database dump (`pg_dump`), then we need to tell it where to go to. If you're redefining `matrix_synapse_database_database` to something other than `synapse`, please adjust it here too. For database dumps spanning multiple databases (`pg_dumpall`), you can remove the `postgres_default_import_database` definition (but it doesn't hurt to keep it too).
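If you do have a single-database `pg_dump` and want to be explicit about where it goes, the override from the removed line can still be passed by hand; a sketch:

```sh
# Sketch: explicitly naming the target database for a single-database dump.
ansible-playbook -i inventory/hosts setup.yml \
  --extra-vars='postgres_default_import_database=synapse server_path_postgres_dump=<server-path-to-postgres-dump.sql>' \
  --tags=import-postgres
```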
**Note**: `<server-path-to-postgres-dump.sql>` must be a file path to a Postgres dump file on the server (not on your local machine!).

@@ -62,7 +60,7 @@ ALTER TABLE public.application_services_state OWNER TO synapse_user;

It can be worked around by changing the username to `synapse`, for example by using `sed`:

```Shell
$ sed -i "s/synapse_user/synapse/g" homeserver.sql"
$ sed -i "s/synapse_user/synapse/g" homeserver.sql
```

This uses sed to perform an 'in-place' (`-i`) replacement globally (`/g`), searching for `synapse_user` and replacing with `synapse` (`s/synapse_user/synapse`). If your database username was different, change `synapse_user` to that username instead.
@@ -1,25 +1,25 @@
# Installing

## 1. Installing the Matrix services

If you've [configured your DNS](configuring-dns.md) and have [configured the playbook](configuring-playbook.md), you can start the installation procedure.

Run this as-is to set up a server:
Run this command to install the Matrix services:

```bash
ansible-playbook -i inventory/hosts setup.yml --tags=setup-all
```

**Note**: if you don't use SSH keys for authentication, but rather a regular password, you may need to add `--ask-pass` to the above (and all other) Ansible commands.
The above command **doesn't start any services just yet** (another step does this later - below). Feel free to **re-run this setup command any time** you think something is off with the server configuration.

**Note**: if you **do** use SSH keys for authentication, **and** use a non-root user to *become* root (sudo), you may need to add `-K` (`--ask-become-pass`) to the above (and all other) Ansible commands.

The above command **doesn't start any services just yet** (another step does this later - below).

Feel free to **re-run this setup command any time** you think something is off with the server configuration.
**Notes**:
- if you **don't** use SSH keys for authentication, but rather a regular password, you may need to add `--ask-pass` to the above (and all other) Ansible commands.
- if you **do** use SSH keys for authentication, **and** use a non-root user to *become* root (sudo), you may need to add `-K` (`--ask-become-pass`) to the above (and all other) Ansible commands.
## Things you might want to do after installing
## 2. Things you might want to do after installing

After installing, but before starting the services, you may want to do additional things like:
**Before starting the services**, you may want to do additional things like:

- [Importing an existing SQLite database (from another Synapse installation)](importing-synapse-sqlite.md) (optional)

@@ -28,20 +28,22 @@ After installing, but before starting the services, you may want to do additiona

- [Importing `media_store` data files from an existing Synapse installation](importing-synapse-media-store.md) (optional)

## Starting the services
## 3. Starting the services

When you're ready to start the Matrix services (and set them up to auto-start in the future):
When you're ready to start the Matrix services (and set them up to auto-start in the future), run this command:

```bash
ansible-playbook -i inventory/hosts setup.yml --tags=start
```

Now that services are running, you need to **finalize the installation process** (required for federation to work!) by [Configuring Service Discovery via .well-known](configuring-well-known.md)
## 4. Finalize the installation

Now that services are running, you need to **finalize the installation process** (required for federation to work!) by [Configuring Service Discovery via .well-known](configuring-well-known.md).

## Things to do next
## 5. Things to do next

If you have started services and **finalized the installation process** (required for federation to work!) by [Configuring Service Discovery via .well-known](configuring-well-known.md), you can:
After you have started the services and **finalized the installation process** (required for federation to work!) by [Configuring Service Discovery via .well-known](configuring-well-known.md), you can:

- [check if services work](maintenance-checking-services.md)
- or [create your first Matrix user account](registering-users.md)
@@ -14,7 +14,7 @@ Table of contents:

## Purging old data with the Purge History API

You can use the **[Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst)** to delete old messages on a per-room basis. **This is destructive** (especially for non-federated rooms), because it means **people will no longer have access to history past a certain point**.
You can use the **[Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.md)** to delete old messages on a per-room basis. **This is destructive** (especially for non-federated rooms), because it means **people will no longer have access to history past a certain point**.

To make use of this API, **you'll need an admin access token** first. You can find your access token in the settings of some clients (like Element).
Alternatively, you can log in and obtain a new access token like this:
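The login call itself sits outside this hunk (only its URL shows up in the next hunk header); it follows the same pattern as the curl example shown earlier in this diff. A sketch, with the device name purely illustrative:

```
curl \
--data '{"identifier": {"type": "m.id.user", "user": "YOUR_MATRIX_USERNAME" }, "password": "YOUR_MATRIX_PASSWORD", "type": "m.login.password", "device_id": "purge-history-example", "initial_device_display_name": "purge-history-example"}' \
https://matrix.DOMAIN/_matrix/client/r0/login
```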
@@ -27,7 +27,7 @@ https://matrix.DOMAIN/_matrix/client/r0/login

Synapse's Admin API is not exposed to the internet by default. To expose it you will need to add `matrix_nginx_proxy_proxy_matrix_client_api_forwarded_location_synapse_admin_api_enabled: true` to your `vars.yml` file.

Follow the [Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst) documentation page for the actual purging instructions.
Follow the [Purge History API](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.md) documentation page for the actual purging instructions.

After deleting data, you may wish to run a [`FULL` Postgres `VACUUM`](./maintenance-postgres.md#vacuuming-postgresql).
@@ -4,8 +4,8 @@ To install Matrix services using this Ansible playbook, you need:

- (Recommended) An **x86** server ([What kind of server specs do I need?](faq.md#what-kind-of-server-specs-do-i-need)) running one of these operating systems:
  - **CentOS** (7 only for now; [8 is not yet supported](https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/300))
  - **Debian** (9/Stretch or newer)
  - **Ubuntu** (16.04 or newer, although [20.04 may be problematic](ansible.md#supported-ansible-versions))
  - **Debian** (10/Buster or newer)
  - **Ubuntu** (18.04 or newer, although [20.04 may be problematic](ansible.md#supported-ansible-versions))
  - **Archlinux**

Generally, newer is better. We only strive to support released stable versions of distributions, not betas or pre-releases. This playbook can take over your whole server or co-exist with other services that you have there.
@@ -22,10 +22,17 @@ List of roles where self-building the Docker image is currently possible:

- `matrix-mailer`
- `matrix-bridge-appservice-irc`
- `matrix-bridge-appservice-slack`
- `matrix-bridge-appservice-webhooks`
- `matrix-bridge-mautrix-facebook`
- `matrix-bridge-mautrix-hangouts`
- `matrix-bridge-mautrix-googlechat`
- `matrix-bridge-mautrix-telegram`
- `matrix-bridge-mautrix-signal`
- `matrix-bridge-mautrix-whatsapp`
- `matrix-bridge-mx-puppet-skype`
- `matrix-bot-mjolnir`
- `matrix-bot-matrix-reminder-bot`
- `matrix-email2matrix`

Adding self-building support to other roles is welcome. Feel free to contribute!
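As an illustration of how self-building is usually switched on, the pattern is a per-role variable in your `vars.yml`. The exact variable name differs between roles (some use `_container_image_self_build`, others `_container_self_build`), so treat the following as a sketch and check each role's defaults for the precise name:

```yaml
# inventory/host_vars/matrix.DOMAIN/vars.yml
# Force self-building of these bridge images instead of pulling prebuilt ones.
# Both variable names appear elsewhere in this changeset; other roles use a similar pattern.
matrix_mautrix_whatsapp_container_image_self_build: true
matrix_mautrix_telegram_container_self_build: true
```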
@@ -32,6 +32,7 @@

ProxyPreserveHost On
ProxyRequests Off
ProxyVia On
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}

# Keep some URIs free for different proxy/location
ProxyPassMatch ^/.well-known/matrix/client !

@@ -45,6 +46,14 @@

ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client retry=0 nocanon
ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client

# Proxy Admin API (necessary for Synapse-Admin)
# ProxyPass /_synapse/admin http://127.0.0.1:8008/_synapse/admin retry=0 nocanon
# ProxyPassReverse /_synapse/admin http://127.0.0.1:8008/_synapse/admin

# Proxy Synapse-Admin
# ProxyPass /synapse-admin http://127.0.0.1:8766 retry=0 nocanon
# ProxyPassReverse /synapse-admin http://127.0.0.1:8766

# Map /.well-known/matrix/client for client discovery
Alias /.well-known/matrix/client /matrix/static-files/.well-known/matrix/client

@@ -111,6 +120,7 @@ Listen 8448

ProxyPreserveHost On
ProxyRequests Off
ProxyVia On
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}

# Proxy all remaining traffic to the Synapse port
# Beware: In this example the local traffic goes to the local synapse server at 127.0.0.1
@ -14,7 +14,7 @@ matrix_domain: YOUR_BARE_DOMAIN_NAME_HERE
|
||||
#
|
||||
# In case SSL renewal fails at some point, you'll also get an email notification there.
|
||||
#
|
||||
# If you decide to use another method for managing SSL certifites (different than the default Let's Encrypt),
|
||||
# If you decide to use another method for managing SSL certificates (different than the default Let's Encrypt),
|
||||
# you won't be required to define this variable (see `docs/configuring-playbook-ssl-certificates.md`).
|
||||
#
|
||||
# Example value: someone@example.com
|
||||
|
||||
@ -41,6 +41,8 @@ matrix_awx_enabled: false
|
||||
|
||||
matrix_nginx_proxy_data_path: "{{ '/chroot/website' if (matrix_awx_enabled and not matrix_nginx_proxy_base_domain_homepage_enabled) else (matrix_nginx_proxy_base_path + '/data') }}"
|
||||
matrix_nginx_proxy_data_path_in_container: "{{ '/nginx-data/matrix-domain' if (matrix_awx_enabled and not matrix_nginx_proxy_base_domain_homepage_enabled) else '/nginx-data' }}"
|
||||
matrix_nginx_proxy_data_path_extension: "{{ '' if (matrix_awx_enabled and not matrix_nginx_proxy_base_domain_homepage_enabled) else '/matrix-domain' }}"
|
||||
matrix_nginx_proxy_base_domain_create_directory: "{{ not matrix_awx_enabled }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
@ -102,6 +104,8 @@ matrix_appservice_discord_database_password: "{{ matrix_synapse_macaroon_secret_
|
||||
# We don't enable bridges by default.
|
||||
matrix_appservice_webhooks_enabled: false
|
||||
|
||||
matrix_appservice_webhooks_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
# Normally, matrix-nginx-proxy is enabled and nginx can reach matrix-appservice-webhooks over the container network.
|
||||
# If matrix-nginx-proxy is not enabled, or you otherwise have a need for it, you can expose
|
||||
# matrix-appservice-webhooks' client-server port to the local host.
|
||||
@@ -214,6 +218,42 @@ matrix_appservice_irc_database_password: "{{ matrix_synapse_macaroon_secret_key

######################################################################


######################################################################
#
# matrix-bridge-beeper-linkedin
#
######################################################################

# We don't enable bridges by default.
matrix_beeper_linkedin_enabled: false

matrix_beeper_linkedin_systemd_required_services_list: |
  {{
    ['docker.service']
    +
    (['matrix-synapse.service'] if matrix_synapse_enabled else [])
    +
    (['matrix-postgres.service'] if matrix_postgres_enabled else [])
    +
    (['matrix-nginx-proxy.service'] if matrix_nginx_proxy_enabled else [])
  }}

matrix_beeper_linkedin_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'linked.as.token') | to_uuid }}"

matrix_beeper_linkedin_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'linked.hs.token') | to_uuid }}"

matrix_beeper_linkedin_login_shared_secret: "{{ matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret if matrix_synapse_ext_password_provider_shared_secret_auth_enabled else '' }}"

matrix_beeper_linkedin_bridge_presence: "{{ matrix_synapse_presence_enabled if matrix_synapse_enabled else true }}"

matrix_beeper_linkedin_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'maulinkedin.db') | to_uuid }}"

######################################################################
#
# /matrix-bridge-beeper-linkedin
#
######################################################################
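Since the block above ships with the bridge disabled and derives its tokens and database password from `matrix_synapse_macaroon_secret_key`, turning it on from your own `vars.yml` can be as small as the following sketch (a minimal example; the bridge accepts further options not shown here):

```yaml
# inventory/host_vars/matrix.DOMAIN/vars.yml
# Enable the beeper-linkedin bridge; the appservice/homeserver tokens and the
# database password are derived automatically by the defaults shown above.
matrix_beeper_linkedin_enabled: true
```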
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# matrix-bridge-mautrix-facebook
|
||||
@ -297,6 +337,47 @@ matrix_mautrix_hangouts_database_password: "{{ matrix_synapse_macaroon_secret_ke
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# matrix-bridge-mautrix-googlechat
|
||||
#
|
||||
######################################################################
|
||||
|
||||
# We don't enable bridges by default.
|
||||
matrix_mautrix_googlechat_enabled: false
|
||||
|
||||
matrix_mautrix_googlechat_container_image_self_build: "{{ matrix_architecture not in ['amd64', 'arm64'] }}"
|
||||
|
||||
matrix_mautrix_googlechat_systemd_required_services_list: |
|
||||
{{
|
||||
['docker.service']
|
||||
+
|
||||
(['matrix-synapse.service'] if matrix_synapse_enabled else [])
|
||||
+
|
||||
(['matrix-postgres.service'] if matrix_postgres_enabled else [])
|
||||
+
|
||||
(['matrix-nginx-proxy.service'] if matrix_nginx_proxy_enabled else [])
|
||||
}}
|
||||
|
||||
matrix_mautrix_googlechat_appservice_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'gc.as.token') | to_uuid }}"
|
||||
|
||||
matrix_mautrix_googlechat_homeserver_token: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'gc.hs.token') | to_uuid }}"
|
||||
|
||||
matrix_mautrix_googlechat_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:9007' }}"
|
||||
|
||||
matrix_mautrix_googlechat_login_shared_secret: "{{ matrix_synapse_ext_password_provider_shared_secret_auth_shared_secret if matrix_synapse_ext_password_provider_shared_secret_auth_enabled else '' }}"
|
||||
|
||||
# Postgres is the default, except if not using `matrix_postgres` (internal postgres)
|
||||
matrix_mautrix_googlechat_database_engine: "{{ 'postgres' if matrix_postgres_enabled else 'sqlite' }}"
|
||||
matrix_mautrix_googlechat_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'mau.gc.db') | to_uuid }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# /matrix-bridge-mautrix-googlechat
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# matrix-bridge-mautrix-instagram
|
||||
@ -374,13 +455,15 @@ matrix_mautrix_signal_login_shared_secret: "{{ matrix_synapse_ext_password_provi
|
||||
matrix_mautrix_signal_database_engine: 'postgres'
|
||||
matrix_mautrix_signal_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'mau.signal.db') | to_uuid }}"
|
||||
|
||||
matrix_mautrix_signal_container_self_build: "{{ matrix_architecture not in ['amd64', 'arm64'] }}"
|
||||
matrix_mautrix_signal_daemon_container_self_build: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# /matrix-bridge-mautrix-signal
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# matrix-bridge-mautrix-telegram
|
||||
@ -392,6 +475,8 @@ matrix_mautrix_telegram_enabled: false
|
||||
|
||||
# Images are multi-arch (amd64 and arm64, but not arm32).
|
||||
matrix_mautrix_telegram_container_self_build: "{{ matrix_architecture not in ['arm64', 'amd64'] }}"
|
||||
matrix_telegram_lottieconverter_container_self_build: "{{ matrix_architecture not in ['arm64', 'amd64'] }}"
|
||||
matrix_telegram_lottieconverter_container_self_build_mask_arch: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
matrix_mautrix_telegram_systemd_required_services_list: |
|
||||
{{
|
||||
@ -433,6 +518,8 @@ matrix_mautrix_telegram_database_password: "{{ matrix_synapse_macaroon_secret_ke
|
||||
# We don't enable bridges by default.
|
||||
matrix_mautrix_whatsapp_enabled: false
|
||||
|
||||
matrix_mautrix_whatsapp_container_image_self_build: "{{ matrix_architecture not in ['arm64', 'amd64'] }}"
|
||||
|
||||
matrix_mautrix_whatsapp_systemd_required_services_list: |
|
||||
{{
|
||||
['docker.service']
|
||||
@ -849,6 +936,7 @@ matrix_bot_matrix_reminder_bot_systemd_required_services_list: |
|
||||
# Postgres is the default, except if not using `matrix_postgres` (internal postgres)
|
||||
matrix_bot_matrix_reminder_bot_database_engine: "{{ 'postgres' if matrix_postgres_enabled else 'sqlite' }}"
|
||||
matrix_bot_matrix_reminder_bot_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'reminder.bot.db') | to_uuid }}"
|
||||
matrix_bot_matrix_reminder_bot_container_self_build: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
@ -893,6 +981,8 @@ matrix_bot_go_neb_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_en
|
||||
# We don't enable bots by default.
|
||||
matrix_bot_mjolnir_enabled: false
|
||||
|
||||
matrix_bot_mjolnir_container_image_self_build: "{{ matrix_architecture != 'amd64'}}"
|
||||
|
||||
matrix_bot_mjolnir_systemd_required_services_list: |
|
||||
{{
|
||||
['docker.service']
|
||||
@ -1072,6 +1162,8 @@ matrix_dynamic_dns_enabled: false
|
||||
|
||||
matrix_email2matrix_enabled: false
|
||||
|
||||
matrix_email2matrix_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# /matrix-email2matrix
|
||||
@ -1157,17 +1249,7 @@ matrix_mailer_container_image_self_build: "{{ matrix_architecture != 'amd64'}}"
|
||||
# If you wish to use the public identity servers (matrix.org, vector.im) instead of your own you may wish to disable this.
|
||||
matrix_ma1sd_enabled: true
|
||||
|
||||
# There's no prebuilt ma1sd image for the `arm32` architecture.
|
||||
# We're relying on self-building there.
|
||||
matrix_ma1sd_architecture: "{{
|
||||
{
|
||||
'amd64': 'amd64',
|
||||
'arm32': 'arm32',
|
||||
'arm64': 'arm64',
|
||||
}[matrix_architecture]
|
||||
}}"
|
||||
|
||||
matrix_ma1sd_container_image_self_build: "{{ matrix_architecture not in ['arm64', 'amd64'] }}"
|
||||
matrix_ma1sd_container_image_self_build: "{{ matrix_architecture != 'amd64' }}"
|
||||
|
||||
# Normally, matrix-nginx-proxy is enabled and nginx can reach ma1sd over the container network.
|
||||
# If matrix-nginx-proxy is not enabled, or you otherwise have a need for it, you can expose
|
||||
@ -1300,6 +1382,8 @@ matrix_nginx_proxy_synapse_media_repository_locations: "{{matrix_synapse_workers
|
||||
matrix_nginx_proxy_synapse_user_dir_locations: "{{ matrix_synapse_workers_user_dir_endpoints|default([]) }}"
|
||||
matrix_nginx_proxy_synapse_frontend_proxy_locations: "{{ matrix_synapse_workers_frontend_proxy_endpoints|default([]) }}"
|
||||
|
||||
matrix_nginx_proxy_proxy_synapse_workers_enabled_list: "{{ matrix_synapse_workers_enabled_list }}"
|
||||
|
||||
matrix_nginx_proxy_systemd_wanted_services_list: |
|
||||
{{
|
||||
(['matrix-synapse.service'] if matrix_synapse_enabled else [])
|
||||
@ -1416,6 +1500,12 @@ matrix_postgres_additional_databases: |
|
||||
'password': matrix_appservice_irc_database_password,
|
||||
}] if (matrix_appservice_irc_enabled and matrix_appservice_irc_database_engine == 'postgres' and matrix_appservice_irc_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_beeper_linkedin_database_name,
|
||||
'username': matrix_beeper_linkedin_database_username,
|
||||
'password': matrix_beeper_linkedin_database_password,
|
||||
}] if (matrix_beeper_linkedin_enabled and matrix_beeper_linkedin_database_engine == 'postgres' and matrix_beeper_linkedin_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_mautrix_facebook_database_name,
|
||||
'username': matrix_mautrix_facebook_database_username,
|
||||
@ -1428,6 +1518,12 @@ matrix_postgres_additional_databases: |
|
||||
'password': matrix_mautrix_hangouts_database_password,
|
||||
}] if (matrix_mautrix_hangouts_enabled and matrix_mautrix_hangouts_database_engine == 'postgres' and matrix_mautrix_hangouts_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_mautrix_googlechat_database_name,
|
||||
'username': matrix_mautrix_googlechat_database_username,
|
||||
'password': matrix_mautrix_googlechat_database_password,
|
||||
}] if (matrix_mautrix_googlechat_enabled and matrix_mautrix_googlechat_database_engine == 'postgres' and matrix_mautrix_googlechat_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_mautrix_instagram_database_name,
|
||||
'username': matrix_mautrix_instagram_database_username,
|
||||
@ -1506,18 +1602,12 @@ matrix_postgres_additional_databases: |
|
||||
'password': matrix_etherpad_database_password,
|
||||
}] if (matrix_etherpad_enabled and matrix_etherpad_database_engine == 'postgres' and matrix_etherpad_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_sygnal_database_name,
|
||||
'username': matrix_sygnal_database_username,
|
||||
'password': matrix_sygnal_database_password,
|
||||
}] if (matrix_sygnal_enabled and matrix_sygnal_database_engine == 'postgres' and matrix_sygnal_database_hostname == 'matrix-postgres') else [])
|
||||
+
|
||||
([{
|
||||
'name': matrix_prometheus_postgres_exporter_database_name,
|
||||
'username': matrix_prometheus_postgres_exporter_database_username,
|
||||
'password': matrix_prometheus_postgres_exporter_database_password,
|
||||
}] if (matrix_prometheus_postgres_exporter_enabled and matrix_prometheus_postgres_exporter_database_hostname == 'matrix-postgres') else [])
|
||||
|
||||
|
||||
}}
|
||||
|
||||
matrix_postgres_import_roles_to_ignore: |
|
||||
@ -1556,10 +1646,6 @@ matrix_sygnal_metrics_prometheus_enabled: "{{ matrix_prometheus_enabled }}"
|
||||
|
||||
matrix_sygnal_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:6000' }}"
|
||||
|
||||
# Postgres is the default, except if not using `matrix_postgres` (internal postgres)
|
||||
matrix_sygnal_database_engine: "{{ 'postgres' if matrix_postgres_enabled else 'sqlite' }}"
|
||||
matrix_sygnal_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'sygnal') | to_uuid }}"
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# /matrix-sygnal
|
||||
@ -1714,16 +1800,23 @@ matrix_synapse_email_notif_from: "Matrix <{{ matrix_mailer_sender_address }}>"
|
||||
|
||||
# Even if TURN doesn't support TLS (it does by default),
|
||||
# it doesn't hurt to try a secure connection anyway.
|
||||
#
|
||||
# When Let's Encrypt certificates are used (the default case),
|
||||
# we don't enable `turns` endpoints, because WebRTC in Element can't talk to them.
|
||||
# Learn more here: https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/1145
|
||||
matrix_synapse_turn_uris: |
|
||||
{{
|
||||
[]
|
||||
+
|
||||
[
|
||||
'turns:' + matrix_server_fqn_matrix + '?transport=udp',
|
||||
'turns:' + matrix_server_fqn_matrix + '?transport=tcp',
|
||||
] if matrix_coturn_enabled and matrix_coturn_tls_enabled and matrix_ssl_retrieval_method != 'lets-encrypt' else []
|
||||
+
|
||||
[
|
||||
'turn:' + matrix_server_fqn_matrix + '?transport=udp',
|
||||
'turn:' + matrix_server_fqn_matrix + '?transport=tcp',
|
||||
]
|
||||
if matrix_coturn_enabled
|
||||
else []
|
||||
] if matrix_coturn_enabled else []
|
||||
}}
|
||||
|
||||
matrix_synapse_turn_shared_secret: "{{ matrix_coturn_turn_static_auth_secret if matrix_coturn_enabled else '' }}"
|
||||
@ -1813,6 +1906,7 @@ matrix_prometheus_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_en
|
||||
|
||||
matrix_prometheus_scraper_synapse_enabled: "{{ matrix_synapse_enabled and matrix_synapse_metrics_enabled }}"
|
||||
matrix_prometheus_scraper_synapse_targets: ['matrix-synapse:{{ matrix_synapse_metrics_port }}']
|
||||
matrix_prometheus_scraper_synapse_workers_enabled_list: "{{ matrix_synapse_workers_enabled_list }}"
|
||||
matrix_prometheus_scraper_synapse_rules_synapse_tag: "{{ matrix_synapse_docker_image_tag }}"
|
||||
|
||||
matrix_prometheus_scraper_node_enabled: "{{ matrix_prometheus_node_exporter_enabled }}"
|
||||
|
||||
@ -11,7 +11,6 @@ echo "# Install it before using this script, or simply create your own passwords
|
||||
|
||||
echo ""
|
||||
|
||||
JICOFO_COMPONENT_SECRET=$(generatePassword)
|
||||
JICOFO_AUTH_PASSWORD=$(generatePassword)
|
||||
JVB_AUTH_PASSWORD=$(generatePassword)
|
||||
JIBRI_RECORDER_PASSWORD=$(generatePassword)
|
||||
@ -19,7 +18,6 @@ JIBRI_XMPP_PASSWORD=$(generatePassword)
|
||||
|
||||
echo "# Paste these variables into your inventory/host_vars/matrix.DOMAIN/vars.yml file:"
|
||||
echo ""
|
||||
echo "matrix_jitsi_jicofo_component_secret: "$JICOFO_COMPONENT_SECRET
|
||||
echo "matrix_jitsi_jicofo_auth_password: "$JICOFO_AUTH_PASSWORD
|
||||
echo "matrix_jitsi_jvb_auth_password: "$JVB_AUTH_PASSWORD
|
||||
echo "matrix_jitsi_jibri_recorder_password: "$JIBRI_RECORDER_PASSWORD
|
||||
|
||||
@ -8,10 +8,10 @@
|
||||
"required": true,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ sftp_auth_method | string }}",
|
||||
"default": "{{ awx_sftp_auth_method | string }}",
|
||||
"choices": "Disabled\nPassword\nSSH Key",
|
||||
"new_question": true,
|
||||
"variable": "sftp_auth_method",
|
||||
"variable": "awx_sftp_auth_method",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -20,10 +20,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 64,
|
||||
"default": "{{ sftp_password }}",
|
||||
"default": "{{ awx_sftp_password }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "sftp_password",
|
||||
"variable": "awx_sftp_password",
|
||||
"type": "password"
|
||||
},
|
||||
{
|
||||
@ -32,10 +32,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 16384,
|
||||
"default": "{{ sftp_public_key }}",
|
||||
"default": "{{ awx_sftp_public_key }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "sftp_public_key",
|
||||
"variable": "awx_sftp_public_key",
|
||||
"type": "text"
|
||||
}
|
||||
]
|
||||
|
||||
@ -8,12 +8,11 @@
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ matrix_awx_backup_enabled | string | lower }}",
|
||||
"default": "{{ awx_backup_enabled | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "matrix_awx_backup_enabled",
|
||||
"variable": "awx_backup_enabled",
|
||||
"type": "multiplechoice"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
66
roles/matrix-awx/surveys/bridge_discord_appservice.json.j2
Normal file
@ -0,0 +1,66 @@
|
||||
{
|
||||
"name": "Bridge Discord Appservice",
|
||||
"description": "Enables a private bridge you can use to connect Matrix rooms to Discord.",
|
||||
"spec": [
|
||||
{
|
||||
"question_name": "Enable Discord AppService Bridge",
|
||||
"question_description": "Enables a private bridge you can use to connect Matrix rooms to Discord.",
|
||||
"required": true,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ matrix_appservice_discord_enabled | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "matrix_appservice_discord_enabled",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
"question_name": "Discord Client ID",
|
||||
"question_description": "The OAuth2 'CLIENT ID' which can be found in the 'OAuth2' tab of your new discord application: https://discord.com/developers/applications",
|
||||
"required": true,
|
||||
"min": 0,
|
||||
"max": 128,
|
||||
"default": "{{ matrix_appservice_discord_client_id | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_appservice_discord_client_id",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Discord Bot Token",
|
||||
"question_description": "The Bot 'TOKEN' which can be found in the 'Bot' tab of your new discord application: https://discord.com/developers/applications",
|
||||
"required": true,
|
||||
"min": 0,
|
||||
"max": 256,
|
||||
"default": "{{ matrix_appservice_discord_bot_token | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_appservice_discord_bot_token",
|
||||
"type": "password"
|
||||
},
|
||||
{
|
||||
"question_name": "Auto-Admin Matrix User",
|
||||
"question_description": "The username you would like to be automatically joined and promoted to administrator (PL100) in bridged rooms. Exclude the '@' and server name postfix. So to create @stevo:example.org just enter 'stevo'.",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 1024,
|
||||
"default": "",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "awx_appservice_discord_admin_user",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Auto-Admin Rooms",
|
||||
"question_description": "A list of rooms you want the user to be automatically joined and promoted to administrator (PL100) in. These should be the internal IDs (for example '!axfBUsKhfAjSMBdjKX:example.org') separated by newlines.",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 4096,
|
||||
"default": "",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "awx_appservice_discord_admin_rooms",
|
||||
"type": "textarea"
|
||||
}
|
||||
]
|
||||
}
|
||||
@ -20,10 +20,10 @@
|
||||
"required": true,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ matrix_corporal_policy_provider_mode }}",
|
||||
"default": "{{ awx_corporal_policy_provider_mode }}",
|
||||
"choices": "Simple Static File\nHTTP Pull Mode (API Enabled)\nHTTP Push Mode (API Enabled)",
|
||||
"new_question": true,
|
||||
"variable": "matrix_corporal_policy_provider_mode",
|
||||
"variable": "awx_corporal_policy_provider_mode",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -34,7 +34,7 @@
|
||||
"max": 65536,
|
||||
"default": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_corporal_simple_static_config",
|
||||
"variable": "awx_corporal_simple_static_config",
|
||||
"type": "textarea"
|
||||
},
|
||||
{
|
||||
@ -43,9 +43,9 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 4096,
|
||||
"default": "{{ matrix_corporal_pull_mode_uri }}",
|
||||
"default": "{{ awx_corporal_pull_mode_uri }}",
|
||||
"new_question": true,
|
||||
"variable": "matrix_corporal_pull_mode_uri",
|
||||
"variable": "awx_corporal_pull_mode_uri",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
@ -54,10 +54,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 256,
|
||||
"default": "{{ matrix_corporal_pull_mode_token }}",
|
||||
"default": "{{ awx_corporal_pull_mode_token }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_corporal_pull_mode_token",
|
||||
"variable": "awx_corporal_pull_mode_token",
|
||||
"type": "password"
|
||||
},
|
||||
{
|
||||
@ -78,10 +78,10 @@
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ matrix_corporal_raise_ratelimits }}",
|
||||
"default": "{{ awx_corporal_raise_ratelimits }}",
|
||||
"choices": "Normal\nRaised",
|
||||
"new_question": true,
|
||||
"variable": "matrix_corporal_raise_ratelimits",
|
||||
"variable": "awx_corporal_raise_ratelimits",
|
||||
"type": "multiplechoice"
|
||||
}
|
||||
]
|
||||
|
||||
@ -20,10 +20,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 65536,
|
||||
"default": {{ ext_dimension_users_raw_final | to_json }},
|
||||
"default": {{ awx_dimension_users_final | to_json }},
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_dimension_users_raw",
|
||||
"variable": "awx_dimension_users",
|
||||
"type": "textarea"
|
||||
}
|
||||
]
|
||||
|
||||
@ -14,18 +14,6 @@
|
||||
"variable": "matrix_client_element_enabled",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Branding for Web Client",
|
||||
"question_description": "Sets the 'branding' seen in the tab and on the welcome page to a custom value.",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 256,
|
||||
"default": "{{ matrix_client_element_brand }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_client_element_brand",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Theme for Web Client",
|
||||
"question_description": "Sets the default theme for the web client, can be changed later by individual users.",
|
||||
@ -38,18 +26,78 @@
|
||||
"variable": "matrix_client_element_default_theme",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Branding for Web Client",
|
||||
"question_description": "Sets the 'branding' seen in the tab and on the welcome page to a custom value.Leaving this field blank will cause the default branding will be used: 'Element'",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 256,
|
||||
"default": "{{ matrix_client_element_brand | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_client_element_brand",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Welcome Page Background",
|
||||
"question_description": "URL to Wallpaper, shown in background of the welcome page. Must be a 'https' link, otherwise it won't be set.",
|
||||
"question_description": "Sets the background image on the welcome page, you should enter a URL to the image you want to use. Must be a 'https' link, otherwise it won't be set. Leaving this field blank will cause the default background to be used.",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 1024,
|
||||
"default": "{{ matrix_client_element_branding_welcomeBackgroundUrl }}",
|
||||
"default": "{{ matrix_client_element_branding_welcomeBackgroundUrl | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_client_element_branding_welcomeBackgroundUrl",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Welcome Page Logo",
|
||||
"question_description": "Sets the logo found on the welcome and login page, must be a valid https link to your logo, the logo itself should be a square vector image (SVG). Leaving this field blank will cause the default Element logo to be used.",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 1024,
|
||||
"default": "{{ matrix_client_element_welcome_logo | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_client_element_welcome_logo",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Welcome Page Logo URL",
|
||||
"question_description": "Sets the URL link the welcome page logo leads to, must be a valid https link. Leaving this field blank will cause this default link to be used: 'https://element.io'",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 1024,
|
||||
"default": "{{ matrix_client_element_welcome_logo_link | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_client_element_welcome_logo_link",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Welcome Page Headline",
|
||||
"question_description": "Sets the headline seen on the welcome page. Leaving this field blank will cause this default headline to be used: 'Welcome to Element!'",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 512,
|
||||
"default": "{{ awx_matrix_client_element_welcome_headline | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "awx_matrix_client_element_welcome_headline",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Set Welcome Page Text",
|
||||
"question_description": "Sets the text seen on the welcome page. Leaving this field blank will cause this default headline to be used: 'Decentralised, encrypted chat & collaboration powered by [Matrix]'",
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 2048,
|
||||
"default": "{{ awx_matrix_client_element_welcome_text | trim }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "awx_matrix_client_element_welcome_text",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
"question_name": "Show Registration Button",
|
||||
"question_description": "If you show the registration button on the welcome page.",
|
||||
|
||||
@ -8,10 +8,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 2048,
|
||||
"default": "{{ element_subdomain }}",
|
||||
"default": "{{ awx_element_subdomain }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "element_subdomain",
|
||||
"variable": "awx_element_subdomain",
|
||||
"type": "text"
|
||||
}
|
||||
]
|
||||
|
||||
19
roles/matrix-awx/surveys/configure_email_relay.json.j2
Normal file
@ -0,0 +1,19 @@
|
||||
{
|
||||
"name": "Configure Email Relay",
|
||||
"description": "Enable MailGun relay to increase verification email reliability.",
|
||||
"spec": [
|
||||
{
|
||||
"question_name": "Enable Email Relay",
|
||||
"question_description": "Enables the MailGun email relay server, enabling this will increase the reliability of your email verification.",
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ matrix_mailer_relay_use | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "matrix_mailer_relay_use",
|
||||
"type": "multiplechoice"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@ -20,10 +20,10 @@
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ ext_matrix_ma1sd_auth_store }}",
|
||||
"default": "{{ awx_matrix_ma1sd_auth_store }}",
|
||||
"choices": "Synapse Internal\nLDAP/AD",
|
||||
"new_question": true,
|
||||
"variable": "ext_matrix_ma1sd_auth_store",
|
||||
"variable": "awx_matrix_ma1sd_auth_store",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -32,9 +32,9 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 65536,
|
||||
"default": {{ ext_matrix_ma1sd_configuration_extension_yaml | to_json }},
|
||||
"default": {{ awx_matrix_ma1sd_configuration_extension_yaml | to_json }},
|
||||
"new_question": true,
|
||||
"variable": "ext_matrix_ma1sd_configuration_extension_yaml",
|
||||
"variable": "awx_matrix_ma1sd_configuration_extension_yaml",
|
||||
"type": "textarea"
|
||||
}
|
||||
]
|
||||
|
||||
@ -92,10 +92,10 @@
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ ext_registrations_require_3pid | string | lower }}",
|
||||
"default": "{{ awx_registrations_require_3pid | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "ext_registrations_require_3pid",
|
||||
"variable": "awx_registrations_require_3pid",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -107,7 +107,7 @@
|
||||
"default": "",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_matrix_synapse_registration_shared_secret",
|
||||
"variable": "awx_matrix_synapse_registration_shared_secret",
|
||||
"type": "password"
|
||||
},
|
||||
{
|
||||
@ -119,7 +119,7 @@
|
||||
"default": "{{ matrix_synapse_max_upload_size_mb }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_synapse_max_upload_size_mb_raw",
|
||||
"variable": "awx_synapse_max_upload_size_mb",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
@ -128,10 +128,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 65536,
|
||||
"default": {{ ext_url_preview_accept_language_default | to_json }},
|
||||
"default": {{ awx_url_preview_accept_language_default | to_json }},
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_url_preview_accept_language_raw",
|
||||
"variable": "awx_url_preview_accept_language",
|
||||
"type": "textarea"
|
||||
},
|
||||
{
|
||||
@ -140,10 +140,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 65536,
|
||||
"default": {{ ext_federation_whitelist_raw | to_json }},
|
||||
"default": {{ awx_federation_whitelist | to_json }},
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_federation_whitelist_raw",
|
||||
"variable": "awx_federation_whitelist",
|
||||
"type": "textarea"
|
||||
},
|
||||
{
|
||||
@ -152,10 +152,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 65536,
|
||||
"default": {{ matrix_synapse_auto_join_rooms_raw | to_json }},
|
||||
"default": {{ awx_synapse_auto_join_rooms | to_json }},
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "matrix_synapse_auto_join_rooms_raw",
|
||||
"variable": "awx_synapse_auto_join_rooms",
|
||||
"type": "textarea"
|
||||
},
|
||||
{
|
||||
@ -164,10 +164,10 @@
|
||||
"required": false,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ ext_enable_registration_captcha | string | lower }}",
|
||||
"default": "{{ awx_enable_registration_captcha | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "ext_enable_registration_captcha",
|
||||
"variable": "awx_enable_registration_captcha",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -176,10 +176,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 40,
|
||||
"default": "{{ ext_recaptcha_public_key }}",
|
||||
"default": "{{ awx_recaptcha_public_key }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_recaptcha_public_key",
|
||||
"variable": "awx_recaptcha_public_key",
|
||||
"type": "text"
|
||||
},
|
||||
{
|
||||
@ -188,10 +188,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 40,
|
||||
"default": "{{ ext_recaptcha_private_key }}",
|
||||
"default": "{{ awx_recaptcha_private_key }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "ext_recaptcha_private_key",
|
||||
"variable": "awx_recaptcha_private_key",
|
||||
"type": "text"
|
||||
}
|
||||
]
|
||||
|
||||
@ -8,10 +8,10 @@
|
||||
"required": true,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ customise_base_domain_website | string | lower }}",
|
||||
"default": "{{ awx_customise_base_domain_website | string | lower }}",
|
||||
"choices": "true\nfalse",
|
||||
"new_question": true,
|
||||
"variable": "customise_base_domain_website",
|
||||
"variable": "awx_customise_base_domain_website",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -20,10 +20,10 @@
|
||||
"required": true,
|
||||
"min": null,
|
||||
"max": null,
|
||||
"default": "{{ sftp_auth_method | string }}",
|
||||
"default": "{{ awx_sftp_auth_method | string }}",
|
||||
"choices": "Disabled\nPassword\nSSH Key",
|
||||
"new_question": true,
|
||||
"variable": "sftp_auth_method",
|
||||
"variable": "awx_sftp_auth_method",
|
||||
"type": "multiplechoice"
|
||||
},
|
||||
{
|
||||
@ -32,10 +32,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 64,
|
||||
"default": "{{ sftp_password }}",
|
||||
"default": "{{ awx_sftp_password }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "sftp_password",
|
||||
"variable": "awx_sftp_password",
|
||||
"type": "password"
|
||||
},
|
||||
{
|
||||
@ -44,10 +44,10 @@
|
||||
"required": false,
|
||||
"min": 0,
|
||||
"max": 16384,
|
||||
"default": "{{ sftp_public_key }}",
|
||||
"default": "{{ awx_sftp_public_key }}",
|
||||
"choices": "",
|
||||
"new_question": true,
|
||||
"variable": "sftp_public_key",
|
||||
"variable": "awx_sftp_public_key",
|
||||
"type": "text"
|
||||
}
|
||||
]
|
||||
|
||||
@ -7,7 +7,7 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# AWX Settings Start'
|
||||
with_dict:
|
||||
'matrix_awx_backup_enabled': '{{ matrix_awx_backup_enabled }}'
|
||||
'awx_backup_enabled': '{{ awx_backup_enabled }}'
|
||||
tags: use-survey
|
||||
|
||||
- name: Save new 'Backup Server' survey.json to the AWX tower, template
|
||||
@ -24,14 +24,6 @@
|
||||
mode: '0660'
|
||||
tags: use-survey
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
tags: use-survey
|
||||
|
||||
- name: Recreate 'Backup Server' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -49,15 +41,11 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
tags: use-survey
|
||||
|
||||
- name: Run export.sh if this job template is run by the client
|
||||
command: /bin/sh /root/export.sh
|
||||
tags: use-survey
|
||||
|
||||
- name: Include vars in matrix_vars.yml
|
||||
include_vars:
|
||||
file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
|
||||
@ -70,14 +58,43 @@
|
||||
mode: '0660'
|
||||
tags: use-survey
|
||||
|
||||
- name: Perform the borg backup
|
||||
command: borgmatic
|
||||
when: matrix_awx_backup_enabled|bool
|
||||
- name: Run initial backup of /matrix/ and snapshot the database simultaneously
|
||||
command: "{{ item }}"
|
||||
with_items:
|
||||
- borgmatic -c /root/.config/borgmatic/config_1.yaml
|
||||
- /bin/sh /usr/local/bin/awx-export-service.sh 1 0
|
||||
register: _create_instances
|
||||
async: 3600 # Maximum runtime in seconds.
|
||||
poll: 0 # Fire and continue (never poll)
|
||||
when: awx_backup_enabled|bool
|
||||
|
||||
- name: Wait for both of these jobs to finish
|
||||
async_status:
|
||||
jid: "{{ item.ansible_job_id }}"
|
||||
register: _jobs
|
||||
until: _jobs.finished
|
||||
delay: 5 # Check every 5 seconds.
|
||||
retries: 720 # Retry for a full hour.
|
||||
with_items: "{{ _create_instances.results }}"
|
||||
when: awx_backup_enabled|bool
|
||||
|
||||
- name: Perform borg backup of postgres dump
|
||||
command: borgmatic -c /root/.config/borgmatic/config_2.yaml
|
||||
when: awx_backup_enabled|bool
|
||||
|
||||
- name: Delete the AWX session token for executing modules
|
||||
awx.awx.tower_token:
|
||||
description: 'AWX Session Token'
|
||||
scope: "write"
|
||||
state: absent
|
||||
existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
|
||||
- name: Set boolean value to exit playbook
|
||||
set_fact:
|
||||
end_playbook: true
|
||||
awx_end_playbook: true
|
||||
|
||||
- name: End playbook if this task list is called.
|
||||
meta: end_play
|
||||
when: end_playbook is defined and end_playbook|bool
|
||||
when: awx_end_playbook is defined and awx_end_playbook|bool
|
||||
|
||||
57
roles/matrix-awx/tasks/bridge_discord_appservice.yml
Normal file
@ -0,0 +1,57 @@
|
||||
|
||||
- name: Record Bridge Discord AppService variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Bridge Discord AppService Start'
|
||||
with_dict:
|
||||
'matrix_appservice_discord_enabled': '{{ matrix_appservice_discord_enabled }}'
|
||||
'matrix_appservice_discord_client_id': '{{ matrix_appservice_discord_client_id }}'
|
||||
'matrix_appservice_discord_bot_token': '{{ matrix_appservice_discord_bot_token }}'
|
||||
|
||||
- name: If the raw input is not empty, start constructing the parsed awx_appservice_discord_admin_rooms list
|
||||
set_fact:
|
||||
awx_appservice_discord_admin_rooms_array: |-
|
||||
{{ awx_appservice_discord_admin_rooms.splitlines() | to_json }}
|
||||
when: awx_appservice_discord_admin_rooms | trim | length > 0
|
||||
|
||||
- name: Promote user to administrator (PL100) in each room
|
||||
command: |
|
||||
docker exec -i matrix-appservice-discord /bin/sh -c 'cp /cfg/registration.yaml /tmp/discord-registration.yaml && cd /tmp && node /build/tools/adminme.js -c /cfg/config.yaml -m "{{ item.1 }}" -u "@{{ awx_appservice_discord_admin_user }}:{{ matrix_domain }}" -p 100'
|
||||
with_indexed_items:
|
||||
- "{{ awx_appservice_discord_admin_rooms_array }}"
|
||||
when: ( awx_appservice_discord_admin_rooms | trim | length > 0 ) and ( awx_appservice_discord_admin_user is defined )
|
||||
|
||||
- name: Save new 'Bridge Discord Appservice' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
template:
|
||||
src: 'roles/matrix-awx/surveys/bridge_discord_appservice.json.j2'
|
||||
dest: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/bridge_discord_appservice.json'
|
||||
|
||||
- name: Copy new 'Bridge Discord Appservice' survey.json to target machine
|
||||
copy:
|
||||
src: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/bridge_discord_appservice.json'
|
||||
dest: '/matrix/awx/bridge_discord_appservice.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Recreate 'Bridge Discord Appservice' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
name: "{{ matrix_domain }} - 3 - Bridge Discord AppService"
|
||||
description: "Enables a private bridge you can use to connect Matrix rooms to Discord."
|
||||
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
|
||||
job_type: run
|
||||
job_tags: "start,setup-all,bridge-discord-appservice"
|
||||
inventory: "{{ member_id }}"
|
||||
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
|
||||
playbook: setup.yml
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
survey_enabled: true
|
||||
survey_spec: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/bridge_discord_appservice.json') }}"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
10
roles/matrix-awx/tasks/create_session_token.yml
Normal file
@ -0,0 +1,10 @@
|
||||
|
||||
- name: Create an AWX session token for executing modules
|
||||
awx.awx.tower_token:
|
||||
description: 'AWX Session Token'
|
||||
scope: "write"
|
||||
state: present
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_master_token }}"
|
||||
register: awx_session_token
|
||||
no_log: True
|
||||
@ -6,26 +6,35 @@
|
||||
|
||||
- name: Set admin bool to zero
|
||||
set_fact:
|
||||
admin_bool: 0
|
||||
when: admin_access == 'false'
|
||||
awx_admin_bool: 0
|
||||
when: awx_admin_access == 'false'
|
||||
|
||||
- name: Examine if server admin set
|
||||
set_fact:
|
||||
admin_bool: 1
|
||||
when: admin_access == 'true'
|
||||
|
||||
- name: Set boolean value to exit playbook
|
||||
set_fact:
|
||||
end_playbook: true
|
||||
awx_admin_bool: 1
|
||||
when: awx_admin_access == 'true'
|
||||
|
||||
- name: Create user account
|
||||
command: |
|
||||
/usr/local/bin/matrix-synapse-register-user {{ new_username | quote }} {{ new_password | quote }} {{ admin_bool }}
|
||||
register: cmd
|
||||
/usr/local/bin/matrix-synapse-register-user {{ awx_new_username | quote }} {{ awx_new_password | quote }} {{ awx_admin_bool }}
|
||||
register: awx_cmd_output
|
||||
|
||||
- name: Delete the AWX session token for executing modules
|
||||
awx.awx.tower_token:
|
||||
description: 'AWX Session Token'
|
||||
scope: "write"
|
||||
state: absent
|
||||
existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
|
||||
- name: Set boolean value to exit playbook
|
||||
set_fact:
|
||||
awx_end_playbook: true
|
||||
|
||||
- name: Result
|
||||
debug: msg="{{ cmd.stdout }}"
|
||||
debug: msg="{{ awx_cmd_output.stdout }}"
|
||||
|
||||
- name: End playbook if this task list is called.
|
||||
meta: end_play
|
||||
when: end_playbook is defined and end_playbook|bool
|
||||
when: awx_end_playbook is defined and awx_end_playbook|bool
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Enable index.html creation if user doesn't wish to customise base domain
|
||||
delegate_to: 127.0.0.1
|
||||
@ -8,7 +9,7 @@
|
||||
insertafter: '# Base Domain Settings Start'
|
||||
with_dict:
|
||||
'matrix_nginx_proxy_base_domain_homepage_enabled': 'true'
|
||||
when: (customise_base_domain_website is defined) and not customise_base_domain_website|bool
|
||||
when: (awx_customise_base_domain_website is defined) and not awx_customise_base_domain_website|bool
|
||||
|
||||
- name: Disable index.html creation to allow multi-file site if user does wish to customise base domain
|
||||
delegate_to: 127.0.0.1
|
||||
@ -19,7 +20,7 @@
|
||||
insertafter: '# Base Domain Settings Start'
|
||||
with_dict:
|
||||
'matrix_nginx_proxy_base_domain_homepage_enabled': 'false'
|
||||
when: (customise_base_domain_website is defined) and customise_base_domain_website|bool
|
||||
when: (awx_customise_base_domain_website is defined) and awx_customise_base_domain_website|bool
|
||||
|
||||
- name: Record custom 'Customise Website + Access Export' variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -29,9 +30,9 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Custom Settings Start'
|
||||
with_dict:
|
||||
'sftp_auth_method': '"{{ sftp_auth_method }}"'
|
||||
'sftp_password': '"{{ sftp_password }}"'
|
||||
'sftp_public_key': '"{{ sftp_public_key }}"'
|
||||
'awx_sftp_auth_method': '"{{ awx_sftp_auth_method }}"'
|
||||
'awx_sftp_password': '"{{ awx_sftp_password }}"'
|
||||
'awx_sftp_public_key': '"{{ awx_sftp_public_key }}"'
|
||||
|
||||
- name: Record custom 'Customise Website + Access Export' variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -41,8 +42,8 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Custom Settings Start'
|
||||
with_dict:
|
||||
'customise_base_domain_website': '{{ customise_base_domain_website }}'
|
||||
when: customise_base_domain_website is defined
|
||||
'awx_customise_base_domain_website': '{{ awx_customise_base_domain_website }}'
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Reload vars in matrix_vars.yml
|
||||
include_vars:
|
||||
@ -54,35 +55,28 @@
|
||||
template:
|
||||
src: './roles/matrix-awx/surveys/configure_website_access_export.json.j2'
|
||||
dest: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_website_access_export.json'
|
||||
when: customise_base_domain_website is defined
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Copy new 'Customise Website + Access Export' survey.json to target machine
|
||||
copy:
|
||||
src: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_website_access_export.json'
|
||||
dest: '/matrix/awx/configure_website_access_export.json'
|
||||
mode: '0660'
|
||||
when: customise_base_domain_website is defined
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Save new 'Customise Website + Access Export' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
template:
|
||||
src: './roles/matrix-awx/surveys/access_export.json.j2'
|
||||
dest: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/access_export.json'
|
||||
when: customise_base_domain_website is undefined
|
||||
when: awx_customise_base_domain_website is undefined
|
||||
|
||||
- name: Copy new 'Customise Website + Access Export' survey.json to target machine
|
||||
copy:
|
||||
src: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/access_export.json'
|
||||
dest: '/matrix/awx/access_export.json'
|
||||
mode: '0660'
|
||||
when: customise_base_domain_website is undefined
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
when: awx_customise_base_domain_website is undefined
|
||||
|
||||
- name: Recreate 'Configure Website + Access Export' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -101,10 +95,10 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: customise_base_domain_website is defined
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Recreate 'Access Export' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -123,44 +117,44 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: customise_base_domain_website is undefined
|
||||
when: awx_customise_base_domain_website is undefined
|
||||
|
||||
- name: If user doesn't define an awx_sftp_password, create a disabled 'sftp' account
|
||||
user:
|
||||
name: sftp
|
||||
comment: SFTP user to set custom web files and access servers export
|
||||
shell: /bin/false
|
||||
home: /home/sftp
|
||||
group: matrix
|
||||
password: '*'
|
||||
update_password: always
|
||||
when: awx_sftp_password|length == 0
|
||||
|
||||
- name: If user defines awx_sftp_password, enable account and set password on 'sftp' account
|
||||
user:
|
||||
name: sftp
|
||||
comment: SFTP user to set custom web files and access servers export
|
||||
shell: /bin/false
|
||||
home: /home/sftp
|
||||
group: matrix
|
||||
password: "{{ awx_sftp_password | password_hash('sha512') }}"
|
||||
update_password: always
|
||||
when: awx_sftp_password|length > 0
|
||||
|
||||
- name: Ensure group "sftp" exists
|
||||
group:
|
||||
name: sftp
|
||||
state: present
|
||||
|
||||
- name: If user doesn't define a sftp_password, create a disabled 'sftp' account
|
||||
user:
|
||||
name: sftp
|
||||
comment: SFTP user to set custom web files and access servers export
|
||||
shell: /bin/false
|
||||
home: /home/sftp
|
||||
group: sftp
|
||||
password: '*'
|
||||
update_password: always
|
||||
when: sftp_password|length == 0
|
||||
|
||||
- name: If user defines sftp_password, enable account and set password on 'sftp' account
|
||||
user:
|
||||
name: sftp
|
||||
comment: SFTP user to set custom web files and access servers export
|
||||
shell: /bin/false
|
||||
home: /home/sftp
|
||||
group: sftp
|
||||
password: "{{ sftp_password | password_hash('sha512') }}"
|
||||
update_password: always
|
||||
when: sftp_password|length > 0
|
||||
|
||||
- name: adding existing user 'sftp' to group matrix
|
||||
user:
|
||||
name: sftp
|
||||
groups: matrix
|
||||
groups: sftp
|
||||
append: yes
|
||||
when: customise_base_domain_website is defined
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Create the ro /chroot directory with sticky bit if it doesn't exist. (/chroot/website has matrix:matrix permissions and is mounted to nginx container)
|
||||
file:
|
||||
@ -176,8 +170,8 @@
|
||||
state: directory
|
||||
owner: matrix
|
||||
group: matrix
|
||||
mode: '0574'
|
||||
when: customise_base_domain_website is defined
|
||||
mode: '0770'
|
||||
when: awx_customise_base_domain_website is defined
|
||||
|
||||
- name: Ensure /chroot/export location exists
|
||||
file:
|
||||
@ -209,19 +203,19 @@
|
||||
- name: Insert public SSH key into authorized_keys file
|
||||
lineinfile:
|
||||
path: /home/sftp/.ssh/authorized_keys
|
||||
line: "{{ sftp_public_key }}"
|
||||
line: "{{ awx_sftp_public_key }}"
|
||||
owner: sftp
|
||||
group: sftp
|
||||
mode: '0644'
|
||||
when: (sftp_public_key | length > 0) and (sftp_auth_method == "SSH Key")
|
||||
|
||||
- name: Alter SSH Subsystem State 1
|
||||
when: (awx_sftp_public_key | length > 0) and (awx_sftp_auth_method == "SSH Key")
|
||||
|
||||
- name: Remove any existing Subsystem lines
|
||||
lineinfile:
|
||||
path: /etc/ssh/sshd_config
|
||||
line: "Subsystem sftp /usr/lib/openssh/sftp-server"
|
||||
state: absent
|
||||
regexp: '^Subsystem'
|
||||
|
||||
- name: Alter SSH Subsystem State 2
|
||||
- name: Set SSH Subsystem State
|
||||
lineinfile:
|
||||
path: /etc/ssh/sshd_config
|
||||
insertafter: "^# override default of no subsystems"
|
||||
@ -239,7 +233,7 @@
|
||||
AllowTcpForwarding no
|
||||
PasswordAuthentication yes
|
||||
AuthorizedKeysFile /home/sftp/.ssh/authorized_keys
|
||||
when: sftp_auth_method == "Disabled"
|
||||
when: awx_sftp_auth_method == "Disabled"
|
||||
|
||||
- name: Add SSH Match User section for password auth
|
||||
blockinfile:
|
||||
@ -252,7 +246,7 @@
|
||||
X11Forwarding no
|
||||
AllowTcpForwarding no
|
||||
PasswordAuthentication yes
|
||||
when: sftp_auth_method == "Password"
|
||||
when: awx_sftp_auth_method == "Password"
|
||||
|
||||
- name: Add SSH Match User section for publickey auth
|
||||
blockinfile:
|
||||
@ -265,7 +259,7 @@
|
||||
X11Forwarding no
|
||||
AllowTcpForwarding no
|
||||
AuthorizedKeysFile /home/sftp/.ssh/authorized_keys
|
||||
when: sftp_auth_method == "SSH Key"
|
||||
when: awx_sftp_auth_method == "SSH Key"
|
||||
|
||||
- name: Restart service ssh.service
|
||||
service:
|
||||
|
||||
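The hunks above only show the tail of the 'Match User' blocks written into /etc/ssh/sshd_config. For orientation, below is a minimal sketch of such a blockinfile task for a chrooted, SFTP-only user; the ChrootDirectory and ForceCommand values are assumptions for illustration and may differ from what the playbook actually templates.

- name: Add an SSH Match User section (illustrative sketch only)
  blockinfile:
    path: /etc/ssh/sshd_config
    marker: "# {mark} ANSIBLE MANAGED BLOCK sftp"
    block: |
      Match User sftp
          ChrootDirectory /chroot
          ForceCommand internal-sftp
          X11Forwarding no
          AllowTcpForwarding no
          PasswordAuthentication yes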
10
roles/matrix-awx/tasks/delete_session_token.yml
Normal file
@ -0,0 +1,10 @@
---

- name: Delete the AWX session token for executing modules
  awx.awx.tower_token:
    description: 'AWX Session Token'
    scope: "write"
    state: absent
    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
43
roles/matrix-awx/tasks/export_server.yml
Normal file
@ -0,0 +1,43 @@
---

- name: Run export of /matrix/ and snapshot the database simultaneously
  command: "{{ item }}"
  with_items:
    - /bin/sh /usr/local/bin/awx-export-service.sh 1 0
    - /bin/sh /usr/local/bin/awx-export-service.sh 0 1
  register: awx_create_instances
  async: 3600  # Maximum runtime in seconds.
  poll: 0      # Fire and continue (never poll)

- name: Wait for both of these jobs to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: awx_jobs
  until: awx_jobs.finished
  delay: 5      # Check every 5 seconds.
  retries: 720  # Retry for a full hour.
  with_items: "{{ awx_create_instances.results }}"

- name: Schedule deletion of the export in 24 hours
  at:
    command: rm /chroot/export/matrix*
    count: 1
    units: days
    unique: yes

- name: Delete the AWX session token for executing modules
  awx.awx.tower_token:
    description: 'AWX Session Token'
    scope: "write"
    state: absent
    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"

- name: Set boolean value to exit playbook
  set_fact:
    awx_end_playbook: true

- name: End playbook if this task list is called.
  meta: end_play
  when: awx_end_playbook is defined and awx_end_playbook|bool
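For readers unfamiliar with the fire-and-forget pattern used above: `async` together with `poll: 0` starts each command and returns immediately, and `async_status` then polls the recorded job ids until both finish. A stripped-down sketch with a placeholder command:

- name: Start a long-running command without waiting (placeholder command)
  command: /bin/sleep 60
  async: 600   # allow up to 10 minutes
  poll: 0      # do not wait here
  register: long_job

- name: Wait for the background job to finish
  async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  delay: 5
  retries: 120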
@ -1,18 +1,7 @@
|
||||
|
||||
- name: Ensure /matrix/awx is empty
|
||||
shell: rm -r /matrix/awx/*
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Ensure /matrix/synapse is empty
|
||||
shell: rm -r /matrix/synapse/*
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Extract from /chroot/export
|
||||
shell: tar -xvzf /chroot/export/matrix.tar.gz -C /matrix/
|
||||
---
|
||||
|
||||
- name: Ensure correct ownership of /matrix/awx
|
||||
shell: chown -R matrix:matrix /matrix/awx
|
||||
|
||||
- name: Ensure correct ownership of /matrix/synapse
|
||||
shell: chown -R matrix:matrix /matrix/synapse
|
||||
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Include vars in organisation.yml
|
||||
include_vars:
|
||||
@ -9,3 +10,7 @@
|
||||
file: '/var/lib/awx/projects/hosting/hosting_vars.yml'
|
||||
no_log: True
|
||||
|
||||
- name: Include AWX master token from awx_tokens.yml
|
||||
include_vars:
|
||||
file: /var/lib/awx/projects/hosting/awx_tokens.yml
|
||||
no_log: True
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Include new vars in matrix_vars.yml
|
||||
include_vars:
|
||||
|
||||
@ -17,6 +17,15 @@
|
||||
tags:
|
||||
- always
|
||||
|
||||
# Create AWX session token
|
||||
- include_tasks:
|
||||
file: "create_session_token.yml"
|
||||
apply:
|
||||
tags: always
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- always
|
||||
|
||||
# Perform a backup of the server
|
||||
- include_tasks:
|
||||
file: "backup_server.yml"
|
||||
@ -26,6 +35,15 @@
|
||||
tags:
|
||||
- backup-server
|
||||
|
||||
# Perform an export of the server
|
||||
- include_tasks:
|
||||
file: "export_server.yml"
|
||||
apply:
|
||||
tags: export-server
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- export-server
|
||||
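A note on the pattern repeated throughout this file: the outer `tags` controls whether the `include_tasks` statement itself is selected on the command line, while `apply` attaches the tag to every task inside the included file so those tasks still run under `--tags`. A generic sketch (the file name and tag are placeholders):

- include_tasks:
    file: "some_tasks.yml"    # placeholder file name
    apply:
      tags: example-tag       # tag applied to the included tasks
  when: run_setup|bool and matrix_awx_enabled|bool
  tags:
    - example-tag             # selects this include when run with --tags example-tag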
|
||||
# Create a user account if called
|
||||
- include_tasks:
|
||||
file: "create_user.yml"
|
||||
@ -53,6 +71,15 @@
|
||||
tags:
|
||||
- purge-database
|
||||
|
||||
# Rotate SSH key if called
|
||||
- include_tasks:
|
||||
file: "rotate_ssh.yml"
|
||||
apply:
|
||||
tags: rotate-ssh
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- rotate-ssh
|
||||
|
||||
# Import configs and media repo from /chroot/backup
|
||||
- include_tasks:
|
||||
file: "import_awx.yml"
|
||||
@ -98,6 +125,15 @@
|
||||
tags:
|
||||
- setup-client-element
|
||||
|
||||
# Additional playbook to set the variable file during Mailer configuration
|
||||
- include_tasks:
|
||||
file: "set_variables_mailer.yml"
|
||||
apply:
|
||||
tags: setup-mailer
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- setup-mailer
|
||||
|
||||
# Additional playbook to set the variable file during Element configuration
|
||||
- include_tasks:
|
||||
file: "set_variables_element_subdomain.yml"
|
||||
@ -161,6 +197,24 @@
|
||||
tags:
|
||||
- setup-synapse-admin
|
||||
|
||||
# Additional playbook to set the variable file during Discord Appservice Bridge configuration
|
||||
- include_tasks:
|
||||
file: "bridge_discord_appservice.yml"
|
||||
apply:
|
||||
tags: bridge-discord-appservice
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- bridge-discord-appservice
|
||||
|
||||
# Delete AWX session token
|
||||
- include_tasks:
|
||||
file: "delete_session_token.yml"
|
||||
apply:
|
||||
tags: always
|
||||
when: run_setup|bool and matrix_awx_enabled|bool
|
||||
tags:
|
||||
- always
|
||||
|
||||
# Load newly formed matrix variables from AWX volume
|
||||
- include_tasks:
|
||||
file: "load_matrix_variables.yml"
|
||||
|
||||
@ -1,10 +1,11 @@
|
||||
---
|
||||
|
||||
- name: Collect entire room list into stdout
|
||||
shell: |
|
||||
curl -X GET --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/rooms?from={{ item }}'
|
||||
register: rooms_output
|
||||
register: awx_rooms_output
|
||||
|
||||
- name: Print stdout to file
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
echo '{{ rooms_output.stdout }}' >> /tmp/{{ subscription_id }}_room_list_complete.json
|
||||
echo '{{ awx_rooms_output.stdout }}' >> /tmp/{{ subscription_id }}_room_list_complete.json
|
||||
|
||||
@ -1,12 +1,13 @@
|
||||
---
|
||||
|
||||
- name: Purge all rooms with more than N events
|
||||
shell: |
|
||||
curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ purge_epoche_time.stdout }}000 }' "{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
|
||||
register: purge_command
|
||||
curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ awx_purge_epoche_time.stdout }}000 }' "{{ awx_synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
|
||||
register: awx_purge_command
|
||||
|
||||
- name: Print output of purge command
|
||||
debug:
|
||||
msg: "{{ purge_command.stdout }}"
|
||||
msg: "{{ awx_purge_command.stdout }}"
|
||||
|
||||
- name: Pause for 5 seconds to let Synapse breathe
|
||||
pause:
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Ensure dateutils and curl are installed in AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -5,34 +6,34 @@
|
||||
name: dateutils
|
||||
state: latest
|
||||
|
||||
- name: Ensure dateutils, curl and jq are installed on target machine
|
||||
- name: Include vars in matrix_vars.yml
|
||||
include_vars:
|
||||
file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
|
||||
no_log: True
|
||||
|
||||
- name: Ensure curl and jq are installed on target machine
|
||||
apt:
|
||||
pkg:
|
||||
- curl
|
||||
- jq
|
||||
state: present
|
||||
|
||||
- name: Include vars in matrix_vars.yml
|
||||
include_vars:
|
||||
file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
|
||||
no_log: True
|
||||
|
||||
- name: Collect size of Synapse database before shrink
|
||||
shell: du -sh /matrix/postgres/data
|
||||
register: db_size_before_stat
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
register: awx_db_size_before_stat
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
no_log: True
|
||||
|
||||
- name: Collect the internal IP of the matrix-synapse container
|
||||
shell: "/usr/bin/docker inspect --format '{''{range.NetworkSettings.Networks}''}{''{.IPAddress}''}{''{end}''}' matrix-synapse"
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
register: synapse_container_ip
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
register: awx_synapse_container_ip
|
||||
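The `'{''{'` sequences in the docker inspect command above exist to stop Jinja2 from treating the Go template's double braces as Ansible variables; the shell later concatenates the adjacent single-quoted pieces back into `{{ ... }}`. An equivalent, arguably clearer spelling uses `{% raw %}`; this is only an illustrative variant, not what the playbook uses, and the register name here is a placeholder:

- name: Collect the internal IP of the matrix-synapse container (illustrative variant)
  shell: "/usr/bin/docker inspect --format '{% raw %}{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}{% endraw %}' matrix-synapse"
  register: example_synapse_container_ip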
|
||||
- name: Collect access token for janitor user
|
||||
shell: |
|
||||
curl -X POST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ matrix_awx_janitor_user_password }}"}' "{{ synapse_container_ip.stdout }}:8008/_matrix/client/r0/login" | jq '.access_token'
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
register: janitors_token
|
||||
curl -X POST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ awx_janitor_user_password }}"}' "{{ awx_synapse_container_ip.stdout }}:8008/_matrix/client/r0/login" | jq '.access_token'
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
register: awx_janitors_token
|
||||
no_log: True
|
||||
|
||||
- name: Copy build_room_list.py script to target machine
|
||||
@ -42,114 +43,107 @@
|
||||
owner: matrix
|
||||
group: matrix
|
||||
mode: '0755'
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Run build_room_list.py script
|
||||
shell: |
|
||||
runuser -u matrix -- python3 /usr/local/bin/matrix_build_room_list.py {{ janitors_token.stdout[1:-1] }} {{ synapse_container_ip.stdout }}
|
||||
register: rooms_total
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
runuser -u matrix -- python3 /usr/local/bin/matrix_build_room_list.py {{ awx_janitors_token.stdout[1:-1] }} {{ awx_synapse_container_ip.stdout }}
|
||||
register: awx_rooms_total
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Fetch complete room list from target machine
|
||||
fetch:
|
||||
src: /tmp/room_list_complete.json
|
||||
dest: "/tmp/{{ subscription_id }}_room_list_complete.json"
|
||||
flat: yes
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Remove complete room list from target machine
|
||||
file:
|
||||
path: /tmp/room_list_complete.json
|
||||
state: absent
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Generate list of rooms with no local users
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
jq 'try .rooms[] | select(.joined_local_members == 0) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_no_local_users.txt
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Count number of rooms with no local users
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
wc -l /tmp/{{ subscription_id }}_room_list_no_local_users.txt | awk '{ print $1 }'
|
||||
register: rooms_no_local_total
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
register: awx_rooms_no_local_total
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Setting host fact room_list_no_local_users
|
||||
- name: Setting host fact awx_room_list_no_local_users
|
||||
set_fact:
|
||||
room_list_no_local_users: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_no_local_users.txt') }}"
|
||||
awx_room_list_no_local_users: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_no_local_users.txt') }}"
|
||||
no_log: True
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Purge all rooms with no local users
|
||||
include_tasks: purge_database_no_local.yml
|
||||
loop: "{{ room_list_no_local_users.splitlines() | flatten(levels=1) }}"
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
loop: "{{ awx_room_list_no_local_users.splitlines() | flatten(levels=1) }}"
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Collect epoch time from date
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
date -d '{{ purge_date }}' +"%s"
|
||||
when: (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
register: purge_epoche_time
|
||||
date -d '{{ awx_purge_date }}' +"%s"
|
||||
when: (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
register: awx_purge_epoche_time
|
||||
|
||||
- name: Generate list of rooms with more than N users
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
jq 'try .rooms[] | select(.joined_members > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_joined_members.txt
|
||||
when: purge_mode.find("Number of users [slower]") != -1
|
||||
jq 'try .rooms[] | select(.joined_members > {{ awx_purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_joined_members.txt
|
||||
when: awx_purge_mode.find("Number of users [slower]") != -1
|
||||
|
||||
- name: Count number of rooms with more than N users
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
wc -l /tmp/{{ subscription_id }}_room_list_joined_members.txt | awk '{ print $1 }'
|
||||
register: rooms_join_members_total
|
||||
when: purge_mode.find("Number of users [slower]") != -1
|
||||
register: awx_rooms_join_members_total
|
||||
when: awx_purge_mode.find("Number of users [slower]") != -1
|
||||
|
||||
- name: Setting host fact room_list_joined_members
|
||||
- name: Setting host fact awx_room_list_joined_members
|
||||
delegate_to: 127.0.0.1
|
||||
set_fact:
|
||||
room_list_joined_members: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_joined_members.txt') }}"
|
||||
when: purge_mode.find("Number of users [slower]") != -1
|
||||
awx_room_list_joined_members: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_joined_members.txt') }}"
|
||||
when: awx_purge_mode.find("Number of users [slower]") != -1
|
||||
no_log: True
|
||||
|
||||
- name: Purge all rooms with more than N users
|
||||
include_tasks: purge_database_users.yml
|
||||
loop: "{{ room_list_joined_members.splitlines() | flatten(levels=1) }}"
|
||||
when: purge_mode.find("Number of users [slower]") != -1
|
||||
loop: "{{ awx_room_list_joined_members.splitlines() | flatten(levels=1) }}"
|
||||
when: awx_purge_mode.find("Number of users [slower]") != -1
|
||||
|
||||
- name: Generate list of rooms with more than N events
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
jq 'try .rooms[] | select(.state_events > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_state_events.txt
|
||||
when: purge_mode.find("Number of events [slower]") != -1
|
||||
jq 'try .rooms[] | select(.state_events > {{ awx_purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_state_events.txt
|
||||
when: awx_purge_mode.find("Number of events [slower]") != -1
|
||||
|
||||
- name: Count number of rooms with more than N events
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
wc -l /tmp/{{ subscription_id }}_room_list_state_events.txt | awk '{ print $1 }'
|
||||
register: rooms_state_events_total
|
||||
when: purge_mode.find("Number of events [slower]") != -1
|
||||
register: awx_rooms_state_events_total
|
||||
when: awx_purge_mode.find("Number of events [slower]") != -1
|
||||
|
||||
- name: Setting host fact room_list_state_events
|
||||
- name: Setting host fact awx_room_list_state_events
|
||||
delegate_to: 127.0.0.1
|
||||
set_fact:
|
||||
room_list_state_events: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_state_events.txt') }}"
|
||||
when: purge_mode.find("Number of events [slower]") != -1
|
||||
awx_room_list_state_events: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_state_events.txt') }}"
|
||||
when: awx_purge_mode.find("Number of events [slower]") != -1
|
||||
no_log: True
|
||||
|
||||
- name: Purge all rooms with more than N events
|
||||
include_tasks: purge_database_events.yml
|
||||
loop: "{{ room_list_state_events.splitlines() | flatten(levels=1) }}"
|
||||
when: purge_mode.find("Number of events [slower]") != -1
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
loop: "{{ awx_room_list_state_events.splitlines() | flatten(levels=1) }}"
|
||||
when: awx_purge_mode.find("Number of events [slower]") != -1
|
||||
|
||||
- name: Adjust 'Deploy/Update a Server' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -165,20 +159,20 @@
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1) or (awx_purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
|
||||
- name: Execute rust-synapse-compress-state job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_launch:
|
||||
job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
|
||||
wait: yes
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1) or (awx_purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
|
||||
- name: Revert 'Deploy/Update a Server' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -194,28 +188,28 @@
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1) or (awx_purge_mode.find("Skip purging rooms [faster]") != -1)
|
||||
|
||||
- name: Ensure matrix-synapse is stopped
|
||||
service:
|
||||
name: matrix-synapse
|
||||
state: stopped
|
||||
daemon_reload: yes
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Re-index Synapse database
|
||||
shell: docker exec -i matrix-postgres psql "host=127.0.0.1 port=5432 dbname=synapse user=synapse password={{ matrix_synapse_connection_password }}" -c 'REINDEX (VERBOSE) DATABASE synapse'
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Ensure matrix-synapse is started
|
||||
service:
|
||||
name: matrix-synapse
|
||||
state: started
|
||||
daemon_reload: yes
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Adjust 'Deploy/Update a Server' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -231,20 +225,20 @@
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Execute run-postgres-vacuum job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_launch:
|
||||
job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
|
||||
wait: yes
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Revert 'Deploy/Update a Server' job template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -260,58 +254,67 @@
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Cleanup room_list files
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
rm /tmp/{{ subscription_id }}_room_list*
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Collect size of Synapse database after shrink
|
||||
shell: du -sh /matrix/postgres/data
|
||||
register: db_size_after_stat
|
||||
when: (purge_mode.find("Perform final shrink") != -1)
|
||||
register: awx_db_size_after_stat
|
||||
when: (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
no_log: True
|
||||
|
||||
- name: Print total number of rooms processed
|
||||
debug:
|
||||
msg: '{{ rooms_total.stdout }}'
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
msg: '{{ awx_rooms_total.stdout }}'
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Print the number of rooms purged with no local users
|
||||
debug:
|
||||
msg: '{{ rooms_no_local_total.stdout }}'
|
||||
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
|
||||
msg: '{{ awx_rooms_no_local_total.stdout }}'
|
||||
when: (awx_purge_mode.find("No local users [recommended]") != -1) or (awx_purge_mode.find("Number of users [slower]") != -1) or (awx_purge_mode.find("Number of events [slower]") != -1)
|
||||
|
||||
- name: Print the number of rooms purged with more than N users
|
||||
debug:
|
||||
msg: '{{ rooms_join_members_total.stdout }}'
|
||||
when: purge_mode.find("Number of users") != -1
|
||||
msg: '{{ awx_rooms_join_members_total.stdout }}'
|
||||
when: awx_purge_mode.find("Number of users") != -1
|
||||
|
||||
- name: Print the number of rooms purged with more than N events
|
||||
debug:
|
||||
msg: '{{ rooms_state_events_total.stdout }}'
|
||||
when: purge_mode.find("Number of events") != -1
|
||||
msg: '{{ awx_rooms_state_events_total.stdout }}'
|
||||
when: awx_purge_mode.find("Number of events") != -1
|
||||
|
||||
- name: Print before purge size of Synapse database
|
||||
debug:
|
||||
msg: "{{ db_size_before_stat.stdout.split('\n') }}"
|
||||
when: (db_size_before_stat is defined) and (purge_mode.find("Perform final shrink") != -1)
|
||||
msg: "{{ awx_db_size_before_stat.stdout.split('\n') }}"
|
||||
when: (awx_db_size_before_stat is defined) and (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Print after purge size of Synapse database
|
||||
debug:
|
||||
msg: "{{ db_size_after_stat.stdout.split('\n') }}"
|
||||
when: (db_size_after_stat is defined) and (purge_mode.find("Perform final shrink") != -1)
|
||||
msg: "{{ awx_db_size_after_stat.stdout.split('\n') }}"
|
||||
when: (awx_db_size_after_stat is defined) and (awx_purge_mode.find("Perform final shrink") != -1)
|
||||
|
||||
- name: Delete the AWX session token for executing modules
|
||||
awx.awx.tower_token:
|
||||
description: 'AWX Session Token'
|
||||
scope: "write"
|
||||
state: absent
|
||||
existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
|
||||
- name: Set boolean value to exit playbook
|
||||
set_fact:
|
||||
end_playbook: true
|
||||
awx_end_playbook: true
|
||||
|
||||
- name: End playbook early if this task is called.
|
||||
meta: end_play
|
||||
when: end_playbook is defined and end_playbook|bool
|
||||
when: awx_end_playbook is defined and awx_end_playbook|bool
|
||||
|
||||
@ -1,12 +1,13 @@
|
||||
---
|
||||
|
||||
- name: Purge all rooms with no local users
|
||||
shell: |
|
||||
curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "room_id": {{ item }} }' '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_room'
|
||||
register: purge_command
|
||||
curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "room_id": {{ item }} }' '{{ awx_synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_room'
|
||||
register: awx_purge_command
|
||||
|
||||
- name: Print output of purge command
|
||||
debug:
|
||||
msg: "{{ purge_command.stdout }}"
|
||||
msg: "{{ awx_purge_command.stdout }}"
|
||||
|
||||
- name: Pause for 5 seconds to let Synapse breathe
|
||||
pause:
|
||||
|
||||
@ -1,12 +1,13 @@
|
||||
---
|
||||
|
||||
- name: Purge all rooms with more than N users
|
||||
shell: |
|
||||
curl --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ purge_epoche_time.stdout }}000 }' "{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
|
||||
register: purge_command
|
||||
curl --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" -X POST -H "Content-Type: application/json" -d '{ "delete_local_events": false, "purge_up_to_ts": {{ awx_purge_epoche_time.stdout }}000 }' "{{ awx_synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_history/{{ item[1:-1] }}"
|
||||
register: awx_purge_command
|
||||
|
||||
- name: Print output of purge command
|
||||
debug:
|
||||
msg: "{{ purge_command.stdout }}"
|
||||
msg: "{{ awx_purge_command.stdout }}"
|
||||
|
||||
- name: Pause for 5 seconds to let Synapse breathe
|
||||
pause:
|
||||
|
||||
@ -1,17 +1,18 @@
|
||||
---
|
||||
|
||||
- name: Collect epoch time from date
|
||||
shell: |
|
||||
date -d '{{ item }}' +"%s"
|
||||
register: epoche_time
|
||||
register: awx_epoche_time
|
||||
|
||||
- name: Purge local media to specific date
|
||||
shell: |
|
||||
curl -X POST --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/media/matrix.{{ matrix_domain }}/delete?before_ts={{ epoche_time.stdout }}'
|
||||
register: purge_command
|
||||
curl -X POST --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" '{{ awx_synapse_container_ip.stdout }}:8008/_synapse/admin/v1/media/matrix.{{ matrix_domain }}/delete?before_ts={{ awx_epoche_time.stdout }}000'
|
||||
register: awx_purge_command
|
||||
|
||||
- name: Print output of purge command
|
||||
debug:
|
||||
msg: "{{ purge_command.stdout }}"
|
||||
msg: "{{ awx_purge_command.stdout }}"
|
||||
|
||||
- name: Pause for 5 seconds to let Synapse breathe
|
||||
pause:
|
||||
|
||||
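Context for the `000` suffix added in the purge task above: the Synapse admin media-deletion endpoint expects `before_ts` in milliseconds since the epoch, while `date +%s` prints seconds, so the playbook appends three zeros. A minimal sketch of the same conversion (the date and register names are illustrative only):

- name: Convert a purge cut-off date to epoch seconds
  shell: |
    date -d '2021-01-01' +"%s"
  register: example_epoch_seconds

- name: Show the millisecond value the admin API expects
  debug:
    msg: "before_ts={{ example_epoch_seconds.stdout }}000"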
@ -1,5 +1,5 @@
|
||||
|
||||
- name: Ensure dateutils and curl is installed in AWX
|
||||
- name: Ensure dateutils is installed in AWX
|
||||
delegate_to: 127.0.0.1
|
||||
yum:
|
||||
name: dateutils
|
||||
@ -17,82 +17,92 @@
|
||||
- jq
|
||||
state: present
|
||||
|
||||
- name: Collect access token for janitor user
|
||||
shell: |
|
||||
curl -XPOST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ matrix_awx_janitor_user_password }}"}' "https://matrix.{{ matrix_domain }}/_matrix/client/r0/login" | jq '.access_token'
|
||||
register: janitors_token
|
||||
|
||||
- name: Collect the internal IP of the matrix-synapse container
|
||||
shell: "/usr/bin/docker inspect --format '{''{range.NetworkSettings.Networks}''}{''{.IPAddress}''}{''{end}''}' matrix-synapse"
|
||||
register: synapse_container_ip
|
||||
|
||||
register: awx_synapse_container_ip
|
||||
|
||||
- name: Collect access token for janitor user
|
||||
shell: |
|
||||
curl -XPOST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ awx_janitor_user_password }}"}' "{{ awx_synapse_container_ip.stdout }}:8008/_matrix/client/r0/login" | jq '.access_token'
|
||||
register: awx_janitors_token
|
||||
no_log: True
|
||||
|
||||
- name: Generate list of dates to purge to
|
||||
delegate_to: 127.0.0.1
|
||||
shell: "dateseq {{ matrix_purge_from_date }} {{ matrix_purge_to_date }}"
|
||||
register: purge_dates
|
||||
register: awx_purge_dates
|
||||
|
||||
- name: Calculate initial size of local media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/local*
|
||||
register: local_media_size_before
|
||||
when: matrix_purge_media_type == "Local Media"
|
||||
register: awx_local_media_size_before
|
||||
when: awx_purge_media_type == "Local Media"
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Calculate initial size of remote media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/remote*
|
||||
register: remote_media_size_before
|
||||
when: matrix_purge_media_type == "Remote Media"
|
||||
register: awx_remote_media_size_before
|
||||
when: awx_purge_media_type == "Remote Media"
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Purge local media with loop
|
||||
include_tasks: purge_media_local.yml
|
||||
loop: "{{ purge_dates.stdout_lines | flatten(levels=1) }}"
|
||||
when: matrix_purge_media_type == "Local Media"
|
||||
loop: "{{ awx_purge_dates.stdout_lines | flatten(levels=1) }}"
|
||||
when: awx_purge_media_type == "Local Media"
|
||||
|
||||
- name: Purge remote media with loop
|
||||
include_tasks: purge_media_remote.yml
|
||||
loop: "{{ purge_dates.stdout_lines | flatten(levels=1) }}"
|
||||
when: matrix_purge_media_type == "Remote Media"
|
||||
loop: "{{ awx_purge_dates.stdout_lines | flatten(levels=1) }}"
|
||||
when: awx_purge_media_type == "Remote Media"
|
||||
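`dateseq` (from the dateutils package installed at the top of this file) prints one date per line between the two bounds, so the two loops above run the per-day purge tasks once for every day in the range. A hedged sketch of what the loop consumes, with illustrative bounds and register name:

- name: Generate list of dates to purge to (illustrative bounds)
  delegate_to: 127.0.0.1
  shell: "dateseq 2021-01-01 2021-01-03"
  register: example_purge_dates

# example_purge_dates.stdout_lines would then be a list of ISO dates,
# one loop iteration each (e.g. 2021-01-01, 2021-01-02, 2021-01-03).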
|
||||
- name: Calculate final size of local media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/local*
|
||||
register: local_media_size_after
|
||||
when: matrix_purge_media_type == "Local Media"
|
||||
register: awx_local_media_size_after
|
||||
when: awx_purge_media_type == "Local Media"
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Calculate final size of remote media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/remote*
|
||||
register: remote_media_size_after
|
||||
when: matrix_purge_media_type == "Remote Media"
|
||||
register: awx_remote_media_size_after
|
||||
when: awx_purge_media_type == "Remote Media"
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Print size of local media repository before purge
|
||||
debug:
|
||||
msg: "{{ local_media_size_before.stdout.split('\n') }}"
|
||||
when: matrix_purge_media_type == "Local Media"
|
||||
msg: "{{ awx_local_media_size_before.stdout.split('\n') }}"
|
||||
when: awx_purge_media_type == "Local Media"
|
||||
|
||||
- name: Print size of local media repository after purge
|
||||
debug:
|
||||
msg: "{{ local_media_size_after.stdout.split('\n') }}"
|
||||
when: matrix_purge_media_type == "Local Media"
|
||||
msg: "{{ awx_local_media_size_after.stdout.split('\n') }}"
|
||||
when: awx_purge_media_type == "Local Media"
|
||||
|
||||
- name: Print size of remote media repository before purge
|
||||
debug:
|
||||
msg: "{{ remote_media_size_before.stdout.split('\n') }}"
|
||||
when: matrix_purge_media_type == "Remote Media"
|
||||
msg: "{{ awx_remote_media_size_before.stdout.split('\n') }}"
|
||||
when: awx_purge_media_type == "Remote Media"
|
||||
|
||||
- name: Print size of remote media repository after purge
|
||||
debug:
|
||||
msg: "{{ remote_media_size_after.stdout.split('\n') }}"
|
||||
when: matrix_purge_media_type == "Remote Media"
|
||||
msg: "{{ awx_remote_media_size_after.stdout.split('\n') }}"
|
||||
when: awx_purge_media_type == "Remote Media"
|
||||
|
||||
- name: Delete the AWX session token for executing modules
|
||||
awx.awx.tower_token:
|
||||
description: 'AWX Session Token'
|
||||
scope: "write"
|
||||
state: absent
|
||||
existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
|
||||
- name: Set boolean value to exit playbook
|
||||
set_fact:
|
||||
end_playbook: true
|
||||
awx_end_playbook: true
|
||||
|
||||
- name: End playbook early if this task is called.
|
||||
meta: end_play
|
||||
when: end_playbook is defined and end_playbook|bool
|
||||
when: awx_end_playbook is defined and awx_end_playbook|bool
|
||||
|
||||
@ -1,17 +1,18 @@
|
||||
---
|
||||
|
||||
- name: Collect epoch time from date
|
||||
shell: |
|
||||
date -d '{{ item }}' +"%s"
|
||||
register: epoche_time
|
||||
register: awx_epoche_time
|
||||
|
||||
- name: Purge remote media to specific date
|
||||
shell: |
|
||||
curl -X POST --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_media_cache?before_ts={{ epoche_time.stdout }}'
|
||||
register: purge_command
|
||||
curl -X POST --header "Authorization: Bearer {{ awx_janitors_token.stdout[1:-1] }}" '{{ awx_synapse_container_ip.stdout }}:8008/_synapse/admin/v1/purge_media_cache?before_ts={{ awx_epoche_time.stdout }}000'
|
||||
register: awx_purge_command
|
||||
|
||||
- name: Print output of purge command
|
||||
debug:
|
||||
msg: "{{ purge_command.stdout }}"
|
||||
msg: "{{ awx_purge_command.stdout }}"
|
||||
|
||||
- name: Pause for 5 seconds to let Synapse breathe
|
||||
pause:
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Rename synapse presence variable
|
||||
delegate_to: 127.0.0.1
|
||||
@ -5,4 +6,3 @@
|
||||
path: "/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml"
|
||||
regexp: 'matrix_synapse_use_presence'
|
||||
replace: 'matrix_synapse_presence_enabled'
|
||||
|
||||
|
||||
25
roles/matrix-awx/tasks/rotate_ssh.yml
Normal file
@ -0,0 +1,25 @@
---

- name: Set the new authorized key taken from file
  authorized_key:
    user: root
    state: present
    exclusive: yes
    key: "{{ lookup('file', '/var/lib/awx/projects/hosting/client_public.key') }}"

- name: Delete the AWX session token for executing modules
  awx.awx.tower_token:
    description: 'AWX Session Token'
    scope: "write"
    state: absent
    existing_token_id: "{{ awx_session_token.ansible_facts.tower_token.id }}"
    tower_host: "https://{{ awx_host }}"
    tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"

- name: Set boolean value to exit playbook
  set_fact:
    end_playbook: true

- name: End playbook if this task list is called.
  meta: end_play
  when: end_playbook is defined and end_playbook|bool
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Install prerequisite apt packages on target
|
||||
apt:
|
||||
@ -23,83 +24,83 @@
|
||||
- name: Calculate MAU value
|
||||
shell: |
|
||||
curl -s localhost:9000 | grep "^synapse_admin_mau_current "
|
||||
register: mau_stat
|
||||
register: awx_mau_stat
|
||||
no_log: True
|
||||
|
||||
- name: Print MAU value
|
||||
debug:
|
||||
msg: "{{ mau_stat.stdout.split('\n') }}"
|
||||
when: mau_stat is defined
|
||||
|
||||
- name: Calculate CPU usage statistics
|
||||
shell: iostat -c
|
||||
register: cpu_usage_stat
|
||||
register: awx_cpu_usage_stat
|
||||
no_log: True
|
||||
|
||||
- name: Print CPU usage statistics
|
||||
debug:
|
||||
msg: "{{ cpu_usage_stat.stdout.split('\n') }}"
|
||||
when: cpu_usage_stat is defined
|
||||
|
||||
- name: Calculate RAM usage statistics
|
||||
shell: free -mh
|
||||
register: ram_usage_stat
|
||||
register: awx_ram_usage_stat
|
||||
no_log: True
|
||||
|
||||
- name: Print RAM usage statistics
|
||||
debug:
|
||||
msg: "{{ ram_usage_stat.stdout.split('\n') }}"
|
||||
when: ram_usage_stat is defined
|
||||
|
||||
- name: Calculate free disk space
|
||||
shell: df -h
|
||||
register: disk_space_stat
|
||||
register: awx_disk_space_stat
|
||||
no_log: True
|
||||
|
||||
- name: Print free disk space
|
||||
debug:
|
||||
msg: "{{ disk_space_stat.stdout.split('\n') }}"
|
||||
when: disk_space_stat is defined
|
||||
|
||||
- name: Calculate size of Synapse database
|
||||
shell: du -sh /matrix/postgres/data
|
||||
register: db_size_stat
|
||||
register: awx_db_size_stat
|
||||
no_log: True
|
||||
|
||||
- name: Print size of Synapse database
|
||||
debug:
|
||||
msg: "{{ db_size_stat.stdout.split('\n') }}"
|
||||
when: db_size_stat is defined
|
||||
|
||||
- name: Calculate size of local media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/local*
|
||||
register: local_media_size_stat
|
||||
register: awx_local_media_size_stat
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Print size of local media repository
|
||||
debug:
|
||||
msg: "{{ local_media_size_stat.stdout.split('\n') }}"
|
||||
when: local_media_size_stat is defined
|
||||
|
||||
- name: Calculate size of remote media repository
|
||||
shell: du -sh /matrix/synapse/storage/media-store/remote*
|
||||
register: remote_media_size_stat
|
||||
register: awx_remote_media_size_stat
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Calculate docker container statistics
|
||||
shell: docker stats --all --no-stream
|
||||
register: awx_docker_stats
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
|
||||
- name: Print size of remote media repository
|
||||
debug:
|
||||
msg: "{{ remote_media_size_stat.stdout.split('\n') }}"
|
||||
when: remote_media_size_stat is defined
|
||||
msg: "{{ awx_remote_media_size_stat.stdout.split('\n') }}"
|
||||
when: awx_remote_media_size_stat is defined
|
||||
|
||||
- name: Print size of local media repository
|
||||
debug:
|
||||
msg: "{{ awx_local_media_size_stat.stdout.split('\n') }}"
|
||||
when: awx_local_media_size_stat is defined
|
||||
|
||||
- name: Calculate docker container statistics
|
||||
shell: docker stats --all --no-stream
|
||||
register: docker_stats
|
||||
ignore_errors: yes
|
||||
no_log: True
|
||||
- name: Print size of Synapse database
|
||||
debug:
|
||||
msg: "{{ awx_db_size_stat.stdout.split('\n') }}"
|
||||
when: awx_db_size_stat is defined
|
||||
|
||||
- name: Print free disk space
|
||||
debug:
|
||||
msg: "{{ awx_disk_space_stat.stdout.split('\n') }}"
|
||||
when: awx_disk_space_stat is defined
|
||||
|
||||
- name: Print RAM usage statistics
|
||||
debug:
|
||||
msg: "{{ awx_ram_usage_stat.stdout.split('\n') }}"
|
||||
when: awx_ram_usage_stat is defined
|
||||
|
||||
- name: Print CPU usage statistics
|
||||
debug:
|
||||
msg: "{{ awx_cpu_usage_stat.stdout.split('\n') }}"
|
||||
when: awx_cpu_usage_stat is defined
|
||||
|
||||
- name: Print MAU value
|
||||
debug:
|
||||
msg: "{{ awx_mau_stat.stdout.split('\n') }}"
|
||||
when: awx_mau_stat is defined
|
||||
|
||||
- name: Print docker container statistics
|
||||
debug:
|
||||
msg: "{{ docker_stats.stdout.split('\n') }}"
|
||||
when: docker_stats is defined
|
||||
msg: "{{ awx_docker_stats.stdout.split('\n') }}"
|
||||
when: awx_docker_stats is defined
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record Corporal Enabled/Disabled variable
|
||||
delegate_to: 127.0.0.1
|
||||
@ -62,7 +63,7 @@
|
||||
insertafter: '# Corporal Settings Start'
|
||||
with_dict:
|
||||
'matrix_corporal_http_api_enabled': 'false'
|
||||
when: (matrix_corporal_policy_provider_mode == "Simple Static File") or (not matrix_corporal_enabled|bool)
|
||||
when: (awx_corporal_policy_provider_mode == "Simple Static File") or (not matrix_corporal_enabled|bool)
|
||||
|
||||
- name: Enable Corporal API if Push/Pull mode selected
|
||||
delegate_to: 127.0.0.1
|
||||
@ -73,7 +74,7 @@
|
||||
insertafter: '# Corporal Settings Start'
|
||||
with_dict:
|
||||
'matrix_corporal_http_api_enabled': 'true'
|
||||
when: (matrix_corporal_policy_provider_mode != "Simple Static File") and (matrix_corporal_enabled|bool)
|
||||
when: (awx_corporal_policy_provider_mode != "Simple Static File") and (matrix_corporal_enabled|bool)
|
||||
|
||||
- name: Record Corporal API Access Token if it's defined
|
||||
delegate_to: 127.0.0.1
|
||||
@ -84,20 +85,22 @@
|
||||
insertafter: '# Corporal Settings Start'
|
||||
with_dict:
|
||||
'matrix_corporal_http_api_auth_token': '{{ matrix_corporal_http_api_auth_token }}'
|
||||
when: matrix_corporal_http_api_auth_token|length > 0
|
||||
when: ( matrix_corporal_http_api_auth_token|length > 0 ) and ( awx_corporal_policy_provider_mode != "Simple Static File" )
|
||||
|
||||
- name: Record 'Simple Static File' configuration variables in matrix_vars.yml
|
||||
delegate_to: 127.0.0.1
|
||||
blockinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: "# Corporal Policy Provider Settings Start"
|
||||
insertbefore: "# Corporal Policy Provider Settings End"
|
||||
marker_begin: "Corporal"
|
||||
marker_end: "Corporal"
|
||||
block: |
|
||||
matrix_corporal_policy_provider_config: |
|
||||
{
|
||||
"Type": "static_file",
|
||||
"Path": "/etc/matrix-corporal/corporal-policy.json"
|
||||
}
|
||||
when: matrix_corporal_policy_provider_mode == "Simple Static File"
|
||||
when: awx_corporal_policy_provider_mode == "Simple Static File"
|
||||
|
||||
- name: Touch the /matrix/corporal/ directory
|
||||
file:
|
||||
@ -141,12 +144,12 @@
|
||||
|
||||
- name: Record 'Simple Static File' configuration content in corporal-policy.json
|
||||
copy:
|
||||
content: "{{ matrix_corporal_simple_static_config | string }}"
|
||||
content: "{{ awx_corporal_simple_static_config | string }}"
|
||||
dest: "/matrix/corporal/config/corporal-policy.json"
|
||||
owner: matrix
|
||||
group: matrix
|
||||
mode: '660'
|
||||
when: (matrix_corporal_policy_provider_mode == "Simple Static File") and (matrix_corporal_simple_static_config|length > 0)
|
||||
when: (awx_corporal_policy_provider_mode == "Simple Static File") and (awx_corporal_simple_static_config|length > 0)
|
||||
|
||||
- name: Record 'HTTP Pull Mode' configuration variables in matrix_vars.yml
|
||||
delegate_to: 127.0.0.1
|
||||
@ -157,13 +160,13 @@
|
||||
matrix_corporal_policy_provider_config: |
|
||||
{
|
||||
"Type": "http",
|
||||
"Uri": "{{ matrix_corporal_pull_mode_uri }}",
|
||||
"AuthorizationBearerToken": "{{ matrix_corporal_pull_mode_token }}",
|
||||
"Uri": "{{ awx_corporal_pull_mode_uri }}",
|
||||
"AuthorizationBearerToken": "{{ awx_corporal_pull_mode_token }}",
|
||||
"CachePath": "/var/cache/matrix-corporal/last-policy.json",
|
||||
"ReloadIntervalSeconds": 1800,
|
||||
"TimeoutMilliseconds": 30000
|
||||
}
|
||||
when: (matrix_corporal_policy_provider_mode == "HTTP Pull Mode (API Enabled)") and (matrix_corporal_pull_mode_uri|length > 0) and (matrix_corporal_pull_mode_token|length > 0)
|
||||
when: (awx_corporal_policy_provider_mode == "HTTP Pull Mode (API Enabled)") and (matrix_corporal_pull_mode_uri|length > 0) and (awx_corporal_pull_mode_token|length > 0)
|
||||
|
||||
- name: Record 'HTTP Push Mode' configuration variables in matrix_vars.yml
|
||||
delegate_to: 127.0.0.1
|
||||
@ -176,7 +179,7 @@
|
||||
"Type": "last_seen_store_policy",
|
||||
"CachePath": "/var/cache/matrix-corporal/last-policy.json"
|
||||
}
|
||||
when: (matrix_corporal_policy_provider_mode == "HTTP Push Mode (API Enabled)")
|
||||
when: (awx_corporal_policy_provider_mode == "HTTP Push Mode (API Enabled)")
|
||||
|
||||
- name: Lower RateLimit if set to 'Normal'
|
||||
delegate_to: 127.0.0.1
|
||||
@ -184,7 +187,7 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: ' address:\n per_second: 50\n burst_count: 300\n account:\n per_second: 0.17\n burst_count: 300'
|
||||
replace: ' address:\n per_second: 0.17\n burst_count: 3\n account:\n per_second: 0.17\n burst_count: 3'
|
||||
when: matrix_corporal_raise_ratelimits == "Normal"
|
||||
when: awx_corporal_raise_ratelimits == "Normal"
|
||||
|
||||
- name: Raise RateLimit if set to 'Raised'
|
||||
delegate_to: 127.0.0.1
|
||||
@ -192,7 +195,7 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: ' address:\n per_second: 0.17\n burst_count: 3\n account:\n per_second: 0.17\n burst_count: 3'
|
||||
replace: ' address:\n per_second: 50\n burst_count: 300\n account:\n per_second: 0.17\n burst_count: 300'
|
||||
when: matrix_corporal_raise_ratelimits == "Raised"
|
||||
when: awx_corporal_raise_ratelimits == "Raised"
|
||||
|
||||
- name: Save new 'Configure Corporal' survey.json to the AWX tower
|
||||
delegate_to: 127.0.0.1
|
||||
@ -218,13 +221,6 @@
|
||||
- debug:
|
||||
msg: "matrix_corporal_matrix_registration_shared_secret: {{ matrix_corporal_matrix_registration_shared_secret }}"
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Corporal (Advanced)' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -242,6 +238,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Include vars in matrix_vars.yml
|
||||
include_vars:
|
||||
@ -13,8 +14,8 @@
|
||||
|
||||
- name: Collect access token of Dimension user
|
||||
shell: |
|
||||
curl -X POST --header 'Content-Type: application/json' -d '{ "identifier": { "type": "m.id.user","user": "dimension" }, "password": "{{ matrix_awx_dimension_user_password }}", "type": "m.login.password"}' 'https://matrix.{{ matrix_domain }}/_matrix/client/r0/login' | jq -c '. | {access_token}' | sed 's/.*\":\"//' | sed 's/\"}//'
|
||||
register: dimension_user_access_token
|
||||
curl -X POST --header 'Content-Type: application/json' -d '{ "identifier": { "type": "m.id.user","user": "dimension" }, "password": "{{ awx_dimension_user_password }}", "type": "m.login.password"}' 'https://matrix.{{ matrix_domain }}/_matrix/client/r0/login' | jq -c '. | {access_token}' | sed 's/.*\":\"//' | sed 's/\"}//'
|
||||
register: awx_dimension_user_access_token
|
||||
|
||||
- name: Record Synapse variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -25,17 +26,17 @@
|
||||
insertafter: '# Dimension Settings Start'
|
||||
with_dict:
|
||||
'matrix_dimension_enabled': '{{ matrix_dimension_enabled }}'
|
||||
'matrix_dimension_access_token': '"{{ dimension_user_access_token.stdout }}"'
|
||||
'matrix_dimension_access_token': '"{{ awx_dimension_user_access_token.stdout }}"'
|
||||
|
||||
- name: Set final users list if users are defined
|
||||
set_fact:
|
||||
ext_dimension_users_raw_final: "{{ ext_dimension_users_raw }}"
|
||||
when: ext_dimension_users_raw|length > 0
|
||||
awx_dimension_users_final: "{{ awx_dimension_users }}"
|
||||
when: awx_dimension_users | length > 0
|
||||
|
||||
- name: Set final users list if no users are defined
|
||||
set_fact:
|
||||
ext_dimension_users_raw_final: '@dimension:{{ matrix_domain }}'
|
||||
when: ext_dimension_users_raw|length == 0
|
||||
awx_dimension_users_final: '@dimension:{{ matrix_domain }}'
|
||||
when: awx_dimension_users | length == 0
|
||||
|
||||
- name: Remove Dimension Users
|
||||
delegate_to: 127.0.0.1
|
||||
@ -58,7 +59,7 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: '^matrix_dimension_admins:'
|
||||
line: ' - "{{ item }}"'
|
||||
with_items: "{{ ext_dimension_users_raw_final.splitlines() }}"
|
||||
with_items: "{{ awx_dimension_users_final.splitlines() }}"
|
||||
|
||||
- name: Record Dimension Custom variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -66,9 +67,9 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Custom Settings Start'
|
||||
insertbefore: '# Dimension Settings End'
|
||||
with_dict:
|
||||
'ext_dimension_users_raw': '{{ ext_dimension_users_raw.splitlines() | to_json }}'
|
||||
'awx_dimension_users': '{{ awx_dimension_users.splitlines() | to_json }}'
|
||||
|
||||
- name: Save new 'Configure Dimension' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -82,13 +83,6 @@
|
||||
dest: '/matrix/awx/configure_dimension.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Dimension' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -106,6 +100,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record Element-Web variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -8,25 +9,142 @@
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_enabled': '{{ matrix_client_element_enabled }}'
|
||||
'matrix_client_element_jitsi_preferredDomain': '{{ matrix_client_element_jitsi_preferredDomain }}'
|
||||
'matrix_client_element_brand': '{{ matrix_client_element_brand }}'
|
||||
'matrix_client_element_jitsi_preferredDomain': 'jitsi.{{ matrix_domain }}'
|
||||
'matrix_client_element_default_theme': '{{ matrix_client_element_default_theme }}'
|
||||
'matrix_client_element_registration_enabled': '{{ matrix_client_element_registration_enabled }}'
|
||||
'matrix_client_element_brand': '{{ matrix_client_element_brand | trim }}'
|
||||
'matrix_client_element_branding_welcomeBackgroundUrl': '{{ matrix_client_element_branding_welcomeBackgroundUrl | trim }}'
|
||||
'matrix_client_element_welcome_logo': '{{ matrix_client_element_welcome_logo | trim }}'
|
||||
'matrix_client_element_welcome_logo_link': '{{ matrix_client_element_welcome_logo_link | trim }}'
|
||||
|
||||
- name: Record Element-Web custom variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertbefore: '# Element Settings End'
|
||||
with_dict:
|
||||
'awx_matrix_client_element_welcome_headline': '{{ awx_matrix_client_element_welcome_headline | trim }}'
|
||||
'awx_matrix_client_element_welcome_text': '{{ awx_matrix_client_element_welcome_text | trim }}'
|
||||
|
||||
- name: Set Element-Web custom branding locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_brand': "{{ matrix_client_element_brand }}"
|
||||
when: matrix_client_element_brand | trim | length > 0
|
||||
|
||||
- name: Remove Element-Web custom branding locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_brand: "
|
||||
state: absent
|
||||
when: matrix_client_element_brand | trim | length == 0
|
||||
|
||||
- name: Set fact for 'https' string
|
||||
set_fact:
|
||||
awx_https_string: "https"
|
||||
|
||||
- name: Record Element-Web Background variable locally on AWX
|
||||
- name: Set Element-Web custom logo locally on AWX if defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_welcome_logo': '{{ matrix_client_element_welcome_logo }}'
|
||||
when: ( awx_https_string in matrix_client_element_welcome_logo ) and ( matrix_client_element_welcome_logo | trim | length > 0 )
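The `awx_https_string` fact above exists only so the `when:` clauses can test whether the submitted value looks like an http(s) URL. A sketch of the same guard using the Jinja2 `search` test instead (shown for illustration; not part of the commit):

- name: Set Element-Web custom logo locally on AWX if it is a URL (sketch)
  delegate_to: 127.0.0.1
  lineinfile:
    path: '{{ awx_cached_matrix_vars }}'
    regexp: "^#? *matrix_client_element_welcome_logo:"
    line: "matrix_client_element_welcome_logo: '{{ matrix_client_element_welcome_logo }}'"
    insertafter: '# Element Settings Start'
  when: matrix_client_element_welcome_logo is search('^https?://')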
- name: Remove Element-Web custom logo locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_welcome_logo: "
|
||||
state: absent
|
||||
when: matrix_client_element_welcome_logo | trim | length == 0
|
||||
|
||||
- name: Set Element-Web custom logo link locally on AWX if defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_welcome_logo_link': '{{ matrix_client_element_welcome_logo_link }}'
|
||||
when: ( awx_https_string in matrix_client_element_welcome_logo_link ) and ( matrix_client_element_welcome_logo_link | trim | length > 0 )
|
||||
|
||||
- name: Remove Element-Web custom logo link locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_welcome_logo_link: "
|
||||
state: absent
|
||||
when: matrix_client_element_welcome_logo_link | trim | length == 0
|
||||
|
||||
- name: Set Element-Web custom headline locally on AWX if defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_welcome_headline': '{{ awx_matrix_client_element_welcome_headline }}'
|
||||
when: awx_matrix_client_element_welcome_headline | trim | length > 0
|
||||
|
||||
- name: Remove Element-Web custom headline locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_welcome_headline: "
|
||||
state: absent
|
||||
when: awx_matrix_client_element_welcome_headline | trim | length == 0
|
||||
|
||||
- name: Set Element-Web custom text locally on AWX if defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_welcome_text': '{{ awx_matrix_client_element_welcome_text }}'
|
||||
when: awx_matrix_client_element_welcome_text | trim | length > 0
|
||||
|
||||
- name: Remove Element-Web custom text locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_welcome_text: "
|
||||
state: absent
|
||||
when: awx_matrix_client_element_welcome_text | trim | length == 0
|
||||
|
||||
- name: Set Element-Web background locally on AWX if defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: '{{ item.value }}'"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_client_element_branding_welcomeBackgroundUrl': '{{ matrix_client_element_branding_welcomeBackgroundUrl }}'
|
||||
when: (awx_https_string in matrix_client_element_branding_welcomeBackgroundUrl) and ( matrix_client_element_branding_welcomeBackgroundUrl|length > 0 )
|
||||
when: matrix_client_element_branding_welcomeBackgroundUrl | trim | length > 0
|
||||
|
||||
- name: Remove Element-Web background locally on AWX if not defined
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_client_element_branding_welcomeBackgroundUrl: "
|
||||
state: absent
|
||||
when: matrix_client_element_branding_welcomeBackgroundUrl | trim | length == 0
|
||||
|
||||
- name: Save new 'Configure Element' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -40,13 +158,6 @@
|
||||
dest: '/matrix/awx/configure_element.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Element' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -64,6 +175,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record Element-Web variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -7,7 +8,7 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Element Settings Start'
|
||||
with_dict:
|
||||
'matrix_server_fqn_element': "{{ element_subdomain }}.{{ matrix_domain }}"
|
||||
'matrix_server_fqn_element': "{{ awx_element_subdomain | trim }}.{{ matrix_domain }}"
|
||||
|
||||
- name: Save new 'Configure Element Subdomain' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -21,13 +22,6 @@
|
||||
dest: '/matrix/awx/configure_element_subdomain.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Element Subdomain' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -44,6 +38,6 @@
|
||||
survey_spec: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_element_subdomain.json') }}"
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record Jitsi variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -8,7 +9,7 @@
|
||||
insertafter: '# Jitsi Settings Start'
|
||||
with_dict:
|
||||
'matrix_jitsi_enabled': '{{ matrix_jitsi_enabled }}'
|
||||
'matrix_jitsi_web_config_defaultLanguage': '{{ matrix_jitsi_web_config_defaultLanguage }}'
|
||||
'matrix_jitsi_web_config_defaultLanguage': '{{ matrix_jitsi_web_config_defaultLanguage | trim }}'
|
||||
|
||||
- name: Save new 'Configure Jitsi' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -22,13 +23,6 @@
|
||||
dest: '/matrix/awx/configure_jitsi.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Jitsi' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -46,6 +40,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record ma1sd variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -17,8 +18,8 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Synapse Extension Start'
|
||||
with_dict:
|
||||
'matrix_synapse_ext_password_provider_rest_auth_enabled': 'false'
|
||||
when: ext_matrix_ma1sd_auth_store == 'Synapse Internal'
|
||||
'matrix_synapse_awx_password_provider_rest_auth_enabled': 'false'
|
||||
when: awx_matrix_ma1sd_auth_store == 'Synapse Internal'
|
||||
|
||||
- name: Enable REST auth if using external LDAP/AD with ma1sd
|
||||
delegate_to: 127.0.0.1
|
||||
@ -28,14 +29,9 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Synapse Extension Start'
|
||||
with_dict:
|
||||
'matrix_synapse_ext_password_provider_rest_auth_enabled': 'true'
|
||||
'matrix_synapse_ext_password_provider_rest_auth_endpoint': 'http://matrix-ma1sd:8090'
|
||||
when: ext_matrix_ma1sd_auth_store == 'LDAP/AD'
|
||||
|
||||
- name: Strip header from ma1sd configuration extension if using internal auth
|
||||
set_fact:
|
||||
ext_matrix_ma1sd_configuration_extension_yaml_parsed: "{{ ext_matrix_ma1sd_configuration_extension_yaml.splitlines() | reject('search', '^matrix_client_element_configuration_extension_json:') | list }}"
|
||||
when: ext_matrix_ma1sd_auth_store == 'LDAP/AD'
|
||||
'matrix_synapse_awx_password_provider_rest_auth_enabled': 'true'
|
||||
'matrix_synapse_awx_password_provider_rest_auth_endpoint': '"http://matrix-ma1sd:8090"'
|
||||
when: awx_matrix_ma1sd_auth_store == 'LDAP/AD'
|
||||
|
||||
- name: Remove entire ma1sd configuration extension
|
||||
delegate_to: 127.0.0.1
|
||||
@ -52,22 +48,13 @@
|
||||
regexp: '^# Start ma1sd Extension# End ma1sd Extension'
|
||||
replace: '# Start ma1sd Extension\n# End ma1sd Extension'
|
||||
|
||||
- name: Insert ma1sd configuration extension header if using external LDAP/AD with ma1sd
|
||||
- name: Insert/Update ma1sd configuration extension variables
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
blockinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
line: "matrix_ma1sd_configuration_extension_yaml: |"
|
||||
marker: "# {mark} ma1sd ANSIBLE MANAGED BLOCK"
|
||||
insertafter: '# Start ma1sd Extension'
|
||||
when: ext_matrix_ma1sd_auth_store == 'LDAP/AD'
|
||||
|
||||
- name: Set ma1sd configuration extension if using external LDAP/AD with ma1sd
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertbefore: '# End ma1sd Extension'
|
||||
line: '{{ item }}'
|
||||
with_items: "{{ ext_matrix_ma1sd_configuration_extension_yaml_parsed }}"
|
||||
when: ext_matrix_ma1sd_auth_store == 'LDAP/AD'
|
||||
block: '{{ awx_matrix_ma1sd_configuration_extension_yaml }}'
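Switching from repeated `lineinfile` calls to a single `blockinfile` keeps the whole ma1sd extension between managed markers, so re-runs replace the block instead of appending to it. With the marker used above, the cached vars file would end up looking roughly like this (the block contents are whatever `awx_matrix_ma1sd_configuration_extension_yaml` holds):

# Start ma1sd Extension
# BEGIN ma1sd ANSIBLE MANAGED BLOCK
...contents of awx_matrix_ma1sd_configuration_extension_yaml...
# END ma1sd ANSIBLE MANAGED BLOCK
# End ma1sd Extension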
- name: Record ma1sd Custom variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -75,10 +62,11 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertbefore: '# Custom Settings Start'
|
||||
insertbefore: '# ma1sd Settings End'
|
||||
with_dict:
|
||||
'ext_matrix_ma1sd_auth_store': '{{ ext_matrix_ma1sd_auth_store }}'
|
||||
'ext_matrix_ma1sd_configuration_extension_yaml': '{{ ext_matrix_ma1sd_configuration_extension_yaml.splitlines() | to_json }}'
|
||||
'awx_matrix_ma1sd_auth_store': '{{ awx_matrix_ma1sd_auth_store }}'
|
||||
'awx_matrix_ma1sd_configuration_extension_yaml': '{{ awx_matrix_ma1sd_configuration_extension_yaml.splitlines() | to_json }}'
|
||||
no_log: True
|
||||
|
||||
- name: Save new 'Configure ma1sd' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -92,13 +80,6 @@
|
||||
dest: '/matrix/awx/configure_ma1sd.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure ma1sd (Advanced)' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -116,7 +97,7 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
|
||||
44
roles/matrix-awx/tasks/set_variables_mailer.yml
Normal file
@ -0,0 +1,44 @@
|
||||
---
|
||||
|
||||
- name: Record Mailer variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Email Settings Start'
|
||||
with_dict:
|
||||
'matrix_mailer_relay_use': '{{ matrix_mailer_relay_use }}'
|
||||
|
||||
- name: Save new 'Configure Email Relay' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
template:
|
||||
src: 'roles/matrix-awx/surveys/configure_email_relay.json.j2'
|
||||
dest: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_email_relay.json'
|
||||
|
||||
- name: Copy new 'Configure Email Relay' survey.json to target machine
|
||||
copy:
|
||||
src: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_email_relay.json'
|
||||
dest: '/matrix/awx/configure_email_relay.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Recreate 'Configure Email Relay' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
name: "{{ matrix_domain }} - 1 - Configure Email Relay"
|
||||
description: "Enable MailGun relay to increase verification email reliability."
|
||||
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
|
||||
job_type: run
|
||||
job_tags: "start,setup-mailer"
|
||||
inventory: "{{ member_id }}"
|
||||
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
|
||||
playbook: setup.yml
|
||||
credential: "{{ member_id }} - AWX SSH Key"
|
||||
survey_enabled: true
|
||||
survey_spec: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/configure_email_relay.json') }}"
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
@ -1,13 +1,13 @@
|
||||
|
||||
- name: Limit max upload size to 100MB part 1
- name: Limit max upload size to 200MB part 1
set_fact:
matrix_synapse_max_upload_size_mb: "100"
when: matrix_synapse_max_upload_size_mb_raw|int >= 100
matrix_synapse_max_upload_size_mb: "200"
when: awx_synapse_max_upload_size_mb | int >= 200

- name: Limit max upload size to 100MB part 2
- name: Limit max upload size to 200MB part 2
set_fact:
matrix_synapse_max_upload_size_mb: "{{ matrix_synapse_max_upload_size_mb_raw }}"
when: matrix_synapse_max_upload_size_mb_raw|int < 100
matrix_synapse_max_upload_size_mb: "{{ awx_synapse_max_upload_size_mb }}"
when: awx_synapse_max_upload_size_mb | int < 200

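The two `set_fact` tasks above cap the survey value at 200MB. As a sketch, the same clamp could be expressed in a single task with the Jinja2 `min` filter (not part of the commit):

- name: Limit max upload size to 200MB (sketch of an equivalent single task)
  set_fact:
    # pick whichever is smaller: the requested size or the 200MB ceiling
    matrix_synapse_max_upload_size_mb: "{{ [awx_synapse_max_upload_size_mb | int, 200] | min }}"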
- name: Record Synapse variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -32,13 +32,13 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^matrix_synapse_auto_join_rooms: .*$"
|
||||
replace: "matrix_synapse_auto_join_rooms: []"
|
||||
when: matrix_synapse_auto_join_rooms_raw|length == 0
|
||||
when: awx_synapse_auto_join_rooms | length == 0
|
||||
|
||||
- name: If the raw input is not empty, start constructing the parsed auto_join_rooms list
set_fact:
matrix_synapse_auto_join_rooms_array: |-
{{ matrix_synapse_auto_join_rooms_raw.splitlines() | to_json }}
when: matrix_synapse_auto_join_rooms_raw|length > 0
awx_synapse_auto_join_rooms_array: |-
{{ awx_synapse_auto_join_rooms.splitlines() | to_json }}
when: awx_synapse_auto_join_rooms | length > 0

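The `splitlines() | to_json` expression turns the multi-line survey input into a JSON list that fits on a single line of the cached vars file. For example (room aliases are made up):

# awx_synapse_auto_join_rooms as entered in the survey, one room per line:
#   #announcements:example.com
#   #general:example.com
#
# awx_synapse_auto_join_rooms.splitlines() | to_json then renders as:
#   ["#announcements:example.com", "#general:example.com"]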
- name: Record Synapse variable 'matrix_synapse_auto_join_rooms' locally on AWX, if it's not blank
|
||||
delegate_to: 127.0.0.1
|
||||
@ -48,8 +48,8 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Synapse Settings Start'
|
||||
with_dict:
|
||||
"matrix_synapse_auto_join_rooms": "{{ matrix_synapse_auto_join_rooms_array }}"
|
||||
when: matrix_synapse_auto_join_rooms_raw|length > 0
|
||||
"matrix_synapse_auto_join_rooms": "{{ awx_synapse_auto_join_rooms_array }}"
|
||||
when: awx_synapse_auto_join_rooms | length > 0
|
||||
|
||||
- name: Record Synapse Shared Secret if it's defined
|
||||
delegate_to: 127.0.0.1
|
||||
@ -59,33 +59,33 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Synapse Settings Start'
|
||||
with_dict:
|
||||
'matrix_synapse_registration_shared_secret': '{{ ext_matrix_synapse_registration_shared_secret }}'
|
||||
when: ext_matrix_synapse_registration_shared_secret|length > 0
|
||||
'matrix_synapse_registration_shared_secret': '{{ awx_matrix_synapse_registration_shared_secret }}'
|
||||
when: awx_matrix_synapse_registration_shared_secret | length > 0
|
||||
|
||||
- name: Record registrations_require_3pid extra variable if true
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "{{ item }}:"
|
||||
regexp: "{{ item }}"
|
||||
line: "{{ item }}"
|
||||
insertbefore: '# Synapse Extension End'
|
||||
with_items:
|
||||
- " registrations_require_3pid:"
|
||||
- " - email"
|
||||
when: ext_registrations_require_3pid|bool
|
||||
when: awx_registrations_require_3pid | bool
|
||||
|
||||
- name: Remove registrations_require_3pid extra variable if false
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "{{ item }}:"
|
||||
regexp: "{{ item }}"
|
||||
line: "{{ item }}"
|
||||
insertbefore: '# Synapse Extension End'
|
||||
state: absent
|
||||
with_items:
|
||||
- " registrations_require_3pid:"
|
||||
- " - email"
|
||||
when: not ext_registrations_require_3pid|bool
|
||||
when: not awx_registrations_require_3pid | bool
|
||||
|
||||
- name: Remove URL Languages
|
||||
delegate_to: 127.0.0.1
|
||||
@ -97,21 +97,21 @@
|
||||
|
||||
- name: Set URL languages default if raw inputs empty
|
||||
set_fact:
|
||||
ext_url_preview_accept_language_default: 'en'
|
||||
when: ext_url_preview_accept_language_raw|length == 0
|
||||
awx_url_preview_accept_language_default: 'en'
|
||||
when: awx_url_preview_accept_language | length == 0
|
||||
|
||||
- name: Set URL languages default if raw inputs not empty
|
||||
set_fact:
|
||||
ext_url_preview_accept_language_default: "{{ ext_url_preview_accept_language_raw }}"
|
||||
when: ext_url_preview_accept_language_raw|length > 0
|
||||
awx_url_preview_accept_language_default: "{{ awx_url_preview_accept_language }}"
|
||||
when: awx_url_preview_accept_language|length > 0
|
||||
|
||||
- name: Set URL languages if raw inputs empty
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: '^ url_preview_accept_language:'
|
||||
line: " - {{ ext_url_preview_accept_language_default }}"
|
||||
when: ext_url_preview_accept_language_raw|length == 0
|
||||
line: " - {{ awx_url_preview_accept_language_default }}"
|
||||
when: awx_url_preview_accept_language|length == 0
|
||||
|
||||
- name: Set URL languages if raw inputs not empty
|
||||
delegate_to: 127.0.0.1
|
||||
@ -119,8 +119,8 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: '^ url_preview_accept_language:'
|
||||
line: " - {{ item }}"
|
||||
with_items: "{{ ext_url_preview_accept_language_raw.splitlines() }}"
|
||||
when: ext_url_preview_accept_language_raw|length > 0
|
||||
with_items: "{{ awx_url_preview_accept_language.splitlines() }}"
|
||||
when: awx_url_preview_accept_language | length > 0
|
||||
|
||||
- name: Remove Federation Whitelisting 1
|
||||
delegate_to: 127.0.0.1
|
||||
@ -143,7 +143,7 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: '^matrix_synapse_configuration_extension_yaml: \|'
|
||||
line: " federation_domain_whitelist:"
|
||||
when: ext_federation_whitelist_raw|length > 0
|
||||
when: awx_federation_whitelist | length > 0
|
||||
|
||||
- name: Set Federation Whitelisting 2
|
||||
delegate_to: 127.0.0.1
|
||||
@ -151,27 +151,16 @@
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
insertafter: '^ federation_domain_whitelist:'
|
||||
line: " - {{ item }}"
|
||||
with_items: "{{ ext_federation_whitelist_raw.splitlines() }}"
|
||||
when: ext_federation_whitelist_raw|length > 0
|
||||
with_items: "{{ awx_federation_whitelist.splitlines() }}"
|
||||
when: awx_federation_whitelist | length > 0
|
||||
|
||||
- name: Record Synapse Custom variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertafter: '# Custom Settings Start'
|
||||
with_dict:
|
||||
'ext_federation_whitelist_raw': '{{ ext_federation_whitelist_raw.splitlines() | to_json }}'
|
||||
'ext_url_preview_accept_language_default': '{{ ext_url_preview_accept_language_default.splitlines() | to_json }}'
|
||||
- name: Set awx_recaptcha_public_key to a 'public-key' if undefined
|
||||
set_fact: awx_recaptcha_public_key="public-key"
|
||||
when: (awx_recaptcha_public_key is not defined) or (awx_recaptcha_public_key|length == 0)
|
||||
|
||||
- name: Set ext_recaptcha_public_key to a 'public-key' if undefined
|
||||
set_fact: ext_recaptcha_public_key="public-key"
|
||||
when: (ext_recaptcha_public_key is not defined) or (ext_recaptcha_public_key|length == 0)
|
||||
|
||||
- name: Set ext_recaptcha_private_key to a 'private-key' if undefined
|
||||
set_fact: ext_recaptcha_private_key="private-key"
|
||||
when: (ext_recaptcha_private_key is not defined) or (ext_recaptcha_private_key|length == 0)
|
||||
- name: Set awx_recaptcha_private_key to a 'private-key' if undefined
|
||||
set_fact: awx_recaptcha_private_key="private-key"
|
||||
when: (awx_recaptcha_private_key is not defined) or (awx_recaptcha_private_key|length == 0)
|
||||
|
||||
- name: Record Synapse Extension variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -181,9 +170,23 @@
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertbefore: '# Synapse Extension End'
|
||||
with_dict:
|
||||
' enable_registration_captcha': '{{ ext_enable_registration_captcha }}'
|
||||
' recaptcha_public_key': '{{ ext_recaptcha_public_key }}'
|
||||
' recaptcha_private_key': '{{ ext_recaptcha_private_key }}'
|
||||
' enable_registration_captcha': '{{ awx_enable_registration_captcha }}'
|
||||
' recaptcha_public_key': '{{ awx_recaptcha_public_key }}'
|
||||
' recaptcha_private_key': '{{ awx_recaptcha_private_key }}'
|
||||
|
||||
- name: Record Synapse Custom variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
lineinfile:
|
||||
path: '{{ awx_cached_matrix_vars }}'
|
||||
regexp: "^#? *{{ item.key | regex_escape() }}:"
|
||||
line: "{{ item.key }}: {{ item.value }}"
|
||||
insertbefore: '# Synapse Settings End'
|
||||
with_dict:
|
||||
'awx_federation_whitelist': '{{ awx_federation_whitelist.splitlines() | to_json }}'
|
||||
'awx_url_preview_accept_language_default': '{{ awx_url_preview_accept_language_default.splitlines() | to_json }}'
|
||||
'awx_enable_registration_captcha': '{{ awx_enable_registration_captcha }}'
|
||||
'awx_recaptcha_public_key': '"{{ awx_recaptcha_public_key }}"'
|
||||
'awx_recaptcha_private_key': '"{{ awx_recaptcha_private_key }}"'
|
||||
|
||||
- name: Save new 'Configure Synapse' survey.json to the AWX tower, template
|
||||
delegate_to: 127.0.0.1
|
||||
@ -197,13 +200,6 @@
|
||||
dest: '/matrix/awx/configure_synapse.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Synapse' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -221,6 +217,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -1,3 +1,4 @@
|
||||
---
|
||||
|
||||
- name: Record Synapse Admin variables locally on AWX
|
||||
delegate_to: 127.0.0.1
|
||||
@ -21,13 +22,6 @@
|
||||
dest: '/matrix/awx/configure_synapse_admin.json'
|
||||
mode: '0660'
|
||||
|
||||
- name: Collect AWX admin token the hard way!
|
||||
delegate_to: 127.0.0.1
|
||||
shell: |
|
||||
curl -sku {{ tower_username }}:{{ tower_password }} -H "Content-Type: application/json" -X POST -d '{"description":"Tower CLI", "application":null, "scope":"write"}' https://{{ tower_host }}/api/v2/users/1/personal_tokens/ | jq '.token' | sed -r 's/\"//g'
|
||||
register: tower_token
|
||||
no_log: True
|
||||
|
||||
- name: Recreate 'Configure Synapse Admin' job template
|
||||
delegate_to: 127.0.0.1
|
||||
awx.awx.tower_job_template:
|
||||
@ -45,6 +39,6 @@
|
||||
become_enabled: yes
|
||||
state: present
|
||||
verbosity: 1
|
||||
tower_host: "https://{{ tower_host }}"
|
||||
tower_oauthtoken: "{{ tower_token.stdout }}"
|
||||
tower_host: "https://{{ awx_host }}"
|
||||
tower_oauthtoken: "{{ awx_session_token.ansible_facts.tower_token.token }}"
|
||||
validate_certs: yes
|
||||
|
||||
@ -11,6 +11,9 @@ matrix_domain: ~
|
||||
# This and the Element FQN (see below) are expected to be on the same server.
|
||||
matrix_server_fqn_matrix: "matrix.{{ matrix_domain }}"
|
||||
|
||||
# This is where you access federation API.
|
||||
matrix_server_fqn_matrix_federation: '{{ matrix_server_fqn_matrix }}'
|
||||
|
||||
# This is where you access the Element web UI from (if enabled via matrix_client_element_enabled; enabled by default).
|
||||
# This and the Matrix FQN (see above) are expected to be on the same server.
|
||||
matrix_server_fqn_element: "element.{{ matrix_domain }}"
|
||||
@ -83,8 +86,8 @@ matrix_host_command_openssl: "/usr/bin/env openssl"
|
||||
matrix_host_command_systemctl: "/usr/bin/env systemctl"
|
||||
matrix_host_command_sh: "/usr/bin/env sh"
|
||||
|
||||
matrix_ntpd_package: "ntp"
matrix_ntpd_service: "{{ 'ntpd' if ansible_os_family == 'RedHat' or ansible_distribution == 'Archlinux' else 'ntp' }}"
matrix_ntpd_package: "{{ 'systemd-timesyncd' if (ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7') or (ansible_distribution == 'Ubuntu' and ansible_distribution_major_version > '18') else ( 'systemd' if ansible_os_family == 'Suse' else 'ntp' ) }}"
matrix_ntpd_service: "{{ 'systemd-timesyncd' if (ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7') or (ansible_distribution == 'Ubuntu' and ansible_distribution_major_version > '18') or ansible_distribution == 'Archlinux' or ansible_os_family == 'Suse' else ('ntpd' if ansible_os_family == 'RedHat' else 'ntp') }}"

matrix_homeserver_url: "https://{{ matrix_server_fqn_matrix }}"

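The reworked `matrix_ntpd_package` / `matrix_ntpd_service` expressions pick a time-sync implementation per distribution. A worked example of how the conditionals resolve for the cases they name:

# CentOS 8+, Ubuntu 20.04+  -> package: systemd-timesyncd  service: systemd-timesyncd
# SUSE family               -> package: systemd            service: systemd-timesyncd
# Arch Linux                -> package: ntp                service: systemd-timesyncd
# CentOS 7, other RedHat    -> package: ntp                service: ntpd
# Debian, Ubuntu 18.04      -> package: ntp                service: ntp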
@ -1,7 +1,10 @@
|
||||
---
|
||||
|
||||
- include_tasks: "{{ role_path }}/tasks/server_base/setup_centos.yml"
|
||||
when: ansible_distribution == 'CentOS'
|
||||
when: ansible_distribution == 'CentOS' and ansible_distribution_major_version < '8'
|
||||
|
||||
- include_tasks: "{{ role_path }}/tasks/server_base/setup_centos8.yml"
|
||||
when: ansible_distribution == 'CentOS' and ansible_distribution_major_version > '7'
|
||||
|
||||
- block:
|
||||
# ansible_lsb is only available if lsb-release is installed.
|
||||
|
||||
@ -4,7 +4,6 @@
|
||||
pacman:
|
||||
name:
|
||||
- python-docker
|
||||
- "{{ matrix_ntpd_package }}"
|
||||
# TODO This needs to be verified. Which version do we need?
|
||||
- fuse3
|
||||
- python-dnspython
|
||||
|
||||
47
roles/matrix-base/tasks/server_base/setup_centos8.yml
Normal file
@ -0,0 +1,47 @@
|
||||
---
|
||||
|
||||
- name: Ensure Docker repository is enabled
|
||||
template:
|
||||
src: "{{ role_path }}/files/yum.repos.d/{{ item }}"
|
||||
dest: "/etc/yum.repos.d/{{ item }}"
|
||||
owner: "root"
|
||||
group: "root"
|
||||
mode: 0644
|
||||
with_items:
|
||||
- docker-ce.repo
|
||||
when: matrix_docker_installation_enabled|bool and matrix_docker_package_name == 'docker-ce'
|
||||
|
||||
- name: Ensure Docker's RPM key is trusted
|
||||
rpm_key:
|
||||
state: present
|
||||
key: https://download.docker.com/linux/centos/gpg
|
||||
when: matrix_docker_installation_enabled|bool and matrix_docker_package_name == 'docker-ce'
|
||||
|
||||
- name: Ensure EPEL is installed
|
||||
yum:
|
||||
name:
|
||||
- epel-release
|
||||
state: latest
|
||||
update_cache: yes
|
||||
|
||||
- name: Ensure yum packages are installed
|
||||
yum:
|
||||
name:
|
||||
- "{{ matrix_ntpd_package }}"
|
||||
- fuse
|
||||
state: latest
|
||||
update_cache: yes
|
||||
|
||||
- name: Ensure Docker is installed
|
||||
yum:
|
||||
name:
|
||||
- "{{ matrix_docker_package_name }}"
|
||||
- python3-pip
|
||||
state: latest
|
||||
when: matrix_docker_installation_enabled|bool
|
||||
|
||||
- name: Ensure Docker-Py is installed
|
||||
pip:
|
||||
name: docker-py
|
||||
state: latest
|
||||
when: matrix_docker_installation_enabled|bool
|
||||
@ -23,14 +23,7 @@
|
||||
repo: "deb [arch={{ matrix_debian_arch }}] https://download.docker.com/linux/{{ ansible_distribution|lower }} {{ ansible_distribution_release }} stable"
|
||||
state: present
|
||||
update_cache: yes
|
||||
when: matrix_docker_installation_enabled|bool and matrix_docker_package_name == 'docker-ce' and not ansible_distribution_release == 'bullseye'
|
||||
|
||||
- name: Ensure Docker repository is enabled (using Debian Buster on Debian Bullseye, for which there is no Docker yet)
|
||||
apt_repository:
|
||||
repo: "deb [arch={{ matrix_debian_arch }}] https://download.docker.com/linux/{{ ansible_distribution|lower }} buster stable"
|
||||
state: present
|
||||
update_cache: yes
|
||||
when: matrix_docker_installation_enabled|bool and matrix_docker_package_name == 'docker-ce' and ansible_distribution_release == 'bullseye'
|
||||
when: matrix_docker_installation_enabled|bool and matrix_docker_package_name == 'docker-ce'
|
||||
|
||||
- name: Ensure APT packages are installed
|
||||
apt:
|
||||
|
||||
@ -1,4 +1,4 @@
#jinja2: lstrip_blocks: "True"
{
"m.server": "{{ matrix_server_fqn_matrix }}:{{ matrix_federation_public_port }}"
"m.server": "{{ matrix_server_fqn_matrix_federation }}:{{ matrix_federation_public_port }}"
}

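With the new `matrix_server_fqn_matrix_federation` variable (see the defaults hunk above), the rendered /.well-known/matrix/server response would look roughly like this (hostname and port are examples):

{
    "m.server": "matrix.example.com:8448"
}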
@ -2,7 +2,12 @@
|
||||
# See: https://github.com/anoadragon453/matrix-reminder-bot
|
||||
|
||||
matrix_bot_matrix_reminder_bot_enabled: true
|
||||
matrix_bot_matrix_reminder_bot_version: release-v0.2.0
|
||||
|
||||
matrix_bot_matrix_reminder_bot_container_self_build: false
|
||||
matrix_bot_matrix_reminder_bot_docker_repo: "https://github.com/anoadragon453/matrix-reminder-bot.git"
|
||||
matrix_bot_matrix_reminder_bot_docker_src_files_path: "{{ matrix_base_data_path }}/matrix-reminder-bot/docker-src"
|
||||
|
||||
matrix_bot_matrix_reminder_bot_version: release-v0.2.1
|
||||
matrix_bot_matrix_reminder_bot_docker_image: "{{ matrix_container_global_registry_prefix }}anoa/matrix-reminder-bot:{{ matrix_bot_matrix_reminder_bot_version }}"
|
||||
matrix_bot_matrix_reminder_bot_docker_image_force_pull: "{{ matrix_bot_matrix_reminder_bot_docker_image.endswith(':latest') }}"
|
||||
|
||||
|
||||
@ -37,6 +37,7 @@
|
||||
- { path: "{{ matrix_bot_matrix_reminder_bot_config_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_matrix_reminder_bot_data_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_matrix_reminder_bot_data_store_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_matrix_reminder_bot_docker_src_files_path }}", when: true}
|
||||
when: "item.when|bool"
|
||||
|
||||
- name: Ensure matrix-reminder-bot image is pulled
|
||||
@ -45,6 +46,27 @@
|
||||
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
|
||||
force_source: "{{ matrix_bot_matrix_reminder_bot_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
|
||||
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_bot_matrix_reminder_bot_docker_image_force_pull }}"
|
||||
when: "not matrix_bot_matrix_reminder_bot_container_self_build|bool"
|
||||
|
||||
- name: Ensure matrix-reminder-bot repository is present on self-build
|
||||
git:
|
||||
repo: "{{ matrix_bot_matrix_reminder_bot_docker_repo }}"
|
||||
dest: "{{ matrix_bot_matrix_reminder_bot_docker_src_files_path }}"
|
||||
force: "yes"
|
||||
register: matrix_bot_matrix_reminder_bot_git_pull_results
|
||||
when: "matrix_bot_matrix_reminder_bot_container_self_build|bool"
|
||||
|
||||
- name: Ensure matrix-reminder-bot image is built
|
||||
docker_image:
|
||||
name: "{{ matrix_bot_matrix_reminder_bot_docker_image }}"
|
||||
source: build
|
||||
force_source: "{{ matrix_bot_matrix_reminder_bot_git_pull_results.changed if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
|
||||
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_mailer_git_pull_results.changed }}"
|
||||
build:
|
||||
dockerfile: docker/Dockerfile
|
||||
path: "{{ matrix_bot_matrix_reminder_bot_docker_src_files_path }}"
|
||||
pull: yes
|
||||
when: "matrix_bot_matrix_reminder_bot_container_self_build|bool"
|
||||
|
||||
- name: Ensure matrix-reminder-bot config installed
|
||||
copy:
|
||||
|
||||
@ -2,13 +2,21 @@
|
||||
# See: https://github.com/matrix-org/mjolnir
|
||||
|
||||
matrix_bot_mjolnir_enabled: true
|
||||
matrix_bot_mjolnir_version: "v0.1.17"
|
||||
matrix_bot_mjolnir_docker_image: "{{ matrix_container_global_registry_prefix }}matrixdotorg/mjolnir:{{ matrix_bot_mjolnir_version }}"
|
||||
|
||||
matrix_bot_mjolnir_version: "v1.1.20"
|
||||
|
||||
matrix_bot_mjolnir_container_image_self_build: false
|
||||
matrix_bot_mjolnir_container_image_self_build_repo: "https://github.com/matrix-org/mjolnir.git"
|
||||
|
||||
matrix_bot_mjolnir_docker_image: "{{ matrix_bot_mjolnir_docker_image_name_prefix }}matrixdotorg/mjolnir:{{ matrix_bot_mjolnir_version }}"
|
||||
matrix_bot_mjolnir_docker_image_name_prefix: "{{ 'localhost/' if matrix_bot_mjolnir_container_image_self_build else matrix_container_global_registry_prefix }}"
|
||||
|
||||
matrix_bot_mjolnir_docker_image_force_pull: "{{ matrix_bot_mjolnir_docker_image.endswith(':latest') }}"
|
||||
|
||||
matrix_bot_mjolnir_base_path: "{{ matrix_base_data_path }}/mjolnir"
|
||||
matrix_bot_mjolnir_config_path: "{{ matrix_bot_mjolnir_base_path }}/config"
|
||||
matrix_bot_mjolnir_data_path: "{{ matrix_bot_mjolnir_base_path }}/data"
|
||||
matrix_bot_mjolnir_docker_src_files_path: "{{ matrix_bot_mjolnir_base_path }}/docker-src"
|
||||
|
||||
# A list of extra arguments to pass to the container
|
||||
matrix_bot_mjolnir_container_extra_arguments: []
|
||||
|
||||
@ -1,3 +1,10 @@
|
||||
# See https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/1070
|
||||
# and https://github.com/spantaleev/matrix-docker-ansible-deploy/commit/1ab507349c752042d26def3e95884f6df8886b74#commitcomment-51108407
|
||||
- name: Fail if trying to self-build on Ansible < 2.8
|
||||
fail:
|
||||
msg: "To self-build the Mjolnir image, you should use Ansible 2.8 or higher. See docs/ansible.md"
|
||||
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_bot_mjolnir_container_image_self_build and matrix_bot_mjolnir_enabled"
|
||||
|
||||
- set_fact:
|
||||
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-bot-mjolnir.service'] }}"
|
||||
when: matrix_bot_mjolnir_enabled|bool
|
||||
|
||||
@ -14,14 +14,36 @@
|
||||
- { path: "{{ matrix_bot_mjolnir_base_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_mjolnir_config_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_mjolnir_data_path }}", when: true }
|
||||
- { path: "{{ matrix_bot_mjolnir_docker_src_files_path }}", when: "{{ matrix_bot_mjolnir_container_image_self_build }}" }
|
||||
when: "item.when|bool"
|
||||
|
||||
- name: Ensure mjolnir image is pulled
|
||||
- name: Ensure mjolnir Docker image is pulled
|
||||
docker_image:
|
||||
name: "{{ matrix_bot_mjolnir_docker_image }}"
|
||||
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
|
||||
force_source: "{{ matrix_bot_mjolnir_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
|
||||
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_bot_mjolnir_docker_image_force_pull }}"
|
||||
when: "not matrix_bot_mjolnir_container_image_self_build|bool"
|
||||
|
||||
- name: Ensure mjolnir repository is present on self-build
|
||||
git:
|
||||
repo: "{{ matrix_bot_mjolnir_container_image_self_build_repo }}"
|
||||
dest: "{{ matrix_bot_mjolnir_docker_src_files_path }}"
|
||||
version: "{{ matrix_bot_mjolnir_docker_image.split(':')[1] }}"
|
||||
force: "yes"
|
||||
register: matrix_bot_mjolnir_git_pull_results
|
||||
when: "matrix_bot_mjolnir_container_image_self_build|bool"
|
||||
|
||||
- name: Ensure mjolnir Docker image is built
|
||||
docker_image:
|
||||
name: "{{ matrix_bot_mjolnir_docker_image }}"
|
||||
source: build
|
||||
force_source: "{{ matrix_bot_mjolnir_git_pull_results.changed }}"
|
||||
build:
|
||||
dockerfile: Dockerfile
|
||||
path: "{{ matrix_bot_mjolnir_docker_src_files_path }}"
|
||||
pull: yes
|
||||
when: "matrix_bot_mjolnir_container_image_self_build|bool"
|
||||
|
||||
- name: Ensure matrix-bot-mjolnir config installed
|
||||
copy:
|
||||
|
||||
@ -7,7 +7,7 @@ matrix_appservice_irc_container_self_build: false
|
||||
matrix_appservice_irc_docker_repo: "https://github.com/matrix-org/matrix-appservice-irc.git"
|
||||
matrix_appservice_irc_docker_src_files_path: "{{ matrix_base_data_path }}/appservice-irc/docker-src"
|
||||
|
||||
matrix_appservice_irc_version: release-0.26.0
|
||||
matrix_appservice_irc_version: release-0.31.0
|
||||
matrix_appservice_irc_docker_image: "{{ matrix_container_global_registry_prefix }}matrixdotorg/matrix-appservice-irc:{{ matrix_appservice_irc_version }}"
|
||||
matrix_appservice_irc_docker_image_force_pull: "{{ matrix_appservice_irc_docker_image.endswith(':latest') }}"
|
||||
|
||||
|
||||
@ -3,7 +3,7 @@
|
||||
- name: Fail if trying to self-build on Ansible < 2.8
|
||||
fail:
|
||||
msg: "To self-build the Element image, you should use Ansible 2.8 or higher. See docs/ansible.md"
|
||||
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_appservice_irc_container_self_build"
|
||||
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_appservice_irc_container_self_build and matrix_appservice_irc_enabled"
|
||||
|
||||
# If the matrix-synapse role is not used, `matrix_synapse_role_executed` won't exist.
|
||||
# We don't want to fail in such cases.
|
||||
|
||||
@ -7,7 +7,7 @@ matrix_appservice_slack_container_self_build: false
|
||||
matrix_appservice_slack_docker_repo: "https://github.com/matrix-org/matrix-appservice-slack.git"
|
||||
matrix_appservice_slack_docker_src_files_path: "{{ matrix_base_data_path }}/appservice-slack/docker-src"
|
||||
|
||||
matrix_appservice_slack_version: release-1.5.0
|
||||
matrix_appservice_slack_version: release-1.8.0
|
||||
matrix_appservice_slack_docker_image: "{{ matrix_container_global_registry_prefix }}matrixdotorg/matrix-appservice-slack:{{ matrix_appservice_slack_version }}"
|
||||
matrix_appservice_slack_docker_image_force_pull: "{{ matrix_appservice_slack_docker_image.endswith(':latest') }}"
|
||||
|
||||
|
||||
@ -3,7 +3,7 @@
|
||||
- name: Fail if trying to self-build on Ansible < 2.8
|
||||
fail:
|
||||
msg: "To self-build the Element image, you should use Ansible 2.8 or higher. See docs/ansible.md"
|
||||
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_appservice_slack_container_self_build"
|
||||
when: "ansible_version.major == 2 and ansible_version.minor < 8 and matrix_appservice_slack_container_self_build and matrix_appservice_slack_enabled"
|
||||
|
||||
# If the matrix-synapse role is not used, `matrix_synapse_role_executed` won't exist.
|
||||
# We don't want to fail in such cases.
|
||||
|
||||
@ -3,13 +3,20 @@
|
||||
|
||||
matrix_appservice_webhooks_enabled: true
|
||||
|
||||
matrix_appservice_webhooks_container_image_self_build: false
|
||||
matrix_appservice_webhooks_container_image_self_build_repo: "https://github.com/turt2live/matrix-appservice-webhooks"
|
||||
matrix_appservice_webhooks_container_image_self_build_repo_version: "{{ 'master' if matrix_appservice_webhooks_version == 'latest' else matrix_appservice_webhooks_version }}"
|
||||
matrix_appservice_webhooks_container_image_self_build_repo_dockerfile_path: "Dockerfile"
|
||||
|
||||
matrix_appservice_webhooks_version: latest
|
||||
matrix_appservice_webhooks_docker_image: "{{ matrix_container_global_registry_prefix }}turt2live/matrix-appservice-webhooks:{{ matrix_appservice_webhooks_version }}"
|
||||
matrix_appservice_webhooks_docker_image: "{{ matrix_appservice_webhooks_docker_image_name_prefix }}turt2live/matrix-appservice-webhooks:{{ matrix_appservice_webhooks_version }}"
|
||||
matrix_appservice_webhooks_docker_image_name_prefix: "{{ 'localhost/' if matrix_appservice_webhooks_container_image_self_build else matrix_container_global_registry_prefix }}"
|
||||
matrix_appservice_webhooks_docker_image_force_pull: "{{ matrix_appservice_webhooks_docker_image.endswith(':latest') }}"
|
||||
|
||||
matrix_appservice_webhooks_base_path: "{{ matrix_base_data_path }}/appservice-webhooks"
|
||||
matrix_appservice_webhooks_config_path: "{{ matrix_appservice_webhooks_base_path }}/config"
|
||||
matrix_appservice_webhooks_data_path: "{{ matrix_appservice_webhooks_base_path }}/data"
|
||||
matrix_appservice_webhooks_docker_src_files_path: "{{ matrix_appservice_webhooks_base_path }}/docker-src"
|
||||
|
||||
# If nginx-proxy is disabled, the bridge itself expects its endpoint to be on its own domain (e.g. "localhost:6789")
|
||||
matrix_appservice_webhooks_public_endpoint: /appservice-webhooks
|
||||
|
||||
@ -1,23 +1,47 @@
|
||||
---
|
||||
|
||||
- name: Ensure AppService webhooks paths exist
|
||||
file:
|
||||
path: "{{ item.path }}"
|
||||
state: directory
|
||||
mode: 0750
|
||||
owner: "{{ matrix_user_username }}"
|
||||
group: "{{ matrix_user_groupname }}"
|
||||
with_items:
|
||||
- { path: "{{ matrix_appservice_webhooks_base_path }}", when: true }
|
||||
- { path: "{{ matrix_appservice_webhooks_config_path }}", when: true }
|
||||
- { path: "{{ matrix_appservice_webhooks_data_path }}", when: true }
|
||||
- { path: "{{ matrix_appservice_webhooks_docker_src_files_path }}", when: "{{ matrix_appservice_webhooks_container_image_self_build }}"}
|
||||
when: "item.when|bool"
|
||||
|
||||
- name: Ensure Appservice webhooks image is pulled
|
||||
docker_image:
|
||||
name: "{{ matrix_appservice_webhooks_docker_image }}"
|
||||
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
|
||||
force_source: "{{ matrix_appservice_webhooks_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
|
||||
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_appservice_webhooks_docker_image_force_pull }}"
|
||||
when: "not matrix_appservice_webhooks_container_image_self_build|bool"
|
||||
|
||||
- name: Ensure AppService webhooks paths exist
|
||||
file:
|
||||
path: "{{ item }}"
|
||||
state: directory
|
||||
mode: 0750
|
||||
owner: "{{ matrix_user_username }}"
|
||||
group: "{{ matrix_user_groupname }}"
|
||||
with_items:
|
||||
- "{{ matrix_appservice_webhooks_base_path }}"
|
||||
- "{{ matrix_appservice_webhooks_config_path }}"
|
||||
- "{{ matrix_appservice_webhooks_data_path }}"
|
||||
- block:
|
||||
- name: Ensure Appservice webhooks repository is present on self-build
|
||||
git:
|
||||
repo: "{{ matrix_appservice_webhooks_container_image_self_build_repo }}"
|
||||
dest: "{{ matrix_appservice_webhooks_docker_src_files_path }}"
|
||||
version: "{{ matrix_appservice_webhooks_container_image_self_build_repo_version }}"
|
||||
force: "yes"
|
||||
register: matrix_appservice_webhooks_git_pull_results
|
||||
|
||||
- name: Ensure Appservice webhooks Docker image is built
|
||||
docker_image:
|
||||
name: "{{ matrix_appservice_webhooks_docker_image }}"
|
||||
source: build
|
||||
force_source: "{{ matrix_appservice_webhooks_git_pull_results.changed if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
|
||||
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_appservice_webhooks_git_pull_results.changed }}"
|
||||
build:
|
||||
dockerfile: "{{ matrix_appservice_webhooks_container_image_self_build_repo_dockerfile_path }}"
|
||||
path: "{{ matrix_appservice_webhooks_docker_src_files_path }}"
|
||||
pull: yes
|
||||
when: "matrix_appservice_webhooks_container_image_self_build|bool"
|
||||
|
||||
- name: Ensure Matrix Appservice webhooks config is installed
|
||||
copy:
|
||||
|
||||
100
roles/matrix-bridge-beeper-linkedin/defaults/main.yml
Normal file
@ -0,0 +1,100 @@
|
||||
# beeper-linkedin is a Matrix <-> LinkedIn bridge
|
||||
# See: https://gitlab.com/beeper/linkedin
|
||||
|
||||
matrix_beeper_linkedin_enabled: true
|
||||
|
||||
matrix_beeper_linkedin_version: v0.5.1
|
||||
# See: https://gitlab.com/beeper/linkedin/container_registry
|
||||
matrix_beeper_linkedin_docker_image: "registry.gitlab.com/beeper/linkedin:{{ matrix_beeper_linkedin_version }}-amd64"
|
||||
matrix_beeper_linkedin_docker_image_force_pull: "{{ matrix_beeper_linkedin_docker_image.endswith(':latest-amd64') }}"
|
||||
|
||||
matrix_beeper_linkedin_base_path: "{{ matrix_base_data_path }}/beeper-linkedin"
|
||||
matrix_beeper_linkedin_config_path: "{{ matrix_beeper_linkedin_base_path }}/config"
|
||||
matrix_beeper_linkedin_data_path: "{{ matrix_beeper_linkedin_base_path }}/data"
|
||||
|
||||
matrix_beeper_linkedin_homeserver_address: "{{ matrix_homeserver_container_url }}"
|
||||
matrix_beeper_linkedin_homeserver_domain: "{{ matrix_domain }}"
|
||||
matrix_beeper_linkedin_appservice_address: "http://matrix-beeper-linkedin:29319"
|
||||
|
||||
# A list of extra arguments to pass to the container
|
||||
matrix_beeper_linkedin_container_extra_arguments: []
|
||||
|
||||
# List of systemd services that matrix-beeper-linkedin.service depends on.
|
||||
matrix_beeper_linkedin_systemd_required_services_list: ['docker.service']
|
||||
|
||||
# List of systemd services that matrix-beeper-linkedin.service wants
|
||||
matrix_beeper_linkedin_systemd_wanted_services_list: []
|
||||
|
||||
matrix_beeper_linkedin_appservice_token: ""
|
||||
matrix_beeper_linkedin_homeserver_token: ""
|
||||
|
||||
matrix_beeper_linkedin_appservice_bot_username: linkedinbot
|
||||
|
||||
|
||||
# Database-related configuration fields.
|
||||
# Only Postgres is supported.
|
||||
matrix_beeper_linkedin_database_engine: "postgres"
|
||||
|
||||
matrix_beeper_linkedin_database_username: 'matrix_beeper_linkedin'
|
||||
matrix_beeper_linkedin_database_password: ""
|
||||
matrix_beeper_linkedin_database_hostname: 'matrix-postgres'
|
||||
matrix_beeper_linkedin_database_port: 5432
|
||||
matrix_beeper_linkedin_database_name: 'matrix_beeper_linkedin'
|
||||
|
||||
matrix_beeper_linkedin_database_connection_string: 'postgresql://{{ matrix_beeper_linkedin_database_username }}:{{ matrix_beeper_linkedin_database_password }}@{{ matrix_beeper_linkedin_database_hostname }}:{{ matrix_beeper_linkedin_database_port }}/{{ matrix_beeper_linkedin_database_name }}?sslmode=disable'
|
||||
|
||||
matrix_beeper_linkedin_appservice_database_type: "{{
  {
    'postgres':'postgres',
  }[matrix_beeper_linkedin_database_engine]
}}"

matrix_beeper_linkedin_appservice_database_uri: "{{
  {
    'postgres': matrix_beeper_linkedin_database_connection_string,
  }[matrix_beeper_linkedin_database_engine]
}}"

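The dict-indexing expressions above map the selected engine to the appservice database settings. With the defaults in this file they would resolve to (the password is a placeholder):

# matrix_beeper_linkedin_appservice_database_type -> "postgres"
# matrix_beeper_linkedin_appservice_database_uri  ->
#   "postgresql://matrix_beeper_linkedin:<password>@matrix-postgres:5432/matrix_beeper_linkedin?sslmode=disable"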
|
||||
|
||||
# Can be set to enable automatic double-puppeting via Shared Secret Auth (https://github.com/devture/matrix-synapse-shared-secret-auth).
|
||||
matrix_beeper_linkedin_login_shared_secret: ''
|
||||
|
||||
# Default beeper-linkedin configuration template which covers the generic use case.
|
||||
# You can customize it by controlling the various variables inside it.
|
||||
#
|
||||
# For a more advanced customization, you can extend the default (see `matrix_beeper_linkedin_configuration_extension_yaml`)
|
||||
# or completely replace this variable with your own template.
|
||||
matrix_beeper_linkedin_configuration_yaml: "{{ lookup('template', 'templates/config.yaml.j2') }}"
|
||||
|
||||
matrix_beeper_linkedin_configuration_extension_yaml: |
|
||||
# Your custom YAML configuration goes here.
|
||||
# This configuration extends the default starting configuration (`matrix_beeper_linkedin_configuration_yaml`).
|
||||
#
|
||||
# You can override individual variables from the default configuration, or introduce new ones.
|
||||
#
|
||||
# If you need something more special, you can take full control by
|
||||
# completely redefining `matrix_beeper_linkedin_configuration_yaml`.
|
||||
|
||||
matrix_beeper_linkedin_configuration_extension: "{{ matrix_beeper_linkedin_configuration_extension_yaml|from_yaml if matrix_beeper_linkedin_configuration_extension_yaml|from_yaml is mapping else {} }}"
|
||||
|
||||
# Holds the final configuration (a combination of the default and its extension).
|
||||
# You most likely don't need to touch this variable. Instead, see `matrix_beeper_linkedin_configuration_yaml`.
|
||||
matrix_beeper_linkedin_configuration: "{{ matrix_beeper_linkedin_configuration_yaml|from_yaml|combine(matrix_beeper_linkedin_configuration_extension, recursive=True) }}"
|
||||
|
||||
matrix_beeper_linkedin_registration_yaml: |
|
||||
id: linkedin
|
||||
url: {{ matrix_beeper_linkedin_appservice_address }}
|
||||
as_token: "{{ matrix_beeper_linkedin_appservice_token }}"
|
||||
hs_token: "{{ matrix_beeper_linkedin_homeserver_token }}"
|
||||
|
||||
sender_localpart: _bot_{{ matrix_beeper_linkedin_appservice_bot_username }}
|
||||
rate_limited: false
|
||||
namespaces:
|
||||
users:
|
||||
- regex: '^@linkedin_.+:{{ matrix_beeper_linkedin_homeserver_domain|regex_escape }}$'
|
||||
exclusive: true
|
||||
- exclusive: true
|
||||
regex: '^@{{ matrix_beeper_linkedin_appservice_bot_username|regex_escape }}:{{ matrix_beeper_linkedin_homeserver_domain|regex_escape }}$'
|
||||
de.sorunome.msc2409.push_ephemeral: true
|
||||
|
||||
matrix_beeper_linkedin_registration: "{{ matrix_beeper_linkedin_registration_yaml|from_yaml }}"
|
||||
16
roles/matrix-bridge-beeper-linkedin/tasks/init.yml
Normal file
@ -0,0 +1,16 @@
- set_fact:
    matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-beeper-linkedin.service'] }}"
  when: matrix_beeper_linkedin_enabled|bool

# If the matrix-synapse role is not used, these variables may not exist.
- set_fact:
    matrix_synapse_container_extra_arguments: >
      {{ matrix_synapse_container_extra_arguments|default([]) }}
      +
      ["--mount type=bind,src={{ matrix_beeper_linkedin_config_path }}/registration.yaml,dst=/matrix-beeper-linkedin-registration.yaml,ro"]

    matrix_synapse_app_service_config_files: >
      {{ matrix_synapse_app_service_config_files|default([]) }}
      +
      {{ ["/matrix-beeper-linkedin-registration.yaml"] }}
  when: matrix_beeper_linkedin_enabled|bool
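The init tasks above follow the playbook's usual bridge pattern: the registration file is bind-mounted into the Synapse container and its in-container path is appended to the appservice list. A sketch of what that produces in Synapse's homeserver.yaml (other registered appservices omitted):

# homeserver.yaml (generated by the matrix-synapse role)
app_service_config_files:
  - /matrix-beeper-linkedin-registration.yaml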
21
roles/matrix-bridge-beeper-linkedin/tasks/main.yml
Normal file
@ -0,0 +1,21 @@
- import_tasks: "{{ role_path }}/tasks/init.yml"
  tags:
    - always

- import_tasks: "{{ role_path }}/tasks/validate_config.yml"
  when: "run_setup|bool and matrix_beeper_linkedin_enabled|bool"
  tags:
    - setup-all
    - setup-beeper-linkedin

- import_tasks: "{{ role_path }}/tasks/setup_install.yml"
  when: "run_setup|bool and matrix_beeper_linkedin_enabled|bool"
  tags:
    - setup-all
    - setup-beeper-linkedin

- import_tasks: "{{ role_path }}/tasks/setup_uninstall.yml"
  when: "run_setup|bool and not matrix_beeper_linkedin_enabled|bool"
  tags:
    - setup-all
    - setup-beeper-linkedin
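For context (not shown in this diff), a bridge role like this is expected to be listed in the playbook's `setup.yml` ahead of `matrix-synapse`, so that the registration file and the `matrix_synapse_*` variables set in `init.yml` are in place before Synapse is configured. A sketch of the assumed ordering:

```yaml
# Hypothetical setup.yml excerpt; the ordering is an assumption based on the
# "needs to execute before the matrix-synapse role" check in setup_install.yml below.
roles:
  - matrix-bridge-beeper-linkedin
  # ... other bridge roles ...
  - matrix-synapse
```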
56
roles/matrix-bridge-beeper-linkedin/tasks/setup_install.yml
Normal file
@ -0,0 +1,56 @@
---

# If the matrix-synapse role is not used, `matrix_synapse_role_executed` won't exist.
# We don't want to fail in such cases.
- name: Fail if matrix-synapse role already executed
  fail:
    msg: >-
      The matrix-bridge-beeper-linkedin role needs to execute before the matrix-synapse role.
  when: "matrix_synapse_role_executed|default(False)"

- name: Ensure Beeper LinkedIn image is pulled
  docker_image:
    name: "{{ matrix_beeper_linkedin_docker_image }}"
    source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
    force_source: "{{ matrix_beeper_linkedin_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
    force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_beeper_linkedin_docker_image_force_pull }}"

- name: Ensure Beeper LinkedIn paths exist
  file:
    path: "{{ item }}"
    state: directory
    mode: 0750
    owner: "{{ matrix_user_username }}"
    group: "{{ matrix_user_groupname }}"
  with_items:
    - "{{ matrix_beeper_linkedin_base_path }}"
    - "{{ matrix_beeper_linkedin_config_path }}"
    - "{{ matrix_beeper_linkedin_data_path }}"

- name: Ensure beeper-linkedin config.yaml installed
  copy:
    content: "{{ matrix_beeper_linkedin_configuration|to_nice_yaml }}"
    dest: "{{ matrix_beeper_linkedin_config_path }}/config.yaml"
    mode: 0644
    owner: "{{ matrix_user_username }}"
    group: "{{ matrix_user_groupname }}"

- name: Ensure beeper-linkedin registration.yaml installed
  copy:
    content: "{{ matrix_beeper_linkedin_registration|to_nice_yaml }}"
    dest: "{{ matrix_beeper_linkedin_config_path }}/registration.yaml"
    mode: 0644
    owner: "{{ matrix_user_username }}"
    group: "{{ matrix_user_groupname }}"

- name: Ensure matrix-beeper-linkedin.service installed
  template:
    src: "{{ role_path }}/templates/systemd/matrix-beeper-linkedin.service.j2"
    dest: "{{ matrix_systemd_path }}/matrix-beeper-linkedin.service"
    mode: 0644
  register: matrix_beeper_linkedin_systemd_service_result

- name: Ensure systemd reloaded after matrix-beeper-linkedin.service installation
  service:
    daemon_reload: yes
  when: "matrix_beeper_linkedin_systemd_service_result.changed"
24
roles/matrix-bridge-beeper-linkedin/tasks/setup_uninstall.yml
Normal file
@ -0,0 +1,24 @@
---

- name: Check existence of matrix-beeper-linkedin service
  stat:
    path: "{{ matrix_systemd_path }}/matrix-beeper-linkedin.service"
  register: matrix_beeper_linkedin_service_stat

- name: Ensure matrix-beeper-linkedin is stopped
  service:
    name: matrix-beeper-linkedin
    state: stopped
    daemon_reload: yes
  when: "matrix_beeper_linkedin_service_stat.stat.exists"

- name: Ensure matrix-beeper-linkedin.service doesn't exist
  file:
    path: "{{ matrix_systemd_path }}/matrix-beeper-linkedin.service"
    state: absent
  when: "matrix_beeper_linkedin_service_stat.stat.exists"

- name: Ensure systemd reloaded after matrix-beeper-linkedin.service removal
  service:
    daemon_reload: yes
  when: "matrix_beeper_linkedin_service_stat.stat.exists"
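Taken together with main.yml, these uninstall tasks run when the bridge is turned off. A user disables the bridge by flipping the enable flag in their vars.yml and re-running the setup tags; a minimal sketch:

```yaml
# Hypothetical inventory/host_vars/matrix.example.com/vars.yml excerpt.
# With the flag off, main.yml routes execution to setup_uninstall.yml,
# which stops and removes matrix-beeper-linkedin.service.
matrix_beeper_linkedin_enabled: false
```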
Some files were not shown because too many files have changed in this diff.