CfgMgmtCamp 2024
Julien RIOU
February 6, 2024
Architecture of a playbook
tasks directory
scope-action.yml
-- +migrate Up
create table author (
id bigserial primary key,
name text not null
);
create table talk (
id bigserial primary key,
title text not null,
author_id bigint not null references author(id)
);
-- +migrate Down
drop table author, talk;
sql-migrate up
sql-migrate down
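sql-migrate finds the migrations and the target database through a configuration file. A minimal sketch using the standard sql-migrate keys (`dialect`, `datasource`, `dir`) — the environment name, paths, and connection string here are illustrative:

```yaml
# hypothetical sql-migrate configuration file; all values are examples
production:
    dialect: postgres
    datasource: host=localhost dbname=example sslmode=require
    dir: migrations
```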
- name: check arguments
  hosts: all
  run_once: true
  delegate_to: localhost
  tasks:
    - name: check variable schema_url # fail fast
    - name: check variable database_name # fail fast

- name: update database to the latest schema migration
  hosts: "{{ database_name }}:&subrole_primary"
  tasks:
    - name: create sql-migrate directories
    - name: create sql-migrate configuration file
    - name: clone schema
    - name: run migrations
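The fail-fast checks can be implemented with `ansible.builtin.assert`; a sketch (the exact conditions and error message are illustrative):

```yaml
- name: check variable schema_url # fail fast
  ansible.builtin.assert:
    that:
      - schema_url is defined
      - schema_url | length > 0
    fail_msg: "schema_url must be provided as an extra variable"
```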
- name: create sql-migrate directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
  loop:
    - /etc/sqlmigrate
    - /var/lib/sqlmigrate

- name: create sql-migrate configuration file
  ansible.builtin.template:
    src: sqlmigrate/database.yml.j2
    dest: "/etc/sqlmigrate/{{ database_name }}.yml"

- name: clone schema repository
  ansible.builtin.git:
    repo: "{{ schema_url }}"
    dest: "/var/lib/sqlmigrate/{{ database_name }}"
    version: "{{ branch | default('master') }}" # branch or tag
    force: true
  environment:
    TMPDIR: /run

- name: run migrations
  ansible.builtin.command:
    cmd: sql-migrate up -config /etc/sqlmigrate/{{ database_name }}.yml
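As written, the `command` task reports `changed` on every run. One way to make it more accurate is to inspect sql-migrate's output; the matched string below is an assumption about sql-migrate's wording, not a guaranteed contract:

```yaml
# sketch: only report "changed" when migrations were actually applied
- name: run migrations
  ansible.builtin.command:
    cmd: sql-migrate up -config /etc/sqlmigrate/{{ database_name }}.yml
  register: migrate_result
  changed_when: "'Applied' in migrate_result.stdout"
```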
Just run CREATE DATABASE
Easy, right?
Well…
CREATE DATABASE (using a module)
Ensure software is up to date
Move one or more databases from one cluster to another
How we use Ansible
How can we securely connect to remote hosts to perform actions?
“Ansible Wrapper”
[ssh_connection]
pipelining = True
private_key_file = ~/.ssh/id_ed25519
ssh_executable = /usr/share/ansible/plugins/bastion/sshwrapper.py
sftp_executable = /usr/share/ansible/plugins/bastion/sftpbastion.sh
transfer_method = sftp
retries = 3
Where can we find our hosts to perform operations?
server_type: postgresql, mysql, filer, …
role: node, lb, backup, …
cluster: identifier
subrole: primary, replica
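Each host ends up in one group per dimension. A hypothetical INI inventory excerpt following this naming scheme (hostnames invented):

```ini
[server_type_postgresql]
db1.example.net
db2.example.net

[cluster_42]
db1.example.net
db2.example.net

[subrole_primary]
db1.example.net

[subrole_replica]
db2.example.net
```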
With a limit option
ansible server_type_postgresql -m ping
ansible-playbook -l server_type_postgresql playbook.yml
& for intersection (AND):
ansible-playbook -l 'test:&subrole_primary' playbook.yml
: for multiple groups (OR):
ansible-playbook -l 'server_type_postgresql:server_type_mysql' playbook.yml
! for exclusion (NOT):
ansible-playbook -l 'server_type_postgresql:!cluster_99' playbook.yml
Where does Ansible run?
awx -f human job_templates launch --monitor --extra_vars \
'{"database_name": "***", "branch": "master", "schema_url": "ssh://***.git"}' \
database-primary-schema-update
Some of the issues we encountered are probably related to our internal implementation (internal services, internal Kubernetes).
| Component | Type | cpu | memory | ephemeral-storage | Quantity |
|---|---|---|---|---|---|
| web | request | 500m | 1Gi | | 1 |
| web | limit | 2000m | 2Gi | | |
| task | request | 1000m | 2Gi | | 1 |
| task | limit | 1500m | 4Gi | | |
| ee | request | 1000m | 256Mi | | n |
| ee | limit | 2000m | 2Gi | 1G | |
PING: 1 min 45 secs
scm_update_on_launch (bool)
scm_update_cache_timeout (int)
update_on_launch (bool)
update_cache_timeout (int)

GET /secrets
vault_secret to read locally (application key)
vault_secret_with_user to bypass the cache (basic auth)

configstore:
provider '***':
Post "https://***/auth/app":
dial tcp:
lookup *** on ***:53:
read udp ***->***:53:
i/o timeout
Replace iptables by nftables on Kubernetes workers
server-state-base /var/lib/haproxy/state
load-server-state-from-file local
ExecReload=/path/to/haproxy-create-state-files.sh
#!/bin/bash
# Dump the state of every HAProxy backend so it can be restored on reload
sock=/run/haproxy/admin.sock
base=/var/lib/haproxy/state
# "show backend" lists one backend per line; drop the "#name" header
backends=$(socat "${sock}" - <<< "show backend" | grep -Fv '#')
for backend in ${backends}
do
    statefile="${base}/${backend}"
    socat "${sock}" - <<< "show servers state ${backend}" > "${statefile}"
done
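The header filtering can be checked standalone; a sketch simulating what the admin socket returns for `show backend` (backend names invented):

```shell
# Simulated "show backend" output: a "#name" header then one backend per line
output='#name
backend_a
backend_b'
# Drop the header line the same way the state-file script does
backends=$(printf '%s\n' "$output" | grep -Fv '#')
echo "$backends"
```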
/etc/tower/conf.d/credentials.py
DATABASES = {
    'default': {
        "ENGINE": "awx.main.db.profiled_pg",
        ...
        "OPTIONS": {
            ...
            "keepalives": 1,
            "keepalives_idle": 5,
            "keepalives_interval": 5,
            "keepalives_count": 5,
        },
    }
}
Weekly CVE report on Docker images
With JFrog Xray*
Also available on Quay.io for base images
Use community-ee-minimal image for execution environment
From 32 to 4 violations
0 critical, 1 high, 2 medium, 1 low
How do we work on playbooks?
ansible@admin.lab ~ $ tree -L 1
├── ansible-jriou
ansible@admin.lab ~ $ cd ansible-jriou/
ansible@admin.lab ~/ansible-jriou $ git branch
* master
ansible@admin.lab ~/ansible-jriou $ vi ping.yml
ansible@admin.lab ~/ansible-jriou $ ansible-playbook ping.yml
ansible@admin.lab ~/ansible-jriou $ git diff > feature.patch
Trust, but verify.
– Wilfried Roset
syntax
molecule/ping
├── molecule.yml (define the scenario)
└── converge.yml (run the playbook)
Define the scenario with molecule.yml
driver:
name: docker
platforms:
- name: debian11
image: "docker-registry/debian:bullseye"
scenario:
test_sequence:
- lint
- syntax
lint: |
set -e
yamllint ping.yml
ansible-lint ping.yml
Run the playbook with converge.yml
- name: Include playbook
ansible.builtin.import_playbook: ../../ping.yml
--> Found config file /path/to/run/.config/molecule/config.yml
--> Test matrix
└── ping
├── dependency
└── syntax
--> Scenario: 'ping'
--> Action: 'dependency'
--> Scenario: 'ping'
--> Action: 'syntax'
--> Sanity checks: 'docker'
playbook: /path/to/run/molecule/ping/converge.yml
CDS is an Enterprise-Grade Continuous Delivery & DevOps Automation Open Source Platform.
Confidence